In the latest episode of “What Could Possibly Go Wrong?”, major academic publishers have been caught in the awkward position of needing to retract dozens of research papers. The reason? Apparently, artificial intelligence has been moonlighting as a scientific author, though with all the finesse of a student who skimmed the textbook on the way to class. According to a fresh study published in JAMA, nearly 70 papers retracted in 2023 bore telltale signs of having been generated by AI with little or no human revision.
Among the standout red flags were phrases such as “insert data here” and the bold appearance of “Lorem ipsum”, which is Latin for “something went terribly wrong”. What was once assumed to be layout filler by distracted editors turned out to be the AI’s best shot at pretending to write science. Elsevier and Springer Nature, two stalwarts of respectable publishing, found themselves in the uncomfortable position of retracting over 40 of these papers combined. The rest were nudged off the research cliff by smaller, though no less embarrassed, journals.
The use of generative AI tools like ChatGPT is already the subject of growing unease in the academic world, especially since these tools have a penchant for inventing citations the way magicians pull rabbits from hats. Unfortunately, the rabbits in this analogy have no ears, no names, and often no documented existence. Lead author Bhargava Reddy from Northwestern University, who did not rely on ChatGPT to write his findings, noted that this trend presents a genuine credibility problem for science, which traditionally prefers its facts to be factual.
Among the most entertaining findings were papers credited to authors who apparently hold a PhD in Pure Imagination. One paper listed “good research paper” as its title and helpfully stated “please insert abstract here” in lieu of an actual abstract. This might have been passed off as performance art if it weren’t for the fact that these papers somehow made it beyond the editorial gates in the first place. Granted, some suspect the gates may have been left slightly ajar due to pressure on journals to churn out content at the speed of a caffeinated squirrel.
Elsevier and Springer Nature have since dressed themselves in the robes of rectitude, upgrading their submission guidelines and peer review processes to avoid future embarrassment, though history suggests that where there’s a will and an autocomplete function, there’s inevitably a way. Both publishers say they are now deploying AI themselves to catch these AI-generated submissions, introducing the rather poetic image of robots catching other robots pretending to be humans pretending to be scientists.
Meanwhile, honest researchers struggling to get legitimate work published might be dismayed to find that an AI can fabricate an entire paper that clears editorial review, complete with nonsensical graphs and citations to articles that never were. It seems that modern publishing, like a confused magician, has mistaken illusion for reality once again.
Turns out even in academia, the call is coming from inside the inbox.