BOMBSHELL: 200,000 “Science Papers” in academic journal database PubMed may have been AI-generated with errors, hallucinations and false sourcing
By sdwells // 2025-07-07
A groundbreaking new study published in Science Advances has exposed the growing and controversial role of artificial intelligence (AI) in academic publishing. Researchers from the University of Tübingen in Germany have revealed that a staggering number of scientific papers—potentially hundreds of thousands—may have been generated entirely or partially with the help of AI language models, such as ChatGPT.
  • Massive AI Use in Scientific Writing: A new study published in Science Advances estimates that between 13.5% and 40% of biomedical research abstracts may have been written with the help of AI, suggesting that more than 200,000 papers indexed on PubMed each year could be at least partly AI-generated.
  • Language Fingerprints Reveal AI Use: Researchers from the University of Tübingen identified 454 words frequently overused by large language models (LLMs), such as "garnered," "encompassing," and "burgeoning," to detect potential AI-written texts in academic journals (a toy version of this word-frequency screen is sketched after this list).
  • Bizarre and Blatant Examples: Some AI-generated papers were so poorly edited they included phrases like "I'm very sorry... as I am an AI language model" and hallucinated content, including fake sources and even a journal article featuring an AI-created image of a rat with oversized genitals.
  • Academic Integrity at Risk: As researchers begin to mask their AI usage and edit their writing to avoid detection, the paper warns this trend could have a more profound impact on scientific publishing than even major global events like the COVID-19 pandemic.
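
For readers curious how such a screen might work in practice, here is a minimal sketch, in Python, of the word-frequency approach described above. The handful of marker words and the cutoff used below are illustrative assumptions only; the actual study drew on its full list of 454 flagged words and compared usage rates against pre-ChatGPT baselines rather than applying a fixed threshold.

```python
import re
from collections import Counter

# A few of the "LLM-flavored" words the Tübingen team reportedly flagged.
# The real study used a list of 454 such words; this short set is only an
# illustrative stand-in.
MARKER_WORDS = {
    "garnered", "encompassing", "burgeoning",
    "delve", "underscore", "pivotal",
}

def marker_word_rate(abstract: str) -> float:
    """Return the fraction of words in an abstract that are on the marker list."""
    words = re.findall(r"[a-z]+", abstract.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[w] for w in MARKER_WORDS)
    return hits / len(words)

def looks_ai_assisted(abstract: str, threshold: float = 0.01) -> bool:
    """Crude screen: flag an abstract whose marker-word rate exceeds a cutoff.

    The cutoff here is an arbitrary placeholder; the published analysis
    compared word frequencies against historical baselines instead.
    """
    return marker_word_rate(abstract) > threshold

if __name__ == "__main__":
    sample = ("This burgeoning field has garnered attention, "
              "encompassing pivotal advances in biomedical research.")
    print(marker_word_rate(sample))   # ~0.33 for this toy sentence
    print(looks_ai_assisted(sample))  # True
```

Flagging individual abstracts this way is noisy; the study's headline numbers come from population-level shifts in word frequencies, not from labeling any single paper as AI-written.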

Bombshell Research Finds a Staggering Number of Scientific Papers Were AI-Generated

Using linguistic analysis, the team identified 454 words that are commonly overused in AI-generated text, such as "garnered," "encompassing," and "burgeoning." By scanning biomedical research abstracts in the PubMed database for these telltale terms, they found that anywhere from 13.5% to 40% of the abstracts were likely AI-assisted. Given that PubMed indexes roughly 1.5 million papers each year, that translates to a conservative estimate of at least 200,000 AI-influenced papers annually. These findings highlight a significant shift in scientific writing practices and raise pressing concerns about the integrity and authenticity of academic work.

While some researchers attempt to conceal their use of AI, others are surprisingly careless. One notorious example, cited by Arizona State University computer scientist Subbarao Kambhampati, was a paper containing a direct admission from the AI: "I'm very sorry, but I don't have access to real-time information or patient-specific data as I am an AI language model." Such blunders, while blatant, are not the norm. Many AI-generated passages are subtle enough to go unnoticed, especially if users make small edits to avoid detection. Some even use commands like "regenerate response," an option in tools like ChatGPT, to refine or disguise the output.

The academic watchdog blog Retraction Watch has documented numerous examples, including a paper about millipedes that featured entirely fabricated references and was briefly retracted, only to reappear on another academic platform with the same false citations. More egregious cases involve absurd AI hallucinations, such as a published paper containing a comically inaccurate image of a rat with grotesquely oversized genitals, which ultimately led to the paper's retraction. These examples underscore the broader problem of scientific credibility being undermined by poorly monitored AI usage.

Ironically, in response to these developments, some researchers have started modifying their writing styles to avoid sounding like AI. Terms commonly associated with chatbots are being phased out by human authors attempting to preserve the perceived authenticity of their work. The Tübingen researchers caution that the widespread, and often unacknowledged, use of AI tools could fundamentally alter the landscape of scientific publishing. In fact, they suggest this shift may rival or even surpass the impact of major global events like the COVID-19 pandemic on academic communication.

Despite the mounting evidence, coauthor Dmitry Kobak expressed disbelief at how carelessly AI is being used in critical academic writing. "I would think for something as important as writing an abstract of your paper," he told The New York Times, "you would not do that." Yet the research suggests otherwise, and the implications for science are profound.

Tune your internet dial to NaturalMedicine.news for more tips on how to use natural remedies for preventative medicine and for healing, instead of succumbing to Big Pharma products and services that are soon to be run by AI algorithms written by pharma shills and slugs who want you dying and dead.

Sources for this article include:

NaturalNews.com

Futurism.com