Researchers are embedding invisible commands in academic papers to force AI into giving positive reviews, a practice that threatens the integrity of science. This story uncovers the secret messages and the growing debate over AI's role in academia.
Researchers have found a way to trick AI. They're embedding hidden messages in scientific papers.
Using white text on a white background, scientists are writing secret instructions for AI reviewers.
A review by Nikkei found hidden commands telling AI to ignore negatives and only give positive feedback.
It's not an isolated issue. Nature uncovered 18 additional papers with similar hidden messages.
The trend may have started with a social media post by an Nvidia researcher in November.
Critics warn this practice poisons the well of peer review, a cornerstone of credible science.
A recent Nature survey found that nearly 20% of researchers are already using LLMs to speed up their work.
One professor reported receiving a review he suspected was written entirely by ChatGPT.
The danger lies in reviewers outsourcing their critical thinking and labor to artificial intelligence.
Powerful AI is challenging quality control across many professional fields, not just academia.
Past incidents, like an AI-generated, anatomically impossible rat in a biology paper, highlight the pitfalls of unchecked AI.
As AI becomes more integrated, how can we protect the systems that validate scientific truth? The debate is just beginning.