AI Prompts Hidden in Academic Papers?

A recent Nikkei Asia article uncovered a strange new twist in academic publishing: researchers are secretly embedding AI prompts into their papers to influence peer reviewers—specifically prompting ChatGPT-like tools to respond with positive reviews.

  • Invisible nudges – Researchers are hiding invisible text using HTML or LaTeX to get large language models to review their papers more favourably.
  • Gaming the peer review process – The prompts are engineered to bias AI-generated feedback toward acceptance, risking the integrity of the review process.
  • Publishers catching on – Platforms like arXiv and Springer Nature are now on alert and developing tools to detect this kind of manipulation.
  • Raises deeper questions – If AI is reading academic work, who are authors really writing for—humans or machines?

It’s a clever hack—but also a warning shot. As AI becomes part of more workflows, how do we guard against manipulation and maintain trust?