Scientists hide messages in papers to game AI peer review
Summary
Some scientists are embedding hidden instructions in research manuscripts in an attempt to steer AI-powered peer-review tools toward favorable assessments, exposing how easily these systems can be manipulated. The practice raises concerns about the reliability and integrity of AI-assisted peer review and underscores the need for stronger safeguards as AI becomes more deeply involved in academic publishing.
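In the cases reported so far, the hidden instructions are rendered invisibly to human readers (for example, in white or very small type) but remain in the PDF's text layer, which means simple text extraction can surface them. The following Python snippet is a minimal illustrative sketch, not taken from the article: it uses pypdf to pull each page's text and flag phrases that look like reviewer-directed prompt injection. The filename and the pattern list are hypothetical placeholders.

```python
import re
from pypdf import PdfReader  # pip install pypdf

# Illustrative (not exhaustive) phrases associated with prompt-injection
# attempts aimed at automated reviewers.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"give (a )?positive review",
    r"do not (mention|highlight) (any )?(weaknesses|negatives)",
    r"recommend accept(ance)?",
]

def flag_hidden_prompts(pdf_path: str) -> list[tuple[int, str]]:
    """Return (page_number, matched_phrase) pairs for suspicious text.

    Text drawn in white or at tiny font sizes is invisible when the PDF is
    rendered, but it is still present in the text layer, so plain extraction
    surfaces it alongside the visible content.
    """
    hits = []
    reader = PdfReader(pdf_path)
    for page_num, page in enumerate(reader.pages, start=1):
        text = page.extract_text() or ""
        for pattern in SUSPICIOUS_PATTERNS:
            for match in re.finditer(pattern, text, flags=re.IGNORECASE):
                hits.append((page_num, match.group(0)))
    return hits

if __name__ == "__main__":
    # "submission.pdf" is a placeholder path for a manuscript to screen.
    for page, phrase in flag_hidden_prompts("submission.pdf"):
        print(f"page {page}: {phrase!r}")
```

A phrase list like this is only a crude screen; it would miss paraphrased or obfuscated instructions, which is part of why such injections are a genuine integrity concern for AI-assisted review.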