Fearful of AI-generated grant proposals, NIH limits scientists to 6 applications

Hacker News - AI
Jul 21, 2025 10:23
pseudolus

Summary

The National Institutes of Health (NIH) is limiting scientists to six grant applications per year, citing concerns that AI tools are enabling a surge of low-quality, AI-generated proposals that strain the review process. This move highlights growing worries about the impact of generative AI on research integrity and administrative workloads in scientific funding. The policy may prompt other funding agencies to consider similar restrictions as AI-generated content becomes more prevalent.

Article URL: https://www.science.org/content/article/fearful-ai-generated-grant-proposals-nih-limits-scientists-six-applications-year
Comments URL: https://news.ycombinator.com/item?id=44633562
Points: 4 | Comments: 2

Related Articles

How to Use ChatGPT to Get Maximum Benefits: A Smart Guide for Beginners and Pros

Analytics Insight, Jul 22

The article offers practical tips for both beginners and experienced users to effectively utilize ChatGPT, emphasizing prompt engineering, customization, and integration with other tools. It highlights how mastering these techniques can enhance productivity and creativity, reflecting the growing importance of user proficiency in maximizing AI's potential. This trend underscores a broader shift in the AI field toward empowering users to tailor AI systems for diverse, real-world applications.

Beyond the Perimeter: How AI and Application Intelligence Are Redefining Threat Detection

Analytics Insight, Jul 22

The article discusses how AI and application intelligence are transforming threat detection by moving beyond traditional perimeter-based security models. By leveraging advanced analytics and real-time data, these technologies enable organizations to identify and respond to sophisticated cyber threats more effectively. This shift highlights AI's growing role in proactive cybersecurity and the need for adaptive, intelligent defense strategies in the evolving digital landscape.

Guardrailed AMIE: a Safer Supervised Medical AI by DeepMind

Hacker News - AI, Jul 22

DeepMind has introduced Guardrailed AMIE, a supervised medical AI system designed with enhanced safety features to reduce harmful or inappropriate outputs. With these guardrails in place, the system aims to improve reliability and trustworthiness in clinical decision support, underscoring the importance of safety measures as AI becomes more integrated into sensitive fields like healthcare.