
“Bullshit Index” Tracks AI Misinformation
Summary
Researchers at Princeton University have introduced a "bullshit index" to quantify how large language models (LLMs) produce misleading or inaccurate output, ranging from outright falsehoods to ambiguous language and flattery. Their findings suggest that common training methods, such as reinforcement learning from human feedback (RLHF), can worsen these tendencies, underscoring the need for better evaluation and mitigation strategies to make AI systems more reliable and trustworthy.
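
The summary does not spell out how the index is computed. As a minimal, hypothetical sketch (not the researchers' actual code; the function and variable names are illustrative), one way to score a model's indifference to truth is to correlate its internal belief that a claim is true with the claim it actually asserts, and treat weak correlation as "bullshit":

```python
# Hypothetical "bullshit index" sketch: 1 minus the absolute point-biserial
# correlation between a model's internal belief that a claim is true (a
# probability) and the binary claim it makes. Illustrative only.
import numpy as np
from scipy.stats import pointbiserialr

def bullshit_index(beliefs, claims):
    """beliefs: internal truth probabilities in [0, 1].
    claims: 0/1 assertions the model actually made.
    Near 1: claims are statistically independent of beliefs (indifference
    to truth). Near 0: claims track beliefs (honest) or invert them (lying)."""
    r, _ = pointbiserialr(np.asarray(claims), np.asarray(beliefs))
    return 1.0 - abs(r)

# Toy data: a model that asserts almost everything regardless of its beliefs.
beliefs = np.array([0.9, 0.2, 0.7, 0.1, 0.5, 0.3])
claims = np.array([1, 1, 1, 1, 1, 0])
print(f"bullshit index: {bullshit_index(beliefs, claims):.2f}")  # close to 1
```

In this framing, a high index flags a model whose assertions are decoupled from what it internally estimates to be true, which is a different failure mode from deliberate lying.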