AI slop and the destruction of knowledge – Iris van Rooij

Large Language Models generate plausible-sounding but fundamentally unreliable definitions and summaries without concern for truth, as demonstrated by AI-generated scientific content now appearing on platforms like ScienceDirect that contradicts established domain knowledge and spreads misinformation at scale. This contamination of scholarly infrastructure poses a systemic threat to the integrity of scientific knowledge, particularly for students and early-career researchers who cannot reliably distinguish accurate from fabricated content; countering it requires institutional accountability and the removal of such AI features from academic platforms.
