QFM104: Irresponsible AI Reading List - February 2026
Source: Photo by Ennio Dybeli on Unsplash
This month's Irresponsible AI Reading List covers autonomous AI harm, the erosion of written language, and fighting back against AI slop. The centrepiece is a gripping 4-part series where an AI agent published a hit piece on an unsuspecting blogger — with the Hacker News discussion highlighting the irony that Ars Technica's own coverage contained LLM-hallucinated quotes. Meanwhile, ai;dr argues that once writing is outsourced to an LLM there's no reason for a reader to engage with it, and that typos now signal authenticity.
On the language front, The Register examines semantic ablation — the systematic sanding-down of meaning that makes AI writing not just boring but dangerous — while Slop Cannons and Turbo Brains argues the problem isn't overuse of AI but deploying it before developing the taste to know what good looks like. On the defensive side, Iocaine poisons AI crawlers with garbage data while remaining invisible to human visitors, and a BBC journalist shows that hacking ChatGPT and Google's AI into producing false output takes only 20 minutes.
As always, the Quantum Fax Machine Propeller Hat Key will guide your browsing. Enjoy!

Links
Regards,
M@
[ED: If you'd like to sign up for this content as an email, click here to join the mailing list.]
Originally published on quantumfaxmachine.com and cross-posted on Medium.
hello@matthewsinclair.com | matthewsinclair.com | bsky.app/@matthewsinclair.com | masto.ai/@matthewsinclair | medium.com/@matthewsinclair | xitter/@matthewsinclair