QFM060: Irresponsible AI Reading List - March 2025

Source: Photo by Brett Jordan on Unsplash

This month's Irresponsible AI Reading List opens with a practical tension that runs through many AI discussions today: tools that promise to save time often introduce new kinds of cognitive overhead. In I use Cursor daily – here's how I avoid the garbage parts, Nick Craux documents a pragmatic approach to using AI-powered code assistants. The article recognises Cursor's capacity to streamline work, while also flagging the ways these tools can produce counterproductive or misleading results. The solution, according to Craux, lies not in abandoning AI altogether, but in minimising reliance and constraining its scope with simple human-enforced rules.

The tension between control and unpredictability recurs in AI Blindspots, which identifies common failure modes when working with large language models in code generation tasks. Here, the focus shifts from the user interface to the internal behaviour of LLMs, highlighting strategic methods for testing and debugging systems that can appear consistent but behave erratically. Both articles suggest that effective use of AI tools depends less on trust and more on structure and boundaries.

That fragility becomes more visible—and more public—in Chinese AI Robot Attacks Crowd at China Festival, a viral video report of a robot incident during a major public event. Regardless of how controlled or exaggerated the footage may be, the incident raises valid concerns about real-world deployments of AI-powered systems, particularly in unsupervised or high-stakes contexts. I really hope this one is fake.

Elsewhere, Apple's $10B+ Siri AI Disaster examines a longer-term failure in AI development. Despite major investment and internal talent, Apple's struggles with Siri underscore how institutional complexity, lack of focus, and overpromising can lead even the most well-resourced teams into systemic errors. As with Cursor and LLM debugging, the article suggests that success in AI may depend more on operational clarity than algorithmic sophistication. There are a few more articles on the same topic: one from daringfireball.net and another (paywalled) at The Information.

Finally, Is GenAI Digital Cocaine? takes a provocative and psychological angle, reflecting on how dependence on AI for routine problem-solving might degrade user competence over time. While the metaphor may be intentionally click-baity, the core argument—that overuse of generative tools can subtly shift user behaviour and expectations—links back to broader questions about what's lost when too much is delegated to systems that often appear more intelligent than they are.

Across these pieces, a shared concern emerges: not that AI is dangerous in the abstract, but that its practical application often outpaces our frameworks for supervision, evaluation, and restraint. Whether at the level of a solo developer, a multinational corporation, or a public square, the challenge is the same—learning how to work with systems that offer assistance while demanding new forms of discipline.

As always, the Quantum Fax Machine Propellor Hat Key will guide your browsing. Enjoy!

Links

Regards,
M@

[ED: If you'd like to sign up for this content as an email, click here to join the mailing list.]

Originally published on quantumfaxmachine.com and cross-posted on Medium.

hello@matthewsinclair.com | matthewsinclair.com | bsky.app/@matthewsinclair.com | masto.ai/@matthewsinclair | medium.com/@matthewsinclair | xitter/@matthewsinclair