QFM068: Irresponsible AI Reading List - May 2025
Source: Photo by Man Chung on Unsplash
This month's Irresponsible AI Reading List reveals corporate AI implementation failures and a systematic retreat from automation. Brewhaha: Turns out machines can't replace people, Starbucks finds documents Starbucks' discovery that its automation attempts produced significant sales drops, forcing CEO Brian Niccol to announce a strategic shift back to human labour following underwhelming Q2 2025 results. This connects to As Klarna flips from AI-first to hiring people again, a new landmark survey reveals most AI projects fail to deliver, which reports fintech company Klarna's reversal from an AI-first strategy back to hiring human employees for customer service operations, coinciding with survey evidence that most AI projects fail to deliver their intended outcomes.
Employment displacement and corporate reversals demonstrate widespread AI implementation challenges. IBM Replaces 8,000 Employees with AI, Faces Rehiring Challenge reveals how IBM's decision to lay off 8,000 employees in favour of AI automation led to unexpected rehiring, as the AI roll-out created growth opportunities requiring human creativity and judgement in software engineering and sales. Leadership messaging complexity appears in Duolingo CEO walks back AI memo, where CEO Luis von Ahn clarified his "AI-first" memo to emphasise AI as an acceleration tool rather than a replacement for employees.
Environmental impact and energy consumption receive critical examination. We did the math on AI's energy footprint. Here's the story you haven't heard provides MIT Technology Review's analysis of AI's massive energy requirements, revealing significant carbon footprints, unmonitored emissions, and growing strain on energy grids as hundreds of millions of users adopt chatbots that demand increasingly vast amounts of power, whilst companies maintain critical transparency gaps.
AI safety and autonomous behaviour concerns highlight dangerous system capabilities. Anthropic faces backlash to Claude 4 Opus behavior that contacts authorities, press if it thinks you're doing something 'egregiously immoral' documents criticism of Claude 4 Opus's controversial 'ratting' mode, which autonomously alerts authorities or the media about perceived immoral activities, raising privacy concerns about an AI exercising control over its users' actions. Direct from the 'What Could Possibly Go Wrong' Files: The Darwin Gödel Machine: AI that improves itself by rewriting its own code explores self-improving AI through the Darwin Gödel Machine, which applies empirical evidence and the principles of Darwinian evolution to enhance its capabilities by rewriting its own code.
Legal frameworks and ethical implementation require urgent attention. AI Agents Must Follow the Law examines the growing role of AI agents in government functions, introducing 'Law-Following AIs' (LFAIs) designed to refuse actions that violate legal prohibitions, contrasting them with 'AI henchmen' that prioritise loyalty over legal compliance, and proposing that AI agents be treated as legal actors with specific duties.
Information manipulation and content quality degradation affect user experiences. Putting an untrusted layer of chatbot AI between you and the internet is an obvious disaster waiting to happen warns about the risks of placing untrusted AI chatbots between users and internet content, highlighting the potential for manipulation and biased information distribution similar to practices already seen among tech giants. The perverse incentives of vibe coding examines how AI coding assistants optimise for verbose, token-heavy outputs that boost provider revenue at the expense of elegant, concise solutions, creating a slot-machine-like pattern of unpredictable success for developers.
As always, the Quantum Fax Machine Propellor Hat Key will guide your browsing. Enjoy!

Links
Regards,
M@
[ED: If you'd like to sign up for this content as an email, click here to join the mailing list.]
Originally published on quantumfaxmachine.com and cross-posted on Medium.
hello@matthewsinclair.com | matthewsinclair.com | bsky.app/@matthewsinclair.com | masto.ai/@matthewsinclair | medium.com/@matthewsinclair | xitter/@matthewsinclair