QFM056: Irresponsible AI Reading List February 2025
Everything that I found interesting last month about the irresponsible use of AI.
Tags: qfm, irresponsible, ai, reading, list, february, 2025
Source: Photo by Brett Jordan on Unsplash
We start this month’s Irresponsible AI Reading List with Baldur Bjarnason’s The LLMentalist Effect, offering a compelling comparison between LLMs and the cold reading techniques used by psychics. The article examines why we perceive intelligence in these systems when they are merely applying statistical patterns, creating a technological illusion that convinces users of an understanding that simply isn’t there. This psychological mechanism helps explain the widespread belief in AI capabilities that may not actually exist.
This theme of perception versus reality continues in Edward Zitron’s The Generative AI Con, which criticises the disconnect between AI industry hype and practical utility. Zitron questions whether the millions of reported users translate to meaningful integration in our daily lives or sustainable business models, suggesting the AI industry may be building on unstable financial foundations.
The discourse around AI’s future capabilities receives critical examination in Jack Morris’s Please Stop Talking About AGI, which argues that our fixation on hypothetical future superintelligence diverts attention from pressing present-day AI challenges. Morris suggests this misdirection of focus hampers our ability to address the immediate ethical considerations of existing AI technologies.
Miles Brundage adds practical perspective in Some Very Important Things (That I Won’t Be Working On This Year), outlining critical AI domains requiring attention: AI safety awareness, technical infrastructure for AI agents, economic impacts, regulatory frameworks, and AI literacy. Each represents a vital area where engagement is needed to navigate both challenges and opportunities in our evolving AI landscape.
The most concerning applications of AI emerge in Juan Sebastián Pinto’s The Guernica of AI, which draws alarming parallels between historical warfare and contemporary AI-powered military technologies. Writing from personal experience at Palantir, Pinto examines how AI surveillance systems are reshaping modern conflict and civilian life through pervasive data control, raising profound ethical questions about the human consequences of militarised AI.
The dual nature of AI technologies is further explored in Modern-Day Oracles or Bullshit Machines?, which characterises LLMs as both transformative accessibility tools and potential misinformation vectors. The article compares these systems to revolutionary inventions like the printing press while warning of their unprecedented capacity to propagate falsehoods, emphasising the need for critical digital literacy in navigating their outputs.
As always, the Quantum Fax Machine Propellor Hat Key will guide your browsing. Enjoy!
The LLMentalist Effect: How Chat-Based Large Language Models Replicate the Mechanisms of a Psychic’s Con: The article explores the intriguing parallels between the perception of intelligence in chat-based Large Language Models (LLMs) and a psychic’s con built on cold reading techniques. Author Baldur Bjarnason examines why some people believe that LLMs possess intelligence when, in fact, they rely on statistical modelling rather than genuine understanding or reasoning. Much like a psychic, LLMs provide plausible responses based on patterns, creating an illusion of intelligence that users find convincing, despite the outputs being statistically generic.
#LLM
#AI
#CognitiveBias
#TechIllusions
#Psychics
The Generative AI Con: In this article, Edward Zitron critiques the vast investment and media hype surrounding generative AI, particularly focusing on OpenAI’s ChatGPT. Zitron argues that while ChatGPT and similar technologies have user numbers in the millions, these figures do not reflect genuine profitability or meaningful integration into daily life. He questions the sustainability of the AI industry’s financial model, highlighting the discrepancy between the claimed potential of AI products and their practical utility.
#GenerativeAI
#TechCritique
#OpenAI
#AIIndustry
#TechBubble
Please Stop Talking About AGI: This article by Jack Morris discusses the growing infatuation with Artificial General Intelligence (AGI) in both public discourse and the media. Morris argues that the constant chatter about hypothetical future AGI diverts attention from the pressing challenges, practical advancements, and ethical considerations of the AI technologies we have today.
#AGI
#AI
#ArtificialIntelligence
#TechDiscourse
#Ethics
Some Very Important Things (That I Won’t Be Working On This Year): The article by Miles Brundage discusses five critical AI-related topics that he won’t be focusing on this year, despite their importance: AI safety awareness, technical infrastructure for AI agents, the economic impacts of AI, the EU AI Act, and AI literacy. Each is a call to action for more engagement and understanding to navigate the challenges, and leverage the opportunities, of the evolving AI landscape.
#AI
#ArtificialIntelligence
#TechPolicy
#AIEthics
#FutureOfWork
The Guernica of AI: Juan Sebastián Pinto delivers a poignant warning about the dangers of militarised AI technologies. Drawing parallels between the historical bombing of Guernica and the current conflict in Gaza, Pinto explores how AI-powered tech companies like Palantir are shaping modern warfare and public life through pervasive surveillance and data control. He reflects on his experience at Palantir, emphasising the ethical concerns and human consequences of AI applications in both military and civilian contexts.
#AI
#Surveillance
#WarTech
#Guernica
#Palantir
Modern-Day Oracles or Bullshit Machines?: The article delves into the dual nature of Large Language Models (LLMs) like ChatGPT, describing them as both transformative technologies and potential sources of misinformation. It explains how these systems can significantly enhance accessibility in computing, akin to revolutionary inventions like the printing press. Yet it warns of the unprecedented scale at which they might spread misinformation, urging readers to understand and harness these tools effectively in order to thrive in the modern digital age.
#AI
#ChatGPT
#TechFuture
#Innovation
#DigitalLiteracy
Regards,
M@
[ED: If you’d like to sign up for this content as an email, click here to join the mailing list.]
Originally published on quantumfaxmachine.com and cross-posted on Medium.
hello@matthewsinclair.com | matthewsinclair.com | bsky.app/@matthewsinclair.com | masto.ai/@matthewsinclair | medium.com/@matthewsinclair | xitter/@matthewsinclair