QFM008: Irresponsible AI Reading List - February 2024
Source: Image by DALL-E 2
Quantum Fax Machine
Here is everything I found interesting about Irresponsible AI during February 2024.
From the disappointment of the Glasgow Willy Wonka experience, which serves as a metaphor for the sometimes stark difference between expectations and reality in the digital age, to the recall of GM's Cruise driverless cars after a pedestrian accident, these articles collectively underscore the complexities and unintended consequences of integrating advanced technologies into everyday life.
A common theme across the list is the tension between technological advancement and ethical responsibility. Whether it's Google pausing its Gemini AI's ability to generate images due to diversity errors, concerns over privacy and consent with facial recognition technology at the University of Waterloo, or the unpredictability of AI as evidenced by ChatGPT's temporary lapse into nonsensical responses, each story reflects the critical need for more reliable, interpretable, and ethically grounded technological solutions.
Interestingly, amidst these cautionary tales, a counter-narrative emerges from a professor dismissing the fear-mongering around AI as irresponsible, reminding readers of the importance of balanced perspectives on the potential and pitfalls of artificial intelligence.
See the Slideshare version of the post, or read on.
Enjoy!

Links
Cruise is recalling 950 driverless cars nationwide after an incident where a robotaxi failed to stop in time, hitting and dragging a pedestrian in San Francisco. The recall, sparked by concerns over the collision detection system, marks a significant setback for GM's Cruise, amidst growing scrutiny over its autonomous vehicle technology.
Late on Wednesday, 21st of February, it seemed like ChatGPT briefly lost its mind. According to [OpenAI](https://status.openai.com/incidents/ssg8fh7sfyz3) the incident involved a bug introduced during an optimisation attempt that affected how ChatGPT processes language, leading to nonsensical responses. The issue was quickly identified and resolved. Gary Marcus [discussed](https://garymarcus.substack.com/p/chatgpt-has-gone-berserk) the incident, highlighting the unpredictable nature of AI systems and the importance of developing more interpretable, maintainable, and debuggable technologies. He framed the incident as a wake-up call for the need for trustworthy AI, emphasising the challenges of ensuring AI safety and stability. As always, the Hacker News comments on [each](https://news.ycombinator.com/item?id=39450669) [article](https://news.ycombinator.com/item?id=39462087) are enlightening.
This article discusses the growing concern over disinformation and fake news, highlighting historical anecdotes of art forgery to illustrate the ease and potential dangers of spreading falsehoods. It emphasises the importance of discernment in an era where technology makes creating and spreading fake information easier than ever, warning of the serious consequences disinformation could have on society and democracy.
The "Glasgow Willy Wonka Experience", intended as an immersive chocolate celebration, was cancelled and refunds were issued after attendees, including children left in tears, encountered a lacklustre setup in a sparsely decorated warehouse, far from the promised magical environment. The story went from niche to mainstream very quickly, and is perhaps emblematic of how far from reality _some_ of the outputs of generative AI can be. Here's how people on [Threads](https://www.threads.net/@culturecrave/post/C31vAj_rUII){:target="_blank"}, [Xitter](https://twitter.com/AlsikkanTV/status/1762235022851948668){:target="_blank"}, and the [BBC](https://www.bbc.co.uk/news/uk-scotland-glasgow-west-68431728){:target="_blank"} saw it. And here's how [The House of Illuminati](https://houseofilluminati.com/){:target="_blank"} (the creators) saw it. You be the judge.
Smart vending machines at the University of Waterloo are to be removed after students raised privacy concerns over an error message suggesting the use of facial recognition technology without their consent.
Google has halted the ability of its Gemini AI to create images of people due to errors in accurately representing historical figures and diversity, leading to the generation of misleading representations. The company is working on improving the feature for re-release.
Instead, it wrote poems about the company's incompetence and used swear words. Also related: [someone managed to find out the System Prompt that they were using](https://twitter.com/RandomRocker/status/1748357898843963649){:target="_blank"}.
The article discusses the viewpoint of a professor who believes that the widespread alarmism and fear-mongering about artificial intelligence (AI) are irresponsible. He argues that while AI technology, such as generative AI, is advancing, it is far from posing an existential threat to humanity and emphasises the importance of responsible development and ethical use of AI.
Regards,
M@
[ED: If you'd like to sign up for this content as an email, click here to join the mailing list.]
Originally published on quantumfaxmachine.com and cross-posted on Medium.
hello@matthewsinclair.com | matthewsinclair.com | bsky.app/@matthewsinclair.com | masto.ai/@matthewsinclair | medium.com/@matthewsinclair | xitter/@matthewsinclair
Was this useful?