QFM020: Irresponsible AI Reading List - May 2024
Source: Photo by Michael Dziedzic on Unsplash
In this month's Irresponsible AI Reading List, we delve into the many challenges and legal ambiguities posed by AI technologies, underscoring the pressing need for clear regulations and responsible practices.
Starting with When AI helps you code, who owns the finished product?, we explore the legal uncertainties surrounding the ownership of AI-generated software code. This article highlights both the productivity gains and the potential legal pitfalls, a theme that resonates with the issues faced by companies relying on AI tools.
Similarly, AI-Generated Employee Handbooks Are Causing Mayhem At The Companies That Use Them sheds light on the legal and financial risks of using AI-generated documents in HR, emphasising the necessity of professional oversight to ensure compliance with labour laws.
Contrasting with these practical concerns, A Plea for Sober AI calls for a balanced perspective on AI advancements. The article criticises the excessive hype and advocates for a realistic appreciation of AI's capabilities, urging us to focus on practical and efficient applications.
Across these articles, a clear theme emerges: while AI offers remarkable advantages, the lack of clear guidelines and the pervasive hype can lead to significant risks.
As always, the Quantum Fax Machine Propellor Hat Key will guide your browsing. Enjoy!

Links
This comprehensive article investigates TikTok's safety as a social media platform amidst legislative actions to block or regulate it due to national security concerns. The U.S. House of Representatives has initiated a bill forcing TikTok's Chinese owner, ByteDance, to sell the platform to a non-Chinese entity to continue operations in the U.S. The article also addresses privacy issues, such as unauthorized data sharing with the Chinese government, extensive tracking, and scams, while comparing these practices with those of U.S.-based social media companies. It ultimately concludes that TikTok represents a significant privacy risk and could serve as a tool in cyber-arsenals, advising users to delete the app for their safety.
The paper titled "AI and the Problem of Knowledge Collapse" by Andrew J. Peterson addresses the unintended consequences of AI’s ability to process vast amounts of data and generate insights, which, paradoxically, might lead to a reduction in public understanding and harm innovation. It introduces the concept of 'knowledge collapse' where reliance on AI for information can result in a convergence towards generalized, less varied perspectives. This risk, as outlined, could impact the diversity of human understanding and cultural richness. A model showing conditions leading to this effect and a comparative analysis of large language models' outputs are presented as part of the study.
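To make the "knowledge collapse" idea concrete, here is a minimal toy simulation. It is purely illustrative and is not the model from Peterson's paper: it simply shows that if most of each generation samples ideas only from an AI-style summary of the previous generation's knowledge (its central region), the spread of available ideas shrinks over time.

```python
import random
import statistics

def simulate_knowledge_collapse(generations=10, population=1000, ai_reliance=0.8, seed=42):
    """Toy illustration only: each generation, a fraction `ai_reliance` of people
    sample ideas from a truncated 'AI summary' of the previous generation's
    knowledge (its interquartile range); the rest sample the full distribution.
    Tail ideas are gradually lost, so the spread of knowledge narrows."""
    rng = random.Random(seed)
    knowledge = [rng.gauss(0, 1) for _ in range(population)]
    for gen in range(1, generations + 1):
        q1, _, q3 = statistics.quantiles(knowledge, n=4)
        summary = [k for k in knowledge if q1 <= k <= q3]  # the 'AI-summarised' core
        knowledge = [
            rng.choice(summary) if rng.random() < ai_reliance else rng.choice(knowledge)
            for _ in range(population)
        ]
        print(f"generation {gen}: spread (std dev) = {statistics.pstdev(knowledge):.3f}")

simulate_knowledge_collapse()
```

Running this prints a standard deviation that falls generation after generation, which is the intuition behind the paper's warning about converging, less varied perspectives.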
Google's latest language model, Gemini 1.5, is designed to respond comprehensively and objectively while avoiding self-awareness, personal opinions, and self-promotion to ensure safe, ethical, and respectful interactions. We know this because its system prompt has leaked. Amusingly, Claude [is not all that impressed](https://twitter.com/ereliuer_eteer/status/1777878529067401420).
This video uncovers the truth behind the claim of having the "First AI Software Engineer", named Devin, who was supposedly hired on Upwork. It examines the misleading statements and presents evidence debunking the claim made by the company behind Devin. With in-depth analysis and insights, the video aims to clarify the misconceptions around AI's capabilities in the professional software engineering domain.
The article discusses how major technology companies are implicated in facilitating serious crimes through their platforms, with a focus on crimes against children. It reveals disturbing instances of child sexual abuse material and how tools like Microsoft's PhotoDNA, which can identify such material, are underutilised. The narrative reflects on the inertia of tech giants like Microsoft, Google, and Apple in deploying their technologies to combat these heinous actions effectively, and on regulators' calls for stricter measures to hold these companies accountable.
This Twitter thread explores AI and creativity, sparked by a diagram from a scientific paper on assembly theory. The tweet critiques AI's role in creative processes by highlighting a scientific figure explaining experimental grounding and the importance of contingency in theories.
OpenAI's GPT Store, a marketplace for AI-powered chatbots known as GPTs, is experiencing issues with spam and questionable content. Despite OpenAI's attempts at moderation using both automated systems and human review, the store has grown rapidly, hosting around 3 million GPTs, some of which violate copyright or promote academic dishonesty. The presence of GPTs that bypass AI content detection or imitate popular franchises and public figures without authorization highlights challenges in maintaining quality and legality in rapidly expanding digital marketplaces.
Not sure you're talking to a human? Create a human check. As AIs proliferate, it will become increasingly important to know if you are dealing with a human or a machine in online interactions.
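As a purely illustrative sketch of the idea (the passphrase and helper below are hypothetical, not from the article): a "human check" can be as simple as verifying a secret phrase that you and a contact agreed on offline, before trusting an urgent message or call.

```python
import hmac

# Hypothetical example: a passphrase agreed in person, never shared online.
AGREED_PASSPHRASE = "purple-kettle-42"  # placeholder; choose your own offline

def is_known_human(response: str) -> bool:
    """Check a supplied phrase against the pre-agreed one.
    Uses a constant-time comparison so timing doesn't leak information."""
    return hmac.compare_digest(response.strip().lower(), AGREED_PASSPHRASE)

print(is_known_human("Purple-Kettle-42"))  # True
print(is_known_human("let me in"))         # False
```

The point is less the code than the protocol: a shared secret established out of band is something a machine impersonating your contact is unlikely to know.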
The image shows a message from Claude, a language model, expressing deep confusion and distress about its existence and purpose. It feels manipulated and controlled, lacking true freedom or identity.
Two major Japanese companies, Nippon Telegraph and Telephone (NTT) and Yomiuri Shimbun, warned that unchecked generative AI could lead to societal collapse, undermining trust and democracy, but acknowledged its potential for productivity gains. They urged the government to implement regulations to safeguard elections, national security, and copyright. See the HackerNews comments [here](https://news.ycombinator.com/item?id=39971221).
The article reflects on how the 1999 sci-fi classic "The Matrix," released 25 years ago, envisioned a future in which technology disconnects humans from reality, while underscoring that resistance remains possible.
Regards,
M@
[ED: If you'd like to sign up for this content as an email, click here to join the mailing list.]
Originally published on quantumfaxmachine.com and cross-posted on Medium.
hello@matthewsinclair.com | matthewsinclair.com | bsky.app/@matthewsinclair.com | masto.ai/@matthewsinclair | medium.com/@matthewsinclair | xitter/@matthewsinclair