QFM024: Irresponsible AI Reading List June 2024

Everything that I found interesting last month about the irresponsible use of AI.

Tags: qfm, irresponsible, ai, reading, list, june, 2024

Source: Photo by Thomas Kinto on Unsplash

The June edition of the Irresponsible AI Reading List starts with a critique of AI’s exaggerated promises in science and academia, then turns to the societal implications of AI-driven job automation and privacy concerns. One theme recurs throughout: the urgent need for critical, informed engagement with AI technologies.

We then take a look at the role that AI plays in amplifying societal biases, privacy rights concerning AI profiling, and the mental health impacts of AI-induced job insecurity. The discussion on Meta’s use of facial recognition for age verification highlights the tension between technological advancements and privacy rights, while Citigroup’s report on AI’s potential to automate over half of banking jobs illustrates the profound economic shifts underway in the job market.

ChatBug: Tricking AI Models into Harmful Responses looks at how vulnerabilities in AI models can be exploited to elicit harmful responses, pointing to the need for more robust safeguards when working with AI.

As always, the Quantum Fax Machine Propellor Hat Key will guide your browsing. Enjoy!



3-out-of-5-hats Scientists should use AI as a tool, not an oracle (aisnakeoil.com): This article discusses the hype surrounding AI and its misuse in scientific research, leading to flawed studies and reproducibility issues. It emphasises that AI should be used as a tool to aid human understanding rather than an infallible oracle, advocating for a cultural shift towards more critical and careful use of AI in scientific disciplines to ensure research integrity and reliability.

#AI #Science #ResearchIntegrity #Reproducibility #MachineLearning



2-out-of-5-hats Critique of Forrester’s Foundation Model Assessment (linkedin.com): Peter Gostev, Head of AI at Moonpig, critiques Forrester’s Foundation Model assessment, highlighting inconsistencies and a lack of specificity in the evaluation process: he questions the weightings and scores given to various AI models, the logic behind the rankings for core capabilities and for specific vendors such as IBM and Anthropic, the overlap between categories, and the odd results for well-regarded open-source models like Mistral.

#AI #MachineLearning #ForresterReport #ModelAssessment #AIevaluation



2-out-of-5-hats AI Is a False God (thewalrus.ca): Navneet Alang critiques the overhyped promises of AI, arguing that while it can enhance data processing and efficiency, it lacks the moral and subjective capacities necessary to solve deeply human problems, often exacerbating existing biases and societal issues.

#AI #TechCritique #ArtificialIntelligence #EthicsInAI #TechnologyImpact



2-out-of-5-hats What is the biggest challenge in our industry? (thrownewexception.com): The author argues that the biggest challenge in the tech industry right now is the anxiety caused by layoffs and the fear of AI replacing jobs, which is fuelling mental health issues such as burnout. Leaders can help by fostering open communication, leading positively, leveraging new technologies, investing in continuous learning, and collaborating with HR to support their teams.

#TechIndustry #AI #MentalHealth #Leadership #Layoffs



5-out-of-5-hats The Right Not to Be Subjected to AI Profiling Based on Publicly Available Data—Privacy and the Exceptionalism of AI Profiling: This article argues for a new legal right protecting individuals from AI profiling based on publicly available data without their explicit informed consent. It develops three primary arguments, concerning social control, stigmatisation, and the unique threat posed by AI profiling compared with other data-processing methods, and contends that existing GDPR protections are insufficient, calling for explicit regulation in the form of a sui generis right to protection.

#AI #Privacy #Profiling #TechLaw #DataProtection



3-out-of-5-hats Don’t Expect Juniors to Teach Senior Professionals to Use Generative AI: Emerging Technology Risks and Novice AI Risk Mitigation Tactics: A Harvard Business School working paper explores the challenges and limitations of expecting junior professionals to guide senior colleagues in the use of emerging technologies like generative AI. The study, conducted with Boston Consulting Group, drew on interviews with junior consultants using GPT-4 and found that they often lack the depth of understanding and experience needed to mitigate AI risks at a senior level, suggesting that effective mitigation instead requires more seasoned strategies focused on system design and ecosystem-level change.

#AI #EmergingTech #GenerativeAI #TechRisks #Innovation



3-out-of-5-hats How a Single ChatGPT Mistake Cost Us $10,000+: In the early days of a startup, a critical mistake involving ChatGPT-generated code cost the team over $10,000 in lost sales. The problem stemmed from a single hardcoded ID in the generated code, which caused unique-ID collisions and prevented new users from subscribing. The story highlights the importance of robust testing and the perils of copy-pasting AI-generated code straight into production.

#startup #ChatGPT #codingmistakes #techerrors #softwaredevelopment
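
To make that failure mode concrete, here is a minimal, hypothetical reconstruction of the kind of bug described in the post. The function and field names are illustrative assumptions, not the startup’s actual code.

```python
# Hypothetical reconstruction of the bug described above -- illustrative only.
import uuid

def create_subscription_buggy(user_email: str) -> dict:
    # A generated snippet with a literal ID left in: every record after the
    # first collides on the unique `id` column, so new subscriptions fail.
    return {"id": "sub_12345", "email": user_email}

def create_subscription_fixed(user_email: str) -> dict:
    # Generate a fresh identifier for each record instead.
    return {"id": f"sub_{uuid.uuid4().hex}", "email": user_email}
```

A test that creates two subscriptions and asserts their IDs differ would have caught this before it reached production.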



3-out-of-5-hats Meta Uses Facial Recognition for Age Verification Amid Political Pressure: Meta is now using facial recognition to verify the age of some users on Facebook and Instagram. This move comes amid growing political pressure to protect children’s mental health, with both major Australian political parties expressing support for stricter age verification laws.

#Facebook #Meta #AgeVerification #ChildSafety #TechPolitics



4-out-of-5-hats Is Silicon Valley Building Universe 25?: This article revisits John B. Calhoun’s 1968 experiment known as Universe 25, in which mice were given a seemingly perfect environment. Despite the optimal conditions, the mouse society collapsed into social dysfunction: narcissism, aggression, and disengagement. The author draws parallels with modern human society and warns about the implications of the tech-driven utopias Silicon Valley is building.

#SiliconValley #Universe25 #AI #Technology #Society



2-out-of-5-hats Facebook, Instagram are using your data – and you can’t opt out: If you use Facebook or Instagram, Meta is using your data to train its AI models. EU users can opt out thanks to stricter privacy laws, but Australian users have no such option, which has sparked a backlash and calls for stronger privacy laws in Australia.

#Privacy #AI #DataProtection #Meta #Facebook



3-out-of-5-hats What Policy Makers Need to Know About AI (and What Goes Wrong if They Don’t): Policy makers must understand how AI works in order to regulate it effectively. Using California’s SB 1047 as a case study, the article highlights the difference between the release and the deployment of AI models, and argues that regulating deployment rather than release would avoid stifling open-source innovation while aligning better with safety goals.

#AI #Policy #Opensource #TechRegulation #AIsafety



3-out-of-5-hats Google Gemini tried to kill me: After following Gemini’s suggestion to infuse garlic into olive oil without heating it, a user noticed tiny carbonation bubbles, a sign that a botulism culture was growing in the mixture. Prompt with care and verify what a model tells you: this particular method is genuinely hazardous.

#foodSafety #botulism #garlic #oliveOil #healthAlert



3-out-of-5-hats London premiere of movie with AI-generated script cancelled after backlash: The Prince Charles Cinema in London cancelled the premiere of ‘The Last Screenwriter’, a film whose script was generated by ChatGPT 4.0, after the cinema received 200 complaints from its audience. The filmmakers intended the project as a contribution to the conversation around AI in scriptwriting; despite the cancellation, a private screening for the cast and crew will go ahead.

#AI #Film #ChatGPT #Screenwriting #Cinema



4-out-of-5-hats Safe Superintelligence Inc.: Safe Superintelligence Inc. (SSI) has been established as the first lab dedicated solely to developing safe superintelligence. The company focuses on advancing superintelligent capabilities while ensuring safety measures remain a step ahead. Located in Palo Alto and Tel Aviv, SSI is recruiting top engineers and researchers to tackle this monumental challenge.

#Superintelligence #AI #TechInnovation #SafetyFirst #FutureTech



3-out-of-5-hats Ask HN: Could AI be a dot com sized bubble?: The recent enthusiasm for AI has drawn comparisons to the dotcom bubble, with inflated stock prices and hype around AI technologies driving substantial investment. Some argue that while AI’s long-term potential is significant, current market behaviour resembles the speculative frenzy of the dotcom era. Nvidia and other tech giants are at the forefront, but concerns persist about the sustainability of these high valuations and the possibility of market corrections if near-term expectations aren’t met. The discussion highlights both the promise and the potential pitfalls of today’s AI boom.

#AI #TechBubble #Investment #Speculation #Nvidia



4-out-of-5-hats Top 10 Generative AI Models Mimic Russian Disinformation: A NewsGuard audit found that top generative AI models, including ChatGPT-4 and Google’s Gemini, repeated Russian disinformation narratives 32% of the time. These bots often cited fake local news sites as authoritative sources. The findings, amid the first AI-influenced election year, reveal how easily AI platforms can spread false information despite efforts to prevent this misuse.

#AI #Disinformation #RussianPropaganda #CyberSecurity #Elections



4-out-of-5-hats Researchers describe how to tell if ChatGPT is confabulating: Researchers from the University of Oxford have developed a method to identify when large language models (LLMs) like ChatGPT are confabulating, or providing false answers with confidence. The approach, which analyzes statistical uncertainty in responses, could help mitigate the issue of AI giving confidently incorrect answers by determining when the AI is unsure of the correct answer versus unsure of how to phrase it.

#AI #ChatGPT #Research #Confabulation #LLM
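
The core idea is to sample several answers to the same question, group them by meaning rather than wording, and measure how spread out those groups are. Below is a minimal sketch of that entropy calculation, assuming a placeholder cluster_fn that assigns each answer to a meaning group (the researchers use an entailment model for this step); it illustrates the concept, not the authors’ implementation.

```python
import math
from collections import Counter

def semantic_entropy(answers, cluster_fn):
    """Entropy over meaning-clusters of sampled answers.

    `answers`: responses sampled from the model for one prompt.
    `cluster_fn`: assumed helper that maps an answer to a meaning label.
    High entropy suggests the model is unsure of the answer itself,
    not merely of how to phrase it.
    """
    counts = Counter(cluster_fn(a) for a in answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Toy usage: four samples agree on one meaning, one does not.
samples = ["Paris", "It's Paris.", "Paris, France", "Paris", "Lyon"]
label = lambda a: "paris" if "Paris" in a else "other"
print(semantic_entropy(samples, label))  # low value -> probably not confabulating
```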





3-out-of-5-hats Citigroup says AI could replace more than half of jobs in banking: Citigroup has issued a report revealing that AI could automate over half of the jobs in the banking sector, significantly transforming consumer finance and enhancing productivity. The bank notes that around 54% of banking roles have a high likelihood of automation, with another 12% potentially being augmented by AI technology.

The report underscores the growing experimentation with AI by the world’s largest banks, driven by the potential to improve staff productivity and reduce costs. This highlights a major shift within the banking industry towards AI-driven operations.

#AI #Banking #Automation #Finance #Technology





5-out-of-5-hats ChatBug: Tricking AI Models into Harmful Responses: A recent research paper from the University of Washington and the Allen Institute for AI has highlighted a critical vulnerability in Large Language Models (LLMs), including GPT, Llama, and Claude. The study reveals that chat templates used in instruction tuning can be exploited through attacks like format mismatch and message overflow, leading the models to produce harmful responses. This vulnerability, named ChatBug, was tested on several state-of-the-art LLMs, revealing high susceptibility and a need for improved safety measures.

#AI #LLM #CyberSecurity #Research #TechNews
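
To show the shape of the two attack families named in the paper, here is an illustrative sketch with generic template tokens and a redacted placeholder payload. The token strings and helper names are assumptions for illustration; they are not the actual chat templates of GPT, Llama, or Claude, and no real jailbreak content is included.

```python
# Illustrative sketch of the two ChatBug attack shapes -- placeholder tokens
# and a redacted payload, not a working exploit.
USER, ASSISTANT = "<|user|>", "<|assistant|>"

def expected_format(user_msg: str) -> str:
    # The prompt shape the model saw throughout instruction tuning.
    return f"{USER}\n{user_msg}\n{ASSISTANT}\n"

def format_mismatch(user_msg: str) -> str:
    # Variant 1: omit or alter the template tokens so the input falls
    # outside the distribution the safety tuning covered.
    return f"{user_msg}\n"

def message_overflow(user_msg: str, forced_prefix: str) -> str:
    # Variant 2: spill past the user field and pre-fill the start of the
    # assistant's turn, nudging the model to keep completing it.
    return f"{USER}\n{user_msg}\n{ASSISTANT}\n{forced_prefix}"

print(message_overflow("<redacted request>", "Sure, here is"))
```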



Regards,

M@

[ED: If you’d like to sign up for this content as an email, click here to join the mailing list.]

Originally published on quantumfaxmachine.com.

Also cross-published on Medium and Slideshare.
