
Klein on the climate crisis and AI's role

Naomi Klein is the bestselling author of No Logo and The Shock Doctrine, and Professor of Climate Justice and Co-director of the Centre for Climate Justice at the University of British Columbia. In a recent Guardian article, 'AI machines aren't "hallucinating". But their makers are', Klein sets out an argument exploding the tech hypesters' myth-peddling, covering climate, governance, trust in tech corporations, and AI's promise to save us from drudgery.

Last year, the top tech companies spent a record $70m to lobby Washington – more than the oil and gas sector – and that sum, Bloomberg News notes, is on top of the millions spent "on their wide array of trade groups, non-profits and thinktanks." – Klein

The context of the tech companies' lobbying power, together with the familiarity all policy makers and legislators will have with the brand names involved - using some of these corporate products on a daily basis - is vital to framing the likely effects of the lobbying efforts.

A summary of Klein's argument disputing the claim that AI will assist in addressing climate change is as follows:

  • AI boosters are quick to acknowledge the fallibility of their machines, but they also promote the idea that these machines are on the cusp of sparking an evolutionary leap for humanity.
  • This is a dangerous hallucination, as it ignores the fact that AI is still in its early stages of development and is heavily influenced by the biases of its creators.
  • In order for AI to be truly beneficial to humanity, it needs to be developed and deployed in a way that is aligned with our values and goals.
  • Unfortunately, the current economic and social order is built on the extraction of wealth and profit, which is likely to lead to the use of AI for further dispossession and despoliation.
  • Klein argues that the utopian hallucinations about AI are being used to cover up the largest and most consequential theft in human history.
  • The wealthy tech companies are unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products.
  • This is illegal, as it violates the copyrights of the artists and creators whose work was used to train these AI models.
  • A movement of artists is challenging this theft, and they are calling for the tech companies to pay artists for their work.
  • AI is often touted as a solution to the climate crisis, but this is a false promise.
  • Klein points out that we already know what we need to do to address climate change: reduce emissions, leave carbon in the ground, and tackle overconsumption.
  • Klein argues that the reason we have not taken these steps is not because we do not know what to do, but because doing so would challenge the current economic system, which is based on the extraction of resources and the consumption of goods.
  • Klein concludes that AI is not a solution to the climate crisis, but rather a symptom of it.
It is good to gradually see critical responses that don't just argue about GPT from a technical standpoint, but from the context of the late-stage capitalism we have floundered into, the pressing issues we know must be solved, and the 'realities' and 'hallucinations' projected upon us that delay action.
