
Klein on the climate crisis and AI's role

Naomi Klein is the bestselling author of No Logo and The Shock Doctrine, and Professor of Climate Justice and Co-director of the Centre for Climate Justice at the University of British Columbia. In a recent Guardian article, "AI machines aren't 'hallucinating'. But their makers are", Klein sets out an argument exploding the tech hypesters' myth-peddling, covering climate, governance, trust in tech corporations, and AI's promise to save us from drudgery.

Last year, the top tech companies spent a record $70m to lobby Washington – more than the oil and gas sector – and that sum, Bloomberg News notes, is on top of the millions spent "on their wide array of trade groups, non-profits and thinktanks." – Klein

The context of the tech companies' lobbying power, together with the familiarity all policymakers and legislators will have with the brand names involved – many of them using these corporate products on a daily basis – is vital for framing the likely effects of the lobbying efforts.

A summary of Klein's argument disputing the claim that AI will assist in addressing climate change follows:

  • AI boosters are quick to acknowledge the fallibility of their machines, but they also promote the idea that these machines are on the cusp of sparking an evolutionary leap for humanity.
  • This is a dangerous hallucination, as it ignores the fact that AI is still in its early stages of development and is heavily influenced by the biases of its creators.
  • In order for AI to be truly beneficial to humanity, it needs to be developed and deployed in a way that is aligned with our values and goals.
  • Unfortunately, the current economic and social order is built on the extraction of wealth and profit, which is likely to lead to the use of AI for further dispossession and despoliation.
  • Klein argues that the utopian hallucinations about AI are being used to cover up the largest and most consequential theft in human history.
  • The wealthy tech companies are unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products.
  • This is illegal, as it violates the copyrights of the artists and creators whose work was used to train these AI models.
  • A movement of artists is challenging this theft, and they are calling for the tech companies to pay artists for their work.
  • AI is often touted as a solution to the climate crisis, but this is a false promise.
  • Klein points out that we already know what we need to do to address climate change: reduce emissions, leave carbon in the ground, and tackle overconsumption.
  • Klein argues that the reason we have not taken these steps is not because we do not know what to do, but because doing so would challenge the current economic system, which is based on the extraction of resources and the consumption of goods.
  • Klein concludes that AI is not a solution to the climate crisis, but rather a symptom of it.
It is good to see, gradually, critical responses that argue about GPT not just from a technical standpoint, but from the context of the late-stage capitalism we have floundered into, the pressing issues we know must be solved, and the 'realities' and 'hallucinations' projected upon us that delay action.

