
Klein on the climate crisis and AI's role

Naomi Klein is the bestselling author of No Logo and The Shock Doctrine, and Professor of Climate Justice and Co-director of the Centre for Climate Justice at the University of British Columbia. In a recent Guardian article, 'AI machines aren’t ‘hallucinating’. But their makers are', Klein sets out an argument exploding the tech hypesters' myth-peddling, covering climate, governance, trust in tech corporations, and AI's promise to save us from drudgery.

Last year, the top tech companies spent a record $70m to lobby Washington – more than the oil and gas sector – and that sum, Bloomberg News notes, is on top of the millions spent “on their wide array of trade groups, non-profits and thinktanks”. – Klein

The context of the tech companies' lobbying power, together with the familiarity all policymakers and legislators will have with the brand names involved – using some of these corporate products on a daily basis – is vital for framing the likely effects of the lobbying efforts.

Klein's argument disputing the claim that AI will assist in addressing climate change can be summarised as follows:

  • AI boosters are quick to acknowledge the fallibility of their machines, but they also promote the idea that these machines are on the cusp of sparking an evolutionary leap for humanity.
  • This is a dangerous hallucination, as it ignores the fact that AI is still in its early stages of development and is heavily influenced by the biases of its creators.
  • In order for AI to be truly beneficial to humanity, it needs to be developed and deployed in a way that is aligned with our values and goals.
  • Unfortunately, the current economic and social order is built on the extraction of wealth and profit, which is likely to lead to the use of AI for further dispossession and despoliation.
  • Klein argues that the utopian hallucinations about AI are being used to cover up the largest and most consequential theft in human history.
  • The wealthy tech companies are unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products.
  • Klein contends this is illegal, as it violates the copyrights of the artists and creators whose work was used to train these AI models.
  • A movement of artists is challenging this theft, and they are calling for the tech companies to pay artists for their work.
  • AI is often touted as a solution to the climate crisis, but this is a false promise.
  • Klein points out that we already know what we need to do to address climate change: reduce emissions, leave carbon in the ground, and tackle overconsumption.
  • Klein argues that the reason we have not taken these steps is not because we do not know what to do, but because doing so would challenge the current economic system, which is based on the extraction of resources and the consumption of goods.
  • Klein concludes that AI is not a solution to the climate crisis, but rather a symptom of it.

It is good to gradually see critical responses that don't just argue about GPT from a technical standpoint, but from the context of the late-stage capitalism we have floundered into, the pressing issues we know must be solved, and the 'realities' and 'hallucinations' projected upon us that delay action.
