
Klein on the climate crisis and AI's role

Naomi Klein is the bestselling author of No Logo and The Shock Doctrine, and Professor of Climate Justice and Co-director of the Centre for Climate Justice at the University of British Columbia. In a recent Guardian article, "AI machines aren't 'hallucinating'. But their makers are", Klein sets out an argument exploding the tech hypesters' myth-peddling, covering climate, governance, trust in tech corporations, and AI's promise to save us from drudgery.

Last year, the top tech companies spent a record $70m to lobby Washington – more than the oil and gas sector – and that sum, Bloomberg News notes, is on top of the millions spent "on their wide array of trade groups, non-profits and thinktanks". – Klein

The context of the tech companies' lobbying power, together with the familiarity all policymakers and legislators will have with the brand names involved - using some of these corporate products on a daily basis - is vital for framing the likely effects of the lobbying efforts.

A summary of Klein's argument disputing the claim that AI will assist in addressing climate change follows:

  • AI boosters are quick to acknowledge the fallibility of their machines, but they also promote the idea that these machines are on the cusp of sparking an evolutionary leap for humanity.
  • This is a dangerous hallucination, as it ignores the fact that AI is still in its early stages of development and is heavily influenced by the biases of its creators.
  • In order for AI to be truly beneficial to humanity, it needs to be developed and deployed in a way that is aligned with our values and goals.
  • Unfortunately, the current economic and social order is built on the extraction of wealth and profit, which is likely to lead to the use of AI for further dispossession and despoliation.
  • Klein argues that the utopian hallucinations about AI are being used to cover up the largest and most consequential theft in human history.
  • The wealthy tech companies are unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products.
  • This is illegal, as it violates the copyrights of the artists and creators whose work was used to train these AI models.
  • A movement of artists is challenging this theft, and they are calling for the tech companies to pay artists for their work.
  • AI is often touted as a solution to the climate crisis, but this is a false promise.
  • Klein points out that we already know what we need to do to address climate change: reduce emissions, leave carbon in the ground, and tackle overconsumption.
  • Klein argues that the reason we have not taken these steps is not because we do not know what to do, but because doing so would challenge the current economic system, which is based on the extraction of resources and the consumption of goods.
  • Klein concludes that AI is not a solution to the climate crisis, but rather a symptom of it.

It is good to see, gradually, critical responses that don't just argue about GPT from a technical standpoint, but from the context of the late-stage capitalism we have floundered into, the pressing issues we know must be solved, and the 'realities' and 'hallucinations' projected upon us that delay action.
