
Harari on AI and the future of humanity


I have seen a few discussions and lectures from Harari on the subject of AI, and this may be the best so far. The questions are pointed, which certainly helps. Harari tends to bring a different perspective to the debate on AI safety, which is of value. It's well worth watching the whole video; below is a snippet.


Harari: So, we need to know three things about AI. First of all, AI is still just a tiny baby. We haven't seen anything yet. Real AI, deployed into the world, not in a laboratory or in science fiction, is only about 10 years old.

If you look at the wonderful scenery outside, with all the plants and trees, and think about biological evolution: the evolution of life on Earth took something like 4 billion years. It took 4 billion years to reach these plants and to reach us, human beings.

Now, AI is at the stage of, I don't know, amoebas. It's like 4 billion years ago, and the first living organisms are crawling out of the organic soup. ChatGPT and all these wonders are the amoebas of the AI world.

What would a T-Rex look like, and how long would it take for the AI amoebas to evolve into T-Rexes? It won't take billions of years. Maybe it will only take a few decades or a few years.

That's because the evolution of AI is on a completely different timescale from the evolution of organic beings. AI itself works on a different timescale. AI is always on; computers in general are always on. Humans and other organic beings live, exist, and develop in cycles. We need to rest sometimes. AI never needs to rest.

Now, the other two things we need to know about AI are these:

It is the first technology ever that can make decisions by itself.

I hear a lot of people saying, "Oh, all these worries about AI. Every time there is a new technology, people worry about it, and afterwards it's okay. Like when people invented writing, printing presses, and airplanes, they were so worried, and in the end it was okay. This will be the same."

But it's not the same. No previous technology in history could make decisions. Even an atom bomb actually empowered humans, because an atom bomb can destroy a city, but it cannot decide which city to bomb. You always need a human to make that decision. AI is the first technology that can make decisions by itself, even about us.

Increasingly, when we apply to a bank for a loan, it is an AI that makes the decision about us. So it takes power away from us.

The second of these is that AI is the first technology ever that can create new ideas.

The printing press, radio, and television broadcast and spread ideas created by the human brain, by the human mind. They cannot create a new idea. For example, Gutenberg printed the Bible in the middle of the 15th century. The printing press printed as many copies of the Bible as Gutenberg instructed it to, but it did not create a single new page. It had no ideas of its own about the Bible, such as whether it is good or bad, or how to interpret it.

AI can create new ideas, and can even write a new Bible. Throughout history, religions have dreamed about having a book written by a superhuman intelligence, by a non-human entity. Every religion claims that its book is the only true book, and that all the other books are written by humans, but our book is different. Our book came from some superhuman intelligence.

In a few years, there might be religions that are actually correct. Just think about a religion whose holy book is written by an AI. That could be a reality in a few years.

