
Eliezer Yudkowsky on Alignment: can it be regulated for?


Yudkowsky is one of the leading figures on matters of AI alignment. This is a one-hour discussion from the Center for Future Mind and the Gruber Sandbox at Florida Atlantic University. He recently gave a TED talk on the subjects raised here, but this conversation explores them at greater length and depth.

Early in the discussion Yudkowsky states:

'Just this very day... China released its own preliminary set of regulations or something for AI models. It's actually stricter than what we've got. Possibly it was written by somebody who didn't quite understand how this works, because it's things like: all of the data that you're training it on needs to be, like, honest and accurate! So possibly regulations that are not factual.'

This is one of the significant issues with regulation as a means of controlling AI development. It requires a level of expertise in governance not often seen, it requires laws to be fit for purpose, and those laws should not be so reactive to current technologies that they miss what will already have happened by the time the legislation is passed.

It's often pointed out that we can do this: look at the example of human cloning, where there has been a global consensus on halting research. But that is a relatively easy area to legislate for; the goals it sets are clear, and compliance can, by and large, be monitored. What are the equivalent obvious goals for regulation of AI research and practice?



 

  Neel Nanda is involved in Mechanistic Interpretability research at DeepMind, formerly of AnthropicAI, what's fascinating about the research conducted by Nanda is he gets to peer into the Black Box to figure out how different types of AI models work. Anyone concerned with AI should understand how important this is. In this video Nanda discusses some of his findings, including 'induction heads', which turn out to have some vital properties.  Induction heads are a type of attention head that allows a language model to learn long-range dependencies in text. They do this by using a simple algorithm to complete token sequences like [A][B] ... [A] -> [B]. For example, if a model is given the sequence "The cat sat on the mat," it can use induction heads to predict that the word "mat" will be followed by the word "the". Induction heads were first discovered in 2022 by a team of researchers at OpenAI. They found that induction heads were present in