
Eliezer Yudkowsky on Alignment: Can It Be Regulated For?


Yudkowsky is one of the leading figures on matters of AI Alignment. This is a one-hour discussion from the Center for Future Mind and the Gruber Sandbox at Florida Atlantic University. He recently gave a TED talk on the subjects raised here, but this discussion covers them at greater length and depth.

Early in the discussion Yudkowsky states:

'Just this very day... China released its own preliminary set of regulations or something for AI models. It's actually stricter than what we've got. Possibly it was written by somebody who didn't quite understand how this works, because it's things like: all of the data that you're training it on needs to be, like, honest and accurate! So possibly regulations that are not factual.'

This is one of the significant issues with regulation as a means of controlling AI development. It requires a level of expertise in governance not often seen, it requires laws that are fit for purpose, and those laws must not be so reactive to current technology that they miss what the field will look like by the time the legislation has actually passed.

It's often pointed out that we can do this: look at the example of human cloning, where there has been a global consensus on halting research. But that is a relatively easy area to legislate for; the goals it sets are clear and, by and large, compliance can be monitored. What are the equivalent obvious goals for regulating AI research and practice?



 
