Yudkowsky is one of the leading figures on matters of AI alignment. This is a one-hour discussion hosted by the Center for Future Mind and the Gruber Sandbox at Florida Atlantic University. He recently gave a TED talk on the subjects raised here, but this discussion covers them at greater length and depth.
Early in the discussion Yudkowsky states:
'Just this very day... China released its own preliminary set of regulations or something for AI models. It's actually stricter than what we've got. Possibly it was written by somebody who didn't quite understand how this works, because it's things like: all of the data that you're training it on needs to be, like, honest and accurate! So possibly regulations that are not factual.'
This highlights one of the significant problems with regulation as a means of controlling AI development. It demands a level of technical expertise rarely seen in government, it requires laws that are fit for purpose, and those laws must not be so reactive to current technology that they are already outdated by the time the legislation has passed.
It's often pointed out that we can do this: look at human cloning, where a global consensus has halted research. But that is a relatively easy area to legislate for. The goals it sets are clear, and compliance can, by and large, be monitored. What are the equivalent obvious goals for regulating AI research and practice?