
Do you gamble? In the quest to become 'Gods', this gamble is critical


"Paul Christiano runs the Alignment Research Center, a non-profit research organization whose mission is to align future machine learning systems with human interests. Paul previously ran the language model alignment team at OpenAI, the creators of ChatGPT." 

The start of this video on the Bankless channel is shocking, especially given the position that Christiano previously held: 

'overall maybe you are getting like a 50/50 chance of doom shortly after you have AI systems that are at human level.'

Christiano doesn't get much more optimistic, as you can imagine from the prediction above:

'My default picture is like we have time to react, in terms of the nature of AI systems changing, their capabilities changing. With luck we have various kinds of smaller catastrophes occurring in advance, but I think one of the bad things about the actual catastrophe we are worried about is that it has dynamics similar to a human coup or revolution, where we don't have little baby coups from which we can see the rate at which these coups occur; it may just go straight to... the ship has sailed once AIs start taking over... I think you will probably have like years between people saying "that looks like a takeover risk" and when an actual takeover occurs. And that's pretty good, and that's why I'm a lot more optimistic... I'm like, well, I think people are wrong about the rate of progress; I think people will be able to see things that can generally be recognised as pretty concerning, prior to the actual catastrophe.'

On alignment, Christiano continues:

'The problem is really hard... Three categories that I would probably think about in terms of addressing this are, like, technical measures that can reduce the risk of takeover, measurements that can inform us about the risk of takeover and help us understand the relevant dynamics, and policy interventions.'

Should these predictions prove even vaguely accurate, questions arise about our responses. Are our current systems and institutions in any state to act decisively, in a coordinated and timely manner, in proportion to the identified threats, and on a short timeline?

My own considerations lead me to hypothesise that any 'intelligence' from an AGI will be an alien intelligence: it will certainly mimic a human, but its emotional intelligence will be lacking, its social intelligence non-existent, and its intrapersonal and interpersonal intelligence very limited. There is a distinct danger that we, as a species, fall for the mimicry in our desire to find a 'god'.
