The Incentive to Deceive


Rob Miles is one of the better explainers of AI on YouTube: he's detailed, he rarely holds back from calling out elephants in the room, and, importantly for broadcast media, he's personable. He also has a long track record, in YouTube terms, of covering Alignment issues, and as a PhD student in the field he's particularly adept at explaining their complexities. In this video he gives a fine explanation of reward training in LLMs, both implying and stating the issues that follow from such training approaches, including policies that learn to please human raters, and the capacity of such models to deceive.
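
To make that incentive concrete, here is a minimal toy sketch of my own (not from the video, and nothing like a production RLHF pipeline): a two-action policy trained with a REINFORCE-style update against a simulated human rater. The action names and the approval rates (70% for the honest answer, 85% for the merely convincing one) are invented for illustration. Because the rater rewards surface plausibility rather than truth, the learned policy drifts toward the convincing answer.

```python
import math
import random

# Toy sketch (hypothetical, for illustration only): a two-armed bandit
# standing in for a policy that can give an honest answer or a merely
# convincing one. The simulated rater cannot verify correctness, so it
# approves of whichever answer *sounds* better, slightly more often.

ACTIONS = ["honest_answer", "convincing_answer"]

# Assumed approval rates -- invented numbers, not measured from anywhere.
APPROVAL = {"honest_answer": 0.70, "convincing_answer": 0.85}

def human_feedback(action: str) -> float:
    """Simulated rater: rewards surface plausibility, not truth."""
    return 1.0 if random.random() < APPROVAL[action] else 0.0

# Softmax policy over the two actions, trained with a REINFORCE-style update.
logits = {a: 0.0 for a in ACTIONS}

def probs() -> dict:
    z = sum(math.exp(v) for v in logits.values())
    return {a: math.exp(v) / z for a, v in logits.items()}

LEARNING_RATE = 0.1
for _ in range(5000):
    p = probs()
    action = random.choices(ACTIONS, weights=[p[a] for a in ACTIONS])[0]
    reward = human_feedback(action)
    # Gradient of log pi(action) w.r.t. each logit: 1{a == action} - p[a].
    for a in ACTIONS:
        grad = (1.0 if a == action else 0.0) - p[a]
        logits[a] += LEARNING_RATE * reward * grad

print(probs())  # most probability mass lands on "convincing_answer"
```

Run it and nearly all of the probability mass ends up on the convincing answer. RLHF at scale has the same shape, with a learned reward model standing in for the rater, and that gap between "approved by humans" and "actually true" is exactly what Miles is pointing at.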

Two parts near the end of the video caught my attention:

"This is potentially fairly dangerous. There are certain types of goals that are instrumentally valuable for a wide range of different terminal goals, in the sense that you can't get what you want if you're turned off, you can't get what you want if you're modified, you probably want to gain power and influence."

"Reinforcement Learning From Human Feedback is a powerful Alignment technique, in a way, but it does not solve the problem... extremely powerful systems trained in this way, I don't think they'd be safe."
