
The Alignment Problem: Should We Treat AI Like Domestic Dogs?

 


Deep agential diversity is a term used by Luise Müller in her paper “Domesticating Artificial Intelligence” to describe the property of social systems that contain human as well as nonhuman agents. In such systems, agents cooperate in a number of different constellations and differ categorically in their agential capabilities, vulnerabilities, and moral standing. Müller argues that this diversity is “deep” because the differences between humans and AI agents are not just a matter of degree but of kind. She suggests that we need to develop normative theories adequate for social systems populated by different kinds of agents exhibiting heterogeneity in abilities, autonomy, moral capability, moral status, and vulnerability.

“And because of that, we lack the methodological tools to understand social systems that are characterized by what I want to call deep agential diversity. The term denotes the property of social systems that contain human as well as nonhuman agents. Within social systems characterized by deep agential diversity, agents cooperate and work together in a number of different constellations: first, and obviously, humans cooperate with other humans; second, human agents also now increasingly cooperate with AI agents; and third, AI agents also cooperate with one another. This results in a complex web of interrelated actions that are increasingly transforming human social practices as we know them.”

To align AI agents with value-laden, cooperative human life, and to deploy them safely in human societies, Müller argues that instead of building moral machines we need an approach to value alignment that takes into account the categorically different cognitive and moral capabilities of human and AI agents. This is deep agential diversity.

With such an approach, she argues, domestication, the process by which we integrated nonhuman animals into society, could be applied to AI agents.

“I have already argued that we understand the limits of AI agents’ moral capabilities better if we compare them with nonhuman animals. I now want to argue that in order to develop a useful and normatively accurate picture of our relations with nonhuman intelligent agents, we can also learn from our experience with nonhuman animals. This is because aligning AI agents to human values is structurally analogous to domesticating nonhuman animals: domestication allows human moral agents to cooperate with nonhuman agents without human-like moral capabilities. The agential qualities and cognitive capacities of animals differ radically from humans, and yet a very fruitful discussion about the normative relations between humans and animals has delivered insights about what roles nonhuman animals can play in human social systems, and what morally follows from those roles.”

Müller concludes:

“In political philosophy, we are generally interested in how it is possible to preserve the equality and freedom of persons in the face of a set of given natural and social circumstances. Part of the social circumstances are technological advances: they impact – and sometimes transform – human relations, create new opportunities for flourishing and independence, but also for exploitation and dependency. The approach I defended and the resulting framework I developed in this article might give us some orientation on how to begin thinking about these challenges more rigorously.”

I'd challenge Müller on one point about domestication: the argument may ascribe far too much real neural capacity to machines. For example, fMRI studies suggest that domestic 'dogs, like so many other animals, experience consciousness and emotions at a level comparable to humans.' That is far beyond where machines are today, and likely beyond where they will ever be.

