UDHR and Alignment


The Universal Declaration of Human Rights (UDHR) sets out fundamental human rights to be universally protected. Ideally, any alignment of AI would use it as the basis for what human values are. The document was written in 1948 in response to the atrocities of the Second World War, and its 30 articles remain the clearest expression of human values I know. In practice, though, they have failed: I can think of many examples where post-war governments, in countries like the UK, have breached most of the 30 articles.

If governments can't or won't uphold 30 basic principles for human values, why should we expect that AI can or will?

Cansu Canca considered this issue in a 2019 post, "AI & Global Governance: Human Rights and AI Ethics – Why Ethics Cannot be Replaced by the UDHR", in which she argues: 'When we dive deep, the UDHR is simply unable to guide us on those questions. Solving such challenges is the job of ethical reasoning.'

Canca concludes: 'I do not mean to say that the UDHR is not of any use in the discussion of ethical tech. Its clarity, legacy, and wide acceptance makes the UDHR a good tool to use to start the exploration on what might be problematic about any given AI system or practices in developing these systems. However, if the aim is not just to identify the problem but also to solve it, then the UDHR is simply inadequate to do so. Here, I invite you to engage in ethics.'
