
UDHR and Alignment


The Universal Declaration of Human Rights (UDHR) is a document that sets out fundamental human rights to be universally protected. Ideally, any alignment of AI should use it as the basis for what human values are. The document was written in 1948 in response to the atrocities of the Second World War, and it remains the clearest expression of human values I know. In practice, though, it has failed: I can think of many examples where post-war governments, in countries like the UK, have breached most of its 30 articles.

If governments can't or won't uphold 30 basic principles of human values, why should we expect that AI can or will?

Cansu Canca considered this issue in a 2019 post, "AI & Global Governance: Human Rights and AI Ethics – Why Ethics Cannot be Replaced by the UDHR". Canca states: 'When we dive deep, the UDHR is simply unable to guide us on those questions. Solving such challenges is the job of ethical reasoning.'

Canca concludes: 'I do not mean to say that the UDHR is not of any use in the discussion of ethical tech. Its clarity, legacy, and wide acceptance makes the UDHR a good tool to use to start the exploration on what might be problematic about any given AI system or practices in developing these systems. However, if the aim is not just to identify the problem but also to solve it, then the UDHR is simply inadequate to do so. Here, I invite you to engage in ethics.'

