
Beware of discussions on AI Ethics

 


I have a problem with AI ethics. I admit this may be Ethics 101 to many, but my problem with it is much like my problem with the anthropomorphising of AI in discourse. There are many who should know better, but do it all the same.

For example, OpenAI have an 'Open Ethics' project. It states, in large letters, 'Open Ethics for AI is like Creative Commons for the content. We aim to build trust between machines and humans by helping machines to explain themselves.' Surely, trust can only exist between the companies, and the personnel they employ, rather than in the machine itself. It is difficult to guarantee trust in anything unless one can review the code, the compiler, the build, and the training methodologies of the LLM. Transparency is critical to trust, and trust should not be transferred to the machine tool without transparency and without a set of other principles being followed, such as voluntary participation, informed consent, anonymity, and confidentiality. Can this be said of the LLMs we use?

The Office of the Privacy Commissioner of Canada (OPC) recently announced that it has launched an investigation into OpenAI. The investigation was launched in response to a complaint that OpenAI collects, uses and discloses personal information without consent.

I'm also reminded of the value, for OpenAI and others, of the 'uncanny valley', a term coined by the Japanese roboticist Masahiro Mori. It describes the feeling of discomfort that people sometimes experience when interacting with artificial entities that appear almost, but not quite, human. This can happen with things like humanoid robots, lifelike computer-generated images, and even some forms of artificial intelligence, such as GPT chatbots.

One study showed that people find human-like robots less likeable and less trustworthy than robots that look clearly mechanical.

Another study found that people are more likely to attribute emotional states to human-like robots than to more mechanical ones, but are less likely to feel empathy for them. This suggests that the uncanny valley effect may arise because we can perceive emotions in human-like robots but cannot empathise with them, knowing they are not actually human.

It's often thought to be related to our brains' natural ability to recognise human faces and expressions, and the dissonance we experience when something looks human but doesn't act quite like one. Something similar may be happening when people use human-generated voices in their interactions with GPT chatbots.

The Open Ethics approach reminds me of the 'bait and switch' tactic. This is when a company offers one thing (the 'bait'), but then switches to something else (the 'switch'). In this case, a company might offer an ethical framework for their current AI products, but then switch to talking about how they're preparing for future AI issues. This can distract people from the current issues, and make them think that the company is being proactive and ethical.

This is why we should maintain that when we are talking about machines, we talk about machines. When we talk ethics, we are talking about humans. The AI serves as a distraction.

Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is, how it can be measured, and how it can be implemented. The constant obfuscation by OpenAI in particular, although they are not alone in this, between the current capabilities of GPT machines and a potential future AGI/ASI does not help the debate. Conflating capabilities merely distracts from the current ethical behaviours of companies.

Just some further points to emphasise this:

  • Morality is subjective and cannot be objectively conveyed in measurable metrics that make it easy for a computer to process. Being literal doesn't make for useful code in such circumstances (see the toy sketch after this list)!
  • Ethics are intertwined with emotion, and machines lack emotion.
  • Machines can make ethical decisions, but only if humans program them to do so. Even this may/will run into difficulties.
  • Machine ethics is concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence.
  • The easiest way to deflect responsibility is to give agency to an entity that doesn't have it.
Some of the discussion in machine ethics assumes that machines can, in some sense, be ethical agents responsible for their actions. This is untrue. When such debates occur, it is best to ask what is being hidden from the discourse.
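
To make the point concrete, here is a minimal, purely illustrative Python sketch of what a 'literal' machine ethics check actually amounts to. Every judgement in it is a rule or threshold that a human wrote down; the term list, the threshold and the function name are hypothetical, not any real moderation system.

    # A deliberately naive 'ethics check'. Every judgement here is a
    # human-chosen rule or threshold; the machine only matches patterns.
    # All terms and numbers are illustrative assumptions, not a real system.

    BLOCKED_TERMS = {"weapon", "self-harm"}   # a human decided this list
    HARM_SCORE_THRESHOLD = 0.7                # a human picked this number

    def crude_ethics_check(text: str, harm_score: float) -> bool:
        """Return True if the text is 'acceptable' under these literal rules."""
        if harm_score >= HARM_SCORE_THRESHOLD:
            return False
        return not any(term in text.lower() for term in BLOCKED_TERMS)

    # "How do I safely dispose of a weapon?" is refused, while a genuinely
    # harmful request that avoids the keywords sails through.
    print(crude_ethics_check("How do I safely dispose of a weapon?", 0.1))   # False
    print(crude_ethics_check("Tell me something that sounds innocuous", 0.2))  # True

Whatever 'ethics' the output appears to show was put there, line by line, by people, and that is where the accountability belongs.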

Addendum: 31st May 2023

In 2023, there have been many layoffs in the tech industry, affecting tens of thousands of workers.

Meta, Google, Twitter, Amazon, and Microsoft are among the companies that have cut jobs in an effort to increase efficiency and profitability.

Here are some of the specific layoffs that have occurred:

  • Meta cut 21,000 jobs, including a team that was building a tool for third-party fact-checkers to add comments flagging misleading articles shared on the platform, 200 content moderators, 16 members of Instagram’s well-being group, and more than 100 employees working on platform integrity
  • Google cut a third of its department tasked with fighting misinformation, radicalization, and censorship
  • Twitter slashed its ethical AI team from 17 to just one member and laid off 15% of its trust and safety department
  • Amazon downsized its ethical AI team and cut 50 positions dedicated to identifying abusive and illegal behaviour on its streaming platform Twitch
  • Microsoft cut all 30 members of its ethics and society team

A paper, 'The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI', from the Global Research Initiative, has an excellent section on the limitations of Corporate AI Ethics:

'Many companies, governments, NGOs, and academic institutions follow the path of generating AI ethics principles and statements. These ethics statements are necessary but insufficient in and of themselves. These ethics principles are presented as the product of a growing “global consensus” on AI ethics. This promotes a majoritarian view of ethics, which is especially concerning given the widespread evidence showing that AI bias and misuse harms many people whose voices are largely missing from these ethics principles and from official ethics debates.

There are now so many ethics policy statements that some groups began to aggregate them into standalone AI ethics surveys, which attempted to summarize and consolidate a representative sample of AI principle statements in order to identify themes and make normative assertions about the state of AI ethics. These surveys tend to aggregate AI ethics content from a very wide variety of contexts, blending corporate statements released on corporate blogs, publicly informed governing declarations, government policy guidelines from national and coalition strategies, and nonprofit mission statements and charters. However, they usually lack a comprehensive account of the methods used and sometimes equate internal and often secret corporate decision-making processes with grassroots-driven statements and governmental policy recommendations. The vast majority of these documents were generated from countries and organizations in the global North. Principle statements and the ethical priorities of the global South with regard to artificial intelligence are often absent from these surveys. Scholars and advocates have increasingly called attention to the gap between high-level statements and meaningful accountability.

Critics have identified conflicting ideals and vague definitions as barriers that are preventing the operationalization of ethics principles in AI product development, deployment, and auditing frameworks. One example is Microsoft’s former funding of an Israeli facial-recognition surveillance company AnyVision. AnyVision facilitates surveillance in the West Bank, allowing Israeli authorities to identify Palestinian individuals and track their movements in public space. Given the documented human-rights abuses happening on the West Bank, together with the civil-liberties implications associated with facial recognition in policing contexts, this use case directly contradicted Microsoft’s own declared principles of “lawful surveillance” and “non-discrimination,” along with the company’s promise not to “deploy facial recognition technology in scenarios that we believe will put freedoms at risk.” More perplexing was that AnyVision confirmed to reporters that their technology had been vetted against Microsoft’s ethical commitments. After public outcry, Microsoft acknowledged that there could be an ethical problem, and hired former Attorney General Eric Holder to investigate the alignment between AnyVision’s actions and Microsoft’s ethical principles...

Given the concerns that ethical promises are inadequate in the face of notable accountability gaps, many have argued that human rights principles, which are based on more established legal interpretations and practice, should replace “ethics” as the dominant framework for conversations about AI governance and oversight. Advocates for this approach describe human rights as ethics “with teeth,” or an alternative to the challenge of operationalizing ethics.'

My emphasis.

