
Beware of discussions on AI Ethics

I have a problem with AI ethics. I admit this may be Ethics 101 to many, but my problem is much like the one I have with the anthropomorphising of AI in discourse. There are many who should know better, but who do it all the same.

For example, OpenAI have an 'Open Ethics' project. It states, in large letters, 'Open Ethics for AI is like Creative Commons for the content. We aim to build trust between machines and humans by helping machines to explain themselves.' Surely, trust can only be placed in the companies, and the personnel they employ, rather than in the machine itself. It is difficult to guarantee trust in anything unless one reviews the code, the compiler, the build, and the training methodologies of the LLM. Transparency is critical to trust, and trust should not be transferred to the machine tool without transparency, and without assurance that a set of other principles is being followed: voluntary participation, informed consent, anonymity, and confidentiality. Can this be said of the LLMs we use?

The Office of the Privacy Commissioner of Canada (OPC) recently announced that it has launched an investigation into OpenAI. The investigation was launched in response to a complaint that OpenAI collects, uses and discloses personal information without consent.

I'm also reminded of the value, for OpenAI and others, of the 'uncanny valley', a term coined by the Japanese roboticist Masahiro Mori. It describes the feeling of discomfort that people sometimes experience when interacting with artificial entities that appear almost, but not quite, human. This can happen with things like humanoid robots, lifelike computer-generated images, and even some forms of artificial intelligence, such as GPT chatbots.

One study showed that people find human-like robots less likable and trustworthy than robots that look more mechanical or robotic.

Another study found that people are more likely to attribute emotional states to human-like robots than to more mechanical ones, but are less likely to feel empathy for them. This suggests that the uncanny valley effect may be due to our ability to perceive emotions in human-like robots, but not to empathize with them because we know they're not actually human.

The effect is often thought to be related to our brains' natural ability to recognise human faces and expressions, and the dissonance we experience when something looks human but doesn't act quite like one. Something similar may be happening when people use human-like generated voices in their interactions with GPT chatbots.

The Open Ethics approach reminds me of the 'bait and switch' tactic. This is when a company offers one thing (the 'bait'), but then switches to something else (the 'switch'). In this case, a company might offer an ethical framework for its current AI products, but then switch to talking about how it is preparing for future AI issues. This can distract people from the current issues, and make them think that the company is being proactive and ethical.

This is why we should maintain that when we are talking about machines, we talk about machines, and when we talk about ethics, we are talking about humans. The AI serves as a distraction.

Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is, how it can be measured, and how it can be implemented. The constant obfuscation by OpenAI in particular, although they are not alone in this, between the current capabilities of GPT machines and a potential future AGI/ASI does not help the debate. Conflating capabilities merely distracts from the current ethical behaviour of companies.

Just some further points to emphasise this:

  • Morality is subjective and cannot be objectively conveyed in measurable metrics that make it easy for a computer to process. Being literal doesn't make for useful code in such circumstances! 
  • Ethics are intertwined with emotion, and machines lack emotion.
  • Machines can make ethical decisions, but only if humans program them to do so. Even this may/will run into difficulties.
  • Machine ethics is concerned with adding or ensuring moral behaviour in man-made machines that use artificial intelligence.
  • The easiest way to deflect responsibility is to give agency to an entity that doesn't have it.

Some of the discussion in machine ethics assumes that machines can, in some sense, be ethical agents responsible for their actions. This is untrue. When such debates occur, it's best to ask what is being hidden from the discourse.

Addendum: 31st May 2023

In 2023, there have been many layoffs in the tech industry, affecting tens of thousands of workers.

Meta, Google, Twitter, Amazon, and Microsoft are among the companies that have cut jobs in an effort to increase efficiency and profitability.

Here are some of the specific layoffs that have occurred:

  • Meta cut 21,000 jobs, including a team that was building a tool for third-party fact-checkers to add comments flagging misleading articles shared on the platform, 200 content moderators, 16 members of Instagram’s well-being group, and more than 100 employees working on platform integrity
  • Google cut a third of its department tasked with fighting misinformation, radicalization, and censorship
  • Twitter slashed its ethical AI team from 17 to just one member and laid off 15% of its trust and safety department
  • Amazon downsized its ethical AI team and cut 50 positions dedicated to identifying abusive and illegal behaviour on its streaming platform Twitch
  • Microsoft cut all 30 members of its ethics and society team

A paper, 'The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI', from the Global Research Initiative has an excellent section on the limitations of Corporate AI Ethics:

'Many companies, governments, NGOs, and academic institutions follow the path of generating AI ethics principles and statements. These ethics statements are necessary but insufficient in and of themselves. These ethics principles are presented as the product of a growing “global consensus” on AI ethics. This promotes a majoritarian view of ethics, which is especially concerning given the widespread evidence showing that AI bias and misuse harms many people whose voices are largely missing from these ethics principles and from official ethics debates.

There are now so many ethics policy statements that some groups began to aggregate them into standalone AI ethics surveys, which attempted to summarize and consolidate a representative sample of AI principle statements in order to identify themes and make normative assertions about the state of AI ethics. These surveys tend to aggregate AI ethics content from a very wide variety of contexts, blending corporate statements released on corporate blogs, publicly informed governing declarations, government policy guidelines from national and coalition strategies, and nonprofit mission statements and charters. However, they usually lack a comprehensive account of the methods used and sometimes equate internal and often secret corporate decision-making processes with grassroots-driven statements and governmental policy recommendations. The vast majority of these documents were generated from countries and organizations in the global North. Principle statements and the ethical priorities of the global South with regard to artificial intelligence are often absent from these surveys. Scholars and advocates have increasingly called attention to the gap between high-level statements and meaningful accountability.

Critics have identified conflicting ideals and vague definitions as barriers that are preventing the operationalization of ethics principles in AI product development, deployment, and auditing frameworks. One example is Microsoft’s former funding of an Israeli facial-recognition surveillance company AnyVision. AnyVision facilitates surveillance in the West Bank, allowing Israeli authorities to identify Palestinian individuals and track their movements in public space. Given the documented human-rights abuses happening on the West Bank, together with the civil-liberties implications associated with facial recognition in policing contexts, this use case directly contradicted Microsoft’s own declared principles of “lawful surveillance” and “non-discrimination,” along with the company’s promise not to “deploy facial recognition technology in scenarios that we believe will put freedoms at risk.” More perplexing was that AnyVision confirmed to reporters that their technology had been vetted against Microsoft’s ethical commitments. After public outcry, Microsoft acknowledged that there could be an ethical problem, and hired former Attorney General Eric Holder to investigate the alignment between AnyVision’s actions and Microsoft’s ethical principles...

Given the concerns that ethical promises are inadequate in the face of notable accountability gaps, many have argued that human rights principles, which are based on more established legal interpretations and practice, should replace “ethics” as the dominant framework for conversations about AI governance and oversight. Advocates for this approach describe human rights as ethics “with teeth,” or an alternative to the challenge of operationalizing ethics.'

My emphasis.

