
Aligning Existing Standards: A Comparison and Analysis of AI Documents




The development of artificial intelligence (AI) is rapidly accelerating, and with it the need for standards and guidelines to ensure that AI is developed and used responsibly and in a trustworthy way. In recent years, a variety of stakeholders, including governments, businesses, and academia, have made a growing effort to develop AI standards and guidelines.

One important way to support the responsible development of AI is to compare and analyse existing AI documents. Do the standards we already have even align? Such a comparison can help to identify gaps and inconsistencies in current standards and guidelines, areas of overlap or redundancy, and areas where new standards or guidelines are needed.

In a February 2023 conference paper, 'Comparison and Analysis of 3 Key AI Documents: EU’s Proposed AI Act, Assessment List for Trustworthy AI (ALTAI), and ISO/IEC 42001 AI Management System', Golpayegani, Pandit & Lewis examined the alignment between just three such documents. They recognised that 'The lack of alignment between different sources of requirements, such as laws and standards, creates difficulties in identifying and fulfilling obligations.'

From the Abstract:

‘Conforming to multiple and sometimes conflicting guidelines, standards, and legislations regarding development, deployment, and governance of AI is a serious challenge for organisations. While the AI standards and regulations are both in early stages of development, it is prudent to avoid a highly-fragmented landscape and market confusion by finding out the gaps and resolving the potential conflicts. This paper provides an initial comparison of ISO/IEC 42001 AI management system standard with the EU trustworthy AI assessment list (ALTAI) and the proposed AI Act using an upper-level ontology for semantic interoperability between trustworthy AI documents with a focus on activities.’
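The ontology-based mapping described in the abstract can be pictured with a small sketch. The example below is purely illustrative: the vocabulary (ex:RiskAssessment and the clause names) is invented for this post and is not the ontology the authors define. It shows only the general idea of typing clauses from different documents against a shared upper-level concept so that they can be queried together.

```python
# Purely illustrative sketch: the class and clause names below are invented
# placeholders, not the ontology actually defined in the paper. The idea is
# only that clauses from different documents are typed against a shared
# upper-level concept so they can be queried together.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/trustworthy-ai#")
g = Graph()
g.bind("ex", EX)

# A shared upper-level concept, e.g. a "risk assessment" activity.
g.add((EX.RiskAssessment, RDF.type, RDFS.Class))

# Hypothetical clauses from each document, mapped onto that shared concept.
clauses = {
    EX.AIAct_Article9: "EU AI Act, Art. 9 risk management system (hypothetical mapping)",
    EX.ALTAI_Requirement4: "ALTAI risk identification question (hypothetical mapping)",
    EX.ISO42001_Clause6: "ISO/IEC 42001 risk treatment clause (hypothetical mapping)",
}
for clause, label in clauses.items():
    g.add((clause, RDF.type, EX.RiskAssessment))
    g.add((clause, RDFS.label, Literal(label)))

# Ask which clauses, across all three documents, describe the same activity.
query = """
    PREFIX ex: <http://example.org/trustworthy-ai#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?clause ?label
    WHERE { ?clause a ex:RiskAssessment ; rdfs:label ?label }
"""
for row in g.query(query):
    print(row.clause, "-", row.label)
```

Once clauses are expressed against common concepts like this, finding that one document addresses an activity that another omits becomes a straightforward query rather than a manual reading exercise.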

There are a number of benefits to comparing and analysing AI documents. For standardisation bodies, it can help to identify areas where standards need to be created or modified. For legislators, it can help to determine the degree to which compliance with existing AI standards contributes to conformity with legal obligations, and to identify the aspects of AI that are not subject to regulation. For AI providers and developers, it can help to identify inconsistencies and areas of overlap in existing standards and guidelines, as well as to ensure that organisational AI policies are effective in satisfying normative and legal requirements.

Given the potential of AI research to cause harm, some AI conferences, such as NeurIPS, have recently introduced ethical guidelines and asked researchers to assess the impact of their work on key areas of concern, e.g. safety, fairness, and privacy. The same comparison methodology can be applied to assess the alignment of ethical guidelines issued by different conferences, universities’ policies on ethics and data protection, and ethical assessment approaches.

Here are some specific examples of how comparing and analysing AI documents can be used to understand the alignment issues associated with AI standards:

  • Standardisation bodies can use the comparison to identify areas that need creation or modification of standards. For example, the International Organization for Standardization (ISO) is currently developing a new standard for AI ethics. By comparing and analysing existing AI documents, ISO can identify areas where the new standard needs to be more specific or detailed.
  • Legislators can use the comparison to determine the degree to which compliance with existing AI standards contributes to conformity to legal obligations. For example, the European Union is currently developing regulation for AI. By comparing and analysing existing AI documents, the EU can identify areas where compliance with existing standards can help to ensure that AI systems comply with the new regulation.
  • AI providers and developers can use the comparison to identify inconsistencies and areas of overlap in existing standards and guidelines (a toy sketch of this kind of gap and overlap analysis follows this list). This can help them to ensure that their AI systems are developed and used in a way that is consistent with the requirements of multiple standards and guidelines.
  • Universities can use the comparison to develop policies on ethics and data protection for AI research. By comparing and analysing ethical guidelines provided by different conferences and organisations, universities can develop policies that are comprehensive and up-to-date.
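
To make the idea of gap and overlap analysis concrete, here is a toy sketch. The topics assigned to each document below are invented for illustration; a real comparison would work from the actual clauses of the AI Act, ALTAI, and ISO/IEC 42001 rather than these placeholder labels.

```python
# Toy gap/overlap analysis. The topic coverage below is invented for
# illustration and is not a real mapping of the AI Act, ALTAI, or ISO/IEC 42001.
coverage = {
    "Proposed AI Act": {"risk management", "data governance", "human oversight", "transparency"},
    "ALTAI": {"human oversight", "transparency", "privacy", "societal wellbeing"},
    "ISO/IEC 42001": {"risk management", "data governance", "continual improvement"},
}

# The union of everything any document addresses.
all_topics = set().union(*coverage.values())

# Overlaps: topics addressed by more than one document.
overlaps = sorted(
    topic for topic in all_topics
    if sum(topic in topics for topics in coverage.values()) > 1
)

# Gaps: for each document, topics covered elsewhere but not by it.
for name, topics in coverage.items():
    print(f"{name} does not address: {sorted(all_topics - topics)}")

print("Addressed by more than one document:", overlaps)
```

Even a simple set-based view like this makes it obvious where obligations pile up across documents and where a requirement appears in only one place.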

Comparing AI documents should be only a first step. What is the use of seeking to align AI with ‘human values’ if we are unable to align the ethical and legislative standards we already employ, and should employ, to hold tech producers accountable? The reality is that funding for AI safety is only a fraction of what is spent on other areas of AI development.
