
Aligning Existing Standards: A Comparison and Analysis of AI Documents




The development of artificial intelligence (AI) is rapidly accelerating, and with it, the need for standards and guidelines to ensure the responsible and trustworthy development and use of AI. In recent years, there has been a growing effort to develop AI standards and guidelines by a variety of stakeholders, including governments, businesses, and academia.

One important way to support the responsible development of AI is to compare and analyse existing AI documents. Do the standards we have even align? Such a comparison can help to identify gaps and inconsistencies in current standards and guidelines, as well as areas of overlap or redundancy. It can also highlight areas where new standards or guidelines are needed.

In a conference paper from February 2023, 'Comparison and Analysis of 3 Key AI Documents: EU’s Proposed AI Act, Assessment List for Trustworthy AI (ALTAI), and ISO/IEC 42001 AI Management System', Golpayegani, Pandit & Lewis examined the alignment between just these three documents. They recognised that 'The lack of alignment between different sources of requirements, such as laws and standards, creates difficulties in identifying and fulfilling obligations.'

From the Abstract:

‘Conforming to multiple and sometimes conflicting guidelines, standards, and legislations regarding development, deployment, and governance of AI is a serious challenge for organisations. While the AI standards and regulations are both in early stages of development, it is prudent to avoid a highly-fragmented landscape and market confusion by finding out the gaps and resolving the potential conflicts. This paper provides an initial comparison of ISO/IEC 42001 AI management system standard with the EU trustworthy AI assessment list (ALTAI) and the proposed AI Act using an upper-level ontology for semantic interoperability between trustworthy AI documents with a focus on activities.’
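The ontology-based approach mentioned in the abstract can be pictured, very roughly, as mapping the clauses of each document onto a shared set of upper-level concepts (activities) so that documents written in different vocabularies can be compared on common ground. The sketch below is a minimal, hypothetical illustration of that idea in Python; the clause identifiers, activity labels, and mappings are simplified placeholders chosen for demonstration, not an extract from the paper’s ontology or from the documents themselves.

```python
# Toy illustration: map clauses from three AI documents onto shared
# upper-level "activity" concepts, then compare them on common ground.
# Clause identifiers and activity labels are simplified placeholders.
from collections import defaultdict

documents = {
    "AI Act (proposal)": {
        "Art-9": "risk management",
        "Art-10": "data governance",
        "Art-13": "transparency to users",
    },
    "ALTAI": {
        "Req-3": "data governance",
        "Req-4": "transparency to users",
        "Req-7": "accountability auditing",
    },
    "ISO/IEC 42001": {
        "Cl-6.1": "risk management",
        "Cl-7.5": "documentation",
        "Cl-9.2": "accountability auditing",
    },
}

# Invert the mapping: for each shared activity, which documents address it?
coverage = defaultdict(dict)
for doc, clauses in documents.items():
    for clause, activity in clauses.items():
        coverage[activity][doc] = clause

for activity, sources in sorted(coverage.items()):
    status = "overlap" if len(sources) > 1 else "single source"
    print(f"{activity:24} {status:14} {sources}")
```

Run as a script, this prints which activities are addressed by more than one document (potential overlap or redundancy) and which appear in only one (a prompt to check whether the others have a genuine gap), which is the kind of output a document comparison exercise aims to surface.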

There are a number of benefits to comparing and analysing AI documents. For standardisation bodies, it can help to identify areas where standards need to be created or modified. For legislators, it can help to determine the degree to which compliance with existing AI standards contributes to conformity to legal obligations, and to identify the aspects of AI that are not subject to regulation. For AI providers and developers, it can help to identify inconsistencies and areas of overlap in existing standards and guidelines, and to ensure that organisational AI policies are effective in satisfying normative and legal requirements.

Given the potential of AI research to cause harm, some AI conferences, such as NeurIPS, have recently provided ethical guidelines and asked researchers to assess the impact of their work on key areas of concern, e.g. safety, fairness, and privacy. The same comparison methodology can be applied to assess the alignment of the ethical guidelines provided by different conferences, universities’ policies on ethics and data protection, and ethical assessment approaches.

Here are some specific examples of how comparing and analysing AI documents can help address the alignment issues associated with AI standards:

  • Standardisation bodies can use the comparison to identify areas where standards need to be created or modified. For example, the International Organization for Standardization (ISO) is currently developing a new standard for AI ethics. By comparing and analysing existing AI documents, ISO can identify areas where the new standard needs to be more specific or detailed.
  • Legislators can use the comparison to determine the degree to which compliance with existing AI standards contributes to conformity to legal obligations. For example, the European Union is currently developing regulation for AI. By comparing and analysing existing AI documents, the EU can identify areas where compliance with existing standards can help to ensure that AI systems comply with the new regulation (a rough sketch of such a coverage check follows this list).
  • AI providers and developers can use the comparison to identify inconsistencies and areas of overlap in existing standards and guidelines. This can help them to ensure that their AI systems are developed and used in a way that is consistent with the requirements of multiple standards and guidelines.
  • Universities can use the comparison to develop policies on ethics and data protection for AI research. By comparing and analysing ethical guidelines provided by different conferences and organisations, universities can develop policies that are comprehensive and up-to-date.
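For the legislators’ example above, the same kind of mapping can be turned into a rough coverage measure: what fraction of legal obligations have at least one corresponding clause in an existing standard? The following minimal sketch uses hypothetical, simplified obligation and activity labels purely for illustration.

```python
# Toy coverage check: how many (simplified, hypothetical) obligations from a
# regulation are matched by activities covered in an existing standard?
ai_act_obligations = {
    "risk management", "data governance",
    "transparency to users", "human oversight",
}
iso_42001_activities = {
    "risk management", "data governance",
    "documentation", "accountability auditing",
}

covered = ai_act_obligations & iso_42001_activities
uncovered = ai_act_obligations - iso_42001_activities

print(f"Covered by the standard:   {sorted(covered)}")
print(f"Not addressed by standard: {sorted(uncovered)}")
print(f"Coverage: {len(covered)} of {len(ai_act_obligations)} obligations")
```

A result like '2 of 4 obligations' would not demonstrate legal conformity, but it does indicate where compliance with the standard plausibly contributes to conformity and where further regulatory attention or new standardisation may be needed.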

Comparing AI documents should be only a first step; what is the use of seeking to align AI with ‘human values’ if we are unable to align the ethical and legislative standards we already employ, and should employ, to hold tech producers accountable? The reality is that funding for AI safety is only a fraction of what is spent on other areas of AI development.
