
Aligning Existing Standards: A Comparison and Analysis of AI Documents

The development of artificial intelligence (AI) is rapidly accelerating, and with it, the need for standards and guidelines to ensure the responsible and trustworthy development and use of AI. In recent years, there has been a growing effort to develop AI standards and guidelines by a variety of stakeholders, including governments, businesses, and academia.

One important way to promote the responsible development of AI is to compare and analyse existing AI documents. Do the standards we already have even align? This approach can help identify gaps and inconsistencies in current standards and guidelines, as well as areas of overlap or redundancy. It can also highlight areas where new standards or guidelines are needed.

In a February 2023 conference paper, 'Comparison and Analysis of 3 Key AI Documents: EU’s Proposed AI Act, Assessment List for Trustworthy AI (ALTAI), and ISO/IEC 42001 AI Management System', Golpayegani, Pandit & Lewis examined the alignment between just these three documents. They recognised that 'The lack of alignment between different sources of requirements, such as laws and standards, creates difficulties in identifying and fulfilling obligations.'

From the Abstract:

‘Conforming to multiple and sometimes conflicting guidelines, standards, and legislations regarding development, deployment, and governance of AI is a serious challenge for organisations. While the AI standards and regulations are both in early stages of development, it is prudent to avoid a highly-fragmented landscape and market confusion by finding out the gaps and resolving the potential conflicts. This paper provides an initial comparison of ISO/IEC 42001 AI management system standard with the EU trustworthy AI assessment list (ALTAI) and the proposed AI Act using an upper-level ontology for semantic interoperability between trustworthy AI documents with a focus on activities.’
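
The paper's ontology is not reproduced in the post, but the underlying idea is straightforward: map the clauses of each document onto a shared, upper-level vocabulary of activities, then compare coverage. The short Python sketch below illustrates that idea only in spirit; the activity concepts are simplified placeholders and the clause references are indicative, not taken from the paper's actual mappings.

    # Illustrative sketch: map clauses from each document onto shared
    # upper-level "activity" concepts, then report coverage and gaps.
    # Concepts and clause references are simplified placeholders, not
    # the paper's actual ontology.
    mappings = {
        "AI Act": {
            "risk_management": ["Art. 9"],
            "data_governance": ["Art. 10"],
            "transparency": ["Art. 13"],
        },
        "ALTAI": {
            "human_oversight": ["Req. 1"],
            "risk_management": ["Req. 2"],
            "transparency": ["Req. 4"],
        },
        "ISO/IEC 42001": {
            "risk_management": ["Clause 6.1"],
            "data_governance": ["Annex A"],
        },
    }

    # Every activity concept mentioned by at least one document
    all_concepts = {c for doc in mappings.values() for c in doc}

    for concept in sorted(all_concepts):
        covered = [name for name, doc in mappings.items() if concept in doc]
        missing = [name for name in mappings if name not in covered]
        print(f"{concept}: covered by {covered}; potential gap in {missing}")

Even a toy mapping like this makes the alignment question concrete: any concept that one document requires and another omits is a candidate gap, and a concept covered by several documents is a candidate for overlap or conflict.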

There are a number of benefits to comparing and analysing AI documents. For standardisation bodies, it can help identify areas where standards need to be created or modified. For legislators, it can help determine the degree to which compliance with existing AI standards contributes to conformity with legal obligations, and identify aspects of AI that are not yet subject to regulation. For AI providers and developers, it can help identify inconsistencies and areas of overlap in existing standards and guidelines, and ensure that organisational AI policies are effective in satisfying normative and legal requirements.

Given the potential of AI research to cause harm, some AI conferences, such as NeurIPS, have recently introduced ethical guidelines and asked researchers to assess the impact of their work on key areas of concern, e.g. safety, fairness, and privacy. The same comparison methodology can be applied to assess the alignment of the ethical guidelines provided by different conferences, universities’ policies on ethics and data protection, and ethical assessment approaches.

Here are some specific examples of how comparing and analysing AI documents can be used to understand the alignment issues associated with AI standards:

  • Standardisation bodies can use the comparison to identify areas where standards need to be created or modified. For example, the International Organization for Standardization (ISO) is currently developing a new standard for AI ethics. By comparing and analysing existing AI documents, ISO can identify areas where the new standard needs to be more specific or detailed.
  • Legislators can use the comparison to determine the degree to which compliance with existing AI standards contributes to conformity with legal obligations. For example, the European Union is currently developing its AI regulation. By comparing and analysing existing AI documents, the EU can identify areas where compliance with existing standards can help ensure that AI systems comply with the new regulation.
  • AI providers and developers can use the comparison to identify inconsistencies and areas of overlap in existing standards and guidelines. This can help them ensure that their AI systems are developed and used in a way that is consistent with the requirements of multiple standards and guidelines (a minimal sketch of such a check follows this list).
  • Universities can use the comparison to develop policies on ethics and data protection for AI research. By comparing and analysing ethical guidelines provided by different conferences and organisations, universities can develop policies that are comprehensive and up-to-date.
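
To make the provider's use case concrete, here is a second minimal sketch, continuing the toy mapping above. It checks a hypothetical internal AI policy against the union of activity concepts required across the compared documents; all names are again placeholders, not real policy clauses.

    # Illustrative gap check for an AI provider: compare the activities an
    # internal policy addresses against the union of activities that appear
    # across the compared documents. All names are hypothetical.
    required = {"risk_management", "data_governance",
                "transparency", "human_oversight"}

    internal_policy = {"risk_management", "transparency"}

    gaps = required - internal_policy  # required but not yet addressed
    print("Policy gaps to address:", sorted(gaps))

A simple set difference like this is the end point of the comparison: once requirements from multiple documents are expressed in a shared vocabulary, checking a policy against all of them at once becomes mechanical.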

The comparison of AI documents should only be a first step. What is the use of seeking to align AI with ‘human values’ if we are unable to align the ethical and legislative standards we already employ, and should employ, to hold technology producers accountable? The reality is that funding for AI safety remains only a fraction of what is spent on other areas of AI development.
