
I ask Bard to comment on Government AI policy, with surprising results

 


The UK Government Response to AI

and an AI response to government

The UK government's Department for Science, Innovation and Technology published a white paper on AI regulation in March 2023, setting out a framework for ‘regulating AI in a way that promotes innovation while minimizing risks’.

The paper outlines five key principles for AI regulation: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. The paper also sets out a number of measures that will be taken to implement these principles, including the creation of a new AI Council to advise the government on AI policy and the development of a new AI toolkit to help businesses understand and comply with AI regulations. 

The government's approach to AI regulation is based on the belief that AI has the potential to deliver significant benefits to society, but that it is important to take steps to mitigate the risks associated with AI. The government's hope is that by taking a pro-innovation approach to AI regulation, ‘the UK can become a world leader in the development and use of AI.’

The concerns raised by this paper are widespread and severe. They stem primarily from the assumptions the paper makes and from the approaches it takes to mitigating risk. Its claims are often, as is typical of this government, hyperbolic, for example:

‘Our regulatory intervention will ensure that AI does not cause harm at a societal level, threatening democracy or UK values.’ 

I realise that this white paper is an enabling document, hence its subtitle, ‘A pro-innovation approach to AI regulation’, but one would expect it to take risk at least as seriously as the EU AI Act does. For example, in the whole paper there is only one mention of the alignment issue, that is, of AI aligning with human values.

‘We recognise that industry, academia, research organisations and global partners are looking for ways to address the challenges related to the regulation of foundation models. For example, we know that developers of foundation models are exploring ways to embed alignment theory into their models. This is an important area of research, and government will need to work closely with the AI research community to leverage insights and inform our iteration of the regulatory framework. Our collaborative, adaptable framework will draw on the expertise of those researchers and other stakeholders as we continue to develop policy in this evolving area.’ 


Conclusion and actions (White Paper)

In the first six months following publication we will: 

• Engage with industry, the public sector, regulators, academia and civil society through the consultation period. 

• Publish the government’s response to this consultation. 

• Issue the cross-sectoral principles to regulators, together with initial guidance to regulators for their implementation. We will work with regulators to understand how the description of AI’s characteristics can be applied within different regulatory remits and the impact this will have on the application of the cross-sectoral principles. 

• Design and publish an AI Regulation Roadmap with plans for establishing the central functions (detailed in section 3.3.1), including monitoring and coordinating implementation of the principles. This roadmap will set out key partner organisations and identify existing initiatives that will be scaled up or leveraged to deliver the central functions. It will also include plans to pilot a new AI sandbox or testbed. 

• Analyse findings from commissioned research projects and improve our understanding of: 

• Potential barriers faced by businesses seeking to comply with our framework and ways to overcome these. 

•  How accountability for regulatory compliance is currently assigned throughout the AI life cycle in real-world scenarios. 

• The ability of key regulators to implement our regulatory framework, and how we can best support them. 

• Best practice in measuring and reporting on AI-related risks across regulatory frameworks. 

In the six to twelve months after publication we will: 

• Agree partnership arrangements with leading organisations and existing initiatives to deliver the first central functions. 

• Encourage key regulators to publish guidance on how the cross-sectoral principles apply within their remit. 

• Publish proposals for the design of a central M&E framework including identified metrics, data sources, and any identified thresholds or triggers for further intervention or iteration of the framework. This will be published for consultation. 

• Continue to develop a regulatory sandbox or testbed with innovators and regulators. 

In the longer term, twelve months or more after publication, we will: 

• Deliver a first iteration of all the central functions required to ensure the framework is effective. 

• Work with key regulators that have not published guidance on how the cross-sectoral principles apply within their remit to encourage and support them to do so. 

• Publish a draft central, cross-economy AI risk register for consultation. 

• Develop the regulatory sandbox or testbed drawing on insights from the pilot. 

• Publish the first monitoring and evaluation report. This will evaluate how well the cross-sectoral principles are functioning and the delivery of the central functions. Performance will be measured against our framework characteristics: pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative. The report will also consider existing regulatory activity and the role of government in supporting this, including whether appropriate guidance (including joint guidance) has been issued. In the report, we will include considerations on the need for any iteration of the framework, including the need for statutory interventions. 

• Publish an updated AI Regulation Roadmap which will set out plans for the future delivery of the central functions. In particular, it will assess whether a central government team is the most appropriate mechanism for overseeing the central functions in the longer term or if a more independent body would be more effective. 

Conclusion: how did Google's AI Bard respond?

After reading through this bunch of bull, I decided to ask Bard what it made of the government's white paper. After all, consultation is a theme of this paper. Here is what it responded with:

The weaknesses of the following approach in addressing concerns over AI are:

  • It is too slow. It will take at least 18 months for the government to implement this approach, and even then, it will only be a first iteration. This is too slow for a rapidly evolving technology like AI.
  •  It is too bureaucratic. The government is proposing to create a new regulatory body, the AI Council, and to establish a number of central functions. This will add a layer of bureaucracy that could slow down innovation.
  •  It is too focused on compliance. The government is focused on ensuring that AI systems are compliant with regulations. However, this is not enough. We also need to ensure that AI systems are ethical and socially beneficial.

A better approach would be to:

  •  Focus on early intervention. Instead of waiting for AI systems to be developed and deployed, we need to start thinking about the ethical and social implications of AI early on in the development process.
  •  Encourage innovation. We need to create an environment where AI developers are free to innovate and experiment. This means avoiding excessive regulation and bureaucracy.
  •  Promote public debate. We need to have a public debate about the future of AI. This will help us to identify the potential risks and benefits of AI, and to develop policies that ensure that AI is used for good.

The government's approach is a step in the right direction, but it is not enough. We need to do more to ensure that AI is used for good.

I can’t argue with Bard here. It makes far more sense than this government.






 
