The UK Government Response to AI
and an AI response to government
In March 2023, the UK government's Department for Science, Innovation and Technology published a white paper on AI regulation, setting out a framework for ‘regulating AI in a way that promotes innovation while minimizing risks’.
The paper outlines five key principles for AI regulation: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. The paper also sets out a number of measures that will be taken to implement these principles, including the creation of a new AI Council to advise the government on AI policy and the development of a new AI toolkit to help businesses understand and comply with AI regulations.
The government's approach to AI regulation is based on the belief that AI has the potential to deliver significant benefits to society, but that it is important to take steps to mitigate the risks associated with AI. The government's hope is that by taking a pro-innovation approach to AI regulation, ‘the UK can become a world leader in the development and use of AI.’
The concerns this paper raises are numerous and severe, stemming primarily from the assumptions it makes and from the approaches it takes to mitigating risk. Its claims are often, as is typical of this government, hyperbolic, for example:
‘Our regulatory intervention will ensure that AI does not cause harm at a societal level, threatening democracy or UK values.’
I realise that this white paper is an enabling document, hence its subtitle, ‘A pro-innovation approach to AI regulation’, but one would expect it to take risk at least as seriously as the EU AI Act does. For example, in the whole paper there is only one mention of the alignment problem, of ensuring AI aligns with human values:
‘We recognise that industry, academia, research organisations and global partners are looking for ways to address the challenges related to the regulation of foundation models. For example, we know that developers of foundation models are exploring ways to embed alignment theory into their models. This is an important area of research, and government will need to work closely with the AI research community to leverage insights and inform our iteration of the regulatory framework. Our collaborative, adaptable framework will draw on the expertise of those researchers and other stakeholders as we continue to develop policy in this evolving area.’
Conclusion and actions (White Paper)
In the first six months following publication we will:
• Engage with industry, the public sector, regulators, academia and civil society through the consultation period.
• Publish the government’s response to this consultation.
• Issue the cross-sectoral principles to regulators, together with initial guidance to regulators for their implementation. We will work with regulators to understand how the description of AI’s characteristics can be applied within different regulatory remits and the impact this will have on the application of the cross-sectoral principles.
• Design and publish an AI Regulation Roadmap with plans for establishing the central functions (detailed in section 3.3.1), including monitoring and coordinating implementation of the principles. This roadmap will set out key partner organisations and identify existing initiatives that will be scaled up or leveraged to deliver the central functions. It will also include plans to pilot a new AI sandbox or testbed.
• Analyse findings from commissioned research projects and improve our understanding of:
  • Potential barriers faced by businesses seeking to comply with our framework and ways to overcome these.
  • How accountability for regulatory compliance is currently assigned throughout the AI life cycle in real-world scenarios.
  • The ability of key regulators to implement our regulatory framework, and how we can best support them.
  • Best practice in measuring and reporting on AI-related risks across regulatory frameworks.
In the six to twelve months after publication we will:
• Agree partnership arrangements with leading organisations and existing initiatives to deliver the first central functions.
• Encourage key regulators to publish guidance on how the cross-sectoral principles apply within their remit.
• Publish proposals for the design of a central M&E framework including identified metrics, data sources, and any identified thresholds or triggers for further intervention or iteration of the framework. This will be published for consultation.
• Continue to develop a regulatory sandbox or testbed with innovators and regulators.
In the longer term, twelve months or more after publication, we will:
• Deliver a first iteration of all the central functions required to ensure the framework is effective.
• Work with key regulators that have not published guidance on how the cross-sectoral principles apply within their remit to encourage and support them to do so.
• Publish a draft central, cross-economy AI risk register for consultation.
• Develop the regulatory sandbox or testbed drawing on insights from the pilot.
• Publish the first monitoring and evaluation report. This will evaluate how well the cross-sectoral principles are functioning and the delivery of the central functions. Performance will be measured against our framework characteristics: pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative. The report will also consider existing regulatory activity and the role of government in supporting this, including whether appropriate guidance (including joint guidance) has been issued. In the report, we will include considerations on the need for any iteration of the framework, including the need for statutory interventions.
• Publish an updated AI Regulation Roadmap which will set out plans for the future delivery of the central functions. In particular, it will assess whether a central government team is the most appropriate mechanism for overseeing the central functions in the longer term or if a more independent body would be more effective.
Conclusion: how did Google's AI, Bard, respond?
After reading through this bunch of bull, I decided to ask Bard what it made of the government's white paper; after all, consultation is a theme of the paper. Here is what it responded with:
The weaknesses of the government's approach in addressing concerns over AI are:
- It is too slow. It will take at least 18 months for the government to implement this approach, and even then, it will only be a first iteration. This is too slow for a rapidly evolving technology like AI.
- It is too bureaucratic. The government is proposing to create a new regulatory body, the AI Council, and to establish a number of central functions. This will add a layer of bureaucracy that could slow down innovation.
- It is too focused on compliance. The government is focused on ensuring that AI systems are compliant with regulations. However, this is not enough. We also need to ensure that AI systems are ethical and socially beneficial.
A better approach would be to:
- Focus on early intervention. Instead of waiting for AI systems to be developed and deployed, we need to start thinking about the ethical and social implications of AI early on in the development process.
- Encourage innovation. We need to create an environment where AI developers are free to innovate and experiment. This means avoiding excessive regulation and bureaucracy.
- Promote public debate. We need to have a public debate about the future of AI. This will help us to identify the potential risks and benefits of AI, and to develop policies that ensure that AI is used for good.
The government's approach is a step in the right direction, but it is not enough. We need to do more to ensure that AI is used for good.
I can’t argue with Bard here. It makes far more sense than this government.