
Blair and Hague step into the AI debate

This blogpost will be added to over the coming days, perhaps weeks, as it is in response to a report published today, 13th June 2023. I am writing this in the morning and will need time to read the report through in detail. However, it is important enough for me to give my initial impressions.

At first glance it seems a comprehensive report, with some interesting areas for debate and an acknowledgement of the potentially transformative effect of such technology on states, yet the solutions offered are rather predictable and too state-orientated. It is ultimately about power: how the power of corporations co-exists with the power of the state, and what a future symbiotic co-existence might look like. There are the now-usual calls for the UK to be elevated as a centre of AI safety (which seems geopolitically unrealistic). The potential 'benefits' seem overplayed and the potential dangers underplayed.

One fear that such interventions are beginning to raise lies in the premise of such reports: that 'transformative' means 'good enough'. By that I mean I am sure AI can 'transform' public services, making some service delivery 'more efficient', but by lowering the standard to good enough rather than human-level quality. I will get around to expanding upon this in the future. In the meantime, here is the link to the report and its executive summary.


This is the executive summary of the joint Blair-Hague report, 'A New National Purpose: AI Promises a World-Leading Future of Britain'.

'Artificial intelligence (AI) is the most important technology of our generation.

Getting the policy right on this issue is fundamental and could determine Britain’s future. The potential opportunities are vast: to change the shape of the state, the nature of science and augment the abilities of citizens.

But the risks are also profound and the time to shape this technology positively is now.

For the United Kingdom, this task is urgent. The speed of change in AI underlines everything that was laid out in our first New National Purpose report, which called for a radical new policy agenda and a reshaping of the state, with science and technology at its core.

First, the state must be reoriented to this challenge. Major changes are needed to how government is organised, works with the private sector, promotes research, draws on expertise and receives advice.

Recommendations to achieve this include:

  • Securing multi-decade investment in science-and-technology infrastructure as well as talent and research programmes by reprioritising large amounts of capital expenditure to this task.
  • Boosting how Number 10 operates, dissolving the AI Council and empowering the Foundation Model Taskforce by having it report directly to the prime minister.
  • Sharpening the Office for Artificial Intelligence so that it provides a better foresight function and agility for government to deal with technological change.

Second, the UK can become a leader in the development of safe, reliable and cutting-edge AI – in collaboration with its allies. The country has an opportunity to construct effective regulation that goes well beyond existing proposals yet is also more attractive to talent and firms than the approach being adopted by the European Union.

Recommendations to achieve this include:

  • Creating Sentinel, a national laboratory effort focused on researching and testing safe AI, with the aim of becoming the “brain” for both a UK and an international AI regulator. Sentinel would recognise that effective regulation and control is and will likely remain an ongoing research problem, requiring an unusually close combination of research and regulation.

Finally, the UK can pioneer the deployment and use of this technology in the real world, building next-generation companies and creating a 21st-century strategic state.

Recommendations to achieve this include:

  • Launching major AI-talent programmes, including international recruitment and the creation of polymath fellowships to allow top non-AI researchers to learn AI as well as leading AI researchers to learn non-AI fields and cross-fertilise ideas.
  • Requiring a tiered-access approach to compute provision under which access to larger amounts of compute comes with additional requirements to demonstrate responsible use.
  • Requiring generative-AI companies to label the synthetic media they produce as deepfakes and social-media platforms to remove unlabelled deepfakes.
  • Building AI-era infrastructure, including compute capacity, and remodelling data as a public asset with the creation of highly valuable, public-good data sets.

It is critical to engage the public throughout all of these developments to ensure AI development is accountable and give people the skills and chance to adapt. The UK has both the responsibility and opportunity to lead the world in establishing the framework for safe AI.'

One thing I can immediately agree on is that constructing effective AI policy at either a national or international level is going to be a severe challenge, though I come to the conclusion that it will be much harder than this rather optimistic report assumes. I for one would not 'reprioritise large amounts of capital expenditure to this task' without many safeguards being in existence and proven to work.

Update 14 June 2023

As I did with the government's white paper, I posed to Bard the question of what the report's weaknesses are as regards practicality. Here are Bard's comments:

'The report "A New National Purpose: AI Promises a World-Leading Future of Britain" by Tony Blair and William Hague has several weaknesses as regards practicality.

First, the report is too ambitious. The authors recommend that the government invest heavily in AI research and development, create a national AI strategy, and promote the use of AI in businesses and public services. While these are all laudable goals, they are also very expensive. It is not clear that the government is willing to commit the necessary resources to AI.

Second, the report does not take into account the potential risks of AI. AI has the potential to be used for malicious purposes, such as creating autonomous weapons or spreading disinformation. The report does not address these risks, and it is unclear how the government would mitigate them.

Third, the report does not provide a clear timeline for achieving its goals. The authors say that the UK should be a world leader in AI within a decade, but they do not say how this will be achieved. It is unclear how the government will accelerate the development of AI and ensure that the UK is not left behind by other countries.

Overall, the report is a good starting point for a discussion about the role of AI in the UK economy and society. However, it is not a practical roadmap for achieving the government's goals in AI. The report is too ambitious, it does not take into account the potential risks of AI, and it does not provide a clear timeline for achieving its goals.

Here are some additional thoughts on the report's practicality:

  • The government needs to be more realistic about the amount of money it is willing to spend on AI. The authors' recommendations are very expensive, and it is not clear that the government is willing to commit the necessary resources.
  • The government needs to develop a comprehensive plan to mitigate the risks of AI. This plan should include measures to prevent AI from being used for malicious purposes, such as creating autonomous weapons or spreading disinformation.
  • The government needs to set a clear timeline for achieving its goals in AI. This timeline should be realistic and achievable, and it should be based on a sound understanding of the challenges and opportunities of AI.'

I concur with Bard, but will make some additional observations. As widespread commercial AI usage is still in its initial stages, it is not yet clear what the best use cases for these technologies are as regards significant productivity gains that will affect the economy. What has been apparent is that large tech companies have expressed their intention to cut thousands of jobs with 'AI' as the excuse.

The only recommendation on this subject is more woolly than any sheep: 'The Office for Artificial Intelligence should use AI tools to analyse labour-market surveys, job adverts and news stories on redundancies to produce a live dashboard with assessments of what tasks, roles and jobs are being disrupted by AI today and in the medium term. This analysis, produced alongside the Bank of England, the ONS and unions, would help the government direct retraining efforts by providing a rich, live analysis of which jobs, industries and communities are being affected by AI.'

There is the report authors' assumption that AI technology could do this job well, or at least well enough to be of utility; this is not a proven case. As for 'retraining', this or even successive governments hardly have a strong track record in the field. This one topic typifies much of the report: it seems 'well intentioned', though that is highly debatable, but in reality it isn't much better than the government's white paper from March in terms of practical policy.
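
To make concrete how large an assumption that is, here is a minimal sketch of what such a 'disruption dashboard' pipeline might look like. Everything in it is hypothetical: the data, the names and, above all, the crude keyword heuristic standing in for whatever 'AI tools' the report has in mind. The report itself specifies none of this.

```python
# Toy sketch of the report's proposed "live dashboard" of AI-driven job
# disruption. All names, data and the classifier are hypothetical stand-ins;
# nothing here is specified by the report.
from collections import Counter
from dataclasses import dataclass

@dataclass
class JobAdvert:
    title: str
    sector: str
    description: str

# A crude keyword heuristic standing in for the "AI tools" the report
# assumes can reliably spot roles being disrupted by AI.
AI_EXPOSURE_KEYWORDS = {"automation", "chatbot", "generative ai", "llm"}

def exposed_to_ai(ad: JobAdvert) -> bool:
    """Flag an advert whose description mentions AI-driven change."""
    text = ad.description.lower()
    return any(keyword in text for keyword in AI_EXPOSURE_KEYWORDS)

def dashboard_summary(ads: list[JobAdvert]) -> dict[str, float]:
    """Share of adverts per sector flagged as exposed to AI."""
    totals, exposed = Counter(), Counter()
    for ad in ads:
        totals[ad.sector] += 1
        if exposed_to_ai(ad):
            exposed[ad.sector] += 1
    return {sector: exposed[sector] / totals[sector] for sector in totals}

if __name__ == "__main__":
    sample = [
        JobAdvert("Copywriter", "media",
                  "Copy team being replaced with generative AI tools."),
        JobAdvert("Nurse", "health", "Ward-based clinical role."),
        JobAdvert("Support agent", "retail",
                  "Chatbot handles triage; humans take escalations."),
    ]
    print(dashboard_summary(sample))  # e.g. {'media': 1.0, 'health': 0.0, 'retail': 1.0}
```

Even this toy version makes the difficulty visible: the classification step, reduced here to keyword matching, is precisely the part the report takes on trust, and swapping in a large language model for the heuristic would make the output more fluent without making it any more proven.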


