
Don't Look To Sunak To Effectively Regulate AI

 


As reported in the Guardian, Sunak was speaking on the plane to Japan for the G7 summit, where AI will be discussed. He said a global approach to regulation was needed. “We have taken a deliberately iterative approach because the technology is evolving quickly and we want to make sure that our regulation can evolve as it does as well,” he said. “Now that is going to involve coordination with our allies … you would expect it to form some of the conversations as well at the G7.

“I think that the UK has a track record of being in a leadership position and bringing people together, particularly in regard to technological regulation in the online safety bill … And again, the companies themselves, in that instance as well, have worked with us and looked to us to provide those guard rails as they will do and have done on AI.”

The white paper on AI regulation the government introduced in March directly contradicts Sunak's statements, as I've written about before. It is all about enabling AI companies and largely ignores regulation. As for 'bringing people together' in regard to the 'online safety bill', the real result was bringing together criticisms the government refused to address in amendments. A good example comes from Article 19, who point out the human rights concerns: the Bill ignores the platforms' business model that amplifies harmful content and gives them too much power over users' speech, and it relies on algorithmic moderation that often removes legal content and harms freedom of expression.

As Paul Bernal pointed out in his blog: 'As it is, the Online Safety Bill looks likely to attack the symptoms rather than the causes of online harms. Unless it finds a way to address the underlying problems – and to confront the massive blind spot it has for the role of politicians and journalists – it will be just yet another massive game of Whac-A-Mole, doomed to failure and disappointment.'

One only has to look at the track record of this and previous administrations to realise that no effective AI regulation will emanate from the UK before it's too late.


