
What ozone-depleting substances can tell us about governance of AGI

 



There are not many YouTubers who get it: that balance of fascination and constrained horror at what we are witnessing as AI develops, who seek out the latest papers, explain their salient points, and know which ones to choose from the multitude. Thankfully there are a very few channels, like AI Explained, and thankfully too readers of this blog, like Just Matthew, who help inspire the content.

In his latest video, published just three hours before I started writing this, the person (or persons) behind the AI Explained channel explored a number of different papers, some of which I've covered in this blog and some of which I've partially read. There are also some tasty surprises. Whilst I was researching some less than original work in order to write today's offering, I missed the launch of OpenAI's paper, 'Governance of Superintelligence'. (Do note that Altman finished his TED Talk by stating his aim of creating AGI; now the statement concerns itself with ASI. That's a significant change of intent.)

So thanks, AI Explained. Please watch the video; it's under 20 minutes long and covers far more than I will in this post.

 Back to the document from OpenAI. It starts: 

'Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.'

It ends, all too briefly, with:

'we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous...'

If you aren't concerned that a corporate entity is stating this, then I don't know what to say.

After the first few weeks of researching for these blog posts, I would have questioned the feasibility of OpenAI's opening statement. Now I hold only slight doubts. My optimism about the control of more advanced AI systems rests mostly in those slight doubts: that what is being promised, or threatened, is not possible. The more I learn, the more those doubts diminish. Do I think human institutions are up to the task of combatting my fears of AI? Well, let's take a little journey back into recent history.

When chlorine and bromine atoms come into contact with ozone in the stratosphere, they destroy ozone molecules. One chlorine atom can destroy over 100,000 ozone molecules before it is removed from the stratosphere. Ozone can be destroyed more quickly than it is naturally created. In 2000, the ozone hole reached its maximum extent since 1979 and has stopped increasing in size in subsequent years, which is attributable to the phasing out of ozone-depleting substances under the Montreal Protocol (for more information, see the EEA indicator 'Consumption of ozone-depleting substances').

This sounds great: legislative action by the world's nations can effect positive change, and what I was concerned about in the early 80s is in retreat... but there is a graph I'd like to present to you, from Copernicus:


Look at the last two years on the infographic. What's the new trend? Stopping ozone-depleting substances is a relatively simple task. Ask yourself, though: how does that compare with stopping our current form of capitalism through legislation, and by the institutions that protect capitalism? Because without such a drastic approach, how else would you propose stopping the biggest companies, and military contractors, from the further development of AI, 'Because the upsides are so tremendous'?


 
