Working hypothesis, after a week of peering down the AGI rabbit hole.

I started this blog to chart and comment on the rapid changes taking place in the development of narrow AI, to see what sort of path we are on and whether, over the course of six months, there is any possibility of an emergent AGI.

Any findings are part of an ongoing hypothesis.

Speculations

  • The opportunity for an AGI lies in the distributed links that agent AIs make to perform specific tasks.
  • No AGI is possible without memory being committed to AI agent results.
  • Hardware developments, and the mass deployment of hardware such as Nvidia DGX H100s, will be required for agencies to see what narrow AIs working in cooperation can bring to more general problems at scale.
  • AI self learning, with the assumption of AI improving upon itself, is unproven. There are many assumptions being made in the AI space.
  • Conflating AI intelligence with animal/human intelligence with sentience remains a stretch at best, hyperbole and misleading at worst.
  • Calling AGIs ‘God’, even before superintelligence is viable, is not a useful response.
  • Conflating AGIs with popular film fictional representations can be highly misleading.
  • Solving the Alignment issue is unlikely.
  • Making AGI systems ‘fit for purpose’ depends upon the purpose. Expectations about purpose will have to be compromised and tailored to existing circumstances.

Legislation

  • Alignment issues will remain problematic, during the course of narrow AI development.
  • Hiding one’s head in the sand about AIs is not a useful response.
  • Legislation as an attempt to control development remains highly problematic; the documents I’ve reviewed from the USA, the EU and some of its member states have been inadequate, barely grasping the problems that AI threw up last year, never mind last month. AI development will now always be too fast for reactive legislation.
  • The winners of the productivity changes that AI brings will remain the system elites that created it: in China, for example, the hybrid capitalist system; in the West, an increasingly small number of the tech elite. Benefits will be highly constrained for the mass of people, and may well be detrimental to large minorities. This is not speculation.

Tools

  • Current AI tools affect ‘white collar’ work in post-industrialised nations the most.
  • Current AI tools remain riddled with biases.
  • Current AI tools have data gatekeepers in place to present ‘acceptable’ results to consumers of their services.
  • Current AI tools already increase productivity in many white-collar fields to an extent, particularly in more entry-level areas; I expect this to grow significantly in the forthcoming days, weeks and months.

Critical Responses

  • We have a catch-22: critical responses without an understanding of the strengths, weaknesses, opportunities and threats (SWOT) that narrow AI and AGI represent are unhelpful, yet keeping up with the developments may not be possible for any human.
  • As ever, the range of responses is varied: from tech billionaires calling for a delay in development under the guise of being helpful, while in reality they are playing catch-up, to those who are idolatrous of tech, and to institutions confined by their practising mission.
