
Working hypothesis, after a week of peering down the AGI rabbit hole.

I started this blog to chart and comment on the rapid changes taking place in the development of narrow AI, to see what sort of path we are on and whether, over the course of six months, there is any possibility we’ll see an emergent AGI.

Any findings are part of an ongoing hypothesis.

Speculations

  • The opportunity for an AGI lies in the distributed links that agent AIs make to perform specific tasks.
  • No AGI is possible without agent results being committed to memory (see the sketch after this list).
  • Hardware developments, and the mass deployment of, for example, Nvidia DGX H100s, will be required for agencies to see what narrow AIs working in cooperation at scale can bring to more general problems.
  • AI self-learning, that is, the assumption that an AI can improve upon itself, is unproven. Many assumptions are being made in the AI space.
  • Conflating AI intelligence with animal or human intelligence, or with sentience, remains a stretch at best and misleading hyperbole at worst.
  • Calling AGIs ‘God’, even before superintelligence is viable, is not a useful response.
  • Conflating AGIs with popular film fictional representations can be highly misleading.
  • Solving the alignment issue is unlikely.
  • Making AGI systems 'fit for purpose' depends upon the purpose. Expectations about purpose will have to be compromised and tailored to existing circumstances.
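
To make the memory point concrete, below is a minimal sketch, in Python, of an agent loop that commits every task result to a persistent store so that later tasks can recall earlier ones. All names here (AgentMemory, run_agent, the JSONL file) are hypothetical illustrations, not taken from any existing agent framework.

```python
import json
from pathlib import Path

# A minimal sketch (all names hypothetical) of committing agent results
# to persistent memory, so later tasks can build on earlier ones.

class AgentMemory:
    """Append-only store of task results, persisted as JSON lines."""

    def __init__(self, path: str = "agent_memory.jsonl"):
        self.path = Path(path)

    def commit(self, task: str, result: str) -> None:
        # Every result is committed; nothing an agent does is lost.
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps({"task": task, "result": result}) + "\n")

    def recall(self) -> list[dict]:
        # Return every previously committed result, oldest first.
        if not self.path.exists():
            return []
        with self.path.open(encoding="utf-8") as f:
            return [json.loads(line) for line in f]

def run_agent(task: str, memory: AgentMemory) -> str:
    # Stand-in for a narrow AI performing one specific task; a real
    # agent would call a model here, informed by memory.recall().
    prior = memory.recall()
    result = f"completed {task!r} with {len(prior)} prior results in memory"
    memory.commit(task, result)
    return result

if __name__ == "__main__":
    memory = AgentMemory()
    for task in ["summarise report", "draft email", "plan next step"]:
        print(run_agent(task, memory))
```

The point of the sketch is the append-only, shared record: each narrow task leaves a trace that other agents, or later runs, can build on, which is what the cooperation-at-scale bullet above is pointing at.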

Legislation

  • Alignment issues will remain problematic during the course of narrow AI development.
  • Burying one’s head in the sand about AI is not a useful response.
  • Legislation as an attempt to control development remains highly problematic; the documents I’ve reviewed from the USA, the EU, and some of its member states have been inadequate, barely grasping the problems that AI threw up last year, never mind last month. AI development will now always be too fast for reactive legislation.
  • The gains from the productivity changes that AI brings forth will remain with the system elites that created them: in China, for example, the hybrid capitalist system; in the West, an increasingly small number of the tech elite. Benefits will be highly constrained for the mass of people, and may well be detrimental to large minorities. This is not speculation.

Tools

  • Current AI tools affect ‘white collar’ work in post-industrialised nations the most.
  • Current AI tools remain riddled with biases.
  • Current AI tools have data gatekeepers enacted to present ‘acceptable’ results to consumers of their services (see the sketch after this list).
  • Current AI tools already increase productivity, to an extent, in many white collar fields, particularly in more entry-level areas; I expect this to grow significantly in the forthcoming days, weeks, and months.
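
To make the gatekeeper point concrete, here is a minimal sketch assuming a simple term-based policy; the policy list, function names, and refusal message are all hypothetical stand-ins for whatever rules a provider actually enacts.

```python
# A minimal sketch (names and policy hypothetical) of the gatekeeper
# pattern: raw model output passes through a filter before the consumer
# sees it, so 'acceptable' means whatever the filter's rules define.

BLOCKED_TERMS = {"example-banned-topic"}  # placeholder policy, not a real list

def gatekeeper(raw_output: str) -> str:
    """Return the model output, or a refusal if it trips the policy."""
    if any(term in raw_output.lower() for term in BLOCKED_TERMS):
        return "I'm sorry, I can't help with that."
    return raw_output

def serve(query: str, model) -> str:
    # The consumer only ever receives the gated result.
    return gatekeeper(model(query))

if __name__ == "__main__":
    fake_model = lambda q: f"answer to: {q}"  # stand-in for a real model call
    print(serve("a harmless question", fake_model))
```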

Critical Responses

  • We have a catch-22: critical responses without an understanding of the strengths, weaknesses, opportunities, and threats (SWOT) that narrow AI and AGI represent are unhelpful, yet keeping up with the developments may not be possible for any human.
  • As ever, the range of responses is varied: from tech billionaires calling for a delay in development under the guise of being helpful, when in reality they are playing catch-up, to those who are idolatrous of tech, to institutions confined by the missions they practise.
