
Month One: A traveler's notes from the AGI Rabbit Hole

 




I titled this blog "CHARTING THE EMERGENCE OF AGI?" with a subtitle of "6 months to AGI?" Both the title and subtitle express doubt about the outcome through their question marks. After my first week, I felt compelled to write up my initial findings. In today's post, I present my findings after the first month of this blog. It remains an ongoing hypothesis: some views have changed a little, others have been further informed and added to. This post contains some fairly lengthy lists, so I will only repeat the exercise when I conclude this blog in five months' time.

I will once again present the developing hypothesis within the same structure I have used to frame the different elements of discourse I have come across and wish to cover.

Of course, I am not the only person charting developments in the field of AI/AGI/ASI. I never expected to be, but prior to writing this blog I didn't follow any particular authors' work in this field of inquiry. For this quick monthly catch-up on where I think we might be, I am going to include as a reference Dr Alan Thompson's "Countdown to AGI" video. His assessment, and he has been covering this topic for many years, was that as of March 2023 we were 42% of the way to AGI. A lot of developments have occurred since, which by his reasoning might push that figure up by another couple of percentage points. Coming into this week, I have been more skeptical. I will view Thompson's video after completing this blog post, and include a link to it at the end.

As ever, I may be wrong on any and all of these points, but this is where I have got to in my thinking.

Speculations

  • The opportunity for AGI lies in the distributed links that agent AIs make to perform specific tasks. It is also likely to require new hardware and algorithms.
  • No AGI is possible without persistent memory to which AI agent results are committed.
  • Hardware developments, and the mass deployment of systems such as Nvidia's DGX H100, will be required for agencies to see what narrow AIs working in cooperation at scale can bring to more general problems.
  • There are many assumptions being made in the AI space: partly due to competing world views and differences in the training, employment, and understanding of those making the claims; partly due to the different models people have experience of using. What's clear is that no single person is capable of comprehending the entirety of the field, due to the speed of change, and it's becoming equally clear that no single body, whether legislative or regulatory, has this ability either, due to the potentially emergent capabilities of Transformer-based LLMs.
  • The 'emergent' capabilities of AI are being called into question; they are 'likely to be a mirage' according to a recent study.
  • Conflating AI intelligence with animal/human intelligence, or with sentience, remains a stretch at best, and hyperbole or misleading at worst. (Update: the term 'digital intelligence' seems more apt, or 'alien intelligence', as it only apes human intelligence but is decidedly different.)
  • Calling AGIs 'God', even before superintelligence is viable, is not a useful response.
  • Conflating AGIs with popular fictional representations in film can be highly misleading, though such works do represent thought experiments that artists have previously played out, particularly around AI ethics.
  • Solving the alignment problem is unlikely.
  • Taking a Schmidhuber-like 'this is inevitable, let's just deal with the consequences optimistically and without fear' approach is both naïve and insensitive to many well-founded fears.
  • Making AGI systems 'fit for purpose' depends upon the purpose. Expectations about purpose will have to be compromised and tailored to existing circumstances.
  • Periods of civil unrest seem inevitable, given the current trajectory and the job losses that will ultimately accrue from the deployment of AI technologies by both the commercial and public sectors.
  • Certain types of government will be particularly attracted to sophisticated narrow AI models, for many reasons: efficiencies; cost cutting on jobs, with their pesky striking, complaining humans who do not accept compliance with the bullying of government ministers, for example; and uses by the military, civil forces, especially the police, and the intelligence services.
  • We have seen the last human US Presidential race take place. Major elections in the US, the UK, and many other countries will no longer be determined solely by human-generated information.
  • Any AGI will not be equivalent to human intelligence; it will remain an alien/digital intelligence, however it is marketed.
  • AGIs are possible, given the constraints of the above point.
  • The 'first sparks of AGI' are unlikely to come via OpenAI, despite what the infamous paper stated, and far more likely to come via the Open Source community.
  • Many current AI tools remain clunky from a UX perspective; expect that to change.
  • Many Open Source AI tools require a great deal of fiddling with repositories and the like to get something working to a user's specification, as was the case in the early days of internet technologies. Expect little change here.
  • Expect many existing tools to become plugins of other tools, with competition in this area becoming fierce: OpenAI have promised it, Microsoft will probably beat them to a public launch, and the rest will rush to follow. This holds the promise that AI assistants will rapidly develop in their usefulness to users and in their ubiquity.
  • Expect many more bogus, short-lived 'jobs', such as 'prompt engineer', to develop and disappear just as rapidly.
  • Expect LLMs to rapidly develop Theories of Mind from 'inherent emergent capabilities' (currently disputed, see above), thereby presenting themselves as 'more human-like' and subsequently fooling people into believing machines have qualities of mind that do not and cannot exist in current technologies.
  • AI tools are already taking jobs. Estimates vary depending on sources, but we should perhaps anticipate 3% of jobs being affected by these tools in the next year, and 30% effectively replaced by the mid-2030s, which may turn out to be a conservative projection.
  • From what I've seen so far, quantum computing may not be the best platform for AI: it requires different approaches to programming, different programming languages, and so on (see the short sketch after this list). If, however, future AI developments are made to fit quantum computing in an efficient way, AI applications could see yet another exponential rise in development.
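
To give a flavour of that last point: programming a quantum computer means composing gate operations on qubits rather than, say, writing loss functions over tensors. A minimal sketch, using the Qiskit library (my choice of illustration, not a tool discussed above), of a two-qubit entangling circuit:

```python
# Minimal Qiskit sketch: build and display a two-qubit Bell-state circuit.
# Illustrative only - the point is how unlike conventional ML code this is.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)   # two qubits, two classical bits
qc.h(0)                     # Hadamard gate: put qubit 0 into superposition
qc.cx(0, 1)                 # CNOT gate: entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # measure both qubits into the classical bits

print(qc.draw())            # ASCII diagram of the circuit
```

Nothing here resembles a neural network training loop, which is the point: mapping today's AI workloads onto this programming model is a research problem in itself.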

Legislation and Regulation

  • Alignment issues will remain problematic throughout the course of narrow AI development. It is highly unlikely that legislation will ever be capable of dealing with this subject in a way that is fit for purpose.
  • Hiding one’s head in the sand about AIs is not a useful response.
  • Legislation as an attempt to control development remains highly problematic; the documents I've reviewed from the USA, the EU, and some of its member states have been inadequate, barely able to grasp the problems that AI threw up last year, never mind last month. Most frameworks look to enable AI development as a priority, such as the German federal AI strategy. AI development will now always be too fast for reactive legislation. The UK's effort in this field so far is frankly embarrassing. UK government legislation, take for example the recent Online Safety Bill, is totally inadequate for dealing with last decade's problems, and should not be taken seriously as a counter to the issues society presently faces from digital technologies. The AI White Paper, and any legislation proposed off the back of it, will be of no assistance either. This is not speculation.
  • Open Source AI development throws a large wrench, in practice, into developing effective regulatory and legislative frameworks, the Wizard-Vicuna LLM being a prime example. The genie has escaped from the bottle, folks.
  • The winners of the productivity changes that AI brings forth will remain the system elites that created them: in China, for example, within its hybrid capitalist system; in the West, an increasingly small number of the tech elite; and, to an extent, the apparatus of nation states. Benefits will be highly constrained for the mass of people, and may well be detrimental to large minorities. This is not speculation. Jobs, quality of life, and the quality of interaction with commercial and public services will all change, rapidly.
  • An AI war, with weapons, is already taking place in Europe. (Any human rights charges that may accrue from such usage will not be filed at The Hague - speculation.) This causes issues, not least because this knowledge is not part of current discourse, and therefore scrutiny and public concern are unlikely to be aired.
  • AI tools can be, and are being, used nefariously. Regulation and legislation are little, if any, deterrent to this.
  • Expecting a cartel of tech companies to be able and willing to police themselves via a standards body or the like is fanciful. Such bodies struggled even with internet technologies, despite the W3C and the best efforts of Tim Berners-Lee, amid proprietary approaches aplenty and monopoly giants always seeking to 'lead'. There are far more considerations involved in AI.

Tools

  • Current AI tools affect 'white collar' work in post-industrialised nations the most.
  • Current AI tools remain riddled with biases, deceptions, and 'hallucinations'.
  • Current AI tools have data gatekeepers enacted to present 'acceptable' results to consumers of their services.
  • Current AI tools already increase productivity in many white collar fields to an extent, particularly in more entry-level areas; I expect this to grow significantly in the forthcoming days, weeks, and months.
  • Current AI tools are projected with vast amounts of hype, both negative and positive, often from the makers of such tools. Many of those with a long-standing interest in the field over-hype capabilities too; confirmation bias by authors is readily apparent.
  • Development of narrower expert systems, where LLMs are trained on subject-specific knowledge areas, such as Climate Q&A or Palantir's AIP, will undoubtedly come to fruition (a toy sketch of one common pattern follows this list).
  • AI tools can and are being used nefariously. (See Legislation and Regulation above).
  • AI tools are already changing many workplace jobs, and are taking jobs, even in the IT and creative industries (see Speculations above for the anticipated outcomes).
  • There is a large range of highly useful and productive tools, especially in the fields of medicine and biology, that I'm aware of but have insufficient knowledge of to add much to any discussion.
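
On the expert-systems point above: rather than retraining an LLM from scratch, tools in this vein commonly retrieve passages from a domain corpus and prepend them to the prompt. A purely illustrative Python sketch of that retrieve-then-prompt pattern; the corpus, scoring function, and prompt template here are all hypothetical stand-ins:

```python
# Toy sketch of domain-grounded Q&A: retrieve the best-matching passage
# from a (hypothetical) corpus, then build a prompt around it.
from collections import Counter

CORPUS = {
    "temperature": "Global surface temperature has risen faster since 1970 "
                   "than in any other 50-year period.",
    "mitigation": "Deep and rapid emissions cuts this decade are required "
                  "to limit further warming.",
}

def score(query: str, doc: str) -> int:
    """Crude bag-of-words overlap; real systems use embedding similarity."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def build_prompt(question: str, k: int = 1) -> str:
    """Retrieve the k best-matching passages and prepend them to the question."""
    ranked = sorted(CORPUS.values(), key=lambda doc: score(question, doc), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

# The assembled prompt would then be sent to whichever LLM backs the tool.
print(build_prompt("How fast has global surface temperature risen?"))
```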

Critical Responses

  • We have a catch-22 situation: critical responses without an understanding of the strengths, weaknesses, opportunities, and threats (SWOT) that narrow AI and AGI represent are unhelpful, yet keeping up with the developments may not be possible for any human.
  • As ever, the range of responses is varied: from tech billionaires calling for a delay in development under the guise of being helpful, while in reality they are playing catch-up; to those who are idolatrous of tech; to institutions confined by the missions they practise.
  • There are strong arguments that machines, certainly as they are currently engineered, can never approach AGI.
  • There are arguments that we are nearing the midway point in AIs becoming AGIs, although a major step along that path, as far as I understand the argument, is for the machine to become better able to explore and take data input from its environment. If that is the case, and it makes sense to me, then AI-capable robotics represents the most significant milestone to date along such a path. (See video below.)
  • The most useful critical take on AI ethics I've come across so far has been from UNESCO. (Note to self: more reading is required.)
  • Geoffrey Hinton's reasons for leaving Google, and the fears he aired, should not be ignored. He was hardly the first person to voice such concerns (Emily Bender, for example, seems to have been wrongly diminished in her calls), but he is the most prominent.
  • There are arguments for disrupting AI, on the grounds that many of its higher-level tools only enable 'fascistic futures'.
  • Economics is already a constraint on current AI development, with capitalism a significant driver of AI outcomes and consequences.
  • There is no doubt that current AI is damaging the long-term environmental sustainability of societies, and that eco-techno solutions are really no solutions at all; they are just an excuse for continuing the same approaches that got us into this condition.
  • There are notable concerns that the ulterior motives of a small minority of extremely wealthy tech owners/CEOs who hold extreme views are a danger to us all. Douglas Rushkoff first brought this 'Survival of the Richest' idea to my attention a few years ago in a TED Talk.
  • The leaked Google memo reveals much about the exponential development the Open Source community is also bringing to AI. This can be argued as an opportunity or as an added threat.


