Is an AGI even required to achieve similar results?


Drexler's 2019 technical report on Comprehensive Artificial Intelligence Services seems useful to revisit at this time. Instead of focusing on the hypothetical scenario of a single superintelligent agent that surpasses human intelligence, the report argues, we should consider the more realistic possibility of a diverse and interconnected network of AI systems that provide various services for different tasks and domains. Drexler calls this approach Comprehensive AI Services (CAIS).

The main advantages of CAIS are that it avoids some of the conceptual and technical difficulties of defining and measuring intelligence, and that it allows for a more fine-grained and flexible analysis of the potential benefits and risks of AI. 

It's also a useful lens for considering where we have arrived, with agents such as AgentGPT operating via Hugging Face, or AutoGPT, for example. By connecting a range of narrow AI tools, each performing the task it is optimised for, and having a 'manager' allocate those tasks and supply the correct prompt to each agent, could this 'comprehensive' approach provide results similar to an AGI? A rough sketch of the idea follows below.
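To make the idea concrete, here is a minimal sketch (in Python) of what such a 'manager plus narrow services' arrangement might look like. The service names, the keyword-based routing, and the prompt format are all illustrative assumptions of mine, not drawn from Drexler's report or from any particular framework such as AutoGPT or AgentGPT.

```python
# Hypothetical sketch of a 'manager' dispatching tasks to narrow AI services.
# The services and the routing logic are placeholders for illustration only.

from typing import Callable, Dict

# Each "service" stands in for a narrow tool optimised for one kind of task.
def summarise(prompt: str) -> str:
    # Placeholder for a summarisation model or API call.
    return f"[summary produced from a prompt of {len(prompt.split())} words]"

def translate(prompt: str) -> str:
    # Placeholder for a translation model or API call.
    return f"[translation produced from: {prompt[:40]}...]"

def review_code(prompt: str) -> str:
    # Placeholder for a code-analysis tool.
    return f"[review notes for input of length {len(prompt)}]"

SERVICES: Dict[str, Callable[[str], str]] = {
    "summarise": summarise,
    "translate": translate,
    "review": review_code,
}

def manager(task: str, payload: str) -> str:
    """Route a task description to the narrow service best suited to it.

    A real system would use an LLM-based planner here; this sketch uses
    simple keyword matching purely to show the shape of the architecture.
    """
    for keyword, service in SERVICES.items():
        if keyword in task.lower():
            # The manager constructs the prompt each narrow agent receives.
            prompt = f"Task: {task}\nInput: {payload}"
            return service(prompt)
    return "No suitable service found for this task."

if __name__ == "__main__":
    print(manager("Please summarise this report", "Comprehensive AI Services..."))
    print(manager("Translate the abstract", "Reframing Superintelligence..."))
```

The point of the sketch is only the division of labour: the 'manager' decides which narrow tool to invoke and how to prompt it, while each tool stays specialised, which is the CAIS-style alternative to a single monolithic agent.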

The report suggests that CAIS can help us better align AI systems with human values and goals, by enabling more human oversight and collaboration, and by fostering a culture of responsibility and accountability among AI developers and users. That seems far more plausible than trying to do the same with a monolithic AGI.

The report concludes by outlining some of the open questions and challenges that CAIS poses for AI research and governance, such as how to ensure the reliability, security, and interoperability of AI services, how to balance the trade-offs between centralisation and decentralisation of AI systems, and how to promote ethical and social norms for AI use and development. These are questions that exist for all AI systems.


