
Flaws in Optimism: an AI future is complex

Shapiro, when discussing the GATO Framework, introduced his video with 'The Problem' as he saw it, and framed the need for the framework as an optimistic response to the two other positions he proposed people take up: Doomerism and Denialism.

Doomers, Deniers and Optimists can be mapped onto three main outcomes: Dystopia, Extinction and Utopia. The 'grey area' is presented as the shaded region occupying the middle of this triangle. Shapiro sets out 'Sympathy For' and 'Flaws With' positions for Doomerism and Denialism, then outlines his framework without much consideration of its own potential flaws. I seek to redress that in this blog. I do not seek to criticise Shapiro unduly for the dedicated work that he and his colleagues have put into GATO, and I consider their efforts a very useful addition to the discourse.

My critical response is not aimed at Shapiro alone, but at many figures in the debate: Altman, Goertzel, Leahy and Kurzweil, amongst others. Call me an admixture of Doomer, Denier and Optimist should you wish.

Doomerism could easily be equated to the millennial fears documented around the years 1000 and 2000, which were often tied to faith-like positions rather than realities. Denialism, commonly known as the ostrich effect, can be seen as a negative position, though it can also rest on more pragmatic understandings. Optimism, well, it is what gets us up in the mornings; it is so often a fiction of the imagination, yet it is always a creative response.

Below, Shapiro sets out the Doomer and Denier positions. Beneath those, I set out my own critique: the flaws with Optimism.

Sympathy for the Doomer

  • Acknowledge potential existential risks of AI

  • Understand concerns about uncontrolled AGI

  • Recognise the need for safeguards and regulations

  • Relate to current trends: stagnant wages, wealth inequality, and the Moloch problem of capitalism

  • These problems are large and complex

There’s a lot to be worried about, it’s true.


Flaws with Doomerism

  • Overemphasis on worst-case scenarios

  • Dogmatic; tend to believe catastrophe is a foregone conclusion

  • May discourage innovation and collaboration

  • Distracts from finding real solutions

  • Can lead to nihilism and fatalism

  • Irresponsible for thought leaders: fosters hopelessness and inaction

While it’s important to raise the alarm, it’s time to stop crying wolf and to get our hands dirty. No more navel-gazing and hand-wringing.


Sympathy for the Denier

  • Acknowledge slow progress towards AGI

  • Emphasise AI’s potential benefits and achievements

  • Caution against overregulation and fearmongering

  • Recognise the adaptability and resilience of humanity

  • We’ve survived 100% of everything so far!

Where’s the fire? Nothing has exploded yet. There’s some legitimacy to this mindset.


Flaws with Denialism

  • Underestimating potential risks and AI advancements

  • Ignoring possible negative consequences of inaction

  • Not acknowledging exponential growth, unintended consequences, and saltatory leaps

  • Lack of urgency in addressing AI-related challenges

  • Irresponsible for thought leaders; fosters complacency and inaction

“Nothing to see here” messaging leads to boiled frog syndrome. Exponential growth means that the time between true alarm and “too late” will be very short.
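
To make the arithmetic behind that claim concrete, here is a minimal sketch of the 'short window' argument. The doubling time and the 'alarm' and 'too late' capability levels are illustrative assumptions of mine, not figures from Shapiro or anyone else:

```python
import math

# A minimal sketch of the "short window" argument, under assumed numbers:
# if capability grows exponentially with a fixed doubling time, the gap
# between an "alarm" level and a "too late" level is only a few doublings.

doubling_time_years = 1.0  # assumed doubling time (illustrative, not sourced)
alarm_level = 1.0          # capability at which true alarm is raised
too_late_level = 8.0       # capability at which intervention no longer works

# Time to grow from alarm_level to too_late_level:
#   t = doubling_time * log2(too_late_level / alarm_level)
window_years = doubling_time_years * math.log2(too_late_level / alarm_level)
print(f"Window between alarm and 'too late': {window_years:.1f} years")  # 3.0
```

Even with an eightfold margin between alarm and 'too late', a one-year doubling time leaves only three years to act; halve the doubling time and the window halves with it.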


Sympathy for the Optimist

  • Who, in their right mind, wouldn’t want a Utopia when faced with the alternatives?


Flaws with Optimism - and the GATO Framework's Assumptions

  • Overemphasis on the inevitability of AGI / ASI, not recognising the claims of Smith and others that AGI is impossible; that 'AI ethics' is a misnomer, since AI goals are simplistic reward-based models rather than ethics as such; and that, as AI has no will or mind, motivation beyond simple rewards cannot exist. It is not even the equivalent of a dog.

  • No acknowledgement that implementing the heuristic imperatives may never be possible; AI alignment may therefore be a mirage.

  • Lack of concern for monopoly positions of Tech providers.

  • Lack of sufficient acknowledgement of the collateral damage that will accrue from current narrow AI usage, and of its unintended consequences.

  • No acknowledgement of the hardware and algorithmic advances needed to expand AI far beyond its near-future potential – such as quantum systems – and the difference these may make to the speed of development.

  • Lack of context for these developments – particularly the environmental context.

  • Over-optimism that the world's fundamental problems - inequality, the environmental crisis, capitalism itself - will be solved as soon as AGI / ASI is reached.

  • Over-reliance on policy and regulation being effective against the significant challenges raised by AI, never mind AGI / ASI.

  • Refusal to recognise the fragility of the biosphere supporting humanity.

  • Faith in ‘stakeholder capitalism’ being robust, with buy-in promoted as an effective solution.

  • Too little acknowledgement that current forms of capitalism can adapt to these developments, and that authoritarianism, already increasing across the political sphere, is a response that could well intensify.

  • Denial about the impact that mineral shortages, driven by continued exploitation of the world's resources, are having and will have on the tech industry.

  • No acknowledgement of the supply issues that geopolitics and environmental constraints are creating for hardware production.

  • Over-reliance on AI improving GDP growth as a desired outcome, when the climate crisis indicates otherwise.

  • Overemphasis on the utility of GDP as an indicator of human needs.

  • Not recognising that national security can be put at risk by an AI arms race, with nations seeking to secure natural resources due to pressures on the biosphere. Palantir have demonstrated where this arms race is heading.

  • Denial of the fact that current AI is nothing like human intelligence: it is an alien intelligence, and if it can be expanded upon, any ensuing AGI / ASI would be uncontrollable precisely because it is alien.

  • Media engagement with traditional media and content creators fundamentally misunderstands how the global media works in practice.

  • No acknowledgement of a possible media backlash against AI technologies, should civil unrest occur due to mass job losses and/or other outcomes arising from their mass adoption.

  • Depending on policy advocacy – such as Altman appearing at the Senate hearings yesterday – is too bureaucratic and too slow.

  • Over-dependence on legislators will only ever enrich a small niche of lawyers, and is too reactive.

  • No acknowledgement of the many risks that may arise from Open Source / decentralised distribution of AI / AGI / ASI systems.

  • No acknowledgement of the need to sandbox any AGI / ASI, a need which may well be impossible to meet.

As I have developed this blog I have covered, or at least commented on, most of the issues above, and may well add more as understanding develops. Feel free to discuss.
