
Let's not ignore the Loxodonta in the room

When researching the leaders, CEOs, pioneers and advocates of AI, there's something that should not be ignored: the number of them who hold, how should I say, fringe views. They may have transhumanist tendencies, often with outright support for the 'potential' of augmenting humans with technology. They may have faith in nanotechnologies to wire technology directly into the human cortex. Or they may hold both views, as in the case of Ray Kurzweil, as revealed in his interview with Lex Fridman. I first came across the idea of Transhumanism after visiting a self-proclaimed Transhumanist artist back in the late 1980s. I was, frankly, horrified by the hubris of it all.

These are far from the only views commonly held by what I call tech-evangelists, who always, always anthropomorphise technologies, which somewhat gives the game away.

Another common idea is that the AI 'singularity', a term borrowed from physics, is inevitable. In this usage, the singularity is the point at which superintelligence becomes an event horizon we cannot peer beyond: the moment when, according to some, we will have created a superhuman or, as some call it, a 'God'.

Today I came across an excellent blog by Johannes Jäger, who dives into this murky realm and is, thankfully, not afraid to call out the elephant in the room of the tech-evangelists. I encourage you to have a read, especially the post 'Machine Metaphysics and the Cult of Techno-Transcendentalism'.

I'm currently in my second week of blogging about AGI, and I know I should have expected to come across the varied viewpoints of tech-evangelists by this point. I'm still surprised, though, at the prevalence of the increasingly perverse worldviews held by those who wish to 'transform our lives'.


