Merging with AI, the Transhumanists' Gamble

 


In his presentation on 17 July 2019, Elon Musk said that ultimately he wants "to achieve a symbiosis with artificial intelligence." Even in a "benign scenario," he argued, humans would be "left behind." Musk therefore wants to create technology that allows a "merging with AI."

Neuralink is developing a brain-computer interface that aims to connect the brain directly to a computer. Imagine controlling your devices, accessing information, and communicating with others using only your thoughts. The firm plans to insert a sensor smaller than a fingertip, possibly under local anesthesia alone. A surgical robot will implant thin wires, or threads, in brain regions that control movement and sensation. The implant connects to a wireless device that processes your neural signals and transmits them to your phone or computer via Bluetooth.
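To make that signal path concrete, here is a deliberately simplified Python sketch of the pipeline just described: electrode samples are thresholded for spikes, a toy decoder turns spike activity into a cursor command, and the result is handed to a stand-in for the Bluetooth link. Every name, threshold, and decoding rule below is a hypothetical illustration, not Neuralink's actual firmware or API.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch of the signal path described above:
# electrode threads -> spike detection -> decoder -> wireless link.
# Names, thresholds, and the decoding rule are illustrative only,
# not Neuralink's actual firmware or API.

SPIKE_THRESHOLD_UV = -50.0  # toy amplitude threshold, in microvolts

@dataclass
class ElectrodeSample:
    channel: int        # which thread/electrode produced the sample
    voltage_uv: float   # extracellular voltage in microvolts

def detect_spikes(samples):
    """Return the channels whose voltage crosses the amplitude threshold."""
    return [s.channel for s in samples if s.voltage_uv <= SPIKE_THRESHOLD_UV]

def decode_intent(spiking_channels):
    """Toy decoder: spike counts on two channel groups become a 2-D cursor velocity."""
    left = sum(1 for c in spiking_channels if c % 2 == 0)
    right = sum(1 for c in spiking_channels if c % 2 == 1)
    return (right - left, 0)  # (dx, dy)

def transmit(packet):
    """Stand-in for the Bluetooth link to a phone or computer."""
    print(f"BLE packet: {packet}")

if __name__ == "__main__":
    samples = [
        ElectrodeSample(0, -62.0),
        ElectrodeSample(1, -12.0),
        ElectrodeSample(2, -55.5),
        ElectrodeSample(3, -71.2),
    ]
    velocity = decode_intent(detect_spikes(samples))
    transmit({"t": time.time(), "cursor_velocity": velocity})
```

Even in this toy form, the sketch makes the article's later point visible: whatever sits in the decode step decides what counts as the user's intent.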

Neuralink's stated vision is a symbiosis with artificial intelligence in which humans enhance their abilities and keep pace with the rapid advances of technology. In the same 2019 presentation, Musk said he wants to help people who suffer from paralysis, blindness, and other neurological disorders by restoring their sensory and motor functions. He also said that Neuralink could create new sensations and experiences by stimulating different parts of the brain.

In other words, Neuralink fits Julian Huxley's concept of transhumanism and challenges the traditional notion of what it means to be human. It blurs the boundaries between the natural and the artificial, the self and the other, the mind and the machine. Neuralink devices question René Descartes' famous statement "cogito, ergo sum" (I think, therefore I am). I think, but are the thoughts really mine?

In late May 2023, Neuralink received approval from the U.S. Food and Drug Administration (FDA) to conduct its first tests on humans.

A clinical trial of the device in humans is no guarantee of regulatory or commercial success. Neuralink will face intense scrutiny from the FDA, along with ethical and security questions. The company has also drawn criticism from the Physicians Committee for Responsible Medicine over its research on animals.

Some of the possible procedural risks are:

  • Injury to the facial nerve: this nerve passes through the middle ear to give movement to the muscles of the face.
  • Meningitis: an infection of the lining of the surface of the brain.
  • Cerebrospinal fluid leakage: the brain is surrounded by fluid that may leak from a hole in the skull or dura mater (the protective layer around the brain) caused by the implant.
  • Perilymph fluid leak: the inner ear, or cochlea, contains fluid that may leak into the middle ear due to damage from the implant.
  • Brain hemorrhage: bleeding in the brain that can cause stroke or death.
  • Infection: a risk of any surgical procedure; it can lead to complications such as abscess formation or device malfunction.
  • Device malfunction: this can occur due to mechanical failure, battery depletion, lead fracture, or electromagnetic interference.
  • Lack of benefit for certain symptoms: this can happen if the implant does not target the correct brain area or does not provide adequate stimulation.
  • Worsening mental or emotional status: this can occur due to changes in brain activity, medication adjustments, or psychological factors related to having an implant.
  • Migration of the implant: this can happen if the implant is not fixed securely and moves within the brain tissue, causing damage or loss of function.
  • Unwanted tissue reactions: inflammation, scarring, or a foreign-body response in the surrounding brain tissue can affect the implant's performance and biocompatibility.
In the paper 'A Critique of Transhumanism,' Diederich concludes:
If the connection between human activity and reward is weakened or eliminated, psychological and behavioural problems arise. The use of illicit drugs is one example: drugs like cocaine have an immediate impact on the brain's reward system. Frequent users seek the drug directly in order to obtain pleasure and reward, without having to engage in behaviour that would offer satisfaction without a chemical stimulus. Brain-machine interfaces such as the one proposed by Neuralink risk weakening or destroying the fundamental connection between behaviour and reward.
