
Let's not ignore the Loxodonta in the room

When researching the leaders, CEOs, and pioneers and advocates of AI, there is something that should not be ignored: the number of them who hold, how should I say, fringe views. They may have Transhumanist tendencies, often with outright support for the 'potential' of augmenting humans with technology. They may have faith in nanotechnologies that would wire technology directly into the human cortex. Or they may hold both views, as Ray Kurzweil revealed in his interview with Lex Fridman. I first came across the idea of Transhumanism after visiting a self-proclaimed Transhumanist artist back in the late 1980s. I was, frankly, horrified by the hubris of it all.

These are far from the only views commonly held by what I call tech-evangelists, who always, always anthropomorphise technologies, which somewhat gives the game away.

Another common idea is that the AI 'singularity', a term borrowed from physics, is inevitable. In this usage, the singularity is the event horizon we cannot peer beyond: the point at which, according to some, we will have created a superintelligence or, as some call it, a 'God'.

Today I came across an excellent blog by Johannes Jäger, who dives into this murky realm and is, thankfully, not afraid to call out the elephant in the room of tech-evangelism. I encourage you to have a read, especially the post 'Machine Metaphysics and the Cult of Techno-Transcendentalism'.

I'm currently in my second week of blogging about AGI, and I know, I should have expected to come across the varied viewpoints of tech-evangelists by this point. I'm still surprised, though, at the prevalence and the increasingly perverse worldviews held by those who wish to 'transform our lives.'


