
Deceptions: how the language used by tech deceives

In response to the quick article on this morning's Radio 4.

Misnomers: The terms used to market the field of AI tend to be misnomers in commonly understood terminology. Let's start with Artificial Intelligence. The dictionary definition of 'intelligence' is "the ability to learn, understand and think in a logical way about things; the ability to do this well." AI neither understands nor thinks; instead, the industry redefines AI in its own terms as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages". In these redefined terms the label works, but what it certainly isn't is intelligent in the true sense. Hence AI is a contested term.

Neural Networks: the term has a few competing definitions, along the lines of 'a computer system which is designed to work in a similar way to the human brain and nervous system'. What we are really defining is an artificial neural network, which may be inspired by biological brains but is nothing like one in its structure and operation.
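To see how far the metaphor stretches, here is a minimal sketch of a single "artificial neuron" (the function name and the example weights are my own illustrative choices, not from any particular library). Despite the biological vocabulary, it is nothing more than a weighted sum passed through a squashing function:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One 'neuron': a weighted sum of inputs, squashed into (0, 1).

    Despite the name, this is ordinary arithmetic -- no spikes,
    no neurotransmitters, no synapses.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # logistic 'activation function'

# Illustrative call: three inputs, hand-picked weights
output = artificial_neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.0)
```

A "neural network" is simply many of these functions composed together, with the weights adjusted by an optimisation procedure. Whether that deserves the brain-derived terminology is precisely the point under dispute.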

‘The original motive for the pioneers of AI was to replicate human brain function: nature’s most complex and smartest known creation. This is why the field of AI has derived most of its nomenclature from the form and functions of the human brain, including the term AI or artificial intelligence.

So, artificial neural networks have taken direct inspiration from human neural networks. Even though a large part of the human brain’s functions remain a mystery, we do know this much: biological neural pathways or networks allow the brain to process massive amounts of information in the most complex ways imaginable, and that’s precisely what scientists are trying to replicate via artificial neural networks.

If you think Intel’s latest Core™ i9 processor running at 3.7GHz is powerful, then consider the human brain’s neural network in contrast: 100 billion neurons, which is what the brain uses for the most ‘basic’ processing. There’s absolutely no comparison in that sense between the two! The neurons in the human brain perform their functions through a massive inter-connected network known as synapses. On average, our mind has 100 trillion synapses, so that’s around 1,000 per neuron. Every time we use our brain, chemical reactions and electrical currents run across these vast networks of neurons.’

From 'How Similar are Neural Networks to our Brains?' by Thomas.

I could go on right through the lexicon of terms the tech industry uses and dispute them, but hopefully you get the point. 

Anthropomorphising machines is a dangerous habit. It may be convenient to use the terms the tech industry hands down to us, but ultimately doing so will prove unhelpful and even outright deceptive. This matters: the more AI products and services are promoted to a wider user base without these fundamentals of understanding being challenged, the more people will be deceived and will give the 'intelligence' more credit than it deserves.
