
Deceptions: how the language used by tech deceives

In response to the short piece on this morning's Radio 4.

Misnomers: The terms used to market the field of AI tend to be misnomers by commonly understood definitions. Let's start with Artificial Intelligence. The dictionary defines 'intelligence' as "the ability to learn, understand and think in a logical way about things; the ability to do this well." AI neither understands nor thinks; instead, the industry redefines AI on its own terms as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages". On those redefined terms the label works, but what it certainly isn't is intelligent in the true sense. Hence AI is a contested term.

Neural Networks: This has a few competing definitions, along the lines of 'a computer system which is designed to work in a similar way to the human brain and nervous system'. What we are really defining is an artificial neural network, which may be inspired by biological brains but is nothing like one in structure or operation.
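To see just how far the artificial version is from a brain, here is a minimal sketch (purely illustrative, not taken from any particular library) of a single artificial 'neuron': it is nothing more than a weighted sum of numbers pushed through a simple squashing function.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A weighted sum plus a bias, squashed by a sigmoid.

    This is the entire 'neuron': arithmetic, with no learning,
    understanding, or thinking involved.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Example: three inputs, three hand-picked weights, one bias.
output = artificial_neuron([0.5, -1.0, 2.0], [0.4, 0.6, -0.2], 0.1)
print(round(output, 3))  # 0.332
```

Stack millions of these units and adjust the weights by trial and error, and you have a modern neural network. Impressive engineering, certainly, but nothing resembling a nervous system.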

‘The original motive for the pioneers of AI was to replicate human brain function: nature’s most complex and smartest known creation. This is why the field of AI has derived most of its nomenclature from the form and functions of the human brain, including the term AI or artificial intelligence.

So, artificial neural networks have taken direct inspiration from human neural networks. Even though a large part of the human brain’s functions remain a mystery, we do know this much: biological neural pathways or networks allow the brain to process massive amounts of information in the most complex ways imaginable, and that’s precisely what scientists are trying to replicate via artificial neural networks.

If you think Intel’s latest Core™ i9 processor running at 3.7GHz is powerful, then consider the human brain’s neural network in contrast: 100 billion neurons, which is what the brain uses for the most ‘basic’ processing. There’s absolutely no comparison in that sense between the two! The neurons in the human brain perform their functions through a massive inter-connected network known as synapses. On average, our mind has 100 trillion synapses, so that’s around 1,000 per neuron. Every time we use our brain, chemical reactions and electrical currents run across these vast networks of neurons.’

— Thomas, "How similar are Neural Networks to our Brains"
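The figures quoted above are easy to sanity-check: 100 trillion synapses spread across 100 billion neurons does indeed work out to roughly 1,000 synapses per neuron.

```python
# Checking the arithmetic in the quoted passage.
synapses = 100e12   # 100 trillion synapses
neurons = 100e9     # 100 billion neurons
per_neuron = synapses / neurons
print(per_neuron)   # 1000.0
```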

I could go on right through the lexicon of terms the tech industry uses and dispute them, but hopefully you get the point. 

Anthropomorphising machines is a dangerous habit. It may be convenient to use the terms the tech industry hands to us, but ultimately it will prove unhelpful and even outright deceptive. This matters: as AI products and services are promoted to an ever wider user base without the fundamentals being challenged, more people will be deceived into giving the 'intelligence' more credit than it deserves.
