David Bowie, 'Saviour Machine' (the first four verses)
President Joe once had a dream
The world held his hand, gave their pledge
So he told them his scheme for a Saviour Machine
They called it the Prayer, its answer was law
Its logic stopped war, gave them food
How they adored till it cried in its boredom
Please don't believe in me, please disagree with me
Life is too easy, a plague seems quite feasible now
Or maybe a war, or I may kill you all
Don't let me stay, don't let me stay
My logic says burn so send me away
Your minds are too green, I despise all I've seen
You can't stake your lives on a Saviour Machine
Apart from HAL 9000, which I saw when the film first came out in the cinema, the next time I became aware of a dystopian AI was in David Bowie's lyrics. Both have informed my cultural grounding of AGI. I'm a lot older now, but these are my biases. With that out of the way, I wanted to explore in this blog what I've pondered so far concerning its central question: not narrow AI tools, but AGI. By AGI I largely limit the extent of its capabilities to human-like ones, and don't conflate them, in a Sam Altman kind of way, with ASI powers.
Metacrisis
All the current data we have points to our species being on the cusp of a metacrisis. Metacrisis is a term used to describe the overlapping, interconnected nature of the multiple crises humanity currently faces. These crises are not new, but they are becoming more acute and are increasingly affecting each other. The 'meta' refers to the fact that they are not just individual problems but systemic ones, affecting multiple areas of society at once. The metacrisis includes climate change, biodiversity loss, social inequality, political polarisation and technological risks. It challenges our current way of thinking and requires a new approach to problem-solving, one that takes the interconnected nature of these crises into account.
The Intergovernmental Panel on Climate Change’s 2018 special report warned that humankind has less than 12 years to avoid potentially irreversible climate disruption. So we are time constrained, to say the least.
No plausible program of emissions reductions alone can prevent climate disaster. There is a 12 percent chance that climate change will reduce global output by more than 50 percent by 2100, and a 51 percent chance that it lowers per capita GDP by 20 percent or more by then, unless emissions decline.
71% of the world’s population live in countries where inequality has grown.
In this volatile environment we are faced with a technological experiment: a relatively unknown technology being tested on millions. One that many experts say poses an existential risk by the end of the decade at the earliest, or within two decades at the outside.
Of course there are some who are confident that the claims of tech utopians and/or tech doomsters are all hyperbole. Barry Smith is amongst a small minority of informed people who don't expect AI to be capable of evolving into anything like AGI. Many of the points he raises I agree with; some I agree with not so much out of reasoning as out of hope. For whilst we may never accomplish an AGI-like machine, the tech industry may well get close. That is sufficient reason for concern.
I am getting nearer to a conclusion that Connor Leahy promotes: that we should never build an AGI, though it may be impossible to stop the attempts. Why? Well, it's built upon too many flawed premises. Votava, in an article 'embracing ambiguity', states:
'With the increasing adoption of machine learning, we also need to understand the limitation of such techniques, despite their perceived ‘sexiness’. All come with theoretical assumptions that are rarely met in the real world. And all learn from the data available to them in the past. This training data — with all its limitations and biases — has a fundamental impact on the model performance in the real-life situation.
So, before you start fanatically defending the outputs of a data-product (be it a business intelligence dashboard or AI decision engine) make sure that you have considered and, where necessary, communicated the catalogue of caveats. Data can automate a lot of things. Critical thinking is not one of them.'
Previous Failures
It's not the first time that our species has thought it had found a magic-bullet solution to all or many of our problems. I'm reminded of Richard Dawkins and the neo-Darwinian approach that led to 'the selfish gene', which, despite seeming a major truth to many twenty years ago, is wrong, as explained by Denis Noble in this video. The concept of the selfish gene has been criticised for oversimplification, lack of evidence, a flawed perspective, and ignoring complexity. Yet much scientific investment occurred on the back of neo-Darwinian concepts, exploring and mapping gene sequences in the hope of finding cures, amongst other things, as the gene was considered the ultimate source. The subsequent research proved the 'gene hype' was erroneous, as genes are far more complex in reality. It's this oversimplification and ignoring of complexity that I want to explore in the context of AGI, but I thought this might provide a useful analogy.
The reductionist approach to explaining biological phenomena has displayed its power through the spectacular triumphs of molecular biology. But the approach has its limitations... It is not likely to be useful, or practicable, to explain many biological processes in terms of particle physics. Moreover, exploration of other levels, such as molecules, genes, cells, organisms and populations, may well be more appropriate for an adequate explanation — begging the question, of course, of what constitutes an ‘adequate’ explanation. — Denis Noble.
Just Some Flaws / Features found in LLMs
When LLMs ingest their data, this data is often 'cleaned', in part to resolve lexical, syntactic and semantic ambiguities. Ambiguity can be a challenge for AI systems, therefore detecting ambiguities in requirements documents, and systematising the typically ambiguous phrases, is vital. This all helps simplify the data in its representation of the world. Not all data represents an accurate view of the world, which is important to bear in mind.
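As a toy illustration of what such a check might look like (a minimal sketch; the phrase list and function are my own invention, not any standard tool), flagging typically ambiguous phrases can be as crude as matching text against a small catalogue:

```python
# A toy sketch of flagging typically ambiguous phrases in requirements text.
# The phrase list is illustrative only; real ambiguity detection is far richer.
AMBIGUOUS_PHRASES = ["as appropriate", "user friendly", "fast", "etc.", "and/or", "flexible"]

def flag_ambiguities(sentence: str) -> list[str]:
    """Return the known-ambiguous phrases found in a sentence."""
    lowered = sentence.lower()
    return [p for p in AMBIGUOUS_PHRASES if p in lowered]

requirement = "The system shall respond fast and be user friendly."
print(flag_ambiguities(requirement))  # ['user friendly', 'fast']
```

Even a crude catalogue like this makes the point: the 'cleaning' step encodes someone's judgement about which readings of the world to keep and which to discard.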
We will always have to work with a simplified model of reality. We will always have to make assumptions. And we will always have to question the quality of the data we use.
Then there is reductionism in LLMs. Whilst reductionist approaches may limit the scope of LLMs, lead to overfitting, and result in outputs that lack context or fail to consider the broader context of a problem, they can also, importantly, improve the interpretability and efficiency of LLMs. One of the reasons people prefer to use GPT-3.5 models over GPT-4 in ChatGPT is time efficiency. Ultimately, the impact of reductionist approaches on the quality of LLM outputs will depend on the specific application and the complexity of the problem or dataset being analysed. Choosing which model to use then depends on the user understanding enough about the complexities of the problem or decision being posed to the LLM to choose wisely.
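To make that trade-off concrete, here is a hedged sketch of routing a prompt to a cheaper or a more capable model. The model names echo the ones mentioned above, but the routing heuristic is a placeholder of my own, not a recommendation:

```python
# A hypothetical router: cheap/fast model for simple prompts, larger model for
# complex ones. The crude heuristic below is a placeholder only.
FAST_MODEL = "gpt-3.5-turbo"   # quicker, cheaper, less context-sensitive
CAPABLE_MODEL = "gpt-4"        # slower, costlier, better with nuance

def choose_model(prompt: str, needs_nuance: bool) -> str:
    """Pick a model based on a rough sense of the problem's complexity."""
    if needs_nuance or len(prompt.split()) > 200:
        return CAPABLE_MODEL
    return FAST_MODEL

print(choose_model("Summarise this paragraph in one line.", needs_nuance=False))
```

The point is that the burden of judging complexity falls on the user, who may be the person least equipped to judge it.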
Text Predictions
There is a method used to determine how well an LLM can predict text, which matters when considering how LLMs deal with nuance when answering a question: perplexity, a measure of how well the model predicts unseen words from a test set. Perplexity is a useful metric for evaluating the performance of language models, but it has limitations. For example, it is not always a good measure of how well a model will perform on real-world tasks. Additionally, it can be sensitive to the size and composition of the training dataset.
Despite its limitations, perplexity is a widely used metric for evaluating the performance of language models. It is a simple and intuitive measure that can be easily calculated.
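As a rough illustration (a minimal sketch, with made-up token probabilities rather than any real model's output), perplexity is simply the exponential of the average negative log-probability the model assigns to each unseen token:

```python
# A minimal sketch of computing perplexity from per-token probabilities.
# The probabilities below are illustrative values, not output from any model.
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability per token."""
    n = len(token_probs)
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_neg_log_prob)

# Hypothetical probabilities a model assigned to each word of a held-out sentence.
probs = [0.4, 0.1, 0.25, 0.05, 0.3]
print(f"Perplexity: {perplexity(probs):.2f}")  # lower = better prediction of unseen text
```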
Transparency
So, when dealing with an LLM we have to critically ask ourselves: how well does the data represent the portion of the real world that matters for the problem at hand? What are the biases in the data? Which model is actually suitable for the question I want to posit? How well did the model of choice perform in predicting text? And, importantly, will an adequate explanation, should it produce one, be sufficient for the problem posed?
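One way to keep those questions in view is to treat them as an explicit pre-use checklist. The sketch below is just my own framing of the questions above, not an established standard:

```python
# A hypothetical pre-use checklist for an LLM-backed decision: each question
# from the paragraph above becomes a field that must be answered before use.
from dataclasses import dataclass, fields

@dataclass
class LLMCaveatChecklist:
    data_represents_problem_domain: bool = False   # does the data cover the slice of the world that matters?
    known_biases_documented: bool = False          # have the data's biases been identified and written down?
    model_suited_to_question: bool = False         # is this the right model for the question being posed?
    prediction_quality_reviewed: bool = False      # e.g. perplexity / held-out evaluation checked
    explanation_adequate_for_stakes: bool = False  # is an 'adequate' explanation good enough here?

    def ready(self) -> bool:
        return all(getattr(self, f.name) for f in fields(self))

print(LLMCaveatChecklist().ready())  # False until every caveat has been addressed
```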
The answer to those questions is that, as yet, we don't know. We may never know precisely enough to make an informed choice, at least with the major corporate LLMs, as the level of transparency about the data is virtually non-existent. Which may be fine in many cases, unless you're dealing with finance, medicine, biology, military action, security, or decision making.
It is a problem that may be overcome, given greater transparency and given time, perhaps lots of time, decades even, to understand how the weightings given to data operate in the black-box environment of a neural network.
And time is what we are lacking. Time is the driving excuse: 'we might lose the race' if we don't put out our currently biased, partly tested AI, which we don't fully understand, before our competitors/enemies do the same. That's what we've recently witnessed, and it will happen again before any legislation is in place. The mass experiment continues.
How have we done so far in meeting crises?
“Prediction is very difficult, especially when the future is concerned” — Niels Bohr
To be blunt, humans have rarely dealt well with existential risk. In fact we have done so badly that the Bulletin of the Atomic Scientists' Doomsday Clock states we are currently in 'A time of unprecedented danger: It is 90 seconds to midnight'.
Previously...
In 2012 the Grouville Hoard was discovered in Jersey. The hoard represented the wealth of the Curiosolitae, a Celtic tribe from Brittany. It's now asserted the tribe hid their wealth as the Roman army swept up through France. It was never retrieved, not in the lifetimes of the Curiosolitae.
In 2017 Rushkoff was invited to give a talk to the Silicon Valley elite, to garner his ideas on how these people could save themselves, and their wealth, from what they saw as the upcoming crisis. The modern day Curiosolitae.
So I wonder: if it were possible, which I doubt, to create an AGI/ASI within, say, seven years, would the tech elite be willing to share its utility, or would they prefer to use it to control the decaying environment they find themselves in, treating it as a Saviour Machine to misguide their existence?
Should they find that the machine is flawed, would they risk their investment and be like Dave, from 2001, and switch it off?
And should they not be able to deliver such a machine before the decade ends, will the consequences of the metacrisis prove the continuing development of such a machine irrelevant?