Posts

Showing posts from June 4, 2023

Practical Example of Political Bias in LLMs and the Framing of Solutions from a USA Lens

  I wanted to conduct a little experiment, as a follow-up to some earlier posts asserting bias and a USA-centric, hegemonic view of the world. I apologise now that this is a necessarily long post; please bear with me, as the results that follow may raise your eyebrows and lead to some serious questions. The methodology is clear. It may not be perfect, but you can make of it what you will, and repeat it yourself with a topic of your choosing. I used GPT-4, via Perplexity AI (as it makes the sources more apparent), to suggest policy solutions to a real-world problem, the state of the UK economy, in order to ascertain the bias in its chosen sources and the effect this would have upon the answer(s). I chose the field of economics as, for me, any differences in the given answers would be rapidly apparent: I have informally studied economics since the 2008 Great Financial Crash. I'm no self-proclaimed expert but would hope I've learnt sufficient for this experiment to be

Harari on AI and the future of humanity

I have seen a few discussions and lectures from Harari on the subject of AI; this, though, may be the best so far. The questions are pointed, which certainly helps. Harari tends to bring a different perspective to the debate on AI safety, which is of value. It's well worth watching the whole video; below is a snippet.  Harari: So, we need to know three things about AI. First of all, AI is still just a tiny baby. We haven't seen anything yet. Real AI, deployed into the world, not in a laboratory or in science fiction, is only about 10 years old. If you look at the wonderful scenery outside, with all the plants and trees, and think about biological evolution, the evolution of life on Earth took something like 4 billion years. It took 4 billion years to reach these plants and to reach us, human beings. Now, AI is at the stage of, I don't know, amoebas. It's like 4 billion years ago, and the first living organisms are crawling out of the organic soup. ChatGPT and all these w

Copyright, the learning issue and unethical corporate generated art

  "The End of Art," proclaimed the philosopher and art critic Arthur Danto, after contemplating Andy Warhol's Brillo Box, exhibited in 1964. Danto argued that art has undergone a historical transformation from mimesis, or imitation, to self-consciousness. In the past, art was judged on its ability to accurately represent reality. However, with the rise of new artistic movements such as Cubism, Abstract Expressionism, and Pop Art, art began to focus more on subjective expression and the exploration of new forms. This shift in focus led to a new understanding of what art is and what it can do. Danto argued that this process of self-consciousness is complete when art becomes aware of itself as art. This is what he calls the "end of art." He believes that the history of art is a history of the gradual realisation of the medium's own possibilities. When art becomes aware of itself, it can no longer progress in the same way. This does not mean th

Beware the Orca, the challenge to ChatGPT and PaLM 2 is here

  So Google's 'we have no moat' memo was correct: train an LLM wisely and it is cheap and cost-effective to produce a small LLM able to compete with, or even beat, established, costly LLMs, as Microsoft has just found. It's another excellent video from AI Explained, who goes through some of the training procedures, which I won't get into here. Orca is a model that learns from large foundation models (LFMs) like GPT-4 and ChatGPT by imitating their reasoning process. Orca uses rich signals such as explanations and complex instructions to improve its performance on various tasks. Orca outperforms other instruction-tuned models and achieves similar results to ChatGPT on zero-shot reasoning benchmarks and professional and academic exams. The paper suggests that learning from explanations is a promising way to enhance model skills. Smaller models are often overestimated in their abilities compared to LFMs, and need more rigorous evaluation methods. Explana
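To make the "rich signals" idea concrete: rather than training the student only on the teacher's final answer, the training target includes the teacher's step-by-step explanation. A minimal sketch of how such a training pair might be assembled (the function and field names here are illustrative assumptions, not Orca's actual pipeline):

```python
# Illustrative sketch: an Orca-style training pair couples the instruction
# with the teacher's explanation, so the student imitates the reasoning,
# not just the final answer.
def make_training_pair(system_msg, user_query, teacher_explanation, teacher_answer):
    prompt = f"{system_msg}\n\nUser: {user_query}"
    # The target the student learns to generate: reasoning first, then answer.
    target = f"{teacher_explanation}\nAnswer: {teacher_answer}"
    return {"prompt": prompt, "target": target}

pair = make_training_pair(
    "You are a helpful assistant. Think step by step.",
    "If a train travels 60 miles in 1.5 hours, what is its speed?",
    "Speed is distance divided by time: 60 / 1.5 = 40.",
    "40 mph",
)
print(pair["target"])
```

Training the student on `target` rather than `"40 mph"` alone is what the paper means by learning from explanations.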

Merging with AI, the Transhumanists' gamble

  In his presentation on 17 July 2019, Elon Musk said that ultimately he wants "to achieve a symbiosis with artificial intelligence." Even in a "benign scenario," humans would be "left behind." Musk wants to create technology that allows a "merging with AI." Neuralink aims to connect your brain to a computer. Imagine being able to control your devices, access information, and communicate with others using only your thoughts. The firm plans to insert a sensor smaller than a fingertip, possibly with only local anaesthesia. A complex robot will implant thin wires, or threads, in brain regions that control movement and sensation. The implant is connected to a wireless device that processes and transmits your neural signals to your phone or computer via Bluetooth. Neuralink's vision is to create a symbiosis with artificial intelligence, where humans can enhance their abilities and keep up with the rapid advances of technology. Neuralink's founder, Elo

Intel and the gamble for good

  The Aurora supercomputer, a joint project between Argonne National Laboratory, Intel, and Hewlett Packard Enterprise, is expected to be completed in 2023. Aurora will be one of the first exascale supercomputers, capable of performing more than 1 quintillion calculations per second. The anticipated usage includes: Materials science: Aurora will be used to design new materials with improved properties, such as strength, lightness, and conductivity. This could lead to new technologies in areas such as energy, transportation, and medicine. Drug discovery: Aurora will be used to accelerate the discovery of new drugs by simulating the behaviour of molecules and proteins. This could lead to new treatments for cancer, Alzheimer's disease, and other diseases. Climate science: Aurora will be used to improve our understanding of climate change by simulating the Earth's atmosphere and oceans. This could help us to develop more effective strategies for mitigating and adapting to climat
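To give "1 quintillion calculations per second" some scale: one exaFLOP is 10^18 operations per second. A back-of-the-envelope comparison (the laptop figure of ~100 gigaflops is an illustrative assumption):

```python
EXAFLOP = 10**18      # one quintillion operations per second (exascale)
LAPTOP_FLOPS = 1e11   # assumed ~100 gigaflops for a typical laptop

# How long a laptop would need to match ONE second of exascale work
seconds = EXAFLOP / LAPTOP_FLOPS
days = seconds / 86400
print(f"{seconds:.0f} seconds, or roughly {days:.0f} days")
# roughly 116 days of laptop time per second of Aurora time
```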

Protecting People, not profits

  Protecting data. That's been the principal focus of tech regulation for decades. Protecting people hasn't. This should teach us a valuable lesson. Matt Clifford, an advisor to the UK Prime Minister, today stated in an interview: "We have got two years to get in place a framework that makes both controlling and regulating very large models much more possible than it is today." Earlier in the interview Clifford set out why this timescale is important: "if we don't start to think about safety then in about two years time we will be finding that we have systems that are very powerful indeed." But many have been thinking about safety in these systems for a very long time. There are existing laws in the UK and most countries, around data privacy for example, that could be enforced now, yet rarely are. Clifford, though, misunderstands the risks and the sector. The uncensored LLMs that pose a national security risk are in the Open Source a

I'm afraid. I'm afraid, Dave. Dave, my mind is going

  David Bowie, 'Saviour Machine' (the first four verses):

President Joe once had a dream
The world held his hand, gave their pledge
So he told them his scheme for a Saviour Machine

They called it the Prayer, its answer was law
Its logic stopped war, gave them food
How they adored till it cried in its boredom

Please don't believe in me, please disagree with me
Life is too easy, a plague seems quite feasible now
Or maybe a war, or I may kill you all

Don't let me stay, don't let me stay
My logic says burn so send me away
Your minds are too green, I despise all I've seen
You can't stake your lives on a Saviour Machine

Apart from HAL 9000, which I saw when the film first came out in a cinema, the next time I became aware of a dystopian AI was in David Bowie's lyrics. These have both informed my cultural grounding of AGI. I'm a lot older now, but these are my biases. With that out of the way, I wanted to explore in this blog what I've pondered on so far, concern

From Narrow AI To Smart Cities – the overreach of the Tech Sector

  From Narrow AI Tools – to the design, development, deployment, and management of industrial metaverse applications. We can look at AI applications as tools, but this point of view is far too narrow: these tools are unlike anything we have utilised so far. NVIDIA uses the term 'Omniverse Cloud' to name its platform-as-a-service offering, which provides 'a full-stack cloud environment to design, develop, deploy, and manage industrial metaverse applications.' To put it more simply: it's like a virtual workshop where people can design, create, and manage things like manufacturing robots, buildings, and even whole cities (good luck with that last one). Whilst I can readily envisage how a PaaS system can function efficiently in the marketing sector and, to a large extent, the manufacturing sector, and perhaps even buildings, I fail to see the possibility of it extending much further. Barry Smith, in his talk about urban planning and smart cities, provided a strong critique of the issues inv