

Showing posts from May 14, 2023

This Blog and the Tools Used (+ how to build one of your own)

I've tested many dozens of AI tools and features during the research and writing of this blog. I would have liked to try even more: had I owned a powerful enough graphics card, I would have downloaded and run various LLMs locally. My Dell workstation, though, is limited to 32GB of RAM, a 4GB NVIDIA Quadro card and an ageing Xeon processor. My Dell laptop has 16GB of RAM, a 12th Gen i7 and a 4GB 3050. These are averagely powerful pieces of hardware, but they are insufficient to run much in the way of open-source AI models, at least ones that I'd use as regular tools. I state this to demonstrate that most people's use of AI tools is likely to be through 'cloud' services, such as the new (clunky) ChatGPT iOS phone app. Which is a shame, as it hinders people's understanding of how these tools work and what sort of libraries they depend upon. This may ease up as smaller, more capable LLMs become available, but it's probably a bit too late. Why is this of

Don't Look To Sunak To Effectively Regulate AI

Sunak, as reported in the Guardian, was speaking on the plane to Japan for the G7 summit, where AI will be discussed; he said a global approach to regulation was needed. “We have taken a deliberately iterative approach because the technology is evolving quickly and we want to make sure that our regulation can evolve as it does as well,” he said. “Now that is going to involve coordination with our allies … you would expect it to form some of the conversations as well at the G7. “I think that the UK has a track record of being in a leadership position and bringing people together, particularly in regard to technological regulation in the online safety bill … And again, the companies themselves, in that instance as well, have worked with us and looked to us to provide those guard rails as they will do and have done on AI.” The white paper on AI regulation the government introduced in March directly contradicts Sunak's statements, as I've written about before. It's all a

Power and Progress, what lessons are there from previous tech disruptions?

Simon Johnson discusses #PowerAndProgress, a new book co-authored with Daron Acemoglu, on iNET. Find a copy & learn more: 'A thousand years of history and contemporary evidence make one thing clear. Progress depends on the choices we make about technology. New ways of organizing production and communication can either serve the narrow interests of an elite or become the foundation for widespread prosperity. The wealth generated by technological improvements in agriculture during the European Middle Ages was captured by the nobility and used to build grand cathedrals while peasants remained on the edge of starvation. The first hundred years of industrialization in England delivered stagnant incomes for working people. And throughout the world today, digital technologies and artificial intelligence undermine jobs and democracy through excessive automation, massive data collection, and intrusive surveillance. It doesn’t have to be this way. Power and Progress demonstrates that the path

Flaws in Optimism; an AI future, it's complex

Shapiro, when discussing the GATO Framework, introduced his video with 'The Problem' as he saw it, and framed the need for the framework as an optimistic response to the two other positions he proposed people take up: Doomerism and Denialism. Doomsters, Denialists and Optimists can be matched onto three main outcomes: Dystopia, Extinction, Utopia. The 'grey area' is presented as the shaded region occupying the middle of this triangle. Shapiro sets out 'Sympathy For' and 'Flaws With' positions, then outlines his framework without much consideration of the potential flaws. I seek to redress that in this blog. I do not seek to criticise Shapiro too harshly for the dedicated work that he and his colleagues have put into GATO, and I consider their efforts a very useful addition to the discourse. My critical response is not aimed only at Shapiro either, but at many figures in the debate, such as Altman, Goertzel, Leahy and Kurzweil amongst others. Call me an admixture of

Beyond Chat GPT, Russell on AI

Stuart Russell gives an informed and easy-to-follow set of responses to questions raised at the Commonwealth Club of California. There is little new to me in what he states here, but he does put it forward concisely. Stuart Russell is a Professor of Computer Science, Director of the Kavli Center for Ethics, Science, and the Public, and Director of the Center for Human-Compatible AI at the University of California, Berkeley, and author of Human Compatible: Artificial Intelligence and the Problem of Control. An example of Russell's thought is below: 'But the drawback in doing that is that we have to specify those objectives, right? The machines don't dream them up by themselves. And if we mis-specify the objectives, then we have what's called a misalignment between the machine's behaviour and what humans want the future to be like. The most obvious example of that is in social media, where we have specified objectives like maximising the num

Shapiro's GATO, a rare attempt at community action to Align AI

Shapiro describes himself as: 'I research AI cognitive architectures based on Natural Language and LLMs. I also build automation tools and products with cutting-edge AI. Lastly, I conduct interviews with thought leaders and industry veterans.' In the video above he sets out an optimistic plan he names the GATO Framework: Global Alignment Taxonomy Omnibus. What GATO boils down to is an ambitious plan to solve the AI alignment problem, based on heuristic imperatives. 'Now, the purpose of this video is to introduce the crowning achievement of not just my work, but of the rapidly growing community that I'm building. What started […] as my research on alignment for individual models and agents has quickly expanded. So, this GATO Framework, Global Alignment Taxonomy Omnibus, is that comprehensive strategy that I just mentioned that is missing. It is not just for responsible AI development, but is a coherent roadmap that everyone on the planet can partici

Emergent abilities or a mirage?

Emergent abilities in large language models. I'm reminded of the Three Cup 'game' when discussions of AGI/ASI occur. The three cup scam is a gambling trick where a ball is placed under one of three cups, which are then moved around a mat. The customer is asked to guess which cup the ball is under, with a cash prize at stake. However, the scammer uses sleight of hand to move the ball to a different cup, making it impossible for the customer to win. This scam has been used to steal money from many an unsuspecting tourist. Emergent abilities in large language models (LLMs) refer to sudden and unpredictable increases in performance at specific tasks as model scale increases. These abilities are intriguing because they seem to emerge spontaneously as the model becomes larger, without any explicit training or fine-tuning on the specific task. This suggests that LLMs may have a capacity for generalisation and transfer learning that was previously unknown. Additio
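The 'mirage' question in the title can be illustrated with a toy model. This is my own sketch, not taken from the post or any paper, and every number in it (the logistic curve, the 1e9-parameter midpoint, the 10-token answer length) is an arbitrary assumption for illustration: if per-token accuracy improves smoothly with scale, an all-or-nothing metric such as exact match on a multi-token answer can still look like a sudden 'emergent' jump.

```python
import math

def per_token_accuracy(scale: float) -> float:
    """Hypothetical smooth improvement with log-scale: a logistic curve
    with its midpoint at an assumed 1e9 parameters."""
    return 1 / (1 + math.exp(-(math.log10(scale) - 9)))

def exact_match(scale: float, answer_len: int = 10) -> float:
    """Every token must be right, so smooth per-token gains compound
    into a sharp knee on this stricter metric."""
    return per_token_accuracy(scale) ** answer_len

# Per-token accuracy rises gradually; exact match sits near zero and
# then appears to 'switch on' over roughly one order of magnitude.
for params in [1e7, 1e8, 1e9, 1e10, 1e11]:
    print(f"{params:.0e}  per-token={per_token_accuracy(params):.2f}  "
          f"exact-match={exact_match(params):.3f}")
```

The discontinuity here comes entirely from the choice of metric, not from anything changing inside the model, which is the nub of the mirage argument.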