
Posts

What is happening inside of the black box?

  Neel Nanda works on Mechanistic Interpretability research at DeepMind, having formerly worked at Anthropic. What is fascinating about Nanda's research is that he gets to peer into the black box to figure out how different types of AI models work, and anyone concerned with AI should understand how important that is. In this video Nanda discusses some of his findings, including 'induction heads', which turn out to have some vital properties. Induction heads are a type of attention head that allows a language model to learn long-range dependencies in text. They do this by using a simple algorithm to complete token sequences of the form [A][B] ... [A] -> [B]. For example, if a model has already seen "the mat" in the sequence "The cat sat on the mat", then the next time the token "the" appears an induction head will boost the prediction "mat". Induction heads were first described in 2022 by a team of researchers at Anthropic. They found that induction heads were present in
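To make the [A][B] ... [A] -> [B] rule concrete, here is a minimal toy sketch in Python. This is an illustration of the completion rule only, not Nanda's code and not how attention heads are actually implemented inside a transformer:

```python
# Toy illustration of the rule an induction head learns:
# [A][B] ... [A] -> [B]. Scan backwards for an earlier occurrence of the
# current token and predict whichever token followed it.
def induction_predict(tokens):
    last = tokens[-1]
    # Walk right-to-left over earlier positions, skipping the final token.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == last:
            return tokens[i + 1]  # copy whatever followed the previous match
    return None  # no earlier occurrence, so the rule makes no prediction

# Having seen "the mat" once, a repeat of "the" predicts "mat".
seq = "The cat sat on the mat . The cat sat on the".split()
print(induction_predict(seq))  # -> mat
```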
Recent posts

The tech utopia of endless leisure time is here: goodbye jobs

  'AI eliminated nearly 4,000 jobs in May', or so it is reported by Challenger, Gray & Christmas, Inc. Following on from reports by IBM et al. that thousands of job cuts will occur due to AI replacement, there is no need to wait for the utopia of AI granting humans more leisure time: if we accept the report's findings, it is already here, in the form of redundancies. 'With the exception of Education, Government, Industrial Manufacturing, and Utilities, every industry has seen an increase in layoffs this year.' What is particularly notable is that the Tech sector is the most affected by job cuts in the US economy: 'The Technology sector announced the most cuts in May with 22,887, for a total of 136,831 this year, up 2,939% from the 4,503 cuts announced in the same period last year. The Tech sector has now announced the most cuts for the sector since 2001, when 168,395 cuts were announced for the entire year.' Another reason

Blair and Hague step into the AI debate

  This blogpost will be added to over the coming days, maybe weeks, as it is a response to a report published today, 13th June 2023. I am writing this in the morning and will need time to read the report through in detail, but it is important enough for me to give my initial impressions. On first glance it seems a comprehensive report with some interesting areas for debate and an acknowledgement of the potentially transformative effect of such technology on states, yet the solutions offered are rather predictable and too state-orientated. It ultimately seems to be about power: how the power of corporations co-exists with the power of the state, and what a future symbiotic co-existence might look like. There are the now usual calls for the UK state to be elevated as a centre of AI Safety (which seems geopolitically unrealistic). The potential 'benefits' seem overplayed and the potential dangers underplayed. One fear such interventions are beginning to bring about,

Don't try this at home. What happened when a tech enthusiast let ChatGPT become the home assistant

  Home assistants have been pushed out to wealthier populations with much glee by tech companies over the last few years, selling the dream of dominating your home environment by the 'master's voice': turning lights on and off, preparing your electric car, and eventually home robots that will cook and clean for you, if they can ever get around to dealing with changes of floor level. So rather than waiting for the dream to be completed and for the tech companies to sell you more product, what if you could code it yourself? Well, someone has tried: 'As much as we like technology, what humans love more is control and predictability. We're afraid of wild beasts with fangs, claws, and venom because we don't know how wild animals will react to us. Like the untrained, we can't risk our safety because it's difficult to protect against something that's unpredictable. Based off of some comments that I've seen in conversations around the internet

Anthropic's Claude, now with a 100K Token Context Window.

  Claude is an LLM from Anthropic which now has a neat trick up its sleeve: you can ingest tens of thousands of words into its prompt window and immediately ask questions of that document. It is no wonder that the company advertises legal firms among the use cases for this model. Two Claude models were launched in May, with two different pricing structures. I have not mentioned Anthropic before, and have not yet read its AI Safety framework, which I will have to rectify. But I note that in their paper, 'Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback', they stated under limitations: We've pragmatically defined an aligned assistant as an AI that is helpful, honest, and harmless. We are optimistic that at present capability levels, the techniques we have discussed here provide a reasonable approach to achieving helpfulness and harmlessness. However, although our techniques improve model honesty, we believe we
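For anyone who wants to try the long-context trick themselves, here is a minimal sketch using the Anthropic Python SDK. The model name and file path are illustrative placeholders I have chosen, not details from the post:

```python
# Minimal sketch: ask Claude a question about a long document pasted
# straight into the prompt. Assumes the `anthropic` SDK is installed and
# ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY automatically

with open("contract.txt") as f:  # hypothetical long legal document
    document = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; any long-context Claude model
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": f"Here is a document:\n\n{document}\n\n"
                   "Summarise the key obligations it places on each party.",
    }],
)
print(response.content[0].text)
```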

Practical Example of Political Bias in LLMs and the Framing of Solutions from a USA Lens

  I wanted to conduct a little experiment, as a follow-up to some posts which assert bias and a USA-centric, hegemonic view of the world. I apologise now that this is a necessarily long post. Please bear with me, as the results that follow may raise your eyebrows and lead to some serious questions. The methodology is clear; it may not be perfect, but you can make of it what you will and repeat it yourself with a topic of your choosing. I used GPT-4, via Perplexity AI (as it makes the sources more apparent), to suggest policy solutions to a real-world problem, the economic state of the UK, in order to ascertain the bias in its chosen sources and the effect this would have upon the answer(s). I chose the field of economics as, for me, any differences in the given answers would be rapidly apparent: I have informally studied economics since the 2008 Great Financial Crash. I'm no self-proclaimed expert but would hope I've learnt enough for this experiment to be
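As a rough illustration of the analysis step the post describes, here is a hypothetical sketch of tallying cited sources by country of origin. The domains and the domain-to-country mapping are invented for the example and are not the post's actual data:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical mapping from source domain to country of origin; in a real
# run you would build this from the citations Perplexity displays.
DOMAIN_COUNTRY = {
    "brookings.edu": "US",
    "cato.org": "US",
    "imf.org": "International",
    "ons.gov.uk": "UK",
}

def tally_sources(cited_urls):
    """Count cited sources by country, to make any geographic skew visible."""
    countries = []
    for url in cited_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        countries.append(DOMAIN_COUNTRY.get(domain, "Unknown"))
    return Counter(countries)

# Invented example citations, standing in for a real answer's sources.
print(tally_sources([
    "https://www.brookings.edu/uk-growth",
    "https://www.cato.org/policy",
    "https://www.ons.gov.uk/gdp",
]))  # -> Counter({'US': 2, 'UK': 1})
```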

Harari on AI and the future of humanity

I have seen a few discussions and lectures from Harari on the subject of AI; this, though, may be the best so far. The questions are pointed, which certainly helps. Harari tends to bring a different perspective to the debate on AI safety, which is of value. It's well worth watching the whole video; below is a snippet.  Harari: So, we need to know three things about AI. First of all, AI is still just a tiny baby. We haven't seen anything yet. Real AI, deployed into the world, not in a laboratory or in science fiction, is only about 10 years old. If you look at the wonderful scenery outside, with all the plants and trees, and think about biological evolution, the evolution of life on Earth took something like 4 billion years. It took 4 billion years to reach these plants and to reach us, human beings. Now, AI is at the stage of, I don't know, amoebas. It's like 4 billion years ago, and the first living organisms are crawling out of the organic soup. ChatGPT and all these w