
Posts

Showing posts from April 30, 2023

Leaked Google Letter, Open Source the Threat or Opportunity?

It's difficult to keep up with AI news. And recent news suggests that it's also difficult for the software giants to keep up with Open Source AI models. On May 4th SemiAnalysis leaked an internal letter from within Google, seemingly from a software engineer, that reveals a number of significant challenges the Open Source community presents to both Google and OpenAI. Below are some of the revelations in the letter: 'the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today.' 'Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months…'

Meadway and Walker, the constraining economics of AI

There was an interesting debate on Novara Media between Michael Walker and James Meadway, which explored AI from a perspective rarely aired, certainly in print and broadcast media: the economic realities of AI. Michael Walker is the Contributing Editor at Novara Media, and one of the better presenters of complex issues. Meadway is an astute economist and the director of the Progressive Economy Forum; his podcast, Macro Dose, can be found at patreon.com/Macrodose  The Debate gets twisted  Walker framed the debate around Hinton's much-publicised resignation from Google, and his subsequent warning. The debate that followed soon changed direction. Walker: Geoffrey Hinton, a computer scientist at the University of Toronto, is considered one of the fathers of artificial intelligence (AI). He helped develop neural networks, a type of machine learning that mimics the way the human brain works. Neural networks are now used in a wide range of applications…

Palantir and their new AIP, chat courtesy of "a scary business"

Palantir Technologies, a software company that specialises in analysing, integrating, and presenting vast amounts of data, has launched the Palantir Artificial Intelligence Platform (AIP). The AIP was showcased in a video released by the company last Tuesday. The AIP is a powerful tool that can be used to make 'better' decisions in a variety of settings, including the military. In the video, Palantir showed how the AIP could be used to help a military operator monitor the Eastern European theatre and respond to a perceived threat. The operator was monitoring satellite imagery when they noticed a large concentration of enemy forces gathering near the border. The AIP quickly analysed the situation and proposed a number of tactical responses. For example, the AIP could be used to launch a drone to gather more information about the enemy forces, or to coordinate airstrikes. The AIP can also utilise other Palantir tools, such as Project Maven, a DoD project to develop artificial intelligence…

Climate Q&A by Ekimetrics, AI with a purpose

Hugging Face, Ekimetrics and Gradio: an example of connected narrow AI at its most useful. Ekimetrics describes itself as "a pioneering leader in data science and AI-powered solutions for sustainable business performance." I've just discovered their Climate Q&A on Hugging Face. Put simply, it's the best tool I've come across, by far, for interrogating the IPCC reports.  Hugging Face  In case you haven't come across Hugging Face, it's a company that develops tools to help people build machine learning applications. Its most notable product is the Transformers library, which is a collection of pre-trained machine learning models for natural language processing tasks. Hugging Face also operates a platform that allows users to share machine learning models and datasets. Hugging Face's products and services: Transformers library: a collection of pre-trained machine learning models for natural language processing tasks…

Hinton, a warning that should attract attention

Elon Musk's warning about the dangers of AI, and his call for a six-month pause in its development, can be easily dismissed; after all, he threatened to launch his own 'truth' anti-GPT-4. So, with all due respect, can the calls of the Apple co-founder Steve Wozniak. But when Dr Hinton, 'the godfather of AI', highlights the dangers of AI, his work in the field deserves respect, and so do the warnings he provides. Hinton's research investigates ways of using neural networks for machine learning, memory, perception, and symbol processing. He has authored or co-authored more than 200 peer-reviewed publications. Hinton has claimed that Google had previously acted as a steward of AI development, carefully testing its work before launching a product into the public domain. Due to Microsoft's involvement with OpenAI, and the subsequent launch of a GPT agent into its Edge browser, Google has launched Bard as its early foray into the field. It won't…

I ask Bard to comment on Government AI policy, with surprising results

The UK Government Response to AI and an AI response to government  The UK government published a white paper on AI regulation by the Department for Science, Innovation and Technology in March 2023, which sets out a framework for ‘regulating AI in a way that promotes innovation while minimising risks’. The paper outlines five key principles for AI regulation: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. It also sets out a number of measures that will be taken to implement these principles, including the creation of a new AI Council to advise the government on AI policy and the development of a new AI toolkit to help businesses understand and comply with AI regulations. The government's approach to AI regulation is based on the belief that AI has the potential to deliver significant benefits to society, but that it is important to take steps to mitigate the risks associated with AI…

Jobs, AI, the disruption

(Image produced with the new Kandinsky 2.1) Not all illustrators will be unemployed shortly, given the current generative abilities of text-to-art models. It may not be long, though. There is currently a significant debate about artists whose work is being 'copied' by different generative AI models. Legal class action is pending, it would seem. There have also been cases of people who have sought to copy living artists' work (via AI generation) and launch it on Twitter before the artist has finished their own piece, to 'prove' they are the originator. Etsy, it would seem, has many previously non-artists suddenly starting to sell 'their' work, which was, at best, their prompt. These are symptoms of early technologies released into a world that places little restriction on plagiarism of most living artists, but which offers plenty of legislative threat from corporate entities that hold the rights to artists' work. Intellectual property rights are in a mess. They have been…