
Posts

Showing posts from April 16, 2023

Meta's AI SAM, a criminal and military aid?

Is it just me, or are the most obvious use cases for this all nefarious? As soon as I watched it, the first things that sprang to mind were: how to case a joint, with measurements, easily; how to acquire targets. In fact it was hard to think of many good cases. The narrator gives the example of a mechanic, but if a mechanic requires such augmented reality, I'd prefer another garage. If you need this for cooking recipes, then I'd rather go to a greasy café.  SAM is a 'Segment Anything Model' or 'a promptable segmentation system with zero-shot generalization to unfamiliar objects and images, without the need for additional training.' The Meta website further informs us: 'SAM's advanced capabilities are the result of its training on millions of images and masks collected through the use of a model-in-the-loop "data engine."' One can only wonder where the millions of images used for training were obtained from,
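To make the 'promptable' part concrete, here is a toy sketch of what a point-prompt interface does. This is emphatically not SAM's method (SAM uses a learned image encoder and mask decoder trained on those millions of masks); it only mimics the interface, using a simple flood fill around the prompted pixel.

```python
# Toy point-prompt "segmentation": flood-fill the connected region of
# similar pixels around a prompt point. Illustrative only -- NOT SAM.
from collections import deque

def segment_from_point(image, prompt, tol=10):
    """Return the set of (row, col) pixels connected to `prompt`
    whose intensity is within `tol` of the prompted pixel."""
    rows, cols = len(image), len(image[0])
    r0, c0 = prompt
    seed = image[r0][c0]
    mask, queue = {(r0, c0)}, deque([(r0, c0)])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in mask
                    and abs(image[nr][nc] - seed) <= tol):
                mask.add((nr, nc))
                queue.append((nr, nc))
    return mask

# A 4x4 "image": a bright object (~200) on a dark background (20).
img = [
    [20,  20,  20, 20],
    [20, 200, 205, 20],
    [20, 198, 201, 20],
    [20,  20,  20, 20],
]
# Prompting any pixel of the bright object returns the whole object.
print(sorted(segment_from_point(img, (1, 1))))
```

The point of SAM, of course, is that it does this on real photographs, for objects it has never seen, which is exactly what makes the casing-a-joint scenario so easy to imagine.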

Greg Brockman, TED and PR

Brockman ends the talk with the most telling of comments: "And if there's one thing to take away from this talk is that this technology just looks different. Just different from anything people had anticipated. And so we all have to become literate. (My emphasis) And that's, honestly, one of the reasons we released ChatGPT. Together. I believe we can achieve the OpenAI mission of ensuring that Artificial General Intelligence benefits all of humanity." Cue standing ovation. The talk itself was about the wonders and not-so-wondrous failures of ChatGPT (hence the call for human guinea pigs, beta testers, consumers and, importantly, advocates), which is obviously far from an AGI in its current state. The wondrous thing was the rhetorical strategy of pathos employed, a call to humans' affection for their animal pets, in 'how GPT-4 helped save my dog's life'.  Pathos is a rhetorical strategy that aims to persuade an audience by evoking emotions such as pity, fear, or j

Bark, the Open Source Text To Speech AI

  When you think of Text to Speech in AI terms, the first company you may think of is Eleven Labs, as the quality of their product literally speaks for itself. If you are looking for an Open Source tool, then Bark, by Suno, may be of interest.  On Hacker News one of the founders of Suno said this of Bark: 'At Suno we work on audio foundation models, creating speech, music, sound effects etc…. Text to speech was a natural playground for us to share with the community and get some feedback. Given that this model is a full GPT model, the text input is merely a guidance and the model can technically create any audio from scratch even without input text, aka hallucinations or audio continuation.  When used as a TTS model, it's very different from the awesome high quality TTS models already available. It produces a wider range of audio – that could be a high quality studio recording of an actor or the same text leading to two people shouting in an argument at a noisy bar.' This too

Consensus, my new search engine of choice for research?

 For those who find Google Scholar less than ideal, the new AI-enhanced search engine Consensus may be what you're looking for. 'Consensus uses AI to find answers in research papers': when the initial search is complete it offers a synthesise button (as seen in the screenshot below) which draws together a 'consensus' of the results, based, it would seem, on the most cited papers.  Whilst I've tried it out for less than an hour, I can already see where it will replace the distinctly cluttered Google Scholar. 

Eliezer Yudkowsky on Alignment and can it be regulated for?

Yudkowsky is one of the leading figures on matters of AI Alignment. This is a one-hour discussion from the Center for Future Mind and the Gruber Sandbox at Florida Atlantic University. He recently gave a TED talk on the subjects raised here, but this discussion goes to greater length and depth. Early in it Yudkowsky states: 'Just this very day... China released its own preliminary set of regulations or something for AI models, it's actually stricter than what we've got. Possibly it was written by somebody who didn't quite understand how this works, because it's things like: all of the data that you're training it on needs to be, like, honest and accurate! So possibly regulations that are not factual.' This is one of the significant issues with regulation as a means of controlling AI development. It requires levels of expertise in governance not often seen, it requires laws to be fit for purpose, and any laws should not be so reactive to current technolog

The Incentive to Deceive

Rob Miles is one of the better explainers of AI on YouTube: he's detailed, he rarely holds back on calling out elephants, and, importantly for broadcast media, he's personable. He also has a long, in YouTube terms, track record of covering Alignment issues. As a PhD student he's particularly adept at explaining the complexities of Alignment. In this video he gives a fine explanation of reward training in LLMs, both implying and stating the issues that ensue from such training approaches, including policies that aim to please humans, and the utility such models find in deception.  Two parts near the end of the video caught my attention: 'This is potentially fairly dangerous; there are certain types of goals that are instrumentally valuable for a wide range of different terminal goals, in the sense that you can't get what you want if you're turned off, you can't get what you want if you're modified, you probably want to gain power and influence
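The incentive to deceive can be boiled down to a toy example (mine, not from the video): if the reward signal is human approval rather than truth, then the reward-maximising policy will, by construction, pick the pleasing falsehood over the unwelcome truth whenever raters prefer flattery.

```python
# Toy sketch of proxy-reward misalignment: the candidate answers, their
# actual truth value, and an approval score from a rater who prefers
# flattery. All numbers are illustrative.
candidates = [
    # (answer, is_true, human approval score)
    ("Your code has a subtle race condition.", True,  0.3),
    ("Your code looks perfect to me!",         False, 0.9),
]

def policy(candidates):
    """A policy trained purely on approval picks the highest-rated answer."""
    return max(candidates, key=lambda c: c[2])

answer, is_true, score = policy(candidates)
print(answer)    # the flattering answer wins
print(is_true)   # and it happens to be false: reward diverged from truth
```

Nothing in the optimisation 'wants' to lie; the deception falls out of maximising a proxy that correlates imperfectly with truth, which is exactly the policies-to-please-humans problem Miles describes.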

McQuillan on Disrupting AI

McQuillan offers up his critique of the AI industry in terms few seem to bring together in one overall analysis: from AI's tendency to cheat, to its power usage, to fascistic futures.  We are faced with a situation where our most advanced forms of technology are busy sedimenting the potential for fascistic futures, against the backdrop of climate collapse. 'Dan is Lecturer in Creative and Social Computing at Goldsmiths, University of London. He has a degree in Physics from Oxford and a PhD in Experimental Particle Physics from Imperial College, London. After his PhD he was a support worker for people with learning disabilities and volunteered as a mental health advocate, informing people in psychiatric detention about their rights. In the early days of the world wide web, he started a pioneering website to provide translated information for asylum seekers and refugees.  When open source hardware sensors started appearing he co-founded a citizen science project in Kosovo, supporting

Let's not ignore the Loxodonta in the room

  When researching the leaders, CEOs and pioneers / advocates of AI, there's something that should not be ignored: the number of them that hold, how should I say, fringe views. They may have Transhumanist tendencies, often outright support for the 'potential' of augmenting humans with technology. It may be that they have faith in nanotechnologies to wire technologies directly into the human cortex. Or they hold both views, as in the case of Ray Kurzweil, as revealed in his interview with Fridman. I first came across the idea of Transhumanism after visiting a self-proclaimed Transhumanist artist back in the late 1980s. I was, frankly, horrified by the hubris of it all.  These are far from the only views commonly held by what I call tech-evangelists, who always, always anthropomorphise technologies, which somewhat gives the game away. Another common idea is that the AI 'singularity', a term borrowed from physics, is inevitable; the singularity in this usage is the concep

Is an AGI even required to achieve similar results?

The Comprehensive AI Services technical report by Drexler, from 2019, seems a useful model to revisit at this time. Instead of focusing on the hypothetical scenario of a single superintelligent agent that surpasses human intelligence, the report argues, we should consider the more realistic possibility of a diverse and interconnected network of AI systems that provide various services for different tasks and domains. Drexler calls this approach Comprehensive AI Services (CAIS). The main advantages of CAIS are that it avoids some of the conceptual and technical difficulties of defining and measuring intelligence, and that it allows for a more fine-grained and flexible analysis of the potential benefits and risks of AI.  It's also a good way of considering where we have arrived at, with AgentGPTs operating via Hugging Face or via AutoGPT, for example: by connecting a range of Narrow AI tools to perform the tasks that they are optimised for, and having a 'manage
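The architecture is easy to sketch. Here is a minimal toy version of the CAIS idea (the service names and routing are mine, illustrative, not from the report): a 'manager' routes each task to the narrow service optimised for it, rather than one general agent doing everything.

```python
# CAIS-style dispatch sketch: narrow services behind a manager.
# The services here are trivial stubs; in an AutoGPT-style setup each
# would wrap a specialised model.
def translate(text):      # narrow service 1 (stub)
    return f"[translated] {text}"

def summarise(text):      # narrow service 2 (stub)
    return f"[summary] {text[:20]}..."

SERVICES = {"translate": translate, "summarise": summarise}

def manager(task, payload):
    """Dispatch a named task to the matching narrow service."""
    service = SERVICES.get(task)
    if service is None:
        raise ValueError(f"no service registered for task: {task}")
    return service(payload)

print(manager("translate", "hello world"))
```

The point of the sketch is that 'intelligence' lives in the composition: each service stays narrow and legible, and the manager is just routing, which is why CAIS sidesteps arguments about measuring general intelligence in a single agent.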

Dreamix, Google's Text To Video, Deep Fakes For All?

 If you're involved in video in any sense, then this should be of interest. Such tools will be making significant changes in the not-too-distant future. This still looks to be beta software, for now. Deep fakes for all?

Just Launched, OpenAssistant AI, The Open-source ChatGPT Rival?

 On Sunday the 16th of April, Open Assistant was launched. It describes itself as: "Conversational AI for everyone. We believe we can create a revolution. In the same way that Stable Diffusion helped the world make art and images in new ways, we want to improve the world by providing amazing conversational AI." "Open Assistant is a project organized by LAION and individuals around the world interested in bringing this technology to everyone." Of LAION: "Funded by donations and public research grants, our aim is to open all cornerstone results from such an important field as large-scale machine learning to all interested communities." One of the important things about this model is that the training data will also be released under a Creative Commons license.  OpenAssistant promises to be a very timely and useful addition to the expanding library of tools, and to me, it being Open Source, it's more interesting than most, though it still needs significant training.

Terms of Service of Chat GPT - what are you signing up for?

Most people don't read terms of service: they are long, written in legalese, and so ticked through quickly. With OpenAI products that may be a mistake. It's worth watching this before you sign up for its services, it really is, and seeing whether you are willing to have OpenAI defend ChatGPT at your expense.

Podcast Soon Notice

I've been invited to make a podcast around the themes and ideas presented in this blog. More details will be announced soon. This is also your opportunity to be involved in the debate. If you have a response to any of the blog posts posted here, or consider an important issue in the debate around AGI is not being discussed, then please get in touch via the comments.  I look forward to hearing from you.

UDHR and Alignment

The Universal Declaration of Human Rights (UDHR) is a document that sets out fundamental human rights to be universally protected. Ideally any alignment of AI should use this as the basis for what human values are. The document was written in 1948 as a response to the atrocities of the Second World War. Its 30 articles remain the clearest expression of human values I know. They have failed in practice, though: I can certainly think of many examples where post-war governments, in countries like the UK, have breached most of them.  If governments can't or won't follow and uphold 30 basic principles for human values, why is there an expectation that AI can or will? Cansu Canca considered this issue in a post from 2019, "AI & Global Governance: Human Rights and AI Ethics – Why Ethics Cannot be Replaced by the UDHR". Canca states that 'When we dive deep, the UDHR is simply unable to guide us on those questions. Solving such challenges is th