
Posts

Showing posts from May 7, 2023

What can Faraday teach us about human responses to AI?

Professor Simone Natale argues that AI resides also, and especially, in the perception of human users. This talk presents material from his new monograph, Deceitful Media: Artificial Intelligence and Social Life after the Turing Test. The talk is from two years ago and doesn't seem to have attracted the attention it deserves, but it serves us well today. Natale begins with an analogy, and a warning from history: in the middle of the 19th century, a new religious movement called spiritualism began to attract attention. Spiritualists believed they could communicate with the spirits of the dead, and they held seances where they tried to contact the deceased. One of the leading scientific figures of the time, Michael Faraday, was skeptical of spiritualism. He decided to investigate the matter by conducting experiments and observing seances. Faraday's investigation led him to conclude that the phenomena at seances were caused not by spirits but by the participants themselves.

The Spread of False Information through LLMs

A new paper, 'A Drop of Ink may Make a Million Think: The Spread of False Information in Large Language Models', comes from the School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, and the Institute of Software, Chinese Academy of Sciences, Beijing. The presence of false information on the internet and in text corpora poses a significant risk to the reliability and safety of LLMs. This paper investigates how false information spreads in LLMs and affects related responses. The authors conducted a series of experiments to study the effects of source authority, injection paradigm, and information relevance. They found that false information can spread through and contaminate related memories in LLMs, and that LLMs are more likely to follow false information presented in a trustworthy style. The authors conclude that new false information defense algorithms are needed to address the global impact of false information, and that new alignment algorithms will also be required.

An incomplete Goliath, Google, to launch undercooked tools

Google announced a slew of AI product integrations at their I/O 2023 keynote event this week. It seems that the core technology behind these will be its new PaLM 2 LLM. That's a problem, as The Guardian article concluded: in its preliminary research, the company warned that systems built on PaLM 2 "continue to produce toxic language harms", with some languages issuing "toxic" responses to queries about black people in almost a fifth of all tests, part of the reason the Bard chatbot is only available in three languages at launch. Hinton wouldn't have approved. PaLM 2 will steal a march on OpenAI/Microsoft, as it will be the first multimodal GPT-class model launched to the public. According to a Google blog post, the model will have the following capabilities: Multilinguality: PaLM 2 is more heavily trained on multilingual text, spanning more than 100 languages. This has significantly improved its ability to understand, generate and translate nuanced text, including idioms, poems and riddles.

You can fool some of the people all of the time: AIs and deception

Two papers that each consider trust and AIs are of interest. The first is 'Suspicious Minds: The Problem of Trust and Conversational Agents' by Jonas Ivarsson, University of Gothenburg. The second, by Rogers and Webber, is 'Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High Stakes HRI Scenario'. Artificial intelligence is getting so good at talking that it's hard to tell the difference between humans and machines; it can now be difficult to know who you're talking to. 'Consequently, the ‘Turing test’ has moved from the laboratory into the wild', as Ivarsson states. If you think you're talking to a human, but it's actually a machine, you might share personal information that you wouldn't want to share with a machine. This is also a problem because it can erode trust in human-to-human interactions: if people can't tell the difference between humans and machines, they might start to distrust each other.

Klein on the climate crisis and AI's role

Naomi Klein is the bestselling author of No Logo and The Shock Doctrine and Professor of Climate Justice and Co-director of the Centre for Climate Justice at the University of British Columbia. In a recent Guardian article, 'AI machines aren’t ‘hallucinating’. But their makers are', Klein sets out an argument exploding the tech hypesters' myth-peddling, covering climate, governance, trust in tech corporations, and AI's promise to save us from drudgery. 'Last year, the top tech companies spent a record $70m to lobby Washington – more than the oil and gas sector – and that sum, Bloomberg News notes, is on top of the millions spent "on their wide array of trade groups, non-profits and thinktanks".' – Klein. The context of the tech companies' lobbying power, together with the familiarity all policy makers and legislators will have with the brand names involved - using some of these corporate products on a daily basis - is vital to frame the likely effects of the lobbying efforts. The sum…

You want your LLM to read an entire novel? Well, you can now

A few days ago I wrote about the leaked letter from Google: 'Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.' Two days later and here is the proof of exactly that: 'MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k. Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch. ... As it turns out, the full text of The Great Gatsby weighs in at just under 68k tokens. So, naturally, we had StoryWriter read The Great Gatsby and generate an epilogue. ... StoryWriter took in The Great Gatsby in about…'
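To put those context-length numbers in perspective, here is a minimal sketch of how one might count the tokens in a novel before feeding it to a long-context model. It assumes the Hugging Face transformers library and the EleutherAI GPT-NeoX-20B tokenizer (the tokenizer MPT-7B reportedly uses); the file path is a placeholder for your own copy of the text.

from transformers import AutoTokenizer

# Load the GPT-NeoX-20B tokenizer (assumption: this matches MPT-7B's tokenizer).
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# Read the novel from a local file (placeholder path).
with open("great_gatsby.txt", "r", encoding="utf-8") as f:
    text = f.read()

# Encode the full text and report the token count against a 65k-token window.
token_ids = tokenizer.encode(text)
print(f"{len(text):,} characters -> {len(token_ids):,} tokens")
print("Fits in a 65k context window:", len(token_ids) <= 65_000)

If the count comes out near the 68k figure quoted above, the novel slightly overruns a 65k window, which is exactly the situation the StoryWriter demo describes.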

Month One: A traveler's notes from the AGI Rabbit Hole

I titled this blog "CHARTING THE EMERGENCE OF AGI?" with a subtitle of "6 months to AGI?" Both the title and the subtitle express doubt about the outcome through their question marks. After my first week, I felt compelled to write up my initial findings. In this blog post I provide my findings after the first month. It remains an ongoing hypothesis: some views have changed a little, others are now better informed and have been added to. This post has some fairly lengthy lists, so I will only repeat the exercise when I conclude this blog in five months' time. I will once again present the developing hypothesis within the semantic structure I have used to frame the different elements of discourse I have come across and wish to cover. Of course, I am not the only person who has been charting developments in the field of AI/AGI/ASI. I never expected that I would be, but prior to writing this blog, I didn't follow any particular authors' work in this field.