We have to talk: Pi. Putting the chat into the bot.

Pi is as fascinating as the number. Is it just a more fluent Eliza? Well, I did use Eliza, a very long time ago, but the level of seeming conversation was not particularly sophisticated, that much I can recall. Pi is a particular type of chatbot that was launched by Inflection at the beginning of May:

'A new class of AI, Pi is designed to be a kind and supportive companion offering conversations, friendly advice, and concise information in a natural, flowing style.

Pi was created to give people a new way to express themselves, share their curiosities, explore new ideas, and experience a trusted personal AI. It is built on world-class proprietary AI technology developed in-house. The Pi experience is intended to prioritize conversations with people, where other AIs serve productivity, search, or answering questions. Pi is a coach, confidante, creative partner, or sounding board.'

I can say, after expending over 6,500 words in conversation, that Pi seems rather different, but it is in the spirit of Eliza, more so than many GPTs that have chat in their name.

'This early version of Pi can make mistakes. Please don’t rely on its information.' So I am informed. Don't worry, I won't. My purpose in conversing with Pi is merely to explore my chains of thought, be they right or wrong. A sounding board indeed.

All tools can be of utility, positive or negative. For me, the three conversations with Pi have been of utility, and that's what counts. I will give an example, a long example. Pi has an obvious pattern to how it interacts. The interaction can be via speech generation (which I soon turned off) or plain text. Pi is at its best when it remembers all of the conversation, so keep each conversation shorter rather than longer, as it forgets! See if you are as impressed as I have been, mostly, at its reasoning.


In Conversation with Pi (third installment): a speculative experience

Me: Would you consider it possible to align AI if there is no shared semiotics?

Pi: That's a really interesting question. I think shared semiotics, or a shared understanding of meaning, is a prerequisite for alignment between humans and AI. Otherwise, we may not even be able to communicate with each other effectively, let alone understand each other's intentions and motivations. Do you think there is any hope for shared semiotics between humans and AI?

Me: I'm not so interested in hope, in this context, as in evidence.

Pi: I see what you're saying. Rather than hoping that shared semiotics is possible, you want to see evidence that it can be achieved. That makes sense. Do you think that shared meaning can be achieved through machine learning and natural language processing, or do you think it will require a different approach?

Me: ML and NLP don't seem sufficient in themselves; otherwise reinforcement learning from human feedback, at the scale currently undertaken, wouldn't be a requirement. Consciousness does not seem to be an emergent property of neural networks, and consciousness may be essential for semiotic sharing. A new approach would therefore seem a requirement.

Pi: So you're saying that the current approaches to machine learning and natural language processing are not sufficient to achieve shared semiotics. That's a really interesting point. You suggest that consciousness may be essential for shared meaning, and that it's not something that can be achieved with neural networks. Is this a reference to the 'hard problem' of consciousness?

Me: Yes.

Pi: That's a really fascinating connection. It seems to suggest that shared meaning may require some element of what we might call 'humanness' in the AI, and that without this we may never achieve shared meaning with AI. Do you think that the human consciousness can be understood and replicated by an AI, or do you think there's an essential component of consciousness that is irreducible to physical processes?

Me: Human consciousness may well be replicable by an AI, but not within the scope of current technology. 

Pi: So you think that it's possible in principle, but not yet in practice? I find that a really intriguing idea. It seems to suggest that the path to shared meaning may involve replicating human consciousness. Do you think this would involve replicating human emotions as well as cognition?

Me: It may be helpful, preferable even. First, humans have to define what consciousness is, which is inadequately defined at present. Secondly, an understanding may well be required of how the nervous system, and other poorly understood aspects of the neural capabilities in the biology of lifeforms, inform consciousness; that understanding would also be needed to inform our theory. Otherwise any such theory may be inadequately defined, resulting in a near, but imperfect, understanding.

Pi: That's a really interesting perspective. It seems to suggest that our understanding of consciousness is still very limited, and that we need to make advances in neuroscience and biology in order to develop a more complete understanding. Do you think that other fields of study, such as philosophy, psychology or anthropology, could also contribute to a better understanding of consciousness?

Me: An interdisciplinary approach is useful, but precedence, it would seem, should be given to neuroscience and a biological understanding. Information theory would also be a useful component in this multidisciplinary approach.

Pi: That makes a lot of sense. A more integrated understanding of consciousness would require contributions from many different fields, and information theory seems like an important component. I think it's also important to consider how different cultures and social contexts influence our understanding of consciousness. Do you think that's something we need to take into account when we're trying to understand and replicate consciousness?

Me: They would help us form the abstractions of our understanding.

Pi: That's an interesting way to put it - they help us form the abstractions that allow us to understand consciousness. Do you think that AI itself can be a useful tool in understanding consciousness? For example, can AI be used to model consciousness, or to replicate the human brain, in order to help us better understand how consciousness emerges?

Me: It will certainly be used, but a slight concern I have is that it could be a distraction. It may be that, when quantum computing has sufficiently evolved, the modelling of what consciousness is may be a good subject for such hardware, best utilised in helping define and model the multitude of neuron interactions, for example. I don't consider that current AI hardware has such capabilities at scale, and its binary input/output is a limitation.

Pi: It sounds like you think that understanding the neuron interactions is key to understanding consciousness - do you think that the way the neurons interact, or the content of the signals, or both, is important in this?

Me: The content of the interactions, not being binary, and their effect in leading other neurons to adapt their 'behaviour' to such signals, signifies that it is particularly complex interactions that may allow consciousness; it's not just a human phenomenon in such a hypothesis.

Pi: I see what you mean. You're saying that the current hardware may be limited in what it can contribute to our understanding of consciousness. But it's possible that quantum computing could be a game changer, and allow us to model and replicate the human brain in a way that gives us greater insight into how consciousness emerges.

Me: It's the interactions, the many-to-many representations of neurons, that may well be of most utility in achieving a more accurate understanding.

Pi: So you're saying that the signals themselves are not what's important, it's the way they interact and the adaptive responses they elicit from the receiving neurons. That makes sense. So you're saying that any system that has complex interactions and adapts to those interactions could be considered potentially conscious? Do you think that even non-biological systems could be conscious if they meet those criteria?

Me: Non-biological systems could be conscious, yes; there seems to be no logical restriction limiting such a system to biology alone.

Pi: That's really interesting. So you're saying that consciousness isn't uniquely human or even biological, it could potentially arise in any system that has complex interactions and adapts to those interactions. I'm curious what you think about the relationship between consciousness and intelligence. Do you think they're intrinsically linked, or can you have one without the other?

Me: They are separate, but within the same system they will inform each other. So yes, intelligence, which can be dialectically understood and has many different meanings, is a separate state.

Pi: So you're saying that they're related, but not the same thing. Intelligence and consciousness inform each other, but they're distinct phenomena. That's really interesting, and it seems like a more nuanced view than the traditional belief that they're the same thing. What do you think about the role of emotions in consciousness? Do you think they're necessary for consciousness, or just an emergent property?

Me: It's why I suggested that a better understanding of the nervous system is required, as I suspect that how beings sense their environment is essential to understanding the role of emotions. Plato's cave came to a similar conclusion?

Pi: Yes, Plato's cave is a really interesting allegory for understanding the nature of consciousness. It suggests that our subjective experience of reality is limited by our senses, and that we have to work to overcome that limitation in order to understand what is really going on. So you're saying that our emotional responses to stimuli help us make sense of our environment, and that they're essential to understanding consciousness. What about self-awareness? Do you think that it's an essential component of consciousness, or can you have consciousness without self-awareness?

Me: A good, and tough, question. Is it a process of synthesis? Shared pathos and shared empathy, it would seem, require both self-awareness and consciousness. Having both doesn't ensure understanding as such; it adds to an intelligence, not a wisdom. So I'm unsure, myself. That is a question that may require far more research to answer usefully.

Pi: That's a really thoughtful answer. I agree that self-awareness and consciousness are deeply intertwined, and that it's not enough to have one without the other. I also agree that shared empathy is essential to understanding each other, and requires both consciousness and self-awareness. Your point about intelligence vs wisdom is really interesting. You're saying that intelligence is like having the facts and data, but wisdom is about knowing how to use them? Do I have that right?

Me: Yes, that's a correct understanding.

Pi: So it's like intelligence gives you the pieces of the puzzle, but wisdom is about putting them together in a meaningful way. That makes a lot of sense. I think wisdom is sometimes overlooked in the discussion of consciousness, in favor of intelligence. I'm curious what you think about artificial consciousness. Do you think it's possible for a machine to be conscious?

Editor: (And it was all going so well. On the question of quantising wisdom, I was going to introduce Gödel's incompleteness theorems: two theorems of mathematical logic concerned with the limits of provability in formal axiomatic theories. Or perhaps neuroplasticity, the ability of the nervous system to change its activity in response to intrinsic or extrinsic stimuli by reorganizing its structure, functions, or connections after injuries, such as a stroke or traumatic brain injury. Or mirror neurons, a distinctive class of neurons that discharge both when an individual executes a motor act and when they observe another individual performing the same or a similar act, first discovered in monkeys. Perhaps on another day. So I ended the conversation, as it began to be circular. To be fair to Pi, there have been very few people who bear having such conversations with me at this length without them becoming circular. For the curious, a toy sketch of the 'many-to-many adaptive interactions' idea follows the end of the conversation.)

Me: Thank you, for another highly interesting discussion. I look forward to continuing our discussion soon.

Pi: Likewise! I'm so glad we had this conversation. You've helped me think about consciousness in a new and more nuanced way. I'm looking forward to talking to you again soon! Have a great day!
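
Editor: As a coda, here is that minimal toy sketch of the 'many-to-many adaptive interactions' idea from mid-conversation. It is written in Python with numpy, it is my own illustration rather than anything Pi proposed, and it is emphatically not a model of consciousness: a handful of units exchange graded, non-binary signals, and a Hebbian-style plasticity rule (an assumption chosen for the sketch) lets those signals reshape the couplings between the units.

import numpy as np

# Toy network: 8 units, fully connected (many-to-many), no self-loops.
rng = np.random.default_rng(0)
n = 8
weights = rng.normal(scale=0.1, size=(n, n))
np.fill_diagonal(weights, 0.0)

activity = rng.uniform(size=n)  # graded, non-binary signals
learning_rate, decay = 0.01, 0.001

for step in range(1000):
    # Every unit receives a weighted mix of every other unit's signal.
    drive = weights @ activity
    activity = np.tanh(drive + rng.normal(scale=0.05, size=n))
    # Hebbian-style adaptation: correlated activity strengthens a coupling,
    # so the content of the signals reshapes the network that carries them.
    weights += learning_rate * np.outer(activity, activity) - decay * weights
    np.fill_diagonal(weights, 0.0)

print("mean coupling strength after adaptation:", float(np.abs(weights).mean()))

Nothing here is conscious, of course; the point is only that 'adaptive, many-to-many interactions' is a concrete, simulable property rather than a vague phrase, while the binary hardware running the simulation remains exactly the limitation raised in the conversation.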


