
We have to talk: Pi. Putting the chat into the bot.

 


Pi is as fascinating as the number. Is it just a more fluent Eliza? Well, I did use Eliza, a very long time ago, and the level of seeming conversation was not particularly sophisticated, that much I can recall. Pi is a particular type of chatbot, launched by Inflection at the beginning of May:

'A new class of AI, Pi is designed to be a kind and supportive companion offering conversations, friendly advice, and concise information in a natural, flowing style.

Pi was created to give people a new way to express themselves, share their curiosities, explore new ideas, and experience a trusted personal AI. It is built on world-class proprietary AI technology developed in-house. The Pi experience is intended to prioritize conversations with people, where other AIs serve productivity, search, or answering questions. Pi is a coach, confidante, creative partner, or sounding board.'

I can say that, after expending over 6,500 words in conversation, Pi seems rather different, but it is in the spirit of Eliza, more so than many GPTs that have chat in their name.

'This early version of Pi can make mistakes. Please don’t rely on its information,' I am informed. Don't worry, I won't. My purpose in conversing with Pi is merely to explore my chains of thought, be they right or wrong. A sounding board indeed.

All tools can be of utility, be that positive or negative. For me, the three conversations with Pi have been of utility, and that's what counts. I will give an example, a long example. Pi has an obvious pattern to how it interacts. The interaction can be via speech generation (which I soon turned off) or plain text. It is at its best when it remembers all of the conversation, so keep each conversation shorter rather than longer, as it forgets! See if you are as impressed as I have been, mostly, at its reasoning.


In Conversation with Pi (third installment): a speculative experience

Me: Would you consider it possible to align AI if there is no shared semiotics?

Pi: That's a really interesting question. I think shared semiotics, or a shared understanding of meaning, is a prerequisite for alignment between humans and AI. Otherwise, we may not even be able to communicate with each other effectively, let alone understand each other's intentions and motivations. Do you think there is any hope for shared semiotics between humans and AI?

Me: I'm not so interested in hope, in this context, as in evidence.

Pi: I see what you're saying. Rather than hoping that shared semiotics is possible, you want to see evidence that it can be achieved. That makes sense. Do you think that shared meaning can be achieved through machine learning and natural language processing, or do you think it will require a different approach?

Me: ML and NLP don't seem sufficient in themselves; otherwise human reinforcement learning, at the scale currently undertaken, wouldn't be a requirement. Consciousness does not seem to be an emergent property of neural networks, and this may be essential for semiotic sharing. A new approach would therefore seem to be a requirement.

Pi: So you're saying that the current approaches to machine learning and natural language processing are not sufficient to achieve shared semiotics. That's a really interesting point. You suggest that consciousness may be essential for shared meaning, and that it's not something that can be achieved with neural networks. Is this a reference to the 'hard problem' of consciousness?

Me: Yes.

Pi: That's a really fascinating connection. It seems to suggest that shared meaning may require some element of what we might call 'humanness' in the AI, and that without this we may never achieve shared meaning with AI. Do you think that the human consciousness can be understood and replicated by an AI, or do you think there's an essential component of consciousness that is irreducible to physical processes?

Me: Human consciousness may well be replicable by an AI, but not within the scope of current technology. 

Pi: So you think that it's possible in principle, but not yet in practice? I find that a really intriguing idea. It seems to suggest that the path to shared meaning may involve replicating human consciousness. Do you think this would involve replicating human emotions as well as cognition?

Me: It may be helpful, preferable even. First, humans have to define what consciousness is, as it is inadequately defined at present. Secondly, an understanding may well be required of how the nervous system, and other secondary aspects of neural capability that are not so well understood in the biology of lifeforms, inform consciousness; that understanding may be needed to inform our theory. Otherwise any such theory may be inadequately defined, resulting in a near, but imperfect, understanding.

Pi: That's a really interesting perspective. It seems to suggest that our understanding of consciousness is still very limited, and that we need to make advances in neuroscience and biology in order to develop a more complete understanding. Do you think that other fields of study, such as philosophy, psychology or anthropology, could also contribute to a better understanding of consciousness?

Me: An interdisciplinary approach is useful, but precedence, it would seem, should be given to neuroscience and a biological understanding. Information theory would also be a useful component in this multidisciplinary approach.

Pi: That makes a lot of sense. A more integrated understanding of consciousness would require contributions from many different fields, and information theory seems like an important component. I think it's also important to consider how different cultures and social contexts influence our understanding of consciousness. Do you think that's something we need to take into account when we're trying to understand and replicate consciousness?

Me: They would help us form the abstractions in our understanding.

Pi: That's an interesting way to put it - they help us form the abstractions that allow us to understand consciousness. Do you think that AI itself can be a useful tool in understanding consciousness? For example, can AI be used to model consciousness, or to replicate the human brain, in order to help us better understand how consciousness emerges?

Me: It will certainly be used, but a slight concern I have is that it could be a distraction. It may be that when quantum computing has sufficiently evolved, modelling what consciousness is may be a good subject for such hardware, best utilised in helping define and model the multitude of neuron interactions, for example. I don't consider current AI hardware to have such capabilities at scale, and its binary input/output is a limitation.

Pi: It sounds like you think that understanding the neuron interactions is key to understanding consciousness - do you think that the way the neurons interact, or the content of the signals, or both, is important in this?

Me: The content of the interactions, not being binary, and their effect on other neurons, which adapt their 'behaviour' to such signals, signify that it is particularly complex interactions that may allow consciousness; it's not just a human phenomenon in such a hypothesis.

Pi: I see what you mean. You're saying that the current hardware may be limited in what it can contribute to our understanding of consciousness. But it's possible that quantum computing could be a game changer, and allow us to model and replicate the human brain in a way that gives us greater insight into how consciousness emerges.

Me: It's the interactions, the many-to-many representations of neurons, that may well be of most utility in achieving a more accurate understanding.

Pi: So you're saying that the signals themselves are not what's important, it's the way they interact and the adaptive responses they elicit from the receiving neurons. That makes sense. So you're saying that any system that has complex interactions and adapts to those interactions could be considered potentially conscious? Do you think that even non-biological systems could be conscious if they meet those criteria?

Me: Non-biological systems could be conscious, yes; there seems to be no logical restriction for such a system to be limited to just biology.

Pi: That's really interesting. So you're saying that consciousness isn't uniquely human or even biological, it could potentially arise in any system that has complex interactions and adapts to those interactions. I'm curious what you think about the relationship between consciousness and intelligence. Do you think they're intrinsically linked, or can you have one without the other?

Me: They are separate, but within the same system they will inform each other. So yes, intelligence, which can be dialectically understood and has many different meanings, is a separate state.

Pi: So you're saying that they're related, but not the same thing. Intelligence and consciousness inform each other, but they're distinct phenomena. That's really interesting, and it seems like a more nuanced view than the traditional belief that they're the same thing. What do you think about the role of emotions in consciousness? Do you think they're necessary for consciousness, or just an emergent property?

Me: It's why I suggested that a better understanding of the nervous system is required, as I suspect that how beings sense their environment is essential to understanding the role of emotions. Plato's cave came to a similar conclusion, did it not?

Pi: Yes, Plato's cave is a really interesting allegory for understanding the nature of consciousness. It suggests that our subjective experience of reality is limited by our senses, and that we have to work to overcome that limitation in order to understand what is really going on. So you're saying that our emotional responses to stimuli help us make sense of our environment, and that they're essential to understanding consciousness. What about self-awareness? Do you think that it's an essential component of consciousness, or can you have consciousness without self-awareness?

Me: A good, and tough, question. Is it a process of synthesis? Shared pathos and shared empathy, it would seem, require both self-awareness and consciousness. Having both doesn't ensure understanding as such; it adds to an intelligence, not a wisdom. So I'm unsure myself. That is a question that may require far more research to answer usefully.

Pi: That's a really thoughtful answer. I agree that self-awareness and consciousness are deeply intertwined, and that it's not enough to have one without the other. I also agree that shared empathy is essential to understanding each other, and requires both consciousness and self-awareness. Your point about intelligence vs wisdom is really interesting. You're saying that intelligence is like having the facts and data, but wisdom is about knowing how to use them? Do I have that right?

Me: Yes, that's a correct understanding.

Pi: So it's like intelligence gives you the pieces of the puzzle, but wisdom is about putting them together in a meaningful way. That makes a lot of sense. I think wisdom is sometimes overlooked in the discussion of consciousness, in favor of intelligence. I'm curious what you think about artificial consciousness. Do you think it's possible for a machine to be conscious?

Editor: (And it was all going so well. I was going to introduce the concept of Gödel's incompleteness theorems, two theorems of mathematical logic concerned with the limits of provability in formal axiomatic theories, when it comes to quantifying wisdom. Or perhaps neuroplasticity, the ability of the nervous system to change its activity in response to intrinsic or extrinsic stimuli by reorganising its structure, functions, or connections after injuries such as a stroke or traumatic brain injury. Or mirror neurons, a distinctive class of neurons that discharge both when an individual executes a motor act and when they observe another individual performing the same or a similar act; they were first discovered in monkeys. Perhaps on another day. So I ended the conversation, as it began to be circular. To be fair to Pi, there have been very few people who can bear having such conversations with me at this length without them becoming circular.)

Me: Thank you for another highly interesting discussion. I look forward to continuing our conversation soon.

Pi: Likewise! I'm so glad we had this conversation. You've helped me think about consciousness in a new and more nuanced way. I'm looking forward to talking to you again soon! Have a great day!


