Sorting out The Wheat From The Chaff

It doesn't take long to find disinformation on the web. That is as true in A.I. research as it is in politics. There is a plethora of misunderstood or hyperbolic claims about the current state of A.I. applications, alongside a multitude of dystopian and tech-utopian predictions. There are, though, certain people whose careers, experience and breadth of knowledge, together with their position in the tech industry, mean their talks, papers and popular announcements deserve time and attention. One such person is Goertzel.

This video is a presentation by Goertzel to the Future Mind Institute, where he speaks before an audience of his peers, and he is at his best. There is a difference when someone learned speaks before their peers rather than appearing on a podcast: the questions tend to be more pointed, and the speaker does not oversimplify the message for their audience.

To be fair, this whole blog site could be devoted to discussing this talk by Goertzel, or similarly one of Stephen Wolfram's talks, and would still barely scratch the surface of the significant findings, reasonings and intuitions they present us with.

What I have done, therefore, is present a short précis of just some of the topics broached in the talk, with a view to giving my initial feedback.

Goertzel, rightly from my current understanding, demonstrates that A.I.'s current reliance on pattern recognition is no substitute for thinking, despite appearances. Through the presentation he builds a case for A.I. leading to an AGI within a 5-to-15-year time frame. He touches upon the foreseeable problems that will arise from this, partly in the early bookmarked section 'GPT will make 95% of jobs obsolete', but more importantly, and in more alarming terms, in the questions after the presentation.

What disappoints most, after watching this twice now, is the lack of questions on an AGI future re: the climate crisis. Perhaps that betrays a human confirmation bias?


Some summary information about the video below. 

Dr Ben Goertzel

Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and founder of SingularityNET, leader of the OpenCog Foundation, and chair of the transhumanist organisation, Humanity+.


The rise of AI and AGI systems has the potential to create job losses and a gap between rich and poor, but could also lead to a universal basic income and the creation of superhuman AI beings; it is important to bias the odds towards a benevolent AI and Singularity while minimizing harm and accelerating development.

AI has the potential to create job losses and a gap between rich and poor, but could also lead to a universal basic income and the creation of superhuman AI beings.


  • AI is both more dangerous than nukes and the most important tech advancement in decades, as creating a thinking machine that can program and improve itself will lead to exponential increases in intelligence and potentially create an ultraintelligence.

  • The evolution of intelligence from evolved to engineered and self-re-engineering has brought about both challenges and possibilities.

  • Focusing on the risks of AI is important, but emphasizing the benefits is also a valid choice.

The rise of AGI systems may lead to job obsolescence and a difficult transition period, but some are excited about the possibility of merging with a supermind.

  • Future AGI systems will likely make humans nostalgic for their creators, but the transition to that stage may be difficult, especially if AGI systems render the majority of human jobs obsolete.

  • The transition period between the obsolescence of most jobs and the widespread use of molecular assemblers will be a strange and tortuous journey.

  • AI technology developed by Apprente, now owned by IBM, has been rolled out in some McDonald's locations, illustrating the slow pace of automation across industries despite the clear potential for job obsolescence.

  • Universal basic income may be rolled out in developed countries, but there is concern about providing it to citizens in less developed countries and the potential for exacerbating global inequality.

AI is taking away jobs, but history shows that technological advancements have cycles and waves, and some jobs require human connection and teaching.

Jobs requiring creativity, leadership, and human connection will survive automation, but overall there will be fewer jobs, and younger generations are shifting away from defining themselves by their work.

AI language models lack creativity and sustained reasoning despite broad training databases, yet narrow AI systems could still potentially make 95% of human jobs obsolete.

AI can do many things, but it still cannot replicate human creativity or consciousness.

  • AGI refers to a system that can generalize beyond its programming and training, take a leap into the unknown, and is different from narrow AI.

  • Large language models like Facebook's Galactica cannot generate science papers at the level of even a mediocre master's student, because doing original science requires taking a step beyond what came before.

  • AI can synthesize music based on existing compositions, but it cannot replicate the creativity of humans in combining different genres and styles.

  • ChatGPT-style systems can probably already fool a random human into thinking they are talking to a human, but they cannot pass a five-year-long Turing test or perform tasks that require high-level strategy or creativity.

  • Fussing over the definition of life is not critical for biologists; progress can still be made in synthetic biology without worrying about whether a virus is alive or not.
