


Sorting out The Wheat From The Chaff

It doesn't take long to find disinformation on the web. That is as true in A.I. research as it is in politics. There is a plethora of misunderstood or hyperbolic claims made about the current state of A.I. applications, alongside a multitude of dystopian and tech-utopian predictions. There are, though, certain people whose career, experience and breadth of knowledge, together with their position in the tech industry, mean their talks, papers and popular announcements deserve time and attention. One such person is Ben Goertzel.

In this video, a presentation to the Future Mind Institute before an audience of his peers, Goertzel is at his best. There is a difference when someone learned speaks before their peers rather than appearing on a podcast: the questions tend to be more pointed, and the speaker does not oversimplify the message for their audience.

To be fair, this whole blog could be devoted to discussing this talk by Goertzel, or similarly one of Stephen Wolfram's talks, and would still barely scratch the surface of the significant findings, reasoning and intuitions they present us with.

Therefore, what I have done is present a short précis of just some of the topics broached in the talk, with a view to giving my initial feedback.

Goertzel, rightly from my current understanding, demonstrates that A.I.'s current reliance on pattern recognition is no substitute for thinking, despite appearances. Through the presentation he builds a case for A.I. leading to an AGI within a 5-to-15-year time frame. He touches upon the foreseeable problems that will arise from this, partly in the early bookmarked section 'GPT will make 95% of jobs obsolete', but more importantly, and in more alarming terms, in the questions after the presentation.

What disappoints most, after watching this twice now, is the lack of questions about an AGI future with regard to the climate crisis. Perhaps that's betraying a human confirmation bias?


Some summary information about the video is given below.

Dr Ben Goertzel

Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and founder of SingularityNET, leader of the OpenCog Foundation, and chair of the transhumanist organisation, Humanity+.


The rise of AI and AGI systems has the potential to create job losses and a gap between rich and poor, but it could also lead to a universal basic income and the creation of superhuman AI beings. It is important to bias the odds towards a benevolent AI and Singularity while minimising harm and accelerating development.

AI has the potential to create job loss and a gap between the rich and poor, but also lead to a universal basic income and the creation of superhuman AI beings.


  • AI is both more dangerous than nukes and the most important tech advancement in decades: creating a thinking machine that can program and improve itself will lead to exponential increases in intelligence and potentially an ultraintelligence.

  • The progression of intelligence from evolved to engineered to self-re-engineering has brought about both challenges and possibilities.

  • Focusing on the risks of AI is important, but emphasizing the benefits is also a valid choice.

The rise of AGI systems may lead to job obsolescence and a difficult transition period, but some are excited about the possibility of merging with a super mind.

  • Future AGI systems will likely regard humans nostalgically as their creators, but the transition to that stage may be difficult, especially if AGI systems make the majority of human jobs obsolete.

  • The transition period between the obsolescence of most jobs and the widespread use of molecular assemblers will be a strange and tortuous journey.

  • AI technology developed by Apprente, now owned by IBM, has been rolled out in some McDonald's locations, illustrating the slow pace of automation in industry despite the clear potential for job obsolescence.

  • Universal basic income may be rolled out in developed countries, but there is concern about providing it to citizens in less developed countries and the potential for exacerbating global inequality.

AI is taking away jobs, but history shows that technological advancements have cycles and waves, and some jobs require human connection and teaching.

Jobs requiring creativity, leadership, and human connection will survive automation, but overall there will be fewer jobs, and younger generations are shifting away from defining themselves by their work.

AI language models lack creativity and sustained reasoning despite broad training databases, yet narrow AI systems could still potentially make 95% of human jobs obsolete.

AI can do many things, but it still cannot replicate human creativity or consciousness.

  • AGI refers to a system that can generalise beyond its programming and training and take a leap into the unknown; this is what distinguishes it from narrow AI.

  • Large language models like Facebook's Galactica cannot generate science papers at the level of even a mediocre master's student, because doing original science requires taking a step beyond what was there before.

  • AI can synthesize music based on existing compositions, but it cannot replicate the creativity of humans in combining different genres and styles.

  • ChatGPT-style systems can probably already fool a random human into believing they are talking to a human in conversation, but they cannot pass a five-year-long Turing test or perform tasks that require high-level strategy or creativity.

  • Fussing about the definition of life is not critical for biologists; progress can still be made in synthetic biology without worrying about whether a virus is alive or not.
