


Sorting out The Wheat From The Chaff

It doesn't take long to find disinformation on the web. That is as true in A.I. research as it is in politics. There is a plethora of misunderstood or hyperbolic claims about the current state of A.I. applications, alongside a mass of dystopian or tech-utopian predictions. There are, though, certain people whose career, experience and breadth of knowledge, together with their position in the tech industry, mean their talks, papers and public announcements deserve time and attention. One such person is Goertzel.

This video is a presentation by Goertzel to the Future Mind Institute, where he has an audience of his peers, and Goertzel is at his best. There is a difference when a learned speaker addresses their peers rather than appearing on a podcast: the questions tend to be more pointed and the speaker will not oversimplify the message for their audience.

To be fair, this whole blog site could be devoted to discussing this talk by Goertzel, or similarly one of Stephen Wolfram’s talks, and would still barely scratch the surface of the significant findings, reasoning and intuitions that they present us with.

What I have done, therefore, is present a small précis of just some of the topics broached in the talk, with a view to giving my initial feedback.

Goertzel, rightly, from my current understanding, demonstrates that A.I.'s current reliance on pattern recognition is no substitute for thinking, despite appearances. Through the presentation he builds a case for A.I. leading to an AGI within a 5-to-15-year time frame. He touches upon the foreseeable problems that will arise from this, partly in the early bookmarked section ‘GPT will make 95% of jobs obsolete’, but more importantly, and in more alarming terms, in the questions after the presentation.

What disappoints most, after watching this twice now, is the lack of questions on an AGI future re: the climate crisis. Perhaps that’s betraying a human confirmation bias?


Some summary information about the video below. 

Dr Ben Goertzel

Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and founder of SingularityNET, leader of the OpenCog Foundation, and chair of the transhumanist organisation, Humanity+.


The rise of AI and AGI systems has the potential to create job losses and a widening gap between rich and poor, but it could also lead to a universal basic income and the creation of superhuman AI beings; it is important to bias the odds towards a benevolent AI and Singularity while minimizing harm and accelerating development.

AI has the potential to create job loss and a gap between the rich and poor, but also lead to a universal basic income and the creation of superhuman AI beings.


  • AI is both more dangerous than nukes and the most important tech advancement in decades, as creating a thinking machine that can program and improve itself will lead to exponential increases in intelligence and potentially create an ultraintelligence.

  • The evolution of intelligence from evolved, to engineered, to self-re-engineering has brought about both challenges and possibilities.

  • Focusing on the risks of AI is important, but emphasizing the benefits is also a valid choice.

The rise of AGI systems may lead to job obsolescence and a difficult transition period, but some are excited about the possibility of merging with a super mind.

  • Future AGI systems will likely make humans nostalgic for their creators, but the transition to that stage may be difficult, especially if AGI systems make the majority of human jobs obsolete.

  • The transition period between the obsolescence of most jobs and the widespread use of molecular assemblers will be a strange and tortuous journey.

  • AI technology developed by Apprente, now owned by IBM, has been rolled out in some McDonald's locations, illustrating the slow pace of automation in industries despite the clear potential for job obsolescence.

  • Universal basic income may be rolled out in developed countries, but there is concern about providing it to citizens in less developed countries and the potential for exacerbating global inequality.

AI is taking away jobs, but history shows that technological advancements come in cycles and waves, and some jobs require human connection and teaching.

Jobs requiring creativity, leadership, and human connection will survive automation, but overall there will be fewer jobs, and younger generations are shifting away from defining themselves by their work.

AI language models lack creativity and sustained reasoning despite broad training databases, yet narrow AI systems could still make 95% of human jobs obsolete.

AI can do many things, but it still cannot replicate human creativity or consciousness.

  • AGI refers to a system that can generalize beyond its programming and training, take a leap into the unknown, and is different from narrow AI.

  • Large language models like Facebook's Galactica cannot generate science papers at the level of even a mediocre master's student, because doing original science requires taking a step beyond what was there before.

  • AI can synthesize music based on existing compositions, but it cannot replicate the creativity of humans in combining different genres and styles.

  • ChatGPT-style systems can probably already fool a random human into thinking they are talking to a person, but they cannot pass a five-year-long Turing test or perform tasks that require high-level strategy or creativity.

  • Fussing over the definition of life is not critical for biologists; progress can still be made in synthetic biology without worrying about whether a virus is alive or not.
