
Can We Build a Safe Superintelligence? Safe Superintelligence Inc. Raises Intriguing Questions

Safe Superintelligence Inc. (SSI) has burst onto the scene with a bold mission: to build the world's first safe superintelligence. The ambition of its founders, Ilya Sutskever, Daniel Gross, and Daniel Levy, is undeniable, but before we all sign up to join their "cracked team," let's delve deeper into the potential issues with their approach.


One of the most critical questions is defining "safe" superintelligence. What values would guide this powerful AI? How can we ensure it aligns with the complex and often contradictory desires of humanity?  After all, "safe" for one person might mean environmental protection, while another might prioritise economic growth, even if it harms the environment.  Finding universal values that a superintelligence could adhere to is a significant hurdle that SSI hasn't fully addressed.


Another potential pitfall lies in SSI's plan to rapidly advance capabilities while prioritising safety. Imagine a Formula One car with a fantastic safety record – until the engine becomes so powerful that even the best brakes can't control it on a tight corner. Similarly, an extremely powerful AI, even with built-in safety features, could become very difficult to control, and its decision-making process hard to understand, potentially leading to catastrophic consequences.


SSI's laser focus on a single product – a safe superintelligence – could also be problematic. This approach might lead to tunnel vision, neglecting other crucial areas of AI safety research, such as preventing bias in AI algorithms or mitigating the risks of autonomous weapons systems.


Furthermore, their business model raises concerns. While they claim insulation from short-term pressures, it's unclear how they'll secure long-term funding and attract top talent without a commercial product roadmap. Can secrecy and the absence of a clear financial incentive truly attract the best minds in the field?


Finally, SSI's announcement emphasises a "lean, cracked team" working in secrecy. While this might foster a certain level of innovation, a lack of transparency in research and development could raise ethical concerns and make it harder to assess potential risks and hold the company accountable.


The question of superintelligence's arrival also needs discussion. Is it truly "within reach," as SSI claims? Misjudging the timeline could leave us woefully unprepared for a superintelligence that arrives sooner than anticipated. Additionally, limiting the team to offices in the US and Israel might hinder the exchange of ideas and perspectives crucial for tackling a global challenge like AI safety.


Safe Superintelligence Inc. deserves credit for its ambition.  However, their approach raises questions about feasibility and potential risks. Open discussion and collaboration with the broader AI safety community will be crucial for their, and ultimately, humanity's, success. Only through honest conversations about defining universal values and navigating the ethical and technical complexities can we ensure that superintelligence, if and when it arrives, truly benefits humanity.
