
Intel and the gamble for good

The Aurora supercomputer, a joint project between Argonne National Laboratory, Intel, and Hewlett Packard Enterprise, is expected to be completed in 2023. Aurora will be one of the first exascale supercomputers, capable of performing more than 1 quintillion (10^18) calculations per second. Anticipated uses include:

  • Materials science: Aurora will be used to design new materials with improved properties, such as strength, lightness, and conductivity. This could lead to new technologies in areas such as energy, transportation, and medicine.
  • Drug discovery: Aurora will be used to accelerate the discovery of new drugs by simulating the behavior of molecules and proteins. This could lead to new treatments for cancer, Alzheimer's disease, and other diseases.
  • Climate science: Aurora will be used to improve our understanding of climate change by simulating the Earth's atmosphere and oceans. This could help us develop more effective strategies for mitigating and adapting to climate change.

Aurora is a major investment in the future of scientific discovery. It will enable scientists to make new discoveries that could improve our lives in many ways, and it can be applied to other purposes as well.
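
To give a rough sense of what "more than 1 quintillion calculations per second" means, here is a back-of-the-envelope sketch in Python. The laptop figure is an assumption chosen for illustration, not a benchmark.

  # Rough scale comparison for the exascale figure quoted above.
  # 1 quintillion operations per second = 1e18 FLOPS (one exaFLOPS).
  AURORA_FLOPS = 1e18    # quoted peak: ~1 quintillion calculations per second
  LAPTOP_FLOPS = 1e11    # assumed ~100 GFLOPS for a typical consumer laptop

  speedup = AURORA_FLOPS / LAPTOP_FLOPS
  seconds_per_day = 60 * 60 * 24

  # Days the assumed laptop would need to match one second of Aurora time
  laptop_days = speedup / seconds_per_day

  print(f"Aurora is roughly {speedup:,.0f}x the assumed laptop")
  print(f"One second of Aurora work is about {laptop_days:,.0f} laptop-days")

On these assumed numbers, a single second of Aurora time corresponds to roughly four months of continuous work on an ordinary laptop.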

In addition to its impressive speed, Aurora will also feature a number of other innovations that will make it a powerful tool for scientific research. These include:

  • A large-scale generative AI model: This model will be trained on a massive dataset of scientific data and code. It may be able to generate new ideas and hypotheses, and to suggest new experiments.
  • A high-performance storage system: This system will be able to store and access large amounts of data quickly and efficiently. This will allow scientists to run large-scale simulations and to analyse large datasets.
  • A high-speed network: This network will allow Aurora to communicate with other supercomputers and data centers around the world. This will allow scientists to collaborate on projects and to access the latest research data.
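
To put the storage and network claims above into some perspective, here is a small illustrative calculation. The dataset size and link speeds are assumptions chosen purely for illustration.

  # Illustrative only: how dataset size and bandwidth interact for the kind
  # of large-scale simulation output described above. All figures are assumptions.
  def transfer_hours(dataset_terabytes: float, gbits_per_sec: float) -> float:
      """Hours needed to move a dataset at a sustained link speed."""
      dataset_bits = dataset_terabytes * 1e12 * 8      # TB -> bits
      seconds = dataset_bits / (gbits_per_sec * 1e9)   # bits / (bits per second)
      return seconds / 3600

  # A hypothetical 500 TB simulation output over two assumed link speeds
  for gbps in (10, 400):
      print(f"{gbps:>4} Gbit/s link: {transfer_hours(500, gbps):,.1f} hours")

Even a generous link speed makes moving raw output at this scale slow, which is part of why the storage system and the network are treated as core parts of the machine rather than afterthoughts.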
This is just one of the AI systems I'd expect the UK Labour party to wish to see 'licensed like medicines or nuclear power'. Much depends on how Aurora is licensed: it could open up scientific discovery significantly if licensing lowered the cost of access, which will otherwise be extremely prohibitive once the system is launched. Or it could merely be another exercise in external auditors checking effectively black-box systems, declaring that the processes are in order and that all possible risks have been mitigated against, even though that isn't knowable. Medicines may be licensed, but licensing doesn't remove the risk of side effects or misdiagnosis.

The nuclear industry is even more strictly licensed, yet that didn't prevent Three Mile Island, Windscale, Chernobyl or Fukushima. The Fukushima Daiichi disaster was a wake-up call for the nuclear industry: it showed that even well-designed and well-operated plants are at risk of disaster if they are not properly prepared for extreme events. The difference with a potential AI disaster, if we are to accept the word of many in the industry, is that should a 'once in a million year' type of risk occur, there may be few of us left around for a second chance.
