
Power and Progress: what lessons are there from previous tech disruptions?


 


In an interview with iNET, Simon Johnson discusses #PowerAndProgress, a new book co-authored with Daron Acemoglu.

Find a copy & learn more

A thousand years of history and contemporary evidence make one thing clear. Progress depends on the choices we make about technology. New ways of organizing production and communication can either serve the narrow interests of an elite or become the foundation for widespread prosperity.

The wealth generated by technological improvements in agriculture during the European Middle Ages was captured by the nobility and used to build grand cathedrals while peasants remained on the edge of starvation. The first hundred years of industrialization in England delivered stagnant incomes for working people. And throughout the world today, digital technologies and artificial intelligence undermine jobs and democracy through excessive automation, massive data collection, and intrusive surveillance.

It doesn’t have to be this way. Power and Progress demonstrates that the path of technology was once—and may again be—brought under control. The tremendous computing advances of the last half century can become empowering and democratizing tools, but not if all major decisions remain in the hands of a few hubristic tech leaders.

With their breakthrough economic theory and manifesto for a better society, Acemoglu and Johnson provide the vision needed to reshape how we innovate and who really gains from technological advances.

Some highlights below:

Johnson: "I think first and most importantly, control over data is very clear. What has happened is that we have put a lot of our own information, our data, and our photographs on the internet, hoping to share them with friends and family. However, they have been acquired without our permission to train generative AI. That's a major problem that needs to be addressed. I think the second piece that's really quite salient is surveillance, and that's something that obviously predates AI. There has been plenty of surveillance building up, but we think it's really going to reach a new level of efficiency, which means squeezing workers. That is also something that needs to be prevented. Then there are also various forms of manipulation, as you mentioned just now. The ways that we as consumers allow ourselves to be manipulated by the people who have this data, who have the algorithms, and who are being pretty cynical about what they want us to do."


"We are also very worried about what generative AI will do. I wouldn't say anybody has fully established exactly how it impacts the organization of work. One thing that ChatGPT seems to be doing is taking away jobs for low-level people who are doing relatively simple tasks, or you could call them entry-level positions. There are quite a lot of those jobs, as you know, in India. In fact, that's India's big stepping stone into the global economy. I think that losing that rung in the ladder would be a really bad blow to India. While there might be an impact on manufacturing, which you pointed out earlier, we might lose those labour-intensive textile jobs, for example. We may lose even more of those labour-intensive text jobs right - the people who input text, the people who do medical records processing, the people who run call centres."


"I think today in 2023, we're grappling with at least echoes of what we saw in 2007-2008, but the echoes are not that strong at the moment, Rob. This is in part because the vision changed, the rules changed, and the behavior changed. Now on tech, I think it's very analogous that there is a vision of machine learning creating machine intelligence, which is this I think completely misleading term. But the idea is that you want to replace humans in production, in the service sector, everywhere in the economy. They can do it. Sometimes it's not very effective Tehran and Pascal Restrepo coined the term so-so automation like self-checkout kiosks at the grocery store. They don't boost productivity that much, but they do tilt the balance of power between the owners or the grocery store and the workers and consequence. Then they're also popular perhaps with analysts so that technology does get adopted. I think we're grappling with another vision, Rob, that's become too predominant, too prevalent, and somewhat dangerous. It doesn't mean we're anti-tech. I'm not anti-finance. We need a financial sector. I don't want a financial actor that blows yourself up. I don't want a tech sector that destroys millions or tens of millions of jobs without giving us an opportunity to build new jobs, new tasks, new things humans can do in the spirit of transformations that innovation often brings."

Rob: And do you think that, how would I say it, new digital technology has played a big role in the onset of, or the operations in, the Ukraine war?

Johnson: "Well, I think the situation with Russia's invasion of Ukraine, Rob, is very dangerous in many ways. But if you think about the technology, when we develop technology in the past and when we've intensely looked for malevolent applications like during World War One, where the technology that was behind artificial fertilizer was turned into poison gas, the production of poison gas by the same scientists. I think that sort of distortion of technology and focusing on killing people is very problematic. And I think there is potential always potential for more of that, particularly when technologically advanced countries are drawn into prolonged conflict. So, I think we really need a de-escalation. We need Russia to leave Ukraine actually, and then we need a de-escalation around Russia. We need China to recognize that, and we should recognize that ourselves. If we can quiet down the world and push more of the technology into productive peaceful purposes, everyone gains."

It's well worth watching the whole interview.

