
Power and Progress, what lessons are there from previous tech disruptions?

Simon Johnson discusses #PowerAndProgress, a new book co-authored with Daron Acemoglu, in an interview with iNET.

Find a copy & learn more

A thousand years of history and contemporary evidence make one thing clear: progress depends on the choices we make about technology. New ways of organizing production and communication can either serve the narrow interests of an elite or become the foundation for widespread prosperity.

The wealth generated by technological improvements in agriculture during the European Middle Ages was captured by the nobility and used to build grand cathedrals while peasants remained on the edge of starvation. The first hundred years of industrialization in England delivered stagnant incomes for working people. And throughout the world today, digital technologies and artificial intelligence undermine jobs and democracy through excessive automation, massive data collection, and intrusive surveillance.

It doesn’t have to be this way. Power and Progress demonstrates that the path of technology was once—and may again be—brought under control. The tremendous computing advances of the last half century can become empowering and democratizing tools, but not if all major decisions remain in the hands of a few hubristic tech leaders.

With their breakthrough economic theory and manifesto for a better society, Acemoglu and Johnson provide the vision needed to reshape how we innovate and who really gains from technological advances.

Some highlights below:

Johnson: "I think first and most importantly, control over data is very clear. What has happened is that we have put a lot of our own information, our data, and our photographs on the internet, hoping to share them with friends and family. However, they have been acquired without our permission to train generative AI. That's a major problem that needs to be addressed. I think the second piece that's really quite salient is surveillance, and that's something that obviously predates AI. There has been plenty of surveillance building up, but we think it's really going to reach a new level of efficiency, which means squeezing workers. That is also something that needs to be prevented. Then there are also various forms of manipulation, as you mentioned just now. The ways that we as consumers allow ourselves to be manipulated by the people who have this data, who have the algorithms, and who are being pretty cynical about what they want us to do."


"We are also very worried about what generative AI will do. I wouldn't say anybody has fully established exactly how it impacts the organization of work. One thing that ChatGPT seems to be doing is taking away jobs from low-level people who are doing relatively simple tasks, or what you could call entry-level positions. There are quite a lot of those jobs, as you know, in India. In fact, that's India's big stepping stone into the global economy. I think that losing that rung in the ladder would be a really bad blow to India. While there might be an impact on manufacturing, which you pointed out earlier, and we might lose those labour-intensive textile jobs, for example, we may lose even more of those labour-intensive text jobs: the people who input text, the people who do medical records processing, the people who run call centres."


"I think today in 2023, we're grappling with at least echoes of what we saw in 2007-2008, but the echoes are not that strong at the moment, Rob. This is in part because the vision changed, the rules changed, and the behavior changed. Now on tech, I think it's very analogous: there is a vision of machine learning creating machine intelligence, which is, I think, a completely misleading term. But the idea is that you want to replace humans in production, in the service sector, everywhere in the economy where machines can do it. Sometimes it's not very effective. Daron and Pascual Restrepo coined the term 'so-so automation' for things like self-checkout kiosks at the grocery store. They don't boost productivity that much, but they do tilt the balance of power between the owners of the grocery store and the workers, and as a consequence they're also popular, perhaps with analysts, so that technology does get adopted. I think we're grappling with another vision, Rob, that's become too predominant, too prevalent, and somewhat dangerous. That doesn't mean we're anti-tech. I'm not anti-finance. We need a financial sector; I just don't want a financial sector that blows itself up. And I don't want a tech sector that destroys millions or tens of millions of jobs without giving us an opportunity to build new jobs, new tasks, new things humans can do, in the spirit of the transformations that innovation often brings."

Rob: And do you think that, how should I say, new digital technology has played a big role in the onset or the conduct of the Ukraine war?

Johnson: "Well, I think the situation with Russia's invasion of Ukraine, Rob, is very dangerous in many ways. But if you think about the technology: in the past, when we've intensely looked for malevolent applications, as during World War One, the technology behind artificial fertilizer was turned by the same scientists to the production of poison gas. I think that sort of distortion of technology, focusing it on killing people, is very problematic. And there is always potential for more of that, particularly when technologically advanced countries are drawn into prolonged conflict. So I think we really need a de-escalation. We need Russia to leave Ukraine, actually, and then we need a de-escalation around Russia. We need China to recognize that, and we should recognize that ourselves. If we can quiet down the world and push more of the technology into productive, peaceful purposes, everyone gains."

It's well worth watching the whole interview.

