
Will AIs Take All Our Jobs and End Human History—or Not? Well, It’s Complicated…

Drawing on Stephen Wolfram's excellent blog post of the same name

Artificial intelligence (AI) is one of the most powerful and disruptive technologies of our time. It has the potential to transform every aspect of human life, from health care and education to entertainment and commerce. But it also raises some profound ethical and social questions: Will AI replace human workers and make them obsolete? Will AI create new opportunities and challenges for human creativity and collaboration? Will AI pose an existential threat to humanity and its values?


These are not easy questions to answer, and there is no consensus among experts and researchers on the future of AI and its impact on society. Some argue that AI will augment human capabilities and enhance human well-being, while others warn that AI will surpass human intelligence and control human destiny. Some envision a utopian scenario where AI will solve all our problems and create a post-scarcity society, while others foresee a dystopian scenario where AI will cause mass unemployment, inequality and conflict.


In this blog post, we will try to summarise some of the main arguments and perspectives on these issues, and highlight some of the uncertainties and complexities involved. We will also suggest some ways that we can prepare for the possible scenarios and shape the development and use of AI in a responsible and ethical manner.


AI and Jobs: From Automation to Augmentation


One of the most debated topics in AI is its impact on jobs and employment. Many studies have predicted that AI will automate a large number of tasks and occupations, especially those that are routine, repetitive or low-skill. According to a 2017 report by McKinsey Global Institute, up to 800 million workers worldwide could be displaced by automation by 2030. Another 2017 report by PwC estimated that 30% of UK jobs could be at high risk of automation by the mid-2030s.


However, these predictions are not deterministic or inevitable. They depend on many factors, such as the pace and direction of technological innovation, the availability and cost of human labour, the demand and preferences of consumers and employers, the legal and regulatory frameworks, and the social and cultural norms. Moreover, automation does not necessarily mean elimination. It can also mean augmentation: AI can complement human skills and abilities, rather than replace them. For example, AI can assist doctors in diagnosing diseases, teachers in personalising learning, or artists in creating new forms of expression.


Therefore, the impact of AI on jobs is not only a matter of quantity, but also of quality. AI can create new jobs that require new skills and competencies, such as data scientists, AI engineers or ethicists. AI can also change the nature and content of existing jobs, requiring workers to adapt and learn new skills. For instance, a 2018 report by the World Economic Forum estimated that by 2022, at least 54% of all employees will need significant reskilling and upskilling.


AI and Society: From Competition to Cooperation


Another important topic in AI is its impact on society and human values. Many people are concerned that AI will create or exacerbate social problems such as inequality, discrimination, privacy violation or cybercrime. For example, AI can be used to manipulate information or influence behaviour through fake news or deepfakes. AI can also be biased or unfair in its decisions or actions, due to the data or algorithms it uses. For instance, a 2016 study by ProPublica found that a widely used algorithm for predicting criminal recidivism was racially biased against black defendants.


However, these problems are not inherent or unavoidable in AI. They are reflections of the human choices and values that shape the design and use of AI. Therefore, we can address them by ensuring that AI is aligned with human values and principles, such as fairness, accountability, transparency and privacy. For example, we can develop ethical guidelines and standards for AI development and deployment. We can also implement mechanisms for oversight and audit of AI systems and their outcomes. Furthermore, we can educate and empower users and stakeholders to understand and, where necessary, challenge AI decisions.
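To make the idea of auditing an AI system a little more concrete, here is a minimal sketch of one common fairness check: comparing a model's positive-outcome rates across demographic groups. The data, the group labels and the 0.8 "four-fifths" threshold are all illustrative assumptions, not a reference to any particular system or standard.

```python
# Minimal sketch of a fairness audit: compare the rate of positive
# predictions across groups, and compute a disparate-impact ratio.
from collections import defaultdict

def positive_rate_by_group(predictions):
    """predictions: iterable of (group, predicted_positive) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, positive in predictions:
        totals[group] += 1
        if positive:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group positive rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs: (group, predicted_positive)
preds = [("A", True), ("A", False), ("A", True), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]

rates = positive_rate_by_group(preds)
print(rates)                                   # {'A': 0.75, 'B': 0.25}
print(round(disparate_impact_ratio(rates), 2)) # 0.33
```

A ratio well below 0.8 (a common rule of thumb) would flag the system for closer human review; a real audit would, of course, look at many more metrics and at the data and context behind them.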


Moreover, AI can also be used for positive social purposes, such as advancing human rights or promoting social good. For example, AI can help monitor and document human rights abuses through satellite imagery or facial recognition. AI can also help tackle global challenges such as poverty, hunger or climate change through data analysis and optimisation. For instance, a 2019 project by Microsoft used AI to map poverty in Africa using satellite imagery.


Therefore, the impact of AI on society is not only a matter of risk, but also of opportunity. AI can enable new forms of human collaboration and cooperation across borders or domains.

