
Jobs, AI, the disruption


(Image produced with the new Kandinsky 2.1)

Not all illustrators will be unemployed shortly, given the current generative abilities of text-to-art models. It may not be long, though. There is significant debate currently around artists whose work is being 'copied' by various generative AI models. Legal class actions are pending, it would seem. There have also been cases of people who have sought to copy a living artist's work (via AI generation) and launch it on Twitter, before the artist has finished the piece, to 'prove' they are the originator. Etsy, it would seem, has seen many previous non-artists suddenly start to sell 'their' work, which was, at best, their prompt. These are symptoms of early technologies released into a world that offers little protection against plagiarism for most living artists, but plenty of legislative threat from the corporate entities that hold the rights to artists' work. Intellectual property rights are in a mess, and have been for many decades now. How much of this current confusion, then, is down to AI?

Cutting through the AI hype is not a straightforward pursuit, owing to the sheer volume of articles, podcasts and videos made by hypesters. A welcome exception to the rule is a blue dot, written by Adrian Zidaritz. It's a tour de force in miniature: the blog covers many topics, laid out clearly. During my research on how the AI industries are affecting jobs, and after too many hours reading and listening to too much ill-conceived work, it was a relief to see that Zidaritz has covered this topic. He writes:

Since the issue of job losses due to AI will certainly continue to heat up, it is worth placing it in the larger context of present day politics. Two political movements are vying for our attention currently and so they will affect our view of AI: populism and progressivism. Although not always the case, populism is currently on the right and progressivism on the left. Both movements have posited that "the system is rigged" by the establishment (which historically always seems to be the case!), although who inside the establishment is doing the rigging differs between them: for populists it is the corrupt political class who is doing the rigging, for the progressives it is the economic power class. We focus on populism, for two reasons. First, because there is currently a wave of populism affecting many countries, not just the U.S. Secondly, because it places the blame for job losses on illegal immigration and globalization/free trade, which as we will see below are false causes.

He goes on to say:

It is not clear to most people right now, but job losses due to AI are far surpassing the losses due to either immigration or to globalization/free trade; some estimates put the proportion of job losses due to AI at 80% of all losses. AI may eventually compensate and add more jobs, but the nature of those jobs is unclear at this time.

It is welcome that the contextualisation of AI and jobs is being considered amid the polarised present that current AI, as deployed within social media, has helped to enable. Very few other authors I have come across from the tech field have made this connection. So the best advice I can give a reader at present is: read Zidaritz.

This blog post follows on directly from 'Creative Industries, the Initial disruption', which examines policy responses to some of the questions raised here.
