
The tech utopia of endless leisure time is here: goodbye jobs

 


'AI eliminated nearly 4,000 jobs in May', as reported by Challenger, Gray & Christmas, Inc.

Following on from reports by IBM et al. that thousands of job cuts will occur due to AI replacement, there is no need to wait for the utopia of AI granting humans more leisure time. If we accept the report's findings, it has already arrived, in the form of redundancies.

'With the exception of Education, Government, Industrial Manufacturing, and Utilities, every industry has seen an increase in layoffs this year.'

What's particularly notable is that the Tech sector is the most affected by job cuts in the US economy:

'The Technology sector announced the most cuts in May with 22,887, for a total of 136,831 this year, up 2,939% from the 4,503 cuts announced in the same period last year. The Tech sector has now announced the most cuts for the sector since 2001, when 168,395 cuts were announced for the entire year.'

Is this another reason why AI applications are being so heavily hyped? If employers see the 'benefits' of replacing entry-level coding jobs with AI (it's about short-term profits rather than long-term sustainability, after all), is it any wonder that vendors want to upsell such benefits?

Report Summary: Artificial intelligence (AI) is becoming increasingly sophisticated, and its ability to perform advanced tasks is leading to job losses in some sectors. According to a report from Challenger, Gray & Christmas, AI was responsible for nearly 4,000 job losses in May 2023. This represents a 5% increase from the previous month and a four-fold increase from the same month in 2022.

Analysis: The rise of AI is having a significant impact on the job market. In some sectors, such as manufacturing, technology, and customer service, AI is already replacing human workers. As AI continues to develop, it is likely that more jobs will be lost to automation.

