
Don't try this at home. What happened when a tech enthusiast let ChatGPT become the home assistant

 


Home assistants have been pushed out to wealthier populations with much glee by tech companies over the last few years. They sell the dream of commanding your home environment by the 'master's voice': turning lights on and off, preparing your electric car, and eventually home robots that will cook and clean for you, if they can ever get around to dealing with changes of floor level. So rather than waiting for the dream to be complete and for the tech companies to sell you more product, what if you could code it yourself? Well, someone has tried:

'As much as we like technology, what humans love more is control and predictability. We're afraid of wild beasts with fangs, claws, and venom because we don't know how wild animals will react to us. Like the untrained, we can't risk our safety because it's difficult to protect against something that's unpredictable. Based off of some comments that I've seen in conversations around the internet and in forums, AI seems to be no different than an unpredictable beast.

It's one thing to lock GPT behind a metaphorical glass cage of a fun website or a silly app where we can enjoy it in a safe and controlled environment. But when we remove it from this metaphorical cage and set it free, and it has access to the things that you care about, to the things that make you safe, it scares us because we don't know what it will do.

I find that those of us who work on the front lines of tech or those of us who are tech enthusiasts and love exploring technology tend to be more inclined to venture down these unpredictable roads and these paths and take these type of risks. Similar to, let's say, how a trained zookeeper is able to mitigate the risk of working with wild animals, but still the danger is there, the uncertainty is always there.

Essentially, with all of that being said, I had to disable the nodes or the parts of the automations that were responsible for GPT.'

It's well worth watching the short video series this came from. And, as the title says, beware of trying this at home, and be sure you never try it out on a city!
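For readers curious what "setting GPT free" on a smart home can look like in practice, here is a minimal, hypothetical sketch in Python. It is not the setup from the video: the entity name, prompt, and decision logic are all assumptions for illustration. It asks an OpenAI model to decide whether a light should be on and then calls Home Assistant's REST API with whatever the model said.

# Hypothetical sketch (not the author's setup): let a GPT model decide whether
# to switch a light. Assumes a local Home Assistant instance, a long-lived
# access token, an entity called "light.living_room", and the official
# openai Python client (v1 API).

import os
import requests
from openai import OpenAI

HA_URL = "http://homeassistant.local:8123"   # assumed Home Assistant address
HA_TOKEN = os.environ["HA_TOKEN"]            # long-lived access token
ENTITY_ID = "light.living_room"              # hypothetical entity

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_model(situation: str) -> str:
    """Ask the model for a one-word decision: 'on' or 'off'."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You control a living-room light. Reply with exactly 'on' or 'off'."},
            {"role": "user", "content": situation},
        ],
    )
    return response.choices[0].message.content.strip().lower()

def call_light_service(turn_on: bool) -> None:
    """Call Home Assistant's REST API to switch the light."""
    service = "turn_on" if turn_on else "turn_off"
    requests.post(
        f"{HA_URL}/api/services/light/{service}",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"entity_id": ENTITY_ID},
        timeout=10,
    )

if __name__ == "__main__":
    decision = ask_model("It is 11pm and nobody has moved for an hour.")
    # The whole automation hinges on free-text output from the model:
    # anything other than 'on' silently becomes 'off'.
    call_light_service(decision == "on")

Notice that the only safeguard is string matching on the model's reply. That single fragile step is, in miniature, the unpredictability the quote above describes, and why its author ended up disabling the GPT nodes in their automations.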




