
Don't try this at home: what happened when a tech enthusiast let ChatGPT become the home assistant

 


Home assistants have been pushed out to wealthier populations with much glee by tech companies over the last few years, selling the dream of commanding your home environment by 'the master's voice': turning lights on and off, readying your electric car, and eventually home robots that will cook and clean for you, if they can ever get around to dealing with changes of floor level. So rather than waiting for the dream to be completed, and for the tech companies to sell you more product, what if you could code it yourself? Well, someone has tried:

'As much as we like technology, what humans love more is control and predictability. We're afraid of wild beasts with fangs, claws, and venom because we don't know how wild animals will react to us. Like the untrained, we can't risk our safety, because it's difficult to protect against something that's unpredictable. Based on comments I've seen in conversations around the internet and in forums, AI seems to be no different from an unpredictable beast.

It's one thing to lock GPT behind a metaphorical glass cage of a fun website or a silly app where we can enjoy it in a safe and controlled environment. But when we remove it from this metaphorical cage and set it free, and it has access to the things that you care about, to the things that keep you safe, it scares us because we don't know what it will do.

I find that those of us who work on the front lines of tech, or who are tech enthusiasts and love exploring technology, tend to be more inclined to venture down these unpredictable roads and take these types of risks. Similar to, let's say, how a trained zookeeper is able to mitigate the risk of working with wild animals; but still, the danger is there, the uncertainty is always there.

Essentially, with all of that being said, I had to disable the nodes or the parts of the automations that were responsible for GPT.'
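
For the curious, the shape of such a setup is simple enough to sketch. What follows is a minimal, hypothetical example, not the setup from the video: a Python script that asks the OpenAI API to turn a spoken command into a structured action, then passes it to Home Assistant's REST API. The host, token, model choice, and entity IDs are placeholders, and the allowlist is doing the real work; it is the glass cage that keeps the 'beast' from touching anything you haven't explicitly approved.

    # A minimal, hypothetical sketch -- not the setup from the video. An LLM
    # translates a spoken command into a structured action, and deterministic
    # code decides whether that action is allowed to run.
    import json
    import requests
    from openai import OpenAI

    HA_URL = "http://homeassistant.local:8123"  # assumed local Home Assistant
    HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"   # placeholder credential

    # The 'glass cage': only actions on this allowlist are ever executed,
    # no matter what the model proposes.
    ALLOWED = {("light", "turn_on"), ("light", "turn_off")}

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def interpret(command: str) -> dict:
        """Ask the model to map a command to a Home Assistant service call."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat model; this choice is an assumption
            messages=[
                {"role": "system",
                 "content": 'Reply with JSON only, in the form '
                            '{"domain": ..., "service": ..., "entity_id": ...}'},
                {"role": "user", "content": command},
            ],
        )
        # A real setup would validate this output far more defensively.
        return json.loads(resp.choices[0].message.content)

    def execute(action: dict) -> None:
        """Call Home Assistant's REST API, but only for allowlisted actions."""
        if (action.get("domain"), action.get("service")) not in ALLOWED:
            print(f"Refusing unlisted action: {action}")  # the safety valve
            return
        requests.post(
            f"{HA_URL}/api/services/{action['domain']}/{action['service']}",
            headers={"Authorization": f"Bearer {HA_TOKEN}"},
            json={"entity_id": action["entity_id"]},
            timeout=10,
        )

    execute(interpret("turn off the living room lights"))

The point of the split is that the model never touches the lights directly: it only proposes an action, and plain, predictable code decides whether to run it. Remove the interpret step and you fall back to ordinary scripted automation, which is, in effect, what the author of the quote ended up doing.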

It's well worth watching the short video series this came from. And as the title says, beware of trying this at home, and be sure you don't ever try it out on a city!




