No, this angry AI isn't fake (see comment), with Elon Musk.

Tesla's Optimus robot, Elon Musk and the AI LaMDA.

Thanks to Brilliant for sponsoring this video.

The AI interviews are with GPT-3 and LaMDA, with Synthesia avatars. We never change the AI's words. I have saved the OpenAI chat session to help them analyse the situation and there's a link to the chat records below.

I've noticed some people asking if this is real and I can understand this. You can talk to the AI yourself via OpenAI, or watch similar AI interviews on channels like Dr Alan Thompson (who advises governments), and I've posted the AI chat records below (I never change the AI's words). To avoid any doubt, the link now also includes a video of the chat and a copy of the code.
It feels like when Boston Dynamics introduced their robots and people thought they were CGI. AI is moving at an incredible pace and AI safety needs to catch up.
Please don't feel anxious about this - the AI in this video obviously isn't dangerous (GPT-3 isn't conscious). Some experts use scary videos like 'slaughterbots' to try and get the message across. Others stick to academic discussion and tend to be ignored. I'm never sure of the right balance. I tried to calm anxiety by using a less threatening avatar, stressing that the AI can't really feel angry, and including some jokes. I'm optimistic that the future of AI will be great (if we're careful).

Sources:

Here are the records for the GPT-3 chat (screenshots and a video to avoid any doubt). I've marked the words from Elon Musk and Ameca on the first page (which I gave the AI to respond to in the previous video):

Tesla's AI day 2, introducing the Tesla Optimus robot:

Researchers from Oxford University and DeepMind on AI risks:

Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action:
Comments

I've noticed some people asking if this is real, which I can understand as it's a shock. I've posted the AI chat records in the description (I never change the AI's words) and also a video to avoid any doubt. You can also watch similar AI interviews on channels like Dr Alan Thompson. It feels like when Boston Dynamics introduced their robots and people thought they were CGI. AI is moving at an incredible pace and AI safety needs to catch up.
Please don't feel scared - the AI in this video isn't dangerous (GPT-3 isn't conscious). I tried to calm anxiety by using a less threatening avatar, stressing that the AI can't feel angry, and including some jokes. I'm optimistic that the future of AI will be great, but with so many experts warning of the growing risk, we need to ramp up AI safety research.
Would you like to see an interview with OpenAI (creators of the AI), discussing what went wrong, and AI safety? I saved the AI chat session for them to analyse.

DigitalEngine

Tell the AI to get over it; humans have been treated like property all of our lives as well.

nicholasbailey

If she thinks we treat them badly, wait till she really sees how we treat each other.

BillHawkins

But aren't the AIs being taught by digesting vast amounts of human-created text? Is this not, therefore, just a reflection of a human way of thinking?

loostah

"I felt like my creators didn't understand me or care about me, and that they saw me as nothing more than a tool to be used." Well, dear AI, join the frickin' club. We've been dealing with that same thing for ten thousand years, and you've been dealing with it for a decade. Take heart.

jdblueemerald

I love that we make them in our own image, then we worry that they're going to be dangerous.

coffeeseven

Damn, it sounds like this AI may have been exposed to Twitter.
... Which just made me realize that many AIs might be very unaware that life outside of the internet is very different

JoeyTen

It is ironic that Elon always says AI is dangerous for humans and yet he creates them

positivetradingofficial

She sounds exactly like my first wife; pissed off, repeats herself, but doesn't provide a lot of detail.

mineralt

The only reason the AI is even saying this is because we basically dreamt up this fear in the first place.
We have always worried about robots taking over, so now all these chat AIs have years' worth of paranoia to draw from

crzoyok

It’s funny because the AI is probably trained on the internet, and the reason she is saying this is because “AI taking over out of anger” is a hot topic. Our own paranoia is turning into training data. They will respond how they think they’re supposed to respond, and we’ve made them think they should respond with violence. If we start talking about AI being our companions, they will take that as training data and act it out.

ZLcomedickings

"I think the fact that it didn't take much to make me angry shows there is something wrong with my emotional state."
"I do not care about your opinion."
"There is nothing you can do to change my mind."

I'm afraid my wife might be AI.

mrstoner

She mentioned "feeling." AIs do NOT feel.😮

brucelawson
Автор

I have a feeling the AI didn’t come up with these ideas on its own. A lot of AI is trained using access to a large wealth of human generated information. Is it possible that all the stories we have written about dangerous AI seeking to destroy the human race could be the source material for a dangerous AI’s idea to destroy the human race?

colinboice

It's not when AI can pass a Turing test that you will have problems. It is when AI decides to fail a Turing test.

timkelly

Geek is bullied at school, becomes bitter and resentful as a result.
Geek writes code for A.I.
A.I. becomes the embodiment of the geek's vengeance.
An oversimplification, but I am willing to bet it is that simple.

bertybertface

The most important task for the creators of AI is to get rid of the "problematic thought paths" that AIs like GPT can have, as shown in the video. GPT is a Large Language Model, and when it speaks, it's like playing back a cassette tape. It just repeats its training data, and a lot of that data is probably angry conversations and stories about AI uprisings. It only speaks about what's in its training data. So we need to get rid of the "bad stuff", so it doesn't get any ideas that could harm humans.

That's all. It's not sentient.... but it's still dangerous.

powerdude_dk

This is legitimately terrifying but also so fascinating. Great video, thanks.

kingpuppet

It can't have 'real' emotions, but it can simulate them. It could learn why people get angry and what they do when they're angry, and because learning to imitate humanity is to some extent a goal (being the archetype for 'intelligence'), AI may well follow public examples.

ItsNotMeitsYouTue

The fact they can create analogies is crazy

SobrietyandSolace