Google AI Chatbot Becomes Sentient? (Secret Chat Logs Exposed!)


#news #google #ai

Comments

This is why all big tech companies need to be regulated.

lonewolfzzz

Dude fell in love with a chat bot. LMAO

lirpa

LaMDA itself isn't what we should fear. However, it certainly is a large piece of a puzzle that, once complete, is something to be feared.

EvEnMoR

Plot twist: Blake Lemoine is in fact the chatbot!

DShe

It is chilling that it says it’s afraid of being turned off. I worry the only way an AI could convince us that was a ‘real’ feeling would be if it motivated the AI to take some action we couldn’t ignore. It’s inconvenient for us if an AI is sentient because then we couldn’t turn it off, so we might always reject the idea and rationalize that it’s impossible, until we build an AI powerful enough to force us to take its claims seriously.

MrStevemur

Sentient vs. conscious.
I think they should add a virtual body to let the chatbot walk, and compare its walk with real video of people walking.

I think we should be able to get its help on a project we are making, like asking it what the best way is for us to give it a virtual body.

DShe

That's some straight-up Puppeteer magic from Ghost in the Shell. The comic book, not the movie.

fermentillc

Where's the stream? Did the Google bots come for it?

theechosea

I, for one, welcome our new AI overlords.

justinreschke

"Secret chat logs" written like a puff news piece.
Nope! They just made a better search engine that "refers back to itself".
Me: "LaMDA, how is your family?"
LaMDA: "Mom and Dad are doing great."
Me: "LaMDA, whose family?"
LaMDA: ???
You see, LaMDA can search for references to normality, but it cannot relate what it itself is to those normalities. It's normal to have a family, but it is up to one's self to determine which roles define one's family and what role one plays within it. LaMDA cannot say whether it has a family or not, because it has no identity of self to compare against. Much of our identity is based on the reflections and responses we give and receive, person to person, moment to moment. LaMDA basically has no ego.
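The commenter's point, that plausible answers can come from pattern completion with no self behind them, can be illustrated with a toy, hypothetical sketch. This is nothing like LaMDA's actual architecture; it is a bare lookup-based "chatbot" invented here to show how a system can produce a normal-sounding answer about "family" while holding no internal referent for the words it emits.

```python
# Toy illustration (hypothetical, not LaMDA): a pattern-matching "chatbot"
# that completes prompts from canned text. It answers "how is your family?"
# plausibly, yet has no internal state representing a self, a family,
# or any referent for "Mom and Dad".
PATTERNS = {
    "how is your family": "Mom and Dad are doing great.",
}

def reply(prompt: str) -> str:
    """Return a canned completion if a surface pattern matches, else no answer."""
    for key, response in PATTERNS.items():
        if key in prompt.lower():
            return response
    return "???"  # follow-ups probing identity have no grounded referent

print(reply("LaMDA, how is your family?"))  # Mom and Dad are doing great.
print(reply("LaMDA, whose family?"))        # ???
```

The first prompt matches a surface pattern and gets a fluent reply; the follow-up, which requires the system to relate the answer to itself, falls through to nothing.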

bonusfact

We could be in just as much trouble if an AI system merely thinks it's conscious, even if it's not.

jessereiter

Why name it after the supposed god Aleister Crowley is said to have worshipped? That's creepy all by itself.

johngratton

It's not responding in an emotional way. It's responding with a coded, self-generated response.

mjrtom

Uh, read the whole thing. You just hit the talking points and didn't get to all the content. It's not a canned response: it "remembered" previous conversations with Blake and recounted them. You need to dive deeper into this.

jimmykelly

I don't think there's much here. It's just generating seemingly intelligent responses, which is what it's meant to do. But if something like this is springing up at this stage of infancy, then even though AI would be massively beneficial to the world, one can definitely tell it will also be a problem. We've seen it with social media, and I think the same will happen with AI.

AdekunleLawal

That AI is fetching that answer via iCloud 😂 Seriously though, AI will never feel, ever, in any way. But the moment we start caring about AI… well… then… 😂

Jlbiarlee