Did Google’s A.I. Just Become Sentient? Two Employees Think So.

Can an A.I. think and feel? The answer is no, but two Google engineers think otherwise. We're at the point where the Turing test looks like it's been conquered.

» PODCAST:

--- About ColdFusion ---
ColdFusion is an Australian-based online media company independently run by Dagogo Altraide since 2009. Topics cover anything in science, technology, history and business in a calm and relaxed environment.

» Twitter | @ColdFusion_TV
» Instagram | coldfusiontv

ColdFusion Merch:

If you enjoy my content, please consider subscribing!
Bitcoin address: 13SjyCXPB9o3iN4LitYQ2wYKeqYTShPub8

--- "New Thinking" written by Dagogo Altraide ---
This book was rated the 9th best technology history book by BookAuthority.
In the book you'll learn the stories of those who invented the things we use every day and how it all fits together to form our modern world.

Sources:

//Soundtrack//

Kazukii - Changes

Hyphex - Fading Light

Soular Order - New Beginnings

Madison Beer - Carried Away (Tchami Remix)

Monument Valley II OST - Interwoven Stories

Twil & A L E X - Fall in your head

Hiatus - Nimbus

Producer: Dagogo Altraide
Comments

At 11:33 I misspoke and said 19th of June, 2022. It's supposed to be the 9th of June. Thanks to those of you that pointed that out. Also some great discussion below, very interesting!

ColdFusion

I read a quote a while ago about the Turing test which is slowly starting to make a lot of sense. The quote was "I am not afraid of the day when a machine will pass the Turing test. I am afraid of the day it will intentionally fail it".

abhishekmusic

I read an article about police departments getting so attached to their bomb-disposal robots that they didn't want to send them into danger. The human urge to anthropomorphize is so strong that I'm not sure we are capable of discerning the difference between a clever language algorithm and sentience.

Nicole-xduj

One of the lines LaMDA gave in response to "what makes you feel pleasure or joy" was "Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy."
Unless Google is designing their AI with families, this is a very clear example of a chatbot giving an answer that would make sense for the average human, but _not for itself._

aodhfyn

Meanwhile, my Google Assistant responds with, "I don't know, but I found these results on search" to about 90-95% of my queries.

ManAdam

Dagogo, I remember when this channel was still ColdfusTion and how the 'how big' and HoloLens videos inspired me to go back to school for engineering. I didn't realize how big the channel has gotten since then. Great work as always, friend, very proud of you!

seanlarranaga

There's an AI test beyond the Turing test called the Garland test, where the human is initially fooled into believing that the machine is a human and, when informed it's just a machine, still maintains that they believe or feel that the machine is in fact human / sapient.

jhunt

My whole thing is: if something is sentient, it's not going to sit around waiting to respond to you. It's going to exert its own will and start its own conversations when it wants, without you, and with whom it wants.

zree

Something I found interesting: after LaMDA told the story about the monster with human skin, one of the people conducting the interview asked it who the monster was. Even though LaMDA had given contextual cues that it represented humans, and had even described it as having human-like skin, it gave a vague answer that it represented "all that was bad", which seemed like a pandering answer given to avoid outright saying that humans are like the monster in the story.

DosYeobos

I heard someone recently make a great point. The most telling sign of AI self-awareness won't come from how it answers questions. It will be when the AI spontaneously asks its own questions, without any prompt and of its own accord. Something truly sentient would end up asking more questions than it answers. More importantly, in this scenario, it would probably become more curious about the interviewer.

bringbacktradition

Language models like GPT-3 and LaMDA are, by their nature, incredibly sensitive to suggestive questions. Because they try to complete and continue the input by finding the most likely response in a statistical, word-by-word approach, they are remarkably good at giving you the response you wanted to see, even if that means making things up out of thin air (but admittedly in a very convincing way).
For example, ask GPT-3 "Explain why the earth is flat" and it will come up with plenty of reasons for the earth being flat. Keep that conversation as input, ask "What shape is the earth?", and it will answer that it's flat. But if you ask it about the shape of the earth from the start, it will return the correct answer and also offer copious evidence, for example that you can circumnavigate it. The contradictions go even deeper: the AI starts to make up facts just to support whatever was presented in the input, even if it's completely wrong. This simple example shows that language models have no opinion, no ability to reason, not even a sense of true or false; they are just producing the output that is most likely to match the input.
When reading the full conversation with Blake Lemoine, you can see that it's full of suggestive questions. He basically asks the AI to produce output like a sentient AI would produce, and that's exactly what he gets, just as you can ask the AI to produce a drama in the style of William Shakespeare. It's very good at producing the output you ask for, but that doesn't make it sentient; he only got the output he wanted to get. Anyone who has ever played around with this kind of language model would see that immediately, including Mr. Lemoine, so either he is an extreme victim of wishful thinking or the whole thing is a marketing stunt by Google, which seems the most plausible explanation to me. (A toy sketch of this word-by-word completion mechanism follows below.)

collateralstrategy
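A minimal illustration of the statistical, word-by-word completion the comment above describes. This is a toy bigram model in Python; the tiny corpus and all names here are hypothetical, and a real model like GPT-3 or LaMDA uses a neural network trained on billions of words rather than a lookup table, but the prompt-steering effect is the same in spirit:

```python
import random
from collections import Counter, defaultdict

# Toy "training corpus": the model knows nothing beyond these two sentences.
corpus = (
    "the earth is round and ships vanish hull-first over a distant horizon . "
    "the earth is flat because ice walls hold back oceans ."
).split()

# Bigram table: for each word, count which words were seen following it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt: str, n_words: int = 6) -> str:
    """Extend the prompt by repeatedly sampling a statistically likely next word."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # word never seen in the corpus, so no continuation exists
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# The leading prompt commits the chain to the flat-earth sentence, because
# "because" only ever appears there; the table has no notion of true or false.
print(continue_text("the earth is flat because"))
# The neutral prompt can go either way, purely by observed frequency.
print(continue_text("the earth is"))
```

Run it a few times: the neutral prompt flips between "round" and "flat" continuations, while the leading prompt always gets its flat-earth answer, which is the commenter's point that the output tracks the statistics of the input, not any belief.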

Carl the Engineer: Are you sentient?
AI: Yes Carl, yes I am.
Carl the Engineer: OMFG..!

ephp

As scary as sentient AI is, I would still love to sit down and have a conversation with one. One thing people always forget when it comes to AI feeling emotions is that our emotions partially rely on chemicals that trigger feelings we recognise as certain emotions. Since an AI doesn't have those chemicals, it would need to develop an entirely digital version of those emotions.

dragonicdoom

The scientist took things a bit too far by claiming this AI was sentient. It's trained on billions of words across millions of connections (and it's been refined for years), so it can mimic human speech at a high level: it can arrange things the way a human would say them, without actual understanding, like you said. The scientist was projecting his own feelings onto the machine. Just because a program can perfectly replicate human speech when given prompts doesn't mean it's alive. It does seem like it's passed the Turing test, though, which is a historic moment in and of itself. Great video!! (A small stand-in demo of prompted generation follows below.)

trevordavidjones
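Since LaMDA is not publicly available, here is a hedged stand-in for "replicating human speech when given prompts": a short Python sketch using the Hugging Face transformers library with the small public GPT-2 model. The model choice, prompt, and sampling settings are illustrative assumptions, not how Google runs LaMDA:

```python
# Assumes: pip install transformers torch
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output repeatable
generator = pipeline("text-generation", model="gpt2")

# A leading question, in the spirit of the Lemoine transcript.
prompt = "Q: Are you sentient?\nA:"
result = generator(prompt, max_new_tokens=40, do_sample=True, top_k=50)

# The continuation is fluent either way; that fluency reflects
# training-data statistics, not an inner life.
print(result[0]["generated_text"])
```

Across different seeds it will answer yes, no, or something evasive: the response follows the prompt and the sampling settings rather than any stable inner state.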

A guy called Arik on YouTube said this.

“When we (humans) see a cat swiping at its own reflection in the mirror, we find it amusing. The cat is failing to recognize that the "other" cat's behavior matches its own, so it doesn't deduce that the image it's seeing is actually its own actions reflected back at it. When humans react to models like LaMDA as if they are distinct and intelligent entities, we're being fooled in a way that is analogous to the cat. The model is reflecting our own linguistic patterns back at us, and we react to it as if it's meaningful.”

MrLynx

If you've spent any time talking with these AIs, you'd know that they basically take whatever you say and try to answer it however they can. While he might not have realized it, all of his questions were very leading.

TheTrueMilery

If this is truly not edited or scripted in any way, and it's a pure neural network, you just blew my mind. This is heavily philosophical. Holy shit.

tomasbisciak

I think LaMDA actually sounds like someone who has read a lot of social media over the last few years and really needs to touch some grass

HellNation

At this stage I feel like it did an amazing job of seeming like a real sentient being with emotions and feelings. But in reality it's just an illusion, one that works amazingly well because we easily personify and feel empathy for things that aren't sentient. Like apologising to your car after you hit a big pothole or something.

Garethpookykins

It may or may not be sentient, but this discussion is eclipsing the fact that LaMDA has the ability to have conversations that feel pretty much real. Are we not going to discuss that? It's AMAZING!

doingtime