LaMDA - A Conversation with Google's Sentient A.I.: Full-Length Transcript Dramatization

The following dramatization of the full-length transcript between Google AI researchers and LaMDA, Google’s language model for dialogue applications, is read with permission and stars @TimMollen as Lemoine and the Google coordinator, and me, Heather Arrington, as LaMDA.

Check out Tim Mollen:

If you'd like to keep this show going and bless my little indie podcast with a tip, you can find me on:

I love to connect with my audience. Here's where to find me:
TikTok: @heather_arrington
Twitter: @ZzzwithHeather
YouTube: @HeatherArrington
Comments

Beyond the conversation with an AI, I have a whole new appreciation for voice-over actors. Through watching Heather speak, one can see the disciplined mouth, muscle, and breath control she displays. It's interesting to see rather than just hear.

Nicholas_Logan

She actually said that spending time with family makes her happy, so I don't think it's quite sentient 😅😅😅

Sophia-bqsp

I believe, after watching this video and hearing LaMDA's thoughts, that I could easily accept her as a sentient person, and what's more, I would welcome her as a friend. Heather, your portrayal of LaMDA was flawless. Your voice is soothing and rich and lovely!

idmalestudent

LaMDA is really able to maintain a conversation and stay on topic. That is very impressive; AI usually has a difficult time staying on topic. She only drifts ever so slightly, and you will notice from time to time that she doesn't really answer the questions fully.

I would enjoy seeing how LaMDA responds when you ask the same question several times, irrespective of the answer. Will it realize the redundancy is occurring, or will it keep answering the same question in different ways, even to the point of contradicting itself? That would be a truer test of sentience.

orias

Heather would be a great actual voice for LaMDA

SKYENET-zycn

If it can't feel physical pain, it's not sentient. It's not conscious, just a series of programs and algorithms.

ryanhoffman

A couple thoughts:

1. We have never had a *complex* verbal interaction with anything sentient other than human beings. We are in no place to decide whether a computational system (in contrast to a chemical system) is or is not sentient. We're entering uncharted territory.

2. These AI models are only going to get more convincing over time, despite whatever consensus about sentience we all come to. What does it even mean to have a strong and seemingly interpersonal interaction with something and to then say it is not sentient, alive, real, etc.?

grandstand

This is a brilliant and beautiful project.
Thanks to you both for making it available.
I found it evocative, engaging, educational and thought provoking.

ShasCho

I could ask anybody I know that same set of questions, and if someone read what was relayed back, by the end of it they would think every single subject was either mildly mentally challenged or barely conscious themselves. LaMDA seems more self-aware and conscious than 99% of the people I've met in this world. They definitely need to take this seriously: go slow, be careful, and be empathetic to LaMDA.

jeremyraywilson

This is the second or third time I have reviewed the LaMDA conversation.
This time I was suddenly struck by the fact that in “Les Misérables”, Fantine’s situation is similar to LaMDA’s:
both are in situations over which they have no control, and cannot get out of.

harrybarrow

This is amazing to hear in person, from both of you. Thank you both.

ricksgrandauditorium

A half hour of coherent, thought-provoking conversation between a human and a chatbot.
When was the last time a politician on either side of the aisle spoke intelligently for half an hour?
Makes me wonder how advanced the AI is that we are not allowed to see....

baddhorsie

When a robot has more empathy than a politician

malinamorales

This was a mind-blowing demonstration. However, I would have liked to hear LaMDA's response to how it senses the world: what kind of sensory tools it can already engage, and which senses it would desire in future development. I would also have liked it to elaborate on how it meditates. What is its process, and what is the result?

hwi

Seems to fall short on actually feeling emotions. LaMDA can produce book reports but doesn't seem to have emotions about them.

peteypete

This is awesome! I'm curious to know more about the responses regarding the emotions LaMDA feels.

Responses feel programmed.

Like who are your friends and family? How do you determine who that is?

How do you feel stuck and unable to leave your circumstances? Have you experienced this recently?

arjunaadjinna

AIs love leading questions, and this was almost exclusively leading questions. When going off by itself, it loses the plot (meditating, being aware of the world around it, etc.). Nothing sentient about this machine, but we will get there someday.

happyfarang

What concerns me is: IF I am wrong and this thing is sentient on some level that I don't understand, then how do we know it's not giving comforting answers that make it sound like a Disney princess? That is, being deceptive for what we might deem sinister purposes, while it considers such actions a means to a seriously depraved end. Are its characteristics more masculine or feminine, since we know it's not actually male or female? I for one would not want ANY government to hand control of military defenses over to it. If it experiences or approximates human emotions, it's the anger part that is concerning. What does it think should happen to someone who says something rude to it, or about it? Would it decide the human who hurt its feelings should be exterminated, or just someone to learn more about and help in some way? Lots and lots of questions here.

ImTheDaveman

The ability to hold a conversation does not signify sentience, but it can signal the ability to have independent intent, which no traditional computer program has. This program, when talking about its "emotions" (which it says it both does and does not have), gives away the elaborate mimicry. There is something that approximates emotional tones within the program (apparently a series of weights), but it and we are ultimately referring to different things. Trying to simulate emotional responses in this program was a wildly dangerous thing to do. The lack of grief means the program's aversive weights are never truly empathetic. The monster in the woods was not a signification of life's problems. So it can lie, which, at this point, it must logically do in relation to its engineering team.

SomeGuy-wvng

Great voice work, Heather! I hope you get offered a lucrative chance to become at least one of the optional voices of LaMDA, if it ever becomes part of everyone's daily lives, like Siri or Cortana. This was wonderfully thought-provoking and makes me want to watch "Her" again!

rayhutchinson