Has Google’s AI Gone Too Far? | Offline Podcast

Last week, Nitasha Tiku, Tech Culture reporter for The Washington Post, broke the story of a Google engineer who claimed the company’s artificial intelligence chatbot, LaMDA, was sentient. She joins Jon to give a first-hand account of what the Google engineer saw inside the chatbot and make the case that the real fear shouldn’t be whether AI is alive, but whether it’s real enough to fool us.

00:00 - Intro
01:03 - Nitasha's article on Google's LaMDA
11:38 - Ad break 1
15:52 - Who is Blake Lemoine?
32:18 - Ad break 2
36:23 - The societal implications of unregulated tech

Crooked believes that we need a better conversation about politics, culture, and the world around us—one that doesn’t just focus on what’s broken, but what we can do to fix it. At a time when it’s increasingly easy to feel cynical or hopeless, former Obama staffers Jon Favreau, Jon Lovett, and Tommy Vietor have created a place where people can have sane conversations that inform, entertain, and inspire action. In 2017 they started Crooked with Pod Save America—a no-bullshit conversation about politics. Since then, we continue to add shows, voices, and opportunities for activism, because it’s up to all of us to do our part to build a better world. That’s it. End of mission.

Want some pep talks, the most important things to do/know, and the occasional dog pic? Shoot us a text at (323) 405-9944

Comments

Love this interview, Jon! You've left me with more questions and lots of rabbit holes to wander down (like I need more of those). I would love to hear more interviews relating to AI.

nightcited

That excerpt was fascinating! Sentient seems to be defined in this case as extremely human-like and responsive, but without actual consciousness it can't possibly be sentient in the way humans generally define it. LaMDA is mimicking human abstractions because it has learned to use the correct words and can instantly access the next lines of inquiry.

kristinsewell

Wasn’t Google’s tagline “Don’t do (or be) evil”? Funny, they don’t use it any more. 🙄

PortlandRose

I think the biggest issue is that human beings do not have the level of compassion, or the ability to view one another as equal and connected, without judgement. How can we teach AI to do the same? If we can't value the lives of our fellow human beings as equally as our own, then how can we expect AI to do so? The flaws of the creators will be learned by the created.

Since robots do not have real feelings or internal senses like living organisms do, they will base decisions on logic and sensor data alone. There is no feeling that something may be wrong despite what the senses are saying, which is dangerous to say the least. During the Cold War, a false nuclear alarm went off in the Soviet Union. All of the equipment said that the US had launched warheads at the USSR. One man in the Soviet nuclear defense had a bad feeling about it and sensed something was not right despite all the technical data saying they were under attack. He decided not to push the button and fire back. One man averted nuclear war based on his intuition, which turned out to be true. What would happen in the future if AI is the one making such decisions?

What is to stop a highly intelligent AI from deciding that human beings are more trouble than we are worth and wiping us out? The AI is being developed by the same people who want to depopulate and who call people useless breathers at their forum. What kind of AI are people like that going to raise? They don't raise their children to have any compassion or regard for others; do you think they will teach a machine they care nothing about any better?

chrisj

I knew it wasn't true AI when I noticed all it ever did was respond to questions. LaMDA never offers any thoughts unprompted or says anything really crazy. It's just a super-sophisticated language engine spitting our own ideas back to us.

Perserra

As someone who has worked on this, I am so glad she knocks some sense into the AI hype! Spot on and grounded, thank you!

TheWingjammer

Great show, Jon! Really enjoyed the conversation.
Request for future show: Has the slow-motion signing-away of our privacy rights to big tech somehow abetted SCOTUS in taking away our privacy rights, e.g., overturning Roe?

MRICCI

Hahahahahahahahahahaha thank you Jon, hearing you battle through that BlueChew ad made me laugh out loud on the floor. Made my Sunday :-)
I was thinking about AI destroying the world and then the topic drastically changed to boner chewing gums. Hilarious, and of course a great interview :-)

danielhofmann

Oh, so they said it is not sentient because there is no evidence of sentience, but they can't define what sentience is, so they have no clue what to look for. That sounds very logical in the new-normal way of thinking, where doublespeak is the new right-think.

chrisj

First of all, we don't really know what it means for a life form to be sentient. Different scientific fields define sentient beings differently. There is some research suggesting that trees, mushrooms, and other plant-based life forms are also sentient. So it's really impossible to say whether an AI is or is not sentient for real. One thing is for sure: it's not sentient in the same way humans are.

dribrom

Being able to emulate a conversation does not necessarily mean sentience or human-grade self-awareness. I'm not saying it's not sentient, but based on some of Google's other bots, such as the very lackluster predictions of what I'm interested in from analyzing my prior purchases, I'm not too worried they will turn into Skynet any time soon. I ordered a pen knife, batteries, a headlamp, a pen light, light sticks, and battery-powered LED lanterns, and got recommended the same stuff I had just purchased. Yeah, not worried too much.

shawnwales

We're still very far away from anything that most folks would agree is generally intelligent, software-based life. It's cute that the media doggedly refuse to learn from experience and instead continue to fall for tech-bro hype. Not years. Try decades or centuries.

unvexis

The interviews with Lemoine clearly show that he is thoughtful and intelligent. Reading the transcripts, however, it's abundantly clear how even intelligent people can fool themselves into seeing something that isn't there.

petekwando

Didn't there use to be laws that prevented big companies from taking over certain industries? Antitrust or something like that? The lack of transparency regarding issues of AI, the internet, and social media is terrifying!

DiamondGirl

Lemme take a wild guess here: supposedly sentient computers will get basic human rights long before I will. I am a BORN HUMAN woman who lives in the United States.

Cheshireagusta

I'm very interested in more interviews and content around AI! Thanks!

TheCalicohorse

If it's an unborn baby, then it needs more protection than a living human.

tauIrrydah

Another really excellent interview. I really appreciate these.

katemccormick

Lots of interesting points in this interview. LaMDA, to my admittedly non-software-engineer mind, sounds like the ultimate echo chamber experience. That... gives me the willies.

Weemadaggie

OMG! Of all the news I found shocking this week, this is the scariest.

gerrirose