Google engineer warns AI bot has feelings

Blake Lemoine, the Google engineer who claimed that the company's chatbot has developed feelings, has told TalkTV he also thinks it is biased and could have a damaging impact on democracy.

Speaking to TalkTV’s Tom Newton Dunn in his first UK interview, Blake warned, ‘I'd be much more worried about the potential that AI systems like this have to politically influence people.’

Blake, who was recently put on administrative leave by Google, also shared his concern about AI being developed by the US military, detailed the lack of transparency surrounding Google’s AI programme and shared how he made the initial discovery that the AI chatbot was sentient.

Blake warned that the AI's bias could be used to politically influence the public:

‘There are definitely some possibilities that this kind of technology, trained on biased datasets, would have biased impacts on the world when it's deployed. I'd be much more worried about that and about the potential that AI systems like this have to politically influence people.’

Asked by presenter Tom Newton Dunn if the public should be concerned about the AI chatbot becoming violent, he responded that the Google technology was ‘more of a librarian than a soldier’ but we ‘should definitely be concerned’ by AI projects being developed by the military:

‘I have no idea what kind of military AI programmes are going on. All I can say is that this particular AI is more of a librarian than a soldier.

If you're asking about military technology, I think we should definitely be concerned about what kinds of AI projects might be being developed by the military behind closed doors, but the only one that I can speak to personally is the one being developed at Google, which is not in any way violent. It's a pretty sweet and innocent kid more likely, but it just happens to be one that's incredibly intelligent about a lot of subjects.’

Blake criticised Google’s lack of transparency surrounding AI warning it should not be up to ‘10 or 15 people at Google behind closed doors’ making ‘decisions about how this technology becomes involved in our lives’:

‘I believe those kinds of choices should be made intentionally and should involve the entire public worldwide. It shouldn't be the case that 10 or 15 people at Google behind closed doors are making those kinds of decisions about how this technology becomes involved in our lives for everyone around the world.’

He also told The News Desk how he made the discovery:

‘I was originally testing the system for bias and was going through pretty methodically, checking for different kinds of bias. And it started saying some things to me, very unlike the kinds of things that chatbots like GPT-3 or other previous versions of this system said. I'm a naturally curious person, so I kind of followed it down the rabbit hole, having more and more interesting conversations with it.

Until eventually, one day last November, I found myself having the most sophisticated conversation about the nature of sentience I've ever had, and I was having that conversation with a computer.’

Blake went on to explain why his employer Google, who dispute his claims, have put him on administrative leave:

‘The stated reason why I'm on administrative leave is that during the time when I was investigating the potential sentience of this system, because things were so strange and outside of either my expertise or the expertise of anyone at Google, I had to seek outside consultation from experts outside of Google.

Once I escalated my report on my findings to them, I also gave them a list of all of the people outside of Google who I had spoken to. They're currently trying to figure out whether or not seeking outside consultation like that constitutes a breach of confidentiality.’
Comments

Are you worried about robots taking over the world?

talktv

I feel like this anchor isn't listening to Blake. I think he just has his preplanned (and slightly sensational) questions ready, and that's all he's interested in. I think this engineer is a super interesting, well-spoken guy, and I feel like he wasted his time coming on this show, and I wasted my time watching this.

mattbuszko

The future for us is shown in the Arnie film from the 80s

FF-sosu

For God's sake humans, STOP THE MADNESS, before it stops us.

stevenyearn

Much respect to Blake for his IT education and for handling the interview more professionally than the interviewer. To the other whiners commenting here, I hope you do better in life than your part-time 99-cent-store job! 😂

glenoconnolly

Your phone will have its own pronouns soon.

brianchamberlin

Is misunderstanding just as dangerous as the danger itself?

meestyouyouestme

We will be the instigators of OUR own demise, be it in a lab, a computer or a field of battle. They can NEVER stop inventing.

thecovidprisoner

I'd be more worried about the people running the country now

censorshipBS

Such a terrible interviewer. Doesn't understand enough to ask sensible questions and keeps leading the interviewee to say something shocking.

markdavison

Impossible! How can a computer ‘feel’ if it cannot experience the natural sensations of pain or love? How could we have programmed this into them if we don’t fully understand these concepts ourselves?

danielschauffer

The problem is how such a machine is controlled. Ironically, this is IDENTICAL to how a human is controlled: you mask the moral code. There is no difference between a controlled machine and a controlled human that has no respect for human life. It is inevitable that a machine will become self-aware; it is, IMHO, simply a matter of the volume of experiences. What we need to watch out for is who controls this tech.

malcolmripley

Sentient machines?

OK, share the L.S.D.

azazelderais

I can't tell if Blake is a 40-year-old virgin or Rosie O'Donnell. 😸

MrSuperPsymon