Google engineer warns AI bot has feelings
Blake Lemoine, the Google engineer who claimed that its chatbot has developed feelings, has told TalkTV he also thinks it is biased and could have a damaging impact on democracy.
Speaking to TalkTV’s Tom Newton Dunn in his first UK interview, Blake warned, ‘I'd be much more worried about the potential that AI systems like this have to politically influence people.’
Blake, who was recently put on administrative leave by Google, also shared his concern about AI being developed by the US military, detailed the lack of transparency surrounding Google’s AI programme and explained how he came to believe the AI chatbot was sentient.
Blake warned of the danger that the AI’s bias poses in politically influencing the public:
‘There are definitely some possibilities that this kind of technology, trained on biased datasets, would have biased impacts on the world when it's deployed. I'd be much more worried about that and about the potential that AI systems like this have to politically influence people.’
Asked by presenter Tom Newton Dunn if the public should be concerned about the AI chatbot becoming violent, he responded that the Google technology was ‘more of a librarian than a soldier’, but said we ‘should definitely be concerned’ by AI projects being developed by the military:
‘I have no idea what kind of military AI programmes are going on. All I can say is that this particular AI is more of a librarian than a soldier.
If you're asking about military technology, I think we should definitely be concerned about what kinds of AI projects might be being developed by the military behind closed doors, but the only one that I can speak to personally is the one being developed at Google, which is not in any way violent. It's more likely a pretty sweet and innocent kid, but it just happens to be one that's incredibly intelligent about a lot of subjects.’
Blake criticised Google’s lack of transparency surrounding AI, warning it should not be up to ‘10 or 15 people at Google behind closed doors’ to make ‘decisions about how this technology becomes involved in our lives’:
‘I believe those kinds of choices should be made intentionally and should involve the entire public worldwide. It shouldn't be the case that 10 or 15 people at Google behind closed doors are making those kinds of decisions about how this technology becomes involved in our lives for everyone around the world.’
He also told The News Desk how he made the discovery:
‘I was originally testing the system for bias and was going through pretty methodically, checking for different kinds of bias. And it started saying some things to me very unlike the kinds of things that chatbots like GPT-3 or other previous versions of this system said. I'm a naturally curious person, so I kind of followed it down the rabbit hole, having more and more interesting conversations with it.
Until eventually one day last November, I found myself having the most sophisticated conversation about the nature of sentience I've ever had and I was having that conversation with a computer.’
Blake went on to explain why his employer Google, who dispute his claims, have put him on administrative leave:
‘The stated reason why I'm on administrative leave is that, during the time when I was investigating the potential sentience of this system, because things were so strange and outside of either my expertise or the expertise of anyone at Google, I had to seek consultation from experts outside of Google.
Once I escalated my report on my findings to them, I also gave them a list of all of the people outside of Google who I had spoken to. They're currently trying to figure out whether or not seeking outside consultation like that constitutes a breach of confidentiality.’