Robot Ethics in the 21st Century - with Alan Winfield and Raja Chatila

How can we teach robots to make moral judgements, and do they have to be sentient to behave ethically? Join Alan Winfield and Raja Chatila to explore these fascinating and vital questions.

Alan Winfield is a Professor of Robot Ethics at the University of the West of England (UWE) in the Bristol Robotics Lab. He wrote Robotics: A Very Short Introduction and is director of the Science Communication Unit at UWE.

Raja Chatila is Professor at the University Pierre and Marie Curie, Paris, and the director of the "SMART" Laboratory of Excellence on human-machine interaction. He is particularly interested in robot navigation, motion planning and control, cognitive and control architectures, human-robot interaction, and robot learning.

In partnership with the Science and Technology Department of the French Embassy.

This talk took place at the Royal Institution on Thursday 22 June 2017.

Comments

great video, especially the second speaker.

Alan_Dler

I realize that this is all about robot ethics, but I think about the 'dangers' of trusting them in the first place, before they are brought into use. For excellent info, go to the Computerphile YouTube channel and search for 'Artificial Intelligence with Rob Miles.'

bazsnell

Seems less like an ethical dilemma and more like being easily distracted. It was programmed to save one, so it likely doesn't think about multiple: it notices one and starts toward it, then notices the other, and so on. If it was indeed acknowledging multiple, its simulation has pretty horrible anticipation.

rbradhill

First we need to find a way of defining what is moral and right, which may be tricky because every society has a different consensus (reflected by different laws and customs), and to a degree every individual has a different view on what is moral.
Just look at something as obvious as murder: pretty much everyone agrees that murder is bad, but people define it differently, and in some scenarios killing someone is a good thing and not killing a bad one. For example, killing a terrorist who is about to kill a large number of people. And then there are problematic cases, such as whether to shoot down a hijacked plane with innocent civilians on board which might cause huge casualties should it reach a big city...

There are similar problems to deal with for self-driving cars: should they value all human lives equally, should they prioritize the passengers, should they prioritize those who did not break any rules? Should they prioritize children, women, or persons with a higher survival chance?

NetAndyCz

Capitalism + Pre-programmed Artificial Intelligence Systems = Moral outcomes in favour of those with the means of production. Over time, it can only lead to a further removal of the proletariat's access to freedom. If we think capitalism is an honest model of human well-being, then I think we should take a deeply informed view as to the legality of such machines in our lives. Has anyone ever considered that the Google pod or Amazon's speaking connection is serving us with information that hasn't just been censored, but has also been aligned with capitalist ideologies?

CreativeContention

I think AI needs a flaw in its program.

holdmybeer

I think these should not be released till the programmes are fully refined. They are good for practical, basic jobs once programmed to do the job. A way needs to be found to implant a chip in a human brain for programming, so AI can act on instinct and isn't predictable.

markflood

People should learn the lessons from the Quarians. Just ask them; damn, the Geth are really annoying... And concerning the hole in the road: I have actually done this, and the act of intervention from you or I creates the out! They turn, look at you and fall in. Best to say nothing, in my opinion.

stewartsavage