The Ethics of Self-Driving Cars | Philosophy Tube

Does the Trolley Problem tell us anything about self-driving cars? What are the moral, legal, or ethical issues thrown up by autonomous vehicles? Should we be more critical of corporations like Uber?

Twitter: @PhilosophyTube

Recommended Reading:
Daniel Kahneman, Thinking, Fast and Slow
Hubert Dreyfus, What Computers Still Can't Do
Evgeny Morozov, To Save Everything, Click Here

If you or your organisation would like to financially support Philosophy Tube in distributing philosophical knowledge to those who might not otherwise have access to it in exchange for credits on the show, please get in touch!

Any copyrighted material should fall under fair use for educational purposes or commentary, but if you are a copyright holder and believe your material has been used unfairly please get in touch with us and we will be happy to discuss it.
Comments

As a programmer I love that ending. "They are not autonomous, they are just unsupervised. They are not driverless, they are just pre-driven". And this entire video is a beautiful and eloquent description of why ethics and philosophy are not just *still* relevant to science and technology, but arguably more so than ever.

nickscurvy

"Are trolley problems useful? ...well, useful for what?"

Now *that* is philosophy. Where the best answer is a more usable question.

smallseal

This question seems to generally be framed along the lines of "choose between killing two different types of pedestrians," but I think the much more important framing is "choose between killing the owner of the car or killing someone else," with the actual question not being "which of these choices is ethically moral?" but "which of these two choices results in the greatest number of people buying safer cars?" If we assume that self-driving cars are safer than manually operated cars (which seems generally true), then it's better if more people buy them; but if someone thinking about buying a self-driving car learns that their car will always value the lives of pedestrians over their own life, they're far less likely to make that overall safer purchase. However, prioritizing the lives of those who can afford to buy self-driving cars over those who cannot is a deeply problematic choice.

avrilsegoli

I dunno, I feel like I live my life really differently because of something like the trolley problem. Like, the idea that passively allowing death is equivalent to murder has profound implications. It implies that every time I spend money on myself when I could give that money to someone who will otherwise starve, I am responsible for that death. My lifestyle is very different because that idea is in the back of my mind whenever I'm considering buying something I don't need. And on top of that, every time I give money to one charity and not another I feel like I'm flipping a lever and killing one person instead of another. I mean, I agree that questions about larger systems and society as a whole are more important, but I object to the idea that there are important moral differences between the trolley problem and choices we make every day.

Xidnaf

I showed this video to my computer engineering students. When it comes to working in multidisciplinary teams, a philosopher is not usually someone they feel they need. Thank you for expanding their (and my) horizons on this topic.

DiurnalOwl

Self-drive those cars straight into your boss and seize those means lads

PristianoPenaldoSUIIII

More like trOlly problem amirite?

I'll stop now

gva

"They're Nazis"
Multi-track drifting!

petersmythe

The weather is run by an algorithm, one that runs in real time and involves numerous dynamic factors and variables that make it seemingly incalculable, but it is basically a really, really, really big equation involving thermodynamics, physics and chemistry.

nicholasfoster

"That capitalists will exploit automation" is not a reason to kill automation. It's a reason to smash capitalism.

rhythmjones

So this was basically the topic of my dissertation in law. And I can at least answer the hyperbolic legal question you posed at the end. Yes, you could take them to court. Definitely: precedent already exists. We have automated programs running things like financial trading that have already created lawsuits arising from product liability. As long as a legal entity is involved in the process somewhere (natural or non-natural), then delict or tort law will let you sue them.

Of course, you also glossed over a few other interesting questions that arise out of the legal context (not to blame you, of course; it's a short-form video, not a 10,000-word dissertation), but if companies know they might be liable, what sort of decisions do they make? Will they seek government regulation? What about consumer choice; should you be able to load different "ethical profiles" into your AV? And then there are public policy concerns; will people drive AVs that are programmed to potentially kill the driver? And will this reduce uptake and therefore hamper the safety (and environmental) impact AVs could have on society?

The legal context of the problem is much more interesting than any other aspect because Law is, imo, the field of Applied Ethics and therefore already has lots of the structure and approach to begin talking about the problem in a practical way.

angusmcewing

I'm more of a "less cars overall and more trolleys/streetcars/trams with good brakes" kind of person.


11:43 Jim Sterling?

iron

*I loved the human Driver in Logan Lucky, Star Wars & Paterson*

Nkanyiso_K

I love that all the human drivers are Adam Driver. There's a movie in which Adam Driver plays a bus driver so now my family always refers to him as Bus Driver...

cinemaocd

Have I ever mentioned how much I love this channel? Because I do. I first came here through a video asking whether Magneto's philosophy is correct, and stayed long enough to decide to enroll in philosophy classes next year instead of psychology.

That, my friend, signifies quality.

AltoSnow

So... who else is thinking we should have a state-run self-driving Uber and treat it like any other public transportation system? That way we can elect the people making these ethical decisions

Xidnaf

Hi. Sorry I’m late, but I’m an engineer with experience in the automotive industry. Thanks for this video! It’s pretty good!

Now for starters, there will be accountability for self-driving cars. Maybe not criminal, but DEFINITELY via insurance liability. I suspect that the manufacturers will ALWAYS have disclaimers saying that a human must be available as a “driver”, forcing a human in the vehicle to have responsibility for safe operation. If that doesn’t happen, an individual would likely be able to sue the manufacturer for damages, and big companies don’t like being sued.

As for the actual programming, things aren't actually that complicated. The above paragraph applies, but in addition, engineers HAVE to deliver finite, concrete solutions, and we know we have technological limitations. The engineers aren't going to trot in too many philosophers for this. The simplest and "most good" solution is simply to avoid dangerous situations. This means obeying traffic laws (no speeding) and slowing down/stopping if an unexpected object is detected. Those maximize "safeness" in basically all situations. Swerving away from a human in a street would force the vehicle into an unknown and possibly illegal state.

And that brings us to technological limitations. The above statement about swerving is based on limitations of current technology. The car CAN’T make a moral decision because it doesn’t have the vision to make a “trolley problem” choice. What if the technology gets better? Well, engineers will take the same approach to maximize “safeness”. If machine vision improves, the best use for that technology is to better monitor surroundings and predict paths of identified objects. Here, an autocar could mimic a human. I know I slow down when I see lots of children around a street, even if that means crawling below the speed limit. Give the autocar that behavior, and you minimize the likelihood of a collision while also minimizing your engineering hours.
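For what it's worth, a minimal sketch of the "avoid dangerous situations" rule this commenter describes could look something like the Python below. Everything in it (the DetectedObject fields, the halving of speed near children, the numbers) is invented purely for illustration, not taken from any real vehicle stack.

# A hedged sketch of the "maximize safeness" rule described above;
# all names and thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    unexpected: bool        # not matched to a known, predictable road user
    crosses_our_path: bool  # predicted path intersects the vehicle's path

def choose_target_speed(speed_limit, objects, children_nearby):
    """Prefer slowing or stopping over swerving into an unknown state."""
    if any(o.unexpected or o.crosses_our_path for o in objects):
        return 0.0                   # unknown hazard detected: brake to a stop
    if children_nearby:
        return speed_limit * 0.5     # crawl below the legal limit
    return speed_limit               # otherwise just obey traffic law

# Example: an unidentified object crossing ahead forces a stop.
print(choose_target_speed(13.9, [DetectedObject(True, True)], children_nearby=True))  # -> 0.0

The point of the sketch is that the "policy" is a handful of conservative defaults, not a trolley-problem solver.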

mcdrums

A. Human Driver. I see what you did there Olly.

Goldenhawk

This is an interesting video, but you actually misunderstand modern AI in an important way, so I figured I'd pitch in with a software dev perspective which makes the moral question of self-driving cars potentially even more interesting--


A lot of people have this idea that software development essentially comes down to doing work in advance that can then be duplicated later, so if your bottle-shaping machine in a factory messes up and kills a technician, the behaviour that caused it was, intentionally or unintentionally, put there by whatever developer(s) wrote the controller. This is mostly true for traditional software development. However, for a lot of modern AI work based on machine learning (ML), the cause and effect isn't so simple. For ML, it's more accurate to say that there are three groups of people responsible:


1. Whatever software developer(s) wrote the ML library being used.
2. Whatever mathematician(s) came up with the rules that were used to train the model, which might be something really, really vague boiling down to something like "minimize collisions."
3. Whatever people trained the model.


So yes, at some point the computer is making statistical decisions which may kill people, but the trouble is it's hard to argue that anybody in our list above really ever decided that the computer ought to do that. Also, given that the computer's behaviour will largely be based on the people in group (3)--who were probably normal drivers driving in a simulation and, in the case of accidents, making normal human decisions--you could argue that even if the computer is ultimately just running a bunch of statistical models, it's just as much of a moral actor as the drivers who trained it were, given that it is essentially making a moral decision based on some combination of its rules (its nature, if you will) and the people who trained it (its nurture, if you will). Furthermore, the people involved with the process may not have been representative of the eventual car; for example, the ML library might be an open-source library in the public domain, and the mathematicians might've been academics who wrote a paper five years before the self-driving car was invented and don't even work for the company.
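To make that division of responsibility concrete, here is a toy sketch of the idea, with a deliberately trivial learner standing in for a real ML library; all of the data, names and numbers are invented for the example.

# Toy sketch: the "policy" is not hand-written rules, it is whatever
# the demonstration data implies.

# (1) library author's contribution: a generic learner
#     (here, 1-nearest-neighbour over recorded situations)
def fit_nearest_neighbour(examples):
    def policy(situation):
        # act as the driver did in the most similar recorded situation
        best = min(examples,
                   key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], situation)))
        return best[1]
    return policy

# (2) objective/rule designer's contribution: what a "situation" even is
#     (here: (distance_to_obstacle_m, speed_mps) -> action string)

# (3) trainers' contribution: demonstrations from human drivers in a simulator
demonstrations = [
    ((40.0, 20.0), "brake"),
    ((5.0, 20.0), "swerve"),     # whatever this particular driver happened to do
    ((60.0, 15.0), "continue"),
]

policy = fit_nearest_neighbour(demonstrations)
print(policy((6.0, 19.0)))       # -> "swerve"

Nobody in the chain wrote a line saying "swerve when an obstacle is this close at this speed"; that behaviour simply falls out of whichever demonstration happened to be nearest.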


I'd argue that at some point this forces us to ask whether *anyone* is really responsible for the accident. I realize that humans really like blame, which is a big topic in this video, but humanity may have to adjust its expectations and treat accidents with self-driving cars kind of like natural disasters: something that we can try to predict and mitigate, but ultimately something which just happens from time to time.

ssh

Most self-driving cars will do one thing when encountering a trolley problem - not play. They will hit the brakes and try to come to a stop as soon as possible, because that is almost always the most logical and effective way to reduce accidents and injuries. It prevents the ethical conundrum of a tech company deciding who lives and dies, because in most accident scenarios hitting the brakes and trying to minimize momentum is simply the best thing to do. It avoids other cars and obstacles on the road, significantly reduces the chance of swerving, flipping over, and all the other things that can happen when making sudden turns, and prevents potential accidents further down the road.

Notice in this debate how braking is never considered as an option, and I think this shows a dehumanizing aspect to this topic that I feel you alluded to. People will hand-wave it away with "oh, but what if there's ice/rain/construction/etc.", focusing on the extreme, one-in-a-million situations, wanting a specific solution to a specific problem, when you yourself pointed out the flaws in this approach. Given how fast technology improves, I don't think it is unreasonable to say that an autonomous vehicle would know when to swerve out of the way of a minor threat and when to hit the brakes when an accident is inevitable. Just from the perspective of the insurance companies, they would want the cars to hit the brakes to minimize the number of people involved in any accident.

It is always presented as a binary, as "does the car hit the kid in front of you, or does it swerve to the side and hit a pedestrian?", or something to that effect. That scenario only proposes two options, and *both* involve continuing at full speed. Braking isn't an option; letting the passenger reach their destination a bit later isn't even considered.

One particular variation of this problem really bugs me. If you force the scenario to have no braking and make it something like "does it hit the kid or does it swerve and hit the tree, potentially killing you?", then of course it swerves and hits the tree! You are in a vehicle that has been specifically designed to keep you safe in high-impact collisions! If the manufacturer did its job correctly, and you're wearing your goddamn seat-belt like you're supposed to, you should make it out just fine. This scenario puts the value of the car at the same level as a human, which is deeply troubling. I'm not discounting the very real costs of injury/death in that situation, but that goes into the "why are we in this situation to begin with" point that you made in this video. "But what if the driver is pregnant/recently in an accident/the president?" At that point, needing to bring up those specific scenarios proves your point about how you can't treat this as an "engineering problem" with a definite answer, as the only reason to get more specific is to hunt for more specific answers that aren't there.
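As a rough illustration of the brake-first default this comment describes, here is a small Python sketch; the deceleration figure and function names are assumptions made for the example, not real autonomy code.

# Hedged sketch of a "just brake" default; numbers are illustrative only.
def braking_response(speed_mps, distance_m, max_decel_mps2=7.0):
    """Brake first; never choose between victims, just shed momentum."""
    stopping_distance = speed_mps ** 2 / (2 * max_decel_mps2)
    if stopping_distance <= distance_m:
        return "full brake: stops short of the obstacle"
    # Even when a collision cannot be avoided, braking minimises impact
    # energy, which scales with the square of speed.
    return "full brake: collision unavoidable, minimise impact speed"

print(braking_response(13.9, 20.0))   # ~50 km/h, obstacle 20 m ahead
print(braking_response(27.8, 20.0))   # ~100 km/h, obstacle 20 m ahead

Either way the action is the same, which is the commenter's point: the car sheds kinetic energy instead of choosing between victims.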

woblewoble