Utilitarians Don't Have to Be Constantly Calculating.

This video is part of the playlist: "In Defense of Utilitarianism". This playlist is meant to be a lighthearted introduction to the Utilitarian theory, with some bad humor, where we analyze some of the strongest counterarguments and counterexamples that have been made against it.
The novelty and complexity of the playlist will scale up with the video number.
The intent is educational, both for me (I can be corrected or critiqued by the audience) and for the audience, who may learn something new.
(I am going to remake some of my earlier videos since I feel like I have learned a bit more on how to communicate more effectively in this medium and now find some of my starting videos subpar).

(Also, there are going to be videos on the repugnant conclusion, abstract counterexamples, and the complexity of the utilitarian framework along with a lot of other stuff)

Abstract:
A critique that sometimes gets brought up against consequentialist moral theories is that to follow them one would end up calculating all the time.
We provide a standard defense against this claim. Then we look at some potential issues moral calculations face and see how our moral intuitions can play a role in solving them. Moral intuitions can thus be useful even in a Utilitarian framework.

Other Thoughts:

1) Of course, we do not mean to say that one should never calculate. Thinking a bit more about the consequences of one's actions could be useful to many people.

2) The video is based on the ideas of Professor Joshua Greene regarding moral intuitions, but it does not repeat them verbatim. There is some personal elaboration on how societal rules come into the picture (so don't go using this video as an overview of his thought on the matter).

Citations:

Joshua D. Greene, "Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics," Ethics, 2014.
Comments:

Calculating too much can halt action altogether, which is counterproductive. That is why some people lower their expectations instead of trying to calculate everything.

freddyli

Haven't checked your channel in a hot minute but this is one I'd been waiting for you to touch on.

Some of this is new to me so I'm just talking through some thoughts here. While I think most issues can be answered with a general notion of "find the balance", I wonder about the social ramifications of a society built on these principles - even if executed perfectly and with all good intention. I have a bit of a knee-jerk reaction against the idea of utilitarian lawmakers, because there seems to be an unavoidable removal from the rest of society, like a less extreme version of Plato's philosopher kings.

One of the biggest issues in most (if not all) current governments is that the individuals making decisions for the whole do not live the same experiences as the majority of those subjects. Even in an egalitarian selection process, we would be entrusting the law to individuals who have gone through a very specific level of training, which by definition limits their lived experiences.

In most societal models, we seem to have this underlying assumption that our moral philosophers are capable of infinite empathy and perfect understanding of vast populations. But unless we replace our philosophers with computers, it seems like an insurmountable task. Or maybe that vastness is the problem - would we be maximizing utility by keeping communities smaller, independent, and more individualized to their specific needs? Each county, each town? Is there an objective moral good gained from expansive, cohesive empires, or is the nation-state a failed model? I might be the only person interested in these questions.

You've talked about the role that societal norms play in these calculations (and vice versa) and this seems like another time to look at the relationship. In a society where rule of law is objectively, scientifically justified, what effect will that have on society's attitudes? I suspect that on average, the people would be less likely to question authority (assuming these utilitarian calculations were accurate and not hidden or mystified). Historically, total deference to authority is a dangerous trait in a population, as it increases the capacity for corruption and abuse. In this specific case, I feel there's a risk of societal norms becoming so trusting in the right of the greater good that the moral objections to individual suffering erode. How do we stop ourselves from ending up in Omelas?

The one concrete thing I can say about this is that all of these concerns also exist in present-day societies, and we don't even get the mathematical certainties.

dougshakes

Nice video! What are your thoughts on the argument that utilitarianism is implausible because it denies the existence of supererogatory actions?

jacobharvey

Give me a second, I’m almost done calculating whether I should comment this or not

toesdoeswhoknows