A problem for effective altruism? | Dr. Travis Timmerman

Effective altruism faces a serious dilemma. How might the effective altruist solve it? I'm joined by Dr. Travis Timmerman to explore this question as well as the actualism/possibilism debate in ethics.

OUTLINE

0:00 Intro
1:18 Effective altruism
9:22 Actualism and possibilism
18:20 Not Demanding Enough Objection
22:11 Bad Behavior Objection
25:25 Non-Ratifiability Problem
29:03 Worst Outcome Objection
34:34 Asymmetry Objection
38:58 Hybridism
51:40 Dilemma for effective altruism
1:08:21 A hybridist solution?
1:15:15 Other practical implications

RESOURCES

THE USUAL...

COMMENTS

This episode of Majesty of Reason is brought to you by The Guardian

DigitalGnosis

Effective tribalism avoids this problem 😎

frederickdebian

Watching now, I like that tennis example lol. Hey Joe, could you please do a discussion at some point on animal ethics and its practical applications? I really enjoy your deep dives into Peter Singer and find them helpful, and I'm wondering what your take is on how most of us ought to be living based on these implications. Thanks for all your content, I'm learning a ton🙌

rudybonfini

This is so relevant for staying up late to work when there’s a significant probability you’re just gonna browse the internet aimlessly instead of working

It is also relevant for the ideas of “learned helplessness” and “weaponized incompetence”

jazearbrooks

I've tried to forward this macro-ethically most important info more than 100,000 times already, but I guess it can't hurt to try one more time, even though it'll likely be missed despite the fate of humanity depending on it in the most critical of manners possible:

Premise 1: The evolution of life on exoplanets or solar system ice moons is a morally significant event.
Premise 2: The possible outcomes of evolution on these celestial bodies are numerous and varied.
Premise 3: The range of possible outcomes in terms of well-being or suffering is extremely large.
Premise 4: Near-future microbial contamination of planets is likely to result in a suboptimal or negative outcome for an enormous number of animals, leading to widespread suffering.
Conclusion: Therefore, humanity has a moral obligation to exercise restraint and prevent interplanetary and interstellar forward contamination for many years, lest it forfeit its moral justification for continued existence.


If you can make the Effective Altruism community as a whole finally aware of this and get them to make resolving this issue their primary focus, then that's by far the most effective way of saving our world and others.

eternisedDragon

Hmmm, I'm still an actualist, because ought implies can. If I can't do something, like give my tennis opponent a friendly handshake, or swim through magma to save a puppy, then this shows that I don't have a moral obligation to do it. What you call practical obligation is just moral obligation, and what you call moral obligation is only a moral obligation for some other agent that isn't me (one that can shake the hand and swim through magma).

Xob_Driesestig

Am I right that there's no dilemma for necessitarians, since the actual and the possible are the same? Another argument for necessitarianism, maybe.

STARSS

Seems pretty analogous to Newcomb's problem and decision theory in general, though with a different, more ethical/normative framework baked in. As someone who has been fairly involved in EA for about 7 years now, I found it interesting that Joe and this guest find the alleged problems of actualism to be genuine problems. And I don't mean that in a snarky way at all; it was quite fascinating to encounter such an unexpectedly divergent attitude towards what they consider 'bullets' (maybe I've been having them for too many meals per day in my ethical journey). The seeming switching between whether agents indeed 'had options' at a time t0 versus what they 'would do' if one step of an option put them in a situation at t1 to *realize* another contingent option at t2 seemed to motivate the alleged contradictions. I think if that were sorted out, 'options' were defined clearly, one made it clear whether one is a true consequentialist, and one settled how agents *should* act under uncertainty, then the proper implications of taking a given option would be clear, and the alleged 'problems' for actualism would disappear. But I'm the prime example of someone who would be quite likely to be biased toward not seeing a problem here if there truly were one, so it could certainly be the case that I'm blind to the seriousness of this 'dilemma'.

NomadOfOmelas

I like effective altruism, so this should be great

logicalliberty

I'm sorry, but this problem seems really silly to me. We're trying to maximize good, so what we ought to do is the action with the highest expected value of good, i.e. compare:

[amount of good from the medium action] vs. [probability of doing the good action] × [amount of good that results] − [probability of doing the bad action] × [amount of bad that results]

Now, I understand that, practically speaking, you can't literally run this calculation. But it's ridiculous for this to even be a debate when it obviously depends on how good/bad and likely/unlikely an action is.
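The comparison this commenter describes can be sketched in a few lines. This is a minimal illustration of the expected-value calculation only; the payoffs and probability below are hypothetical numbers, not anything from the episode.

```python
def expected_value(p_good: float, good: float, bad: float) -> float:
    """Expected value of a risky option: with probability p_good the agent
    follows through and produces `good`; otherwise it produces `bad`."""
    return p_good * good + (1 - p_good) * bad

# Hypothetical payoffs: a guaranteed "medium" action vs. a gamble between
# a good action and a bad one.
medium = 50.0
risky = expected_value(p_good=0.6, good=100.0, bad=-40.0)  # 0.6*100 + 0.4*(-40)

# On this view, the agent ought to take whichever option has the higher EV.
choice = "medium" if medium >= risky else "risky"
print(risky, choice)
```

With these numbers the gamble's expected value is 44, so the guaranteed medium action wins; shift the probability or payoffs and the verdict flips, which is the commenter's point that the debate "obviously depends on how good/bad and likely/unlikely an action is."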

Carbon_Crow

Hey Joe, I know it's not what the video's about, but what do you think of the Transcendental Argument posed by people like Jay Dyer, Erik Sorem, etc.? I see you make videos on stuff Trent Horn says, or on the Ontological Argument or Cosmological Argument or ... a lot, but I haven't seen you make a video on this. It seems to be a newer type of argument that not many people have responded to, except for Malpass.

HiHi-puwz

Why am I not seeing your videos in my sub box

ILoveLuhaidan

I came here as soon as I saw what this video is about

shirou.

I can't wait for a video "Why I'm Not a Christian"

Truckszy

Hey Joe, I love this channel. I would love to hear a video on your thoughts on the transcendental argument for God. It is very convincing, and I don't see many atheists respond to it online to give their perspective. What works would you recommend for trying to refute it?

gearoidryan

The biggest problem for effective altruism is Sam Bankman-Fried and Caroline Ellison.

Overonator

I'm gonna guess the problem is everyone is too lazy and doesn't live up to philosophers' lofty expectations? Haha. Oooh, time for another sausage despite those pro-vegan arguments I just heard

radscorpion