14. Causal Inference, Part 1

MIT 6.S897 Machine Learning for Healthcare, Spring 2019
Instructor: David Sontag

Prof. Sontag discusses causal inference, examples of causal questions, and how these guide treatment decisions. He explains the Rubin-Neyman causal model as a potential outcomes framework.

License: Creative Commons BY-NC-SA
Comments

Brilliant! Thank you for the video. I feel very blessed to have been born in this age, when such brilliant lectures are available for free to everyone!

bobo

This is the most intuitive and comprehensive guide on causal inference. Thank you Prof. Sontag.

junqichen

The best lecture on causal inference online

yogeshsingular

Superb introduction for a non-mathematician “domain expert” to understand what the technical expert needs. Unfortunately, the underlying quality of the real-world data we work with is often insufficiently standardized or machine-actionable. This technology is needed for the problems that actually occupy most of a physician's time: predicting and assessing the effects of treatments, particularly once we get off the original guidance from guidelines, which might not work in an individual patient.

edwardeikman

Simply a great lecture. I just recently started diving into this field, and with this lecture I think I have learned the most so far.

GarveRagnara

Fantastic explanation! Imma make a video on this topic too.

CodeEmporium

What a fantastic teacher, and a fantastic lecture. Thanks for posting this, although I am pretty late getting to it!!

deepaksehra

At the 12:30 mark, X₂←X₁→X₃ is described as a v-structure that can be distinguished from a chain structure with data. That's not a v-structure in that sense; you would need X₂→X₁←X₃.

acceleratebiz
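A minimal sketch (not from the lecture) of the distinction raised in the comment above, using simulated data: the fork X₂←X₁→X₃ and the chain X₂→X₁→X₃ imply the same independences (X₂ and X₃ dependent marginally, independent given X₁), so they cannot be told apart from observational data alone, whereas the collider/v-structure X₂→X₁←X₃ flips that pattern.

```python
# Illustrative simulation only; variable names and coefficients are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Fork X2 <- X1 -> X3: X1 is a common cause of X2 and X3.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(size=n)
x3 = x1 + rng.normal(size=n)
print("fork     corr(X2, X3):      %+.3f" % np.corrcoef(x2, x3)[0, 1])
# Residualize on X1 to approximate conditioning on it.
r2 = x2 - np.polyval(np.polyfit(x1, x2, 1), x1)
r3 = x3 - np.polyval(np.polyfit(x1, x3, 1), x1)
print("fork     corr(X2, X3 | X1): %+.3f" % np.corrcoef(r2, r3)[0, 1])

# Collider (v-structure) X2 -> X1 <- X3: X2 and X3 both cause X1.
x2c = rng.normal(size=n)
x3c = rng.normal(size=n)
x1c = x2c + x3c + rng.normal(size=n)
print("collider corr(X2, X3):      %+.3f" % np.corrcoef(x2c, x3c)[0, 1])
r2c = x2c - np.polyval(np.polyfit(x1c, x2c, 1), x1c)
r3c = x3c - np.polyval(np.polyfit(x1c, x3c, 1), x1c)
print("collider corr(X2, X3 | X1): %+.3f" % np.corrcoef(r2c, r3c)[0, 1])
```

Expected behavior: the fork shows a sizable marginal correlation that vanishes after conditioning on X₁, while the collider shows roughly zero marginal correlation that appears once X₁ is conditioned on.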

wow, awesome intro to causal inference!

TheRetrobek

Really nice explanations: you kept it simple in the beginning but explained the gist! Thanks for uploading these lectures.

turboblitz

Thank you so much for sharing it with us. It was amazing :)

AradAshrafi

Great explanation. I wish I had teachers like him.

sanjav

I would say that Y₁ is the red pill and Y₀ the blue one, not the other way around.

vrda

I have two questions and would be grateful for answers from experts and practitioners:
1) When calculating CATE you subtract two regressions. This must increase the error considerably. Do we do anything about it?
2) I think that in practice, when defining the parameters/independent variables, there's a risk of Simpson's paradox. E.g., where's the line between exercising (1) and not exercising (0)? What can one do about it to sleep calmly? Could we do some sort of "hyperparameter tuning" to find the best parameter definitions? It can be tricky...

TheRilwen
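On the first question above, a minimal hedged sketch of the "subtract two regressions" estimator of CATE (sometimes called a T-learner) on simulated data; none of the variable names or numbers come from the course materials.

```python
# Illustrative only: fit one outcome model per treatment arm and subtract.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=(n, 3))            # covariates
t = rng.integers(0, 2, size=n)         # binary treatment (randomized here)
tau = 1.0 + x[:, 0]                    # true heterogeneous treatment effect
y = x @ np.array([0.5, -0.3, 0.2]) + t * tau + rng.normal(size=n)

# One outcome regression per treatment arm.
mu1 = LinearRegression().fit(x[t == 1], y[t == 1])
mu0 = LinearRegression().fit(x[t == 0], y[t == 0])

# CATE estimate: difference of the two regression predictions.
cate_hat = mu1.predict(x) - mu0.predict(x)
print("estimated ATE: %.3f   true ATE: %.3f" % (cate_hat.mean(), tau.mean()))
```

Because each prediction carries its own estimation error and the errors add in the difference, the CATE estimate is indeed noisier than either regression alone; bootstrapping the whole procedure is one simple way to quantify that extra uncertainty.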

Are the 1s and 0s represented by an Indicator function?

michaelmoore

Where does the counterfactual data come from?

McStevenF

Are the problem sets for the course available!? Can't seem to find them

ninadgandhi

It's confusing because he doesn't really mention that x is a vector of covariates.

pibob

What if a confounder variable only influences the outcome? Is that a violation or not?

allena

Question: how do we infer the graphical causal model from data? In this lecture, and the one that follows, we assume a model already exists and use data to answer questions about that model; there is no model selection or model checking involved. Is there a way to infer the causal model from observational data?

offon