Probability 8: What is wrong with NHST, p-values, and Significance Testing?

Relevant videos:

Relevant papers:

Learning objectives:
#1. How ethics relates to sampling theory/significance testing
#2. How p-hacking inflates the false-positive rate (see the sketch after this list)
#3. Required conditions to correctly interpret a p-value
#4. Relationship between strict confirmatory data analysis (CDA) and p-values
#5. Alternatives to NHST
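
As a concrete illustration of objective #2, here is a minimal simulation sketch (my own, in Python with numpy/scipy; the parameters are arbitrary choices, not anything from the video). One common form of p-hacking is testing several outcome measures and reporting whichever p-value is smallest; on pure-noise data this pushes the false-positive rate far above the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_outcomes, n_per_group = 10_000, 5, 30
false_positives = 0

for _ in range(n_sims):
    # The null is true: every outcome is pure noise in both groups.
    pvals = [
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_outcomes)
    ]
    # A p-hacker reports whichever of the five tests "worked".
    if min(pvals) < 0.05:
        false_positives += 1

# With 5 shots at alpha = 0.05, expect roughly 1 - 0.95**5 ≈ 0.23, not 0.05.
print(f"False-positive rate: {false_positives / n_sims:.3f}")
```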

Technical sidenote:

Comments:

I know this is an old video, but perhaps someone watching it in >2024 will have a similar question... About specifying the sample size in advance and keeping it fixed:

When conducting an a priori power analysis for a given test and specifying a minimum effect size of interest to achieve a desired amount of power (let's say 80%), it outputs a sample size I should aim for in order to detect the effect 80% of the time if it really exists, right?

Therefore, it seems intuitive to think that if I collect more data points than the power analysis suggested, my test will have more power. If that's the case, it seems harmless to get more data than I had specified.

I understand that this is probably not the case, since there's an article about it and Professor Fife tells us not to do it. There's got to be something faulty in my rationale.

If anyone could enlighten me, I'd appreciate it a lot :)

chimurawill
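
The intuition in the question above is half right: a larger final sample does give the test more power. The problem is letting the interim p-value decide whether to keep collecting, because that stopping rule can only ever turn non-significant results into significant ones. A minimal simulation sketch (my own illustration in Python with numpy/scipy; the sample sizes are assumptions I chose, not numbers from the video) shows what happens when the null is true:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, start_n, step, max_n = 10_000, 30, 10, 100
false_positives = 0

for _ in range(n_sims):
    # The null is true: both "groups" are noise from the same distribution.
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05:            # significant -> stop and declare an effect
            false_positives += 1
            break
        if len(a) >= max_n:     # never reached significance -> give up
            break
        a.extend(rng.normal(size=step))  # peek, then collect more data
        b.extend(rng.normal(size=step))

# A fixed-n test would sit at 0.05; peeking pushes this well above it.
print(f"False-positive rate: {false_positives / n_sims:.3f}")
```

Collecting more data than the power analysis suggests is harmless if the larger n is fixed before looking at the data; it is the peek-then-extend loop that inflates the Type I error rate past the nominal 5%.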

That was incredibly informative and, for a grad student like me, extremely helpful.
We need more videos like this!

Kindreda

Hey, this is a 4-year-old video, but I still want to say:
you're a f*cking legend, thanks for this!

bestguy

This guy is so f*cking awesome! Very informative, with great energy!

stephenomenal

I watched 'til the end. Amazing video, sir.

aldwyncalaguing

Well, I really hope you get more views and subscribers. Really nice channel, will share!

MiguelVazRamos

Can I ask a question?

I'm in biology. Researchers use the p-value as a green-light number for "OK, we can use this data and it means something." Basically: data points, group A (control) vs. group B (treated, or something), p-value. Mostly a t-test, an ANOVA, or some Kruskal-Wallis fun. It says p < 0.05? OK cool, it's "statistically significant." This means the group B treatment is actually working, guys, the stats say so. This is how it is used.
p-value < 0.05 = valid data, and it means the difference is not due to CHANCE.

Am I understanding that the p-value gives zero information about whether this difference is due to randomness?
Am I understanding that researchers do this only because journals ask for it AND it has been ingrained that only the p-value can give you the authorization to use your data?
p-value < 0.05 = statistically significant = there is a meaningful difference, and it's not due to randomness.
p-value > 0.05 = not statistically significant = the difference is not large enough; the observed difference is due to randomness.
Am I understanding that even if the p-value is < 0.05, it gives zero information about the non-null hypothesis, which is actually the hypothesis you want to check?
Am I understanding that this way of thinking is absolutely not what we should do? Because every single lab does this.

planetary-rendez-vous
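
On the questions above: a p-value is computed assuming the null hypothesis is true, so p < 0.05 means "data this extreme would be rare if the treatment did nothing," not "the treatment works." How often a significant result is a false alarm depends on the base rate of real effects, which the p-value ignores. A minimal simulation sketch (my own illustration in Python with numpy/scipy; the 10% base rate, effect size, and group size are assumptions I chose, not numbers from the video):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group = 20_000, 30
base_rate, effect = 0.10, 0.5   # assumption: only 10% of treatments truly work
sig_real, sig_null = 0, 0

for _ in range(n_sims):
    works = rng.random() < base_rate
    shift = effect if works else 0.0
    a = rng.normal(size=n_per_group)              # control group
    b = rng.normal(loc=shift, size=n_per_group)   # treated group
    if stats.ttest_ind(a, b).pvalue < 0.05:
        if works:
            sig_real += 1
        else:
            sig_null += 1

# Fraction of "significant" findings where the treatment did nothing.
print(f"False alarms among p < 0.05 results: {sig_null / (sig_real + sig_null):.2f}")
```

Under these assumptions, roughly half of the "statistically significant" results come from treatments that do nothing, so p < 0.05 on its own cannot authorize the conclusion that group B's treatment worked.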

You've got a cool channel and video style :) congrats

GemZbabe