6 Ways Scientists Fake Their Data



Follow me:
Behavioral Science Instagram: @petejudoofficial
Instagram: @petejudo
Twitter: @petejudo
LinkedIn: Peter Judodihardjo

Good tools I actually use:
Comments

I really, really like your initial point. There is no reason why big journals shouldn't publish when something doesn't work. The fact that they don't makes no sense at all.

themartdog

Why did the urologist accept only certain patient specimens for his data set?

He was pee hacking!!! 😅

adam

I left academia altogether because, after spending 500 hours in the lab collecting data, I was constantly pushed to redo all my measurements with the sole intent of statistically faking them.

Prior to that I had spent a month running tests to determine how many samples I should take for each experiment. My result was that I needed at least 250 samples to get a good estimate of the average. I proposed moving forward with that number, but was told to do 500 samples per experiment instead (doubling my time in the lab unnecessarily).

My measurements looked good to my supervisor (they matched his simulations). One day he left for a conference where a competing research group showed their results, and suddenly my numbers did not match anymore.

I asked if there was something wrong with the previous simulations; the answer was no. He just told me, "These numbers must be wrong," and wanted me to "run the experiments again", only this time with 100 samples. I flatly refused. He ran the experiments using 80 samples, and then presented a culled set of fewer than 50 as his "results".
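
A minimal sketch in Python (hypothetical numbers, not the commenter's actual data) of why culling a sample down to the subset that matches a desired value biases the estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurements: true mean 10.0, noisy.
samples = rng.normal(loc=10.0, scale=2.0, size=80)

# Honest estimate from all 80 measurements.
print(f"all 80: mean = {samples.mean():.2f}")

# "Culled" estimate: keep only the 50 measurements closest to a
# desired target (e.g. what a simulation predicted).
target = 11.0
keep = np.argsort(np.abs(samples - target))[:50]
print(f"culled: mean = {samples[keep].mean():.2f}")  # pulled toward the target
```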

theondono

I'm a mathematician, and I love that I don't have to do experiments or find "statistically significant" results. It's still publish or perish, and it's difficult to establish important results, but the logical nature of the field makes it harder to fake.

Kagrenackle

Thank you for speaking on this. P-hacking is part of what makes reading scientific studies so confusing for the layperson, and causes results to be so often misinterpreted by communicators.

Batmans_Pet_Goldfish

"No. 3 Variable manipulation" is what I see happens the most in my field of research. Most of my eminent colleagues don't consider it a problem. It's called discovery. In fact, often there is not even a hypothesis, just run the analysis and see which variables work.

Sometimes, a student or a postdoc reports a particular set of variables give the highest correlation. The supervisor says great, here is the explanation and here is the story, it is reassuring this fits the theory. However, in a subsequent meeting the student comes back and says unfortunately he made a mistake in the analysis, here is the correct set of new variables. Well, don't you worry. We find another theory and story, it fits something else!

I don't even work in social science, psychology or behaviours science. I am in physical science. It is still possible to fish for any relationship in the data and find a physical explanation backed up by mathematics to explain it. These studies are published in top journals in my field, as well as those in the Nature family.
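
A toy illustration (pure noise, no real dataset assumed) of how screening many candidate variables and keeping the one with the highest correlation manufactures a "finding":

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_vars = 50, 200

# Outcome and all candidate predictors are independent noise.
y = rng.normal(size=n_obs)
X = rng.normal(size=(n_obs, n_vars))

# Correlate every candidate with the outcome, then keep the "best" one.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_vars)])
best = int(np.argmax(np.abs(corrs)))
print(f"variable {best}: r = {corrs[best]:.2f}")  # often |r| > 0.3 from luck alone
```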

sunway

Feynman summarized your video with a single quote: trying to find a theory in a pile of data.

SugarBoxingCom

2:01 Stopping rules / "data peeking"
3:00 Deleting outliers / data trimming
4:14 Ad (until 6:03)
6:03 Variable manipulation
7:31 Excessive hypothesis testing
8:29 Excessive model fitting
11:36 Conclusion and acknowledgment
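
For the "data peeking" entry above, a small simulation (thresholds are assumptions, not taken from the video) of how checking the p-value after every few observations and stopping on a hit inflates the false-positive rate even when no effect exists:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_experiments, max_n, peek_every = 2000, 100, 10

false_positives = 0
for _ in range(n_experiments):
    data = rng.normal(size=max_n)  # the null is true: the mean really is 0
    for n in range(peek_every, max_n + 1, peek_every):
        if stats.ttest_1samp(data[:n], 0.0).pvalue < 0.05:
            false_positives += 1  # stop as soon as it "works"
            break

print(f"false-positive rate: {false_positives / n_experiments:.2%}")
# ~5% with one fixed look; noticeably higher with repeated peeking.
```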

stephenmcinerney

As a computer scientist specializing in driving simulation, I supported diploma, master's, and doctoral theses for 6 years. Removing outliers from studies was standard procedure for the psychologists, and I saw quite a bit. Since that time, psychologists and doctors of all kinds are no longer highly regarded by me. It is a shame how I now look down on these fields.
But I don't consider variable manipulation to be a problem, because one can certainly come across something much more interesting in one's studies, and omitting that would be sad.

extraleben

P-hunting: the illogical in pursuit of the indefensible. [KG]
After Oscar Wilde on foxhunting: "the unspeakable in pursuit of the uneatable."

psychotropicalresearch

Thanks Pete. I'm a 44-year-old guy who reads a huge amount, yet I have learnt so much from your videos. You have brought up many disturbing aspects of research, which is a bit gutting to be honest, but a sad truth is better than a promising lie. After the first one I watched, YouTube has been throwing many similar vids my way, and this needs to be more widely known. Even Dan Ariely! I deleted his book a week or so ago.

jivekiwi

I was taught to delete exactly one outlier on each end. It came with a complicated justification, roughly that it is more LIKELY we are deleting a measurement mistake, but if we delete more outliers that becomes less likely.
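
A quick check (hypothetical setup) of what that rule does under the null: trimming one point from each tail shrinks the apparent spread, so an ordinary t-test on the trimmed data rejects more often than the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
reps, n = 5000, 20

hits_raw = hits_trimmed = 0
for _ in range(reps):
    x = rng.normal(size=n)      # no real effect
    hits_raw += stats.ttest_1samp(x, 0.0).pvalue < 0.05
    trimmed = np.sort(x)[1:-1]  # delete one "outlier" on each end
    hits_trimmed += stats.ttest_1samp(trimmed, 0.0).pvalue < 0.05

print(f"raw:     {hits_raw / reps:.2%}")      # about 5%, as designed
print(f"trimmed: {hits_trimmed / reps:.2%}")  # above 5%
```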

sillysad

I had a statistically significant amount of drugs in my system so I had to resort to pee hacking....

markcarey

Scientists often run mathematical models and try to claim their dumb software is an experiment.

kayakMike

“If you graph the numbers of any system, patterns emerge.”

AlvinRamasamy

This video was EXTREMELY fascinating and I was so captivated the whole time! Good work :)

GooseCee

Many studies are smaller than they should be, which means things like confidence intervals are large. Then the conclusion is going to be that there is no effect, when in fact there may be a large effect; we just don't know. Journals should not publish this type of paper, because you are rewarding someone for doing bad science. However, if I get a result like "eating X increases the risk of Y by a factor of 1.01 (95% CI 0.98, 1.04)", that tells me it can have at most a small effect. Compare that with 1.1 (95% CI 0.5, 1.7), where there could be a sizeable effect in either direction.
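
A sketch of the arithmetic behind those intervals (made-up counts; `rr_ci` is an illustrative helper using the standard log-normal approximation for a risk ratio):

```python
import numpy as np

def rr_ci(a, n1, c, n2, z=1.96):
    """95% CI for a risk ratio from a 2x2 table (log-normal approximation)."""
    rr = (a / n1) / (c / n2)
    se = np.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    return rr, rr * np.exp(-z * se), rr * np.exp(z * se)

# Roughly the same 10% baseline risk at two very different study sizes.
for label, args in [("small study", (10, 100, 10, 100)),
                    ("large study", (1000, 10000, 990, 10000))]:
    rr, lo, hi = rr_ci(*args)
    print(f"{label}: RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
# The small study cannot rule out large effects; the large one pins RR near 1.
```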

Ken-ercq

I don't see any excuse for reputable journals not to institute mandatory preregistration of any paper that might be published: the hypothesis, methods, etc. should be given to the journal before the study is run. That would vastly reduce the freedom to use questionable, non-transparent techniques to make results look more significant than they are.

Guishan_Lingyou

Funny. Explaining this and other types of scientific fraud to my friends in 2021 earned me a cussing out and the “contrarian” moniker.

WisdomThumbs

No data-trimming should be allowed. "I don't understand why it is that way" is not an argument in favor of throwing it out.

debasishraychawdhuri