'Please Commit More Blatant Academic Fraud' – A fellow PhD student's response.

A broadcast of, discussion of, and response to the "Please Commit More Blatant Academic Fraud" blog post by Jacob Buckman. Made by a fellow PhD student and her Coffee Bean.
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔥 Optionally, pay us a coffee to boost our Coffee Bean production! ☕
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Discussed blog:
📚 "Please Commit More Blatant Academic Fraud" by Jacob Buckman, Posted on May 29, 2021

Referenced:

🗣️ Twitter discussion:

Outline:
* 00:00 "Please Commit More Blatant Academic Fraud"
* 00:36 The fellowship of the collusion ring
* 03:47 The day-to-day fraud in ML
* 08:13 What to do about it? – Says the blog
* 10:22 What to do about it? – The Coffee Bean

Music:

🎵 Bird Food - DJ Freedem

--------------------------------------------------------
🔗 Links:

#AICoffeeBreak #MsCoffeeBean #MachineLearning #AI #research #MLblog
Comments

I love that Ms Coffee Bean videos = argmax humor per ML knowledge transfer unit.

tukity

Great video!! As a recent Ph.D. graduate in AI, I have seen this all too often in both AI conferences and journals. As someone else mentioned, it is a sad reality that the "publish or perish" mentality, heavily pushed by many university leaders, is entirely to blame.

I struggled to publish, getting multiple rejections, only to see very low-quality papers published in place of mine. Papers with poor literature reviews, little empirical analysis, no published code to reproduce the results, no hyperparameters mentioned, etc.

Doing a brief investigation, I noticed that authors who frequently publish in the same journals/conferences (especially in niche AI topics) usually know the editors or reviewers, having co-authored with them in the past. This is even more prevalent in AI workshop papers, where topics are even more niche and there are only a handful of researchers in the area. New assistant professors or postdocs team up to put each other as co-authors on dozens of papers (usually contributing little). Then, over time, they volunteer as editors/reviewers to get these papers published.

Good luck to all the new Ph.D. students out there! Always strive for quality over quantity.

kmh

Wow, I did not expect this was happening. And as a fairly new researcher trying my best to use the widest possible range of datasets and validation techniques, even with my limited computation and time, I find this extremely disappointing. Why does humanity always seem to make things worse...

WhatsAI

It would be easier to go back to the days before "publish or perish". The "publish or perish" mentality in academia results in these behaviors, as people are rated on their paper output at select venues. Hence, academics work hard to put papers in these venues (within 3-5 years for a PhD, 5-7 years for an assistant professor). By eliminating the "publish or perish" mentality and focusing instead on the quality of research output (ideas, methods, experiments), teaching, and community service, people could spend more time thinking, researching, discussing ideas with collaborators, and being productive members of the academic community. Not all academics are great researchers, not all are great teachers, and not all are great community members, but we need all of them for a great university and academic community in general. Anyway, maybe if the review process changes, then academic departments will change and people can focus on their strengths.

nkirukauzuegbunam

I like Yannic's social arXiv idea, where papers are commented on, shared, recommended, etc., and conferences just take on a networking role rather than being the sole driver of researchers' success. PR is all you need.

That said, I don't do research; I read papers that get good PR on Twitter or YouTube and generally don't care about conferences.

CristianGarcia

About the "day-to-day fraud", I believe Buckman actually meant that most researchers will never fight the issue even though they know it exists, because somehow almost everyone has directly or indirectly participated in some form of this "day-to-day fraud".

Therefore, master's students are obviously not to blame, but some experienced researchers may well be part of the problem in their own field.

EnricoMeloni

Problems like fraud in machine learning are why we need an AGI.

sadface

The problem is that if this were not fraudulently presented, it would be genuinely interesting research: a model suited to a particular task or dataset, an algorithm that is sensitive to parameter selection, a new problem in the field, or even some negative results.

sadface

My paper got rejected at CVPR because the reviewers said, "You should put the best results in the paper to make it more impactful", since I had also included the bad and moderate cases.

jasdeepsingh

There could be review (critique) papers that act as critics of published papers (like analyses of other papers): highlighting their mistakes, e.g. how hyperparameter tuning affects their results, or how they perform badly in different settings (and make it compulsory to release the results of these analyses). That way, one gets to publish one's own paper, so that is the incentive, and these papers will have more credibility. It could also help highlight other important research problems that need work, and credible papers will automatically rise to the top.

jasdeepsingh

I appreciate how the Coffee Bean has eyelashes when she/he/zer? blinks.

GameinTheSkin

Special love for non-native English speakers.

harumambaru

My concern is not peer review but unreadable papers. The article "Troubling Trends in Machine Learning Scholarship" by Zachary C. Lipton and Jacob Steinhardt (YouTube doesn't like links) gives an idea of what is wrong. It would make for an interesting video.

sadface

Great video!! I got your references :P

papayango