@raznaot8399

As the famous statistical saying goes, "If you torture data long enough, it will confess to anything"

@Campusanis

The most shocking thing to me in this video was the fact that some journals would blindly refuse replication studies.

@MrMakae90

For people freaking out in the comments: we don't need to change the scientific method, we need to change the publication strategies that incentivize scientific behavior.

@ModernGolfer

As a very wise man once stated, "It's not the figures lyin'. It's the liars figurin'". Very true.

@GiRR007

"There is no cost to getting things wrong, the cost is not getting them published"
It's a shame this applies to news media as well.

@josephmoya5098

As a former grad student, the real issue is the pressure universities put on their professors to publish. When my dad got his PhD, he said being published 5 times in his graduate career was considered top notch. He was practically guaranteed to get a tenure track position. Now I have my Masters and will be published twice. No one would consider giving you a postdoc position without being published 5-10 times, and you are unlikely to get a tenure track position without being published 30 or so times. And speaking as a grad student who worked on a couple major projects, it is impossible to be published thirty times in your life and have meaningful data. The modern scientific process takes years. It takes months of proposal writing, followed by months of modeling, followed by months or years of experimentation, followed by months of poring over massive data sets. To be published thirty times before you get your first tenure track position means your name is on somewhere between 25 and 28 meaningless papers. You'll be lucky to have one significant one.

@2ndEarth

My favorite BAD EXPERIMENT is when mainstream news began claiming that OATMEAL gives you CANCER. The study was so poorly constructed that they didn't account for the confounding variable that old people eat oatmeal more often and also tend to have higher incidences of cancer (nodding and slapping my head as I type this).
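That kind of confounding is easy to reproduce. Here is a minimal simulation (all the rates are invented purely for illustration) in which age drives both oatmeal eating and cancer, while oatmeal itself does nothing; the naive comparison still "finds" an effect, and stratifying by age makes it vanish:

```python
import random

random.seed(0)

# Invented rates: age drives BOTH oatmeal eating and cancer risk.
# Oatmeal itself has no effect on cancer in this simulation.
people = []
for _ in range(100_000):
    old = random.random() < 0.5
    eats_oatmeal = random.random() < (0.6 if old else 0.2)   # old people eat more oatmeal
    has_cancer = random.random() < (0.10 if old else 0.02)   # old people get more cancer
    people.append((old, eats_oatmeal, has_cancer))

def cancer_rate(group):
    return sum(c for _, _, c in group) / len(group)

# Naive comparison: oatmeal eaters really do have more cancer...
eaters = [p for p in people if p[1]]
non_eaters = [p for p in people if not p[1]]
print(f"naive: eaters {cancer_rate(eaters):.3f} vs non-eaters {cancer_rate(non_eaters):.3f}")

# ...but stratify by the confounder (age) and the "effect" disappears.
for old in (True, False):
    grp = [p for p in people if p[0] == old]
    e = [p for p in grp if p[1]]
    n = [p for p in grp if not p[1]]
    print(f"old={old}: eaters {cancer_rate(e):.3f} vs non-eaters {cancer_rate(n):.3f}")
```

Within each age group the two rates are statistically indistinguishable; the whole naive gap comes from eaters being disproportionately old.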

@14MCDLXXXVIII88

This happens because of the "publish or perish" mentality. I hate writing scientific papers because it is too much of a hassle. I love the clinical work and reading those papers, not writing them. In this day and age it is almost an obligation that EVERYBODY HAS TO PUBLISH. If you force everyone to write manuscripts, a flood of trash is inevitable. Only people who are motivated should do this kind of work; it should not be forced upon everyone.

@qwerty9170x

I really think undergrads should be replicating constantly. They don't need to publish or perish, step-by-step replication is great for learning, and any disproof by an undergrad can be rewarded (honors, graduate school admissions, etc.) far more easily than publication incentives can be changed.

@Deupey445

Gotta love when a published research article states that most published research findings are false

@Vathorst2

Research shows lots of research is actually wrong
spoopy

@etanben-ami8305

When I was in grad school for applied psychology, my supervising professor wrote the discussion section of a paper before the data was all gathered. He told me to do whatever I needed to do in order to get those results. The paper was delivered at the Midwestern Psychology Conference. I left grad school, stressed to the max by overwork and conscience.

@darth0tator

we should open up a journal for replication studies only

@saeedbaig4249

This is why statistics should be a mandatory course for anyone studying science at university.
Knowing how to properly interpret data can be just as important as the data itself.

@psychalogy

It’s almost impossible to publish negative results. This majorly screws with the top-tier level of evidence, the meta-analysis. Meta-analyses can only include information contained in studies that have actually been published. This bias to preferentially publish only the new and positive skews scientific understanding enormously. I’ve been an author on several replication studies that came up negative. Reviewers sometimes went to quite silly lengths to avoid recommending publication. Just last week a paper was rejected because it both 1) didn’t add anything new to the field, and 2) disagreed with previous research in the area. These two things cannot simultaneously be true.
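The file-drawer effect described here can be demonstrated in a few lines. This sketch (study sizes and counts are arbitrary) simulates many small studies of a true null effect, "publishes" only the significantly positive ones, and shows that a naive meta-analytic average of the published literature is badly biased:

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.0   # the intervention genuinely does nothing
N = 30              # participants per study

published, all_effects = [], []
for _ in range(2000):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    all_effects.append(mean)
    # "File drawer": only significant positive results get published.
    if mean / se > 1.96:
        published.append(mean)

print(f"true effect:            {TRUE_EFFECT:+.3f}")
print(f"mean over ALL studies:  {statistics.fmean(all_effects):+.3f}")
print(f"mean over PUBLISHED:    {statistics.fmean(published):+.3f}")
print(f"published: {len(published)} of 2000")
```

The average over all 2000 studies sits near zero, as it should; the average over the published subset is a substantial positive "effect" manufactured entirely by the selection filter, which is exactly what a meta-analysis of the published record would ingest.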

@karldavis7392

This has influenced my thinking more than any other video I have ever seen, literally it's #1.  I always wondered how the news could have one "surprising study" result after another, often contradicting one another, and why experts and professionals didn't change their practices in response to recent studies.  Now I understand.

@NurseKillam

Interesting. I am adding this video to my research courses. My students don't always understand why we need to be critical of research.

@ColeJT

An engineer with a masters in nuclear engineering, a mathematician with PhDs in both theoretical and applied mathematics, and a recent graduate with a bachelors in statistics are all applying for a job at a highly classified ballistics laboratory. Even being granted an interview meant that each candidate was amply qualified, so the interviewers ask each one the same simple question: "what's one third plus two thirds?"

The engineer quickly, and quite smugly calls out, "ONE! How did you people get assigned to interview me!?"

The mathematician's eyes get wide, and he takes a page of paper to prove to the interviewers that the answer is both .999... and one without saying a word.

The statistician carefully looks around the room, locks the door, closes the blinds, cups his hands around his mouth, and whispers as quietly as he can, "what do you want it to be?"

@MrFritzthecatfish

Publish or perish ... and quality goes down the drain.

@callumc9426

As someone who studies theoretical statistics and data science, this really resonates with me. I see students in other science disciplines such as psychology or biology taking a single, compulsory (and quite basic) statistics paper, who are then expected to undertake statistical analysis for all their research without really knowing what they're doing. Statistics is so important, but it can also be extremely deceiving: to the untrained eye, a good p-value means a correct hypothesis, when in reality every result needs scrutiny. Despite its importance, statistics education in higher education and research is clearly lacking; making it a more fundamental part of the scientific method would make research much more reliable and accurate.
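The "good p-value = correct hypothesis" trap is easy to quantify. A quick sketch (invented sample sizes; significance via a simple z-test with known variance) runs 20 tests per "study" when every null hypothesis is actually true; by 1 - 0.95^20 ≈ 0.64, roughly two thirds of studies still report at least one "significant finding":

```python
import random

random.seed(2)
N = 50        # participants per group
TESTS = 20    # hypotheses tested per "study"
TRIALS = 1000 # simulated studies

def significant():
    """Compare two IDENTICAL groups with a z-test; any hit is a false positive."""
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    diff = sum(a) / N - sum(b) / N
    se = (2 / N) ** 0.5            # known sd = 1 in both groups
    return abs(diff / se) > 1.96   # i.e. "p < 0.05"

hits = sum(
    any(significant() for _ in range(TESTS))
    for _ in range(TRIALS)
)
# Theory: P(at least one false positive) = 1 - 0.95**20, about 0.64.
print(f"studies reporting >=1 'finding': {hits / TRIALS:.2f}")
```

Run enough comparisons and a p < 0.05 somewhere is near-guaranteed, which is why a single good p-value, without knowing how many tests were run, tells you very little.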