@KX36

As a scientist who does medical tests, I'm amazed that any of the doctors asked got the right answer. Every doctor I work with assumes tests are 100% accurate.

@RedShipsofSpainAgain

Some summary notes:

Sensitivity:  how often the test is correct for those WITH the disease.   
 So, Sensitivity + FNR = 100%.  The false NEGATIVE rate applies here because you're evaluating the samples that were ACTUALLY positive (i.e. WITH the disease), but TESTED negative.

Specificity:  how often the test is correct for those WITHOUT the disease.   
 So, Specificity + FPR = 100%.  The false POSITIVE rate applies here because you're evaluating the samples that were ACTUALLY negative (i.e. WITHOUT the disease), but TESTED positive.
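The two identities above can be checked in a few lines of Python, using a hypothetical confusion matrix (all counts made up for illustration):

```python
# Hypothetical confusion-matrix counts.
tp, fn = 90, 10   # of 100 people WITH the disease
tn, fp = 891, 99  # of 990 people WITHOUT the disease

sensitivity = tp / (tp + fn)  # correct among actual positives
fnr = fn / (tp + fn)          # false negatives among actual positives
specificity = tn / (tn + fp)  # correct among actual negatives
fpr = fp / (tn + fp)          # false positives among actual negatives

assert abs(sensitivity + fnr - 1.0) < 1e-12   # Sensitivity + FNR = 100%
assert abs(specificity + fpr - 1.0) < 1e-12   # Specificity + FPR = 100%
print(sensitivity, specificity)  # 0.9 0.9
```

Note that neither rate involves the other group's counts: sensitivity only looks at the actually-positive row, specificity only at the actually-negative row.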

This video illustrates why you have to keep in mind which metric you're using to evaluate your test.  Accuracy is not a great metric, for the reasons described at 3:48.
For something like breast cancer or Covid-19 virus detection, we actually don't care about the test's accuracy as much as we care about its Sensitivity:  we want the test to have a high probability of detecting the ACTUAL positive cases for cancer/virus.  

We don't necessarily care (as much) about the Specificity of the test.  If we have a low Specificity, it just means the test will give more false alarms.  Having a false alarm is (I think we'd all agree) much better than missing an Actual Positive.


This is also similar to why Accuracy is a misleading metric for situations where your data is heavily imbalanced.  For example, say the airline industry wants a test to classify each passenger as Terrorist vs Non-Terrorist.  Well, we know that the vast, vast, VAST majority of passengers are not terrorists.  Like 99.99% of passengers are not a terrorist.  So if you had a simple "test" or "model" that simply classified every passenger as Non-Terrorist, that test would technically be very accurate: an accuracy of 99.99%.  Sounds great, right?  Everyone agrees it's a great model with such high accuracy.  WRONG!  That test would naively miss Every.  Single.  Actual.  Terrorist.
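The terrorist-screening trap can be demonstrated with a few lines of Python (all counts hypothetical):

```python
# A "model" that labels everyone Non-Terrorist, on heavily imbalanced data.
n_passengers = 1_000_000
n_positive = 100                 # 0.01% of passengers are actual positives

tn = n_passengers - n_positive   # every negative is labeled correctly
fn = n_positive                  # every actual positive is missed

accuracy = tn / n_passengers
sensitivity = 0 / n_positive     # zero true positives detected

print(f"accuracy = {accuracy:.2%}, sensitivity = {sensitivity:.0%}")
# accuracy = 99.99%, sensitivity = 0%
```

High accuracy, zero sensitivity: the metric rewards the model for the class imbalance, not for detecting anything.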

Evaluation metrics matter.

12:18:  Algorithm for doing Bayes Rule:
Step 1:  Convert your Prior Probability to Odds
Step 2:  Calculate your Bayes Factor (likelihood ratio) :=  Sensitivity / FPR
Step 3:  Multiply (posterior odds = prior odds × Bayes factor)
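The three steps above, sketched with illustrative numbers (1% prevalence, 90% sensitivity, 9% false positive rate — roughly the video's style of example):

```python
prior = 0.01                    # 1% prevalence
sensitivity, fpr = 0.90, 0.09

prior_odds = prior / (1 - prior)            # Step 1: odds of 1:99
bayes_factor = sensitivity / fpr            # Step 2: likelihood ratio = 10
posterior_odds = prior_odds * bayes_factor  # Step 3: multiply

# Convert the posterior odds back to a probability.
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 3))  # 0.092
```

Even with a seemingly strong test, a positive result only moves a 1% prior to about a 9% posterior — the point of the whole video.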

@connermcd

As a doctor I'm so happy you're using your platform to get this information out. Let me tell you though... it gets way more complicated! Unfortunately, prevalence estimates aren't always known and are constantly changing (especially in pandemics).

Another thing to consider is the gold standard. If your test looks for breast cancer, you can cut out the lump and look at it under a microscope. Some diseases aren't as easily clarified. For instance, since we don't have a highly accurate, easy test for pancreatic cancer, we rely on imaging, demographics, blood markers, and symptoms (or lack thereof) as multiple things that form a conglomerate test to increase our Bayes factor. Despite all these things we can't always get a great prediction on whether that scar in your bile duct is cancer or just a residual scar from pancreatitis you had 10 years ago. So we offer the patient a huge surgery to remove the head of their pancreas and duodenum, only to find that it wasn't cancer. You can imagine the patient is happy it's not cancer but not so happy they don't have half their pancreas and have abdominal pain and maybe diabetes. Medicine is a tricky thing.

Another tricky thing is operator error. Some tests depend on the skill of the lab tech, radiologist, or surgeon. The complexity of the human body and the uniqueness of each individual also play a role. Your test may be false positive in a particular patient 100% of the time because they have some strange protein mutation. It's tough!

@afarro

@2:10 Positive Predictive Value (PPV) := P(Cancer | +), which follows from simple counting (no need to use Bayes' rule)

    PPV = TP/(TP + FP) = (TPR·r) / (TPR·r + FPR·(1 − r))

Where:
TP := True Positive cases 
FP := False Positive cases
TPR := True Positive Rate  := Sensitivity
FPR := False Positive Rate := 1 - Specificity := 1 - TNR
r := Prior prevalence rate
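Both forms of the PPV formula can be checked against each other in Python, with hypothetical values (prevalence 1%, sensitivity 90%, FPR 9%):

```python
r, tpr, fpr = 0.01, 0.90, 0.09

# Counting form, per 10,000 tested people:
n = 10_000
tp = tpr * (r * n)        # expected true positives: 90
fp = fpr * ((1 - r) * n)  # expected false positives: 891
ppv_counts = tp / (tp + fp)

# Rate form, no population size needed:
ppv_rates = (tpr * r) / (tpr * r + fpr * (1 - r))

assert abs(ppv_counts - ppv_rates) < 1e-12
print(round(ppv_rates, 3))  # 0.092
```

The population size n cancels out, which is why the rate form needs only the prevalence and the two error rates.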

@Qsie

Question: in this context, are medical and diagnostic tests the same?

@Jebusankel

I'm going to apply this to the world of dating. Everything I learn about a potential match updates my prior about our compatibility. I call this Bae's rule.

@martinezjw1

I'm a physician and I'll admit that I always knew these facts (i.e. a highly sensitive and specific test does not necessarily mean a high predictive value; the prevalence of the disease needs to be taken into account), and yet I always ignore what I (vaguely) know to be true and just assume that high sensitivity/specificity means the test has a high positive predictive value.  
I can tell you that a ton of physicians don't even bother to use these concepts at all (obviously that highly depends on the institution and many other factors).
Thanks for explaining it well!!  It was a nice refresher of what I learned in med school...

@egillandersson1780

As doctors, we use this every day, often without thinking about the mathematical foundations.
Unfortunately, very few diagnostic tests or exams are indeed both sensitive AND specific.
So, if we think a diagnosis is unlikely (based on prevalence, physical exam, previous tests, ...), we first choose the more sensitive test in order to rule out that diagnosis.
On the contrary, if a diagnosis is very probable, we first choose a specific test to confirm it.
This is not always easy for technical exams, as we can often only choose between them (if there are several!) without changing their sensitivity and specificity. But for biological tests, we can adjust our cutoff values to improve either sensitivity or specificity.
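The cutoff-adjustment point can be illustrated with a toy biomarker example (all values hypothetical): lowering the cutoff catches more diseased patients but flags more healthy ones, and vice versa.

```python
# Hypothetical biomarker levels for two small groups.
diseased = [3.1, 4.0, 4.5, 5.2, 6.0, 7.3]  # patients WITH the disease
healthy  = [1.0, 1.8, 2.2, 2.9, 3.5, 4.2]  # patients WITHOUT the disease

def sens_spec(cutoff):
    """Call the test positive when the biomarker is at or above the cutoff."""
    sens = sum(x >= cutoff for x in diseased) / len(diseased)
    spec = sum(x < cutoff for x in healthy) / len(healthy)
    return sens, spec

for cutoff in (2.5, 3.5, 4.5):
    sens, spec = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Sweeping the cutoff like this traces out the test's ROC curve: sensitivity and specificity trade off against each other, and the "right" cutoff depends on whether a missed case or a false alarm is costlier.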

@Asdfgfdmn

I hope Grant read this 😇

I am an MD and Associate Professor of internal medicine. I teach medical students, residents, and fellows. I used to be a program director for a fellowship at a prestigious American university. This is a recurring lesson I teach. The example I usually use is the sensitivity and specificity of the DNAJB9 kidney biopsy stain for a disease called fibrillary GN, and I do the exact walkthrough with my students. I never get bored when I see how surprised they are by the final conclusion, which is, by the way: you can't use a test willy-nilly without considering the pre-test probability (you are referring to it here as the "prior"). I also tell my students that you can increase the prevalence of the disease by applying the test to the right population (signs and symptoms). I am thrilled that Grant validated this with this awesome video.

@D4n21

Medical Student here, THIS IS GOLD. THANK YOU, this is going to help with my boards and future patients

@johnchessant3012

This is awesome. I have now updated my odds of correctly answering a Bayesian probability question.

@kingbradley3402

This is actual gold content being uploaded for free. It's like I'm unlearning what I learnt in all my classes and seeing Maths in a whole new way. In an interview I was once asked about a concept that is often confused but makes sense in general, and I spoke about Bayes' Theorem. This has given me something more to talk about. Quite possibly the best educational channel on YouTube.

@RasperHelpdesk

Certainly drives home the point of why running a test twice after getting a positive result is so important when possible.

@tejing2001

This perspective makes the 'update' concept so much cleaner. I've long believed that until something is utterly obvious to you, you still don't truly understand it. I just got much closer to understanding bayesian updating. I could already do it, and even explain it, but it wasn't the same. True understanding is precious. Thank you for what you do, and as always, I look forward to the next video.

@DJNHmusic

As a young doctor, thank you so much. I understood the distinction between the different accuracy parameters and PPV beforehand, but this has fundamentally changed how I view testing. This is a very useful thing to understand as a medical professional.

@matheussaldanharodriguesdu1850

I was just feeling bad watching your videos when I should have been studying medicine. Now you've given me the best of both worlds.

@aytide5179

Grant never misses. He's always brilliant

@Osamahtahir

As a medical student who's done this exact thing in a FAR more complicated way, thank you! In medical terms, the post-test odds = pre-test odds (the prior) * the positive likelihood ratio (the Bayes factor)

This is an essential video for any medical professional to watch and understand! I'll be sending this to my instructor because you did such a great job at explaining an otherwise very confusing topic.
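The medical formulation in this comment can be sketched with illustrative numbers (pretest probability 1%, sensitivity 90%, specificity 91% — all hypothetical):

```python
pretest_prob = 0.01
sensitivity, specificity = 0.90, 0.91

lr_plus = sensitivity / (1 - specificity)   # positive likelihood ratio
pretest_odds = pretest_prob / (1 - pretest_prob)

# Post-test odds = pre-test odds * positive likelihood ratio.
posttest_odds = pretest_odds * lr_plus
posttest_prob = posttest_odds / (1 + posttest_odds)

print(round(lr_plus, 1), round(posttest_prob, 3))  # 10.0 0.092
```

The positive likelihood ratio here is exactly the video's Bayes factor, since 1 − specificity is the false positive rate.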

@obnubilacion.9516

You won me over with the short for this video! I'm a psychology student whose knowledge of statistics is minimal, but I needed someone to make me understand these topics. Amazing!

@hjfreyer

This is the first presentation of Bayes' theorem that didn't leave me feeling both like it was trivial and like it was inscrutable magic.