Happy to have you chime in.
For the record, and I've said this to him, I've never doubted that he was right. I just didn't understand what he was trying to express here:
So if 5% of the population have had it then using 98.5% and 99.5%
figures we get:-
-ve result would be 99.973% accurate
+ve result would be 91.284% accurate

If 1% of the population have had it then:-
-ve would be 99.995% accurate
+ve would be 66.779% accurate

If 10% of the population have had it then:-
-ve would be 99.944% accurate
+ve would be 95.673% accurate

I've come to the conclusion that this is a claim about the accuracy of the test's results in the general public when deployed, expressed as a count (albeit presented as a percentage). I noted this a few times yesterday, but it was never acknowledged that this was in fact the source of the misunderstanding (or I missed it as I slowly had more beers/was making dinner). I.e.:
"Just to reiterate, I do understand that the overall number of accurate results will depend on the prevalence of the disease in the population. But the accuracy of the test, in my understanding, should be independent of this."
This may all be down to clumsy language on my part, though. What I was trying to distinguish was the intrinsic accuracy of the test as the pharma company states it on the tin (presumably something akin to an F1 score) from the accuracy achieved when the test is deployed in a population, which is influenced by other factors (prevalence being one of them) and which I think is what greenbank was expressing. The latter does not affect the former.
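To make the distinction concrete, here is a minimal sketch of the Bayes' theorem calculation that connects the two: the on-the-tin figures (sensitivity and specificity) stay fixed, while the accuracy of an individual +ve or -ve result (PPV/NPV) shifts with prevalence. As an aside, the quoted numbers appear to come out exactly if both sensitivity and specificity are taken as 99.5% (that pairing is my assumption, not something stated in the thread):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Given the test's intrinsic accuracy and the disease prevalence,
    return the probability that a +ve (PPV) or -ve (NPV) result is correct."""
    true_pos = prevalence * sensitivity            # infected, test +ve
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy, test +ve
    true_neg = (1 - prevalence) * specificity      # healthy, test -ve
    false_neg = prevalence * (1 - sensitivity)     # infected, test -ve
    ppv = true_pos / (true_pos + false_pos)        # +ve result accuracy
    npv = true_neg / (true_neg + false_neg)        # -ve result accuracy
    return ppv, npv

# Sensitivity and specificity both 99.5% (assumed) reproduces the
# quoted figures to within rounding:
for prevalence in (0.01, 0.05, 0.10):
    ppv, npv = predictive_values(0.995, 0.995, prevalence)
    print(f"prevalence {prevalence:.0%}: +ve {ppv:.3%}, -ve {npv:.3%}")
```

The point the sketch illustrates: `sensitivity` and `specificity` never change across the loop, yet the +ve result accuracy swings from about 67% to 96% as prevalence moves from 1% to 10%.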
Assuming I understand everyone now.
Greenbank is right, I think.
I'll have to read it all, but the false positives and false negatives are the whole problem: a test can be sensitive but not specific, etc. etc.