• Greenbank is right, I think.
    I'll have to read it all, but the false positive/false negative trade-off is the whole problem — i.e. if the test is sensitive but not specific, etc.

  • Happy to have you chime in.

    For the record, and I've said this to him, I've never doubted him being right about something. I just didn't understand what he was trying to express here:

    So if 5% of the population have had it then using 98.5% and 99.5%
    figures we get:-
    -ve result would be 99.973% accurate
    +ve result would be 91.284% accurate

    If 1% of the population have had it then:-
    -ve would be 99.995% accurate
    +ve would be 66.779% accurate

    If 10% of the population have had it then:-
    -ve would be 99.944% accurate
    +ve would be 95.673% accurate
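    For what it's worth, the quoted figures can be reproduced with Bayes' theorem. A minimal Python sketch — assuming a sensitivity and specificity of 99.5% each, which is the combination that appears to reproduce the quoted numbers (the quote's "98.5% and 99.5%" doesn't quite match them):

    ```python
    def ppv(prevalence, sensitivity, specificity):
        """Positive predictive value: P(truly positive | test positive)."""
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    def npv(prevalence, sensitivity, specificity):
        """Negative predictive value: P(truly negative | test negative)."""
        true_neg = (1 - prevalence) * specificity
        false_neg = prevalence * (1 - sensitivity)
        return true_neg / (true_neg + false_neg)

    for prev in (0.01, 0.05, 0.10):
        print(f"prevalence {prev:.0%}: "
              f"+ve accurate {ppv(prev, 0.995, 0.995):.3%}, "
              f"-ve accurate {npv(prev, 0.995, 0.995):.3%}")
    ```

    This gives 66.779%, 91.284% and 95.673% for a positive result at 1%, 5% and 10% prevalence, matching the quote, while the negative-result figures agree to rounding.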

    I've come to the conclusion that this is a claim about the accuracy of the test's results in the general public once deployed, expressed as a proportion of results rather than as a property of the test itself. I noted this a few times yesterday, but it was never acknowledged that this was in fact the source of the misunderstanding (or I missed it as I slowly had more beers/was making dinner). I.e.:

    "Just to reiterate, I do understand that the overall number of accurate results will depend on the prevalence of the disease in the population. But the accuracy of the test, in my understanding, should be independent of this."

    This may all be down to clumsy language on my part, though. What I was trying to express was the difference between the accuracy of the test as a pharma company expresses it on the tin (presumably something akin to an F1 score) and the accuracy (which I think greenbank was expressing) achieved in the population, as influenced by other factors (prevalence being one of them). The latter does not impact the former.

    Assuming I understand everyone now.

    This may all be down to clumsy language on my part, though. What I was trying to express was the difference between the accuracy of the test as a pharma company expresses it on the tin (presumably something akin to an F1 score) and the accuracy (which I think greenbank was expressing) achieved in the population, as influenced by other factors (prevalence being one of them). The latter does not impact the former.

    No, I don't think you're understanding what I've said. It doesn't have to involve the general population at all, the same problems occur when you use the same test group you used to measure the sensitivity/specificity.

    Imagine you have a test group of 10,000 people. You know everyone's exact status through other testing. 9,500 (95%) are negative. 500 (5%) are positive.

    Imagine the test has a specificity and a sensitivity of 95% (i.e. 5% of the truly negative people get a false positive, and 5% of the truly positive people get a false negative).

    Now apply your test to all 10,000 people in your test group.

    Consider just the positive results, where could these have come from?

    Firstly, there are the 95% of the 500 truly positive people who test positive. That's 475 people. (The other 5% get a false negative result.)

    The other positive results will be the false positives from the people who are truly negative. How many of them will there be?

    That'll be 5% of the 9,500 people who are truly negative. That's another 475 people.

    How accurate is a positive result if it's only correct for 475 out of 950 people who test positive? 50%.

    So even if you apply your test to the same test group you used to measure its sensitivity and specificity, you find that a positive result is not as accurate as you expected.

    It has nothing to do with the general public: the numbers above come from the same people who were used to calibrate the test. That said, the general population prevalence can skew the numbers even further if it differs from the prevalence in the test group that was used to measure sensitivity and specificity.
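    The worked example above can be checked in a few lines of Python (the counts are exactly as described; `round` is used so the counts come out as whole people):

    ```python
    # Test group of 10,000: 500 truly positive, 9,500 truly negative.
    truly_positive = 500
    truly_negative = 9_500

    sensitivity = 0.95  # 95% of true positives test positive
    specificity = 0.95  # 95% of true negatives test negative

    true_positives = round(truly_positive * sensitivity)         # 475
    false_positives = round(truly_negative * (1 - specificity))  # 475

    # How accurate is a positive result?
    ppv = true_positives / (true_positives + false_positives)
    print(ppv)  # 0.5 -- a positive result is only right half the time
    ```

    The key point is visible in the arithmetic: because the truly negative group is 19 times larger, its small 5% error rate produces just as many positives as the truly positive group does.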
