• The sensitivity and specificity aren't changing. The derived accuracy of a result changes depending on how prevalent the condition is amongst the tested population, as the relative numbers of people with and without the condition shift.

    Ah, sorry. So the raw number of false positives/false negatives will shift depending on how many true negatives/true positives there are. Okay - I mistook your "-ve" and "+ve" to be analogues for specificity/sensitivity.

    Both of these factors affect the eventual accuracy.

    Again, sorry if I'm misunderstanding, but they don't affect accuracy, do they? The count of accurate/inaccurate results will differ depending on how much scope there is for false positives versus false negatives, but the accuracy of the test on any individual will remain the same.

    Your previous post doesn't demystify this for me because it also looks like you seem to be insinuating the accuracy of a test for an individual depends on the prevalence of the thing being tested in the population.
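    To put rough numbers on the point about raw counts: here is a minimal sketch, using the 2% false-negative and 3% false-positive rates from the examples in the reply below. The `confusion_counts` helper is purely illustrative, not anything from an actual test.

    ```python
    # Fixed test characteristics (illustrative figures only).
    SENSITIVITY = 0.98  # P(test positive | has the condition)
    SPECIFICITY = 0.97  # P(test negative | doesn't have the condition)

    def confusion_counts(population, prevalence):
        """Expected outcome counts for a given population mix."""
        positives = population * prevalence   # people who have the condition
        negatives = population - positives    # people who don't
        return {
            "true_positives":  positives * SENSITIVITY,
            "false_negatives": positives * (1 - SENSITIVITY),
            "true_negatives":  negatives * SPECIFICITY,
            "false_positives": negatives * (1 - SPECIFICITY),
        }

    # Same test, different population mixes -> different raw error counts.
    print(confusion_counts(10_000, 0.50))  # ~100 false negatives, ~150 false positives
    print(confusion_counts(10_000, 0.01))  # ~2 false negatives, ~297 false positives
    ```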

  • Your previous post doesn't demystify this for me because it also looks like you seem to be insinuating the accuracy of a test for an individual depends on the prevalence of the thing being tested in the population.

    In order to combine two accuracy figures (sensitivity and specificity) into a single 'test accuracy' figure, it's necessary to know at what ratio those individual accuracies need to be combined, and that depends on the prevalence of the condition within the population being tested.

    If 100% of people have the virus then a test that gives 2% false negatives is only going to be 98% accurate.

    If 100% of the people don't have the virus then a test that gives 3% false positives is only going to be 97% accurate.

    Those are extreme cases, but I hope they show how a single test with two different accuracy figures can produce different overall accuracy results if the testing population is made up differently.
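    To make that combination explicit: the overall accuracy is a prevalence-weighted average of sensitivity and specificity. A minimal sketch, using the 2% false-negative and 3% false-positive figures from the examples above (`overall_accuracy` is just an illustrative name):

    ```python
    def overall_accuracy(sensitivity, specificity, prevalence):
        """Fraction of all results that are correct, across the whole tested population."""
        return sensitivity * prevalence + specificity * (1 - prevalence)

    sens, spec = 0.98, 0.97  # 2% false negatives, 3% false positives

    print(overall_accuracy(sens, spec, 1.0))  # everyone has the virus -> 0.98
    print(overall_accuracy(sens, spec, 0.0))  # no one has the virus   -> 0.97
    print(overall_accuracy(sens, spec, 0.1))  # 10% prevalence         -> ~0.971
    ```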
