I'm a school governor, and we've just had some guidance come through from the council: if someone has the 'classic symptoms' of Covid (high temp, continuous cough, lack of smell), they should self-isolate for 10 days (and household for 14 days), even if they get a negative test result. Their rationale seems to be that there is a 1 in 5 chance you're still positive even if the test comes back negative. I thought the false negative rate was much lower than that? Or is this one of those counter-intuitive Bayesian result things?
-
The figures look off anyway, and the counter-intuitive Bayesian result thing concerns positive test results, which is where I think they're getting confused.
The tests are supposed to have roughly 98% sensitivity and 98% specificity.
Prevalence is currently estimated at 8%.
So the chances of a negative test result are:-
True negative:- 92% * 98% = 90.16%
False negative:- 8% * 2% = 0.16%

So with these figures a negative test is likely to be correct > 99.8% of the time (90.16% / (90.16% + 0.16%) ≈ 99.82%).
The problem is with positive results, doing those sums:-
True positive:- 8% * 98% = 7.84%
False positive:- 92% * 2% = 1.84%

So a positive result is only likely to be correct about 81% of the time (7.84% / (7.84% + 1.84%)). There's your 1 in 5: they've just applied it to the wrong thing, or assumed that the 1 in 5 chance of an incorrect positive test also applies to negative tests.
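If it helps, here's a quick Python sketch of those sums (my own illustration, not anything from the council or ONS), using the same assumed figures of 98% sensitivity, 98% specificity and 8% prevalence:

```python
# Sketch of the sums above. The 98% / 98% / 8% figures are the same
# assumptions used in the post, not official numbers.

def predictive_values(prevalence, sensitivity, specificity):
    """Return (chance a positive result is right, chance a negative result is right)."""
    true_pos  = prevalence * sensitivity              # infected, tests positive
    false_pos = (1 - prevalence) * (1 - specificity)  # not infected, tests positive
    true_neg  = (1 - prevalence) * specificity        # not infected, tests negative
    false_neg = prevalence * (1 - sensitivity)        # infected, tests negative

    ppv = true_pos / (true_pos + false_pos)  # positive predictive value
    npv = true_neg / (true_neg + false_neg)  # negative predictive value
    return ppv, npv

ppv, npv = predictive_values(prevalence=0.08, sensitivity=0.98, specificity=0.98)
print(f"Positive result correct: {ppv:.1%}")  # ~81.0%
print(f"Negative result correct: {npv:.2%}")  # ~99.82%
```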
[EDIT] The above figures will be wrong, as the 8% prevalence assumption is way out. That's the ONS's figure for how many people have HAD the virus, which is not the same as the percentage of the population who currently have it. But it doesn't change my argument much, as using a lower prevalence value only increases the odds that a negative test is actually correct.
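To illustrate that last point, re-running the sketch above with a much lower prevalence (1% here, purely as an example figure) makes a negative result more trustworthy, not less:

```python
_, npv_low = predictive_values(prevalence=0.01, sensitivity=0.98, specificity=0.98)
print(f"Negative result correct at 1% prevalence: {npv_low:.2%}")  # ~99.98%
```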