-
• #12902
As mentioned previously, this test just tells you whether you have antibodies, which shows that you have had Covid. It doesn't prove immunity, or that you aren't still carrying the virus and able to pass it on to others. These are all things that they don't know yet.
But if you have had Covid, then at least you know and can make more informed decisions.
-
• #12903
Thanks for the maths!
50/50 if you tested positive for antibodies, w00h... that's an expensive coin flip.
-
• #12904
It has a 100% sensitivity and 97.5% specificity.
That's what they claim; it later says:-
In the laboratory's study, the test had a sensitivity of 98.5%, which means that if 1,000 people who had previously been infected with coronavirus took the test, 15 of them would be told that they hadn't had coronavirus when they had (a false negative result). It had a specificity of 99.5%, which means that if 1,000 people who hadn't had the virus took the test, 5 of them would be told they had been infected when they hadn't (a false positive result).
So if 5% of the population have had it then using the 98.5% and 99.5% figures we get:-
-ve result would be 99.921% accurate
+ve result would be 91.204% accurate
If 1% of the population have had it then:-
-ve would be 99.985% accurate
+ve would be 66.554% accurate
If 10% of the population have had it then:-
-ve would be 99.833% accurate
+ve would be 95.631% accurate
(Caveat my fat fingers mistyping something into a spreadsheet.)
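(For anyone who wants to sanity-check the spreadsheet, here's a rough Python sketch of the same Bayes calculation. The function names ppv and npv are just my own labels for the accuracy of a positive and a negative result; the 0.985/0.995 inputs are the Medichecks figures quoted above.)

def ppv(prevalence, sensitivity, specificity):
    # Chance that a positive result is correct (Bayes' theorem)
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

def npv(prevalence, sensitivity, specificity):
    # Chance that a negative result is correct
    true_neg = (1 - prevalence) * specificity
    false_neg = prevalence * (1 - sensitivity)
    return true_neg / (true_neg + false_neg)

for prevalence in (0.01, 0.05, 0.10):
    print(f"{prevalence:.0%} prevalence: "
          f"-ve result {npv(prevalence, 0.985, 0.995):.3%} accurate, "
          f"+ve result {ppv(prevalence, 0.985, 0.995):.3%} accurate")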
-
• #12905
So if 5% of the population have had it then using the 98.5% and 99.5% figures we get:-
-ve result would be 99.921% accurate
+ve result would be 91.204% accurate
If 1% of the population have had it then:-
-ve would be 99.985% accurate
+ve would be 66.554% accurate
If 10% of the population have had it then:-
-ve would be 99.833% accurate
+ve would be 95.631% accurate
What are you doing to get a shift in sensitivity/specificity based on an outside factor?
-
• #12906
who produces this "test"?
Jennifer Arcuri
Medichecks are saying that their test has a sensitivity of 98.5% and a specificity of 99.5%. That's based on their own testing; the manufacturer states a sensitivity of 100%.
They don't name who supplies their test, but based on that and the packaging looking identical apart from the branding it could well be Abbott.
-
• #12907
Bayesian statistics as described on the previous page. Interesting stuff.
-
• #12908
The sensitivity and specificity aren't changing. The derived accuracy of a result changes depending on how prevalent the condition is amongst the tested population, as the relative proportions of true and false results shift.
The less prevalent the condition is in the population, the more false positives you'll get relative to true positives.
The more prevalent the condition is in the population, the more false negatives you'll get relative to true negatives.
Both of these factors affect the eventual accuracy. See: https://www.lfgss.com/comments/15296147/
-
• #12909
The sensitivity and specificity aren't changing. The derived accuracy of a result changes depending on how prevalent the condition is amongst the tested population, as the relative proportions of true and false results shift.
Ah, sorry. So the raw number of false positives/false negatives will shift depending on how many true negatives/true positives there are. Okay - I mistook your "-ve" and "+ve" to be analogues for specificity/sensitivity.
Both of these factors affect the eventual accuracy.
Again, sorry if I'm misunderstanding, but they don't affect accuracy, do they? The number of accurate/inaccurate results as a count will be different depending on shifts in the potential for false positives/false negatives. The accuracy of the test on any individual will remain the same.
Your previous post doesn't demystify this for me because it also looks like you're insinuating that the accuracy of a test for an individual depends on the prevalence of the thing being tested in the population.
-
• #12910
Your previous post doesn't demystify this for me because it also looks like you're insinuating that the accuracy of a test for an individual depends on the prevalence of the thing being tested in the population.
In order to combine two accuracy figures (sensitivity and specificity) into a single 'test accuracy' figure it's necessary to know at what ratio those individual accuracies need to be combined, and that depends on the prevalence of the condition within the population being tested.
If 100% of people have the virus then a test that gives 2% false negatives is only going to be 98% accurate.
If 100% of the people don't have the virus then a test that gives 3% false positives is only going to be 97% accurate.
That's an extreme but I hope it shows how a single test with two different accuracy figures can produce different overall accuracy results if the testing population is made up differently.
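(To put the same point as a formula, where p is just my shorthand for the proportion of the tested group who actually have the thing:
overall accuracy = p × sensitivity + (1 − p) × specificity
At p = 1 that's 1 × 0.98 + 0 × 0.97 = 98%, at p = 0 it's 0 × 0.98 + 1 × 0.97 = 97%, and anywhere in between you get a prevalence-weighted blend of the two.)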
-
• #12911
I'm really sorry if I'm being obtuse here, but maybe this is a discipline thing and I don't get how things are done in medicine (or it's me being a moron), but here:
In order to combine two accuracy figures (sensitivity and specificity) into a single 'test accuracy' figure it's necessary to know at what ratio those individual accuracies need to be combined, and that depends on the prevalence of the condition within the population being tested.
you seem to be describing an F1 score. This would be calculated using test data - not population data - for which we know the features of the test population. Otherwise we wouldn't be able to give any measure of precision/recall (or sensitivity/specificity). So false/true positives/negatives can be reported with 100% accuracy. A test's accuracy will then be calculated based on how well it achieves results matching reality in every case.
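(A quick Python sketch of what I mean, with made-up confusion-matrix counts standing in for a labelled validation set where the ground truth is known:)

# Hypothetical counts from a labelled validation set
tp, fp, tn, fn = 475, 25, 9475, 25

precision = tp / (tp + fp)                      # like PPV, measured on the sample
recall = tp / (tp + fn)                         # i.e. sensitivity
specificity = tn / (tn + fp)
f1 = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + fp + tn + fn)      # overall proportion correct on the sample

print(precision, recall, specificity, f1, accuracy)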
I assume in medical tests they know who does/does not have the virus via other methods. The test is then measured against these. Therefore, the accuracy of the results is related to this information, not any information about the general public.
Just to reiterate, I do understand that the overall number of accurate results will depend on the prevalence of the disease in the population. But the accuracy of the test, in my understanding, should be independent of this.
-
• #12912
I might have missed the chat amongst all the stats but did anyone watch BBC Horizon special a couple nights ago? I thought it was quite good (serious, clear, accessible, range of topics) but haven't watched anything comparable.
-
• #12913
I assume in medical tests they know who does/does not have the virus via other methods. The test is then measured against these. Therefore, the accuracy of the results is related to this information, not any information about the general public.
If you have a test that has a specificity of one value (97%) and a sensitivity of another value (98%) then it will have a different accuracy if you give it all expected positive test inputs than if you give it all expected negative test inputs. Therefore the accuracy of a test depends on the prevalence of the disease in the population.
This page may help explain it: https://www.medcalc.org/calc/diagnostic_test.php
That page goes further and does 95% confidence intervals.
Fundamentally you're trying to combine two accuracy figures (one for getting it right with a positive outcome and one for getting it right with a negative outcome) into a single accuracy figure for the test. You can only do this if you know the prevalence population wide.
If you don't look at it population wide then you can't combine the two accuracy figures (sensitivity and specificity) into a single "accuracy" figure.
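(If it helps to see it with numbers, here's a rough Python sketch sweeping prevalence for the 98%/97% figures used above:)

sensitivity, specificity = 0.98, 0.97

for prevalence in (0.0, 0.01, 0.05, 0.25, 0.5, 1.0):
    # The single 'accuracy' figure is a prevalence-weighted blend of the two
    accuracy = prevalence * sensitivity + (1 - prevalence) * specificity
    print(f"prevalence {prevalence:.0%}: overall accuracy {accuracy:.2%}")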
-
• #12914
Fundamentally you're trying to combine two accuracy figures (one for getting it right with a positive outcome and one for getting it right with a negative outcome) into a single accuracy figure for the test. You can only do this if you know the prevalence population wide.
If you don't look at it population wide then you can't combine the two accuracy figures (sensitivity and specificity) into a single "accuracy" figure.
Which is why you have a test/sample group. That's the population for which you know the number of false positives, false negatives, true positives, and true negatives. That's the population from which the accuracy of a test can be calculated.
Extrapolation to the world becomes more complicated. The prevalence in the population will impact raw results. But the accuracy at an individual level will remain the same (within reason/whatever p value).
As far as I understand! Not a doctor! Etc!
-
• #12915
Which is why you have a test/sample group. That's the population for which you know the number of false positives, false negatives, true positives, and true negatives. That's the population from which the accuracy of a test can be calculated.
OK, so if you have 10,000 people and you know exactly 500 of them are positive. That's 5%. So 95% are negative.
Now imagine you have a test where sensitivity and specificity are 95%.
When you test the 9500 people who are negative how many -ves and how many +ves do you expect to get?
When you test the 500 people who are positive how many +ves and how many -ves do you expect to get?
What is the accuracy for those who were negative?
What is the accuracy for those who were positive?
What is the accuracy for those that received a negative result?
What is the accuracy for those that received a positive result?
What is the accuracy of the test?
(Bonus question: Why aren't all of the last 3 answers 95%?)
-
• #12916
I'm not sure why you're getting snarky. I'm really not having a go. I've honestly been asking why the results you're getting are what they are (how a test that's n% accurate for the person being tested can drop down to 66% or whatever based on how many people have the sickness, independently of the test being administered).
I'll do the math and see what happens. Maybe. We'll see how much I want to ignore work tomorrow.
-
• #12917
Not being snarky, sorry if I'm coming over that way.
Maybe I'm just still in teacher mode from home schooling my 10yo, who switches instantly between being gripped by something and utterly uninterested in everything.
-
• #12918
But the accuracy at an individual level will remain the same
The "accuracy" of the test remains the same but the accuracy of the result depends on your likelihood of having the thing in the first place.
This is because there are really 4 results: true positive, false positive, true negative, false negative. The proportion of positive results that are false positives becomes relatively more or less significant compared with the proportion that are true positives as prevalence changes.
If I, a biological male, take a pregnancy test with a 5% false positive rate, that doesn't mean that my chances of actually being pregnant are 5% - they are in effect 0% and all positive results are false positives. But the test is still correct 95% of the time.
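(To put rough numbers on that: my prior chance of being pregnant is essentially 0, so the chance I'm actually pregnant given a positive result is roughly (0 × sensitivity) / (0 × sensitivity + 1 × 0.05) = 0, even though the test is right 95% of the time across everyone it's pointed at.)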
Edit: I don't think my comment is going to help you, I'm just restating what you've already said.
-
• #12919
Absolutely. Agree with all of that.
But the part I'm trying to decipher follows from this: The accuracy of a pregnancy test does not change depending on how many women are currently pregnant.
What does change is the raw number of positive and negative results due to there being more people who can get false negatives. But any given man or woman will still have the same likelihood of getting a particular result.
-
• #12920
The accuracy of a pregnancy test does not change depending on how many women are currently pregnant.
It does if you compare the results of the test applied over a population with the actual number of positives and negatives in that population, which is how you would define the accuracy of the results generally. The accuracy being referred to is, by definition, the accuracy of the results from a set group of people, not merely an individual. Because without referring to a group, you can't work out the actual probability of you being positive to whatever's being tested.
-
• #12921
I can save you the hassle of doing the maths:-
OK, so if you have 10,000 people and you know exactly 500 of them are positive. That's 5%. So 95% are negative.
Now imagine you have a test where sensitivity and specificity are 95%.
When you test the 9500 people who are negative how many -ves and how many +ves do you expect to get?
With a 95% specificity (true negative rate), you'd expect 9500 * 0.95 = 9025 to correctly test negative. You'd also expect 9500 * 0.05 = 475 to falsely test positive.
When you test the 500 people who are positive how many +ves and how many -ves do you expect to get?
-ve is 500 people * 0.05 (false negatives) = 25 people who are positive to incorrectly test negative
+ve is 500 people * 0.95 (true positives) = 475 people who are positive to correctly test positive
What is the accuracy for those who are negative? (Fixed the wording slightly.)
Simple one this: 95% of the people who are negative tested negative. So the accuracy for the people who are known to be negative is 95%.
What is the accuracy for those who are positive?
Likewise, 95% of the people who are positive tested positive. So the accuracy for the people who are known to be positive is 95%.
What is the accuracy for those that received a negative result?
9025 people who are negative correctly tested negative.
25 people who are positive tested negative (false negatives).
So 9025/9050 who got a negative result got the correct result = 99.72% (2dp) for them.
What is the accuracy for those that received a positive result?
475 people who are positive correctly tested positive
475 people who are negative tested positive (false positives).
So 475/950 who got a positive result got the correct result = 50% accuracy for them.
What is the accuracy of the test?
If you measure it as how many people got the right result, it's 95%.
But, as I've shown, that doesn't mean that 95% of positive results are correct, so saying a positive result is 95% accurate is false. Only 50% of positive results are likely to be correct, but because there are many times as many negatives as positives in the population (19 times in this case), that individually low accuracy gets drowned out in a single overall 'accuracy' stat.
(Nowhere did I give a single figure for the accuracy of a test, only the accuracy of a test result of a specific outcome.)
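(And for anyone who'd rather check it in code than a spreadsheet, a rough Python version of the same worked example; the variable names are just mine:)

population = 10_000
prevalence = 0.05
sensitivity = specificity = 0.95

positives = population * prevalence             # 500 people who have it
negatives = population - positives              # 9500 who don't

true_pos = positives * sensitivity              # 475 correctly test positive
false_neg = positives * (1 - sensitivity)       # 25 wrongly test negative
true_neg = negatives * specificity              # 9025 correctly test negative
false_pos = negatives * (1 - specificity)       # 475 wrongly test positive

print("accuracy of a negative result:", true_neg / (true_neg + false_neg))   # ~0.9972
print("accuracy of a positive result:", true_pos / (true_pos + false_pos))   # ~0.5
print("overall proportion correct:", (true_pos + true_neg) / population)     # ~0.95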
-
• #12923
It does if you compare the results of the test applied over a population with the actual number of positives and negatives in that population
Which I've said a few times, including in the post you quote. "What does change is the raw number of positive and negative results due to there being more people who can get false negatives."
, which is how you would define the accuracy of the results generally.
If you mean the sample population, sure. Not the general population. Otherwise how would you ever be able to judge the accuracy of a pregnancy test? It would change by the minute. (Well, not vastly. It would change based on geographical location, though.)
The accuracy being referred to is, by definition, the accuracy of the results from a set group of people, not merely an individual. Because without referring to a group, you can't work out the actual probability of you being positive to whatever's being tested.
Yes. I've said this.
Am I the one who is being unclear?
-
• #12924
If I am understanding you correctly, I think the reason it works out is that we're talking about probabilities/proportions, which will always sum to 1. By adjusting the true incidence rate you simply move the proportions of false positives/negatives around, leaving the same overall 95% accuracy figure.
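(Worth spelling out why it cancels so neatly here, I think: it's only because sensitivity and specificity happen to be equal in this example. The overall figure is the prevalence-weighted blend
p × 0.95 + (1 − p) × 0.95 = 0.95, whatever p is,
so shifting the incidence rate can't move it. With unequal figures, e.g. 98.5% and 99.5%, the overall number would drift with prevalence.)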
-
• #12925
Which I've said a few times, including in the post you quote. "What does change is the raw number of positive and negative results due to there being more people who can get false negatives."
As I've shown above, the numbers derived are not what was expected even when you use the same sample set that was used to measure the sensitivity and specificity of the tests.
The non-intuitive bit is because of the question being asked, which is "How accurate is the test for a specific result?" and not "How accurate is the test overall?"