
[–]DontWorryImADr 3 points (4 children)

Scientist with experience in healthcare on assay testing platforms, and in determining these exact values for test methods. Long story short? You design your test to minimize false-negative results, and design a testing methodology where samples that test negative the first time are accepted while samples that test positive get re-tested. This design limits repeat testing, since only very few tests will be positive (few people actually have the disease, and false-positive rates aren't that high). You then monitor the Initial Reactive Rate (how often samples come back positive on the first test) versus the Repeat Reactive Rate (how often samples test positive on subsequent tests). The test itself, in large-scale use, thus becomes an effective way to monitor accuracy despite the inherent limitations and errors possible with any individual test.

A typical real-world design is based on the realities of population scale, the likelihood of having the tested-for issue (in this case, COVID), and the severity of the different kinds of incorrect test results. After all, a false positive is not the same danger as a false negative.

So, let’s call your actual status your “disease state” and your test result your “test state”. Your disease state can be positive or negative. Your test state can be positive or negative. So any individual test in a population can fall into one of four categories:

  • True Positive: a positive disease state and a positive test result, good!
  • True Negative: a negative disease state and a negative test result, also good!
  • False Positive: a negative disease state, but the test result is positive. Not good.
  • False Negative: a positive disease state, but the test was negative. Not good for different reasons.
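The four categories above can be tallied mechanically from paired disease/test states. A minimal sketch in Python (the function and data here are illustrative, not from any real assay system):

```python
from collections import Counter

def classify(disease: bool, test: bool) -> str:
    """Map a (disease state, test state) pair to its confusion-matrix category."""
    if disease and test:
        return "true_positive"
    if not disease and not test:
        return "true_negative"
    if test:  # test positive, disease negative
        return "false_positive"
    return "false_negative"  # test negative, disease positive

# One made-up result in each category
results = [(True, True), (False, False), (False, True), (True, False)]
counts = Counter(classify(d, t) for d, t in results)
print(counts["false_positive"])  # 1
```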

Now any test has a certain power to it, meaning that adjusting your thresholds in an effort to reduce one kind of false result will cause an increase in the other. You could simply optimize for the lowest total number of false results, but it's usually better to optimize based on the consequences of each kind of error.

For most assays you'll ever encounter, a negative test is accepted as true while a positive test is simply grounds for repeat testing. Why? Because you put all the test's power toward avoiding false negatives. This is because:

  • Negative disease state is more prevalent. The vast majority of people at any given moment won’t have the disease you’re worried about, so the majority of your test volume will be this.
  • False-negative tests are the worst situation. They get lost in your largest population (the actually negative people), and failing to treat them is the most detrimental outcome: they're either out spreading disease and/or getting worse without help.
  • A false positive is way better than a false negative. Holding someone back a tick while re-testing isn't catastrophic, and should be pretty rare anyway. Plus, re-testing might not even require bothering the original person if there's enough of the originally collected sample left to re-test. Even if it slips through, treating a healthy person is rarely as bad as not treating a sick person.

Because so many more people will be disease-state negative than positive, a positive test the first time around is rarely very accurate. If your false-positive rate is 1% in a population where 1% of people have the disease, then 1% of the 99% who are negative is about as many false positives as there are true positives, meaning anyone carrying a single positive test has only about a 50:50 likelihood of actually being positive. But a positive typically triggers two retests, possibly followed by a separate confirmatory step (depending on the severity of the diagnosis, as with cancer or HIV). The likelihood of two false-positive tests in a row drops to 0.01%, and you see the benefit of retesting this way.
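The 50:50 arithmetic above can be checked directly. A short sketch, assuming 1% prevalence, a 1% false-positive rate, and (as a simplification) perfect sensitivity:

```python
prevalence = 0.01   # fraction of population with the disease
fp_rate = 0.01      # false-positive rate (1 - specificity)
sensitivity = 1.0   # simplifying assumption: no false negatives

# Among everyone who tests positive: true positives vs false positives
true_pos = prevalence * sensitivity          # 0.01 of the population
false_pos = (1 - prevalence) * fp_rate       # 0.0099 of the population
ppv = true_pos / (true_pos + false_pos)      # positive predictive value
print(round(ppv, 3))  # 0.503: a first positive is roughly a coin flip

# Two independent false positives in a row
print(round(fp_rate ** 2, 6))  # 0.0001, i.e. 0.01%
```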

So what would this mean at population scale? You can monitor individual tests, but that's arbitrary and slow. Instead, keep a running tally of ALL people who have been tested, and their initial vs. repeat testing status for positive results. Comparing the initial reactive rate against the repeat reactive rate gives you a pretty good idea of your false-positive rate.
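A sketch of that running tally, under the assumption that each sample record carries its first result and any retest results (the field names and data are hypothetical):

```python
# Track Initial Reactive Rate vs Repeat Reactive Rate over a batch of samples.
samples = [
    {"first_test": "positive", "retests": ["negative", "negative"]},
    {"first_test": "negative", "retests": []},
    {"first_test": "positive", "retests": ["positive", "positive"]},
    {"first_test": "negative", "retests": []},
]

total = len(samples)
initial_reactive = sum(s["first_test"] == "positive" for s in samples)
# Repeat reactive: initially positive AND still positive on at least one retest
repeat_reactive = sum(
    s["first_test"] == "positive" and "positive" in s["retests"]
    for s in samples
)

print(initial_reactive / total)            # 0.5: half of samples reactive at first
print(repeat_reactive / initial_reactive)  # 0.5: half of those confirmed on retest
```

A persistent gap between the two rates suggests false positives; a drift in the gap over time suggests something changed in the assay or the population.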

False-negative rates typically require testing known-positive samples in high volume, which is another reason all the test power is dumped into minimizing this effect.

[–]Mr__platypus[S] 0 points (3 children)

Thank you for the detailed response! I was wondering if there was an increased chance of receiving a second false result. If someone falsely tested positive, wouldn't there be an increased chance of it happening again for them? The only way to check for this would be to have some sort of other test, right? Otherwise, wouldn't there be no way to check for a mechanism that could be causing repeat false results in some people or under certain circumstances?

[–]DontWorryImADr 2 points (0 children)

Happy to share what I know!

Regarding the increased chance of false positives, this is actually why ongoing monitoring of the reactive rates (initial and repeat) is important. Looking across a population over a period of time lets you check whether these ratios are changing. This is more useful for diseases that aren't seeing big swings in case load the way COVID is, but any change could be grounds for investigation.

In a perfect world, treating the statistics as independent tests, the assumption is that each test has the same likelihood of being right. So your chance of getting two false positives in a row would be the chance of one false positive, squared. Or, if fp is your false-positive rate and n is your number of tests, the probability = fp^n. So as far as independent tests go, it's unlikely you'd get two tests wrong in a row.
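That independence assumption can be checked with a quick simulation; a sketch with a deliberately exaggerated false-positive rate so the estimate converges quickly:

```python
import random

random.seed(0)
fp = 0.1            # exaggerated false-positive rate for the simulation
n_trials = 100_000  # disease-negative samples, each tested twice

# Count trials where two independent tests are both falsely positive
both_false = sum(
    random.random() < fp and random.random() < fp
    for _ in range(n_trials)
)
print(both_false / n_trials)  # close to fp**2 = 0.01
```

If the two tests were not independent (say, the same interfering substance trips both), the observed rate would sit well above fp^2, which is exactly the failure mode described below.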

That said, you're right that there are limits if your only testing method is the same test repeated. If you got the wrong result because of an inherent weakness in the design (say, one that's less accurate with high-protein samples, diabetics, or your typical MCU hero), repeat failures are a possibility. This is why even repeat-reactive tests still often go to some form of confirmatory test that uses a separate methodology from the primary test. You may recall that last year lots of testing methods were getting approved; some companies even made several tests. Part of the reason is that any given mechanism for false-positive results is unlikely to impact multiple test designs the same way. So a testing strategy that includes several steps, with at least one change of methodology before acting on the result, is desirable.

[–]SNova42 1 point (1 child)

It's theoretically possible for there to be a separate condition that makes you test positive without having the disease being tested for, but such a condition should also present with other signs and symptoms that would tip you off. More often, false positives come down to simple random luck. And yes, for some diseases we use a different test to confirm a positive result. A common model is a screening test (cheap, fast, and non-invasive) whose positives are confirmed with a diagnostic test, which trades those advantages for greater accuracy. You don't normally need this kind of setup for a respiratory virus, though: antigen/antibody assays are already very specific by nature, and the treatment an asymptomatic or mildly symptomatic patient would receive does little harm to a healthy person.

[–]EZ-PEAS 0 points (0 children)

It’s theoretically possible for there to be a separate condition that makes you test positive without having the disease being tested for, but then this condition should also present with other signs and symptoms that would tip you off.

More than theoretically possible in this case. The at-home COVID test I picked up last week said it would give a positive result for a couple different strains of coronavirus, not just COVID-19. One of the many reasons they then tell you to get a confirmatory PCR test.

[–]LuckyC4t 1 point (0 children)

These rates probably aren't based on field data; they come from controlled validation tests. The makers take samples (from animals, or just viral particles) that they know contain COVID, and the proportion of tests that come back negative is the false-negative rate. Then they do the same with samples known not to contain COVID, and the proportion that come back positive is the false-positive rate.
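That validation procedure is straightforward to express in code. A sketch with made-up panel results (real validation panels are far larger):

```python
# Hypothetical validation panels: results from samples of known status.
known_positive_results = ["pos", "pos", "neg", "pos", "pos"]  # 5 known-positive samples
known_negative_results = ["neg", "neg", "neg", "pos", "neg"]  # 5 known-negative samples

# False-negative rate: known positives that the test missed
false_negative_rate = known_positive_results.count("neg") / len(known_positive_results)
# False-positive rate: known negatives that the test flagged
false_positive_rate = known_negative_results.count("pos") / len(known_negative_results)

print(false_negative_rate, false_positive_rate)  # 0.2 0.2
```

With panels this small the estimates are crude; the panel size sets how precisely each rate can be known, which is why high volumes of known-positive samples are needed.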