Tests that diagnose diseases are less reliable than you’d expect. Here’s why

  • Written by Adrian Barnett, Professor of Statistics, Queensland University of Technology

You feel unwell, and visit your doctor. They ask some questions and take some blood for testing; a few days later they call to say you have been diagnosed with a disease.

What are the chances you actually have the disease? For some common diagnostic tests, the answer is surprisingly low.

Few medical tests are 100% accurate. Part of the reason is that people are inherently variable, but many tests are also built on limited or biased samples of patients – and our own work has shown researchers may deliberately exaggerate[1] the effectiveness of new tests.

None of this means we should stop trusting diagnostic tests, but a better understanding of their strengths and weaknesses is essential if we want to use them wisely.

People are variable

An example of a widely used imperfect test is prostate-specific antigen (PSA) screening, which measures the level of a particular protein in the blood as an indicator of prostate cancer.

The test catches an estimated 93% of cancers, but it also produces many false alarms: around 80% of men with a positive result do not actually have cancer. For that 80%, the result creates unnecessary stress[2] and often leads to further testing, including painful biopsies.

Read more: Prostate cancer testing: has the bubble burst?[3]

Rapid antigen tests for COVID-19 are another widely used imperfect test. A review of these tests[4] found that, of people without symptoms but with a positive test result, only 52% actually had COVID.

Among people with COVID symptoms and a positive result, the accuracy of the tests rose to 89%. This shows how a test’s performance cannot be summarised by a single number and depends on individual context.
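To see why, it helps to work through the arithmetic. The chance that a positive result is a true positive depends not just on a test’s sensitivity and specificity, but also on how common the disease is among the people being tested. The short sketch below (in Python) uses a hypothetical test that is 90% sensitive and 98% specific, with prevalences chosen purely to illustrate the effect; none of these numbers are taken from the Cochrane review.

    # Minimal sketch: how positive predictive value (PPV) shifts with the
    # prevalence of disease in the group being tested (Bayes' theorem).
    # All sensitivity, specificity and prevalence figures are illustrative
    # assumptions, not results from the studies cited in this article.

    def positive_predictive_value(sensitivity, specificity, prevalence):
        """Probability of truly having the disease, given a positive result."""
        true_positives = sensitivity * prevalence
        false_positives = (1 - specificity) * (1 - prevalence)
        return true_positives / (true_positives + false_positives)

    # The same hypothetical test (90% sensitive, 98% specific) in two settings:
    for label, prevalence in [("asymptomatic screening", 0.02),
                              ("symptomatic patients", 0.20)]:
        ppv = positive_predictive_value(0.90, 0.98, prevalence)
        print(f"{label}: {ppv:.0%} of positive results are true positives")

With these assumptions, the very same test is right only about half the time when it returns a positive result in the low-prevalence group, but more than 90% of the time in the high-prevalence group, mirroring the pattern in the COVID figures above.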

Why aren’t diagnostic tests perfect? One key reason is that people are variable. A high temperature for you, for example, might be perfectly normal for someone else. For blood tests, many extraneous factors can influence the results, such as the time of day or how recently you have eaten.

Even the ubiquitous blood pressure test can be inaccurate[5]. Results can vary depending on whether the cuff is a good fit for your arm, if you have your legs crossed, and if you’re talking when the test is done.

Small samples and statistical skullduggery

There’s an enormous amount of research on new diagnostic models. New models frequently make the headlines as “medical breakthroughs”, such as how your handwriting could detect Parkinson’s disease[6], how your pharmacy loyalty card could detect ovarian cancer earlier[7], or how eye movements could detect schizophrenia[8].

But living up to the headlines is often a different story.

Many diagnostic models are developed from small sample sizes. A review[9] found the median diagnostic study used just over 100 patients, meaning half of all studies used even fewer. It is hard to get a true picture of a test’s accuracy from samples that small.
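As a rough illustration of why this matters (a worked example of my own, not a figure from the review), suppose only half of those 100 or so patients actually have the disease, and 45 of those 50 test positive. The estimated sensitivity is 90%, but the uncertainty around that estimate is wide:

    # Illustrative sketch: the 95% confidence interval for a test's
    # sensitivity when it is estimated from just 50 diseased patients.
    from math import sqrt

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score confidence interval for a proportion."""
        p = successes / n
        centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
        half_width = (z / (1 + z**2 / n)) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return centre - half_width, centre + half_width

    # Hypothetical study: 45 of 50 diseased patients test positive.
    low, high = wilson_interval(successes=45, n=50)
    print(f"Estimated sensitivity 90%, 95% CI roughly {low:.0%} to {high:.0%}")

The interval runs from roughly 79% to 96%, which is consistent with anything from an excellent test to a fairly ordinary one. That is part of why models built on small samples so often disappoint when they are tested more widely.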

For accurate results, the patients who use the test should be similar to those who were used to develop the test. For example, the widely used Framingham Risk Score for identifying people at high risk of heart disease was developed in the United States and is known to perform poorly[10] in Aboriginal and Torres Strait Islander people.

Similar disparities in accuracy have been found for “polygenic risk scores”. These combine information on thousands of genes to predict disease risk, but were developed in European populations and perform poorly in non-European populations[11].

Recently, we identified another important problem: researchers have exaggerated the accuracy of some models[12] to gain journal publications.

There are many ways to exaggerate the performance of a test, such as dropping hard-to-predict patients from the sample. Some tests are also not truly predictive because they rely on information from the future – for example, a model for predicting infection[13] that uses whether the patient had already been prescribed antibiotics.
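To make the “information from the future” problem concrete, here is a small simulated sketch (my own illustration with made-up data, not the model criticised in the linked report). Because antibiotics are usually prescribed only after an infection has been recognised, a “prediction” built on that variable looks near perfect on paper while offering no genuine foresight.

    # Minimal sketch of information-from-the-future (label leakage),
    # loosely inspired by the antibiotics example above. Data are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    infected = rng.random(n) < 0.10      # 10% of patients develop an infection
    # Antibiotics are prescribed *after* the infection is recognised, so the
    # variable is a consequence of the outcome, not a genuine predictor of it.
    antibiotics = infected & (rng.random(n) < 0.9)

    # "Predicting" infection from the antibiotics flag looks spectacular...
    predicted = antibiotics
    print(f"Apparent accuracy using a future variable: {np.mean(predicted == infected):.1%}")
    # ...but when a real prediction is needed, the prescription hasn't
    # happened yet, so the model offers clinicians no foresight at all.

On this simulated data the apparent accuracy is about 99%, yet the model contains no information a clinician would not already have by the time it could actually be used.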

Read more: Elizabeth Holmes: Theranos scandal has more to it than just toxic Silicon Valley culture[14]

Perhaps the most extreme example of exaggerating the power of a diagnostic test was the Theranos scandal[15], in which a finger-prick blood test that supposedly could diagnose multiple health conditions attracted hundreds of millions of dollars from investors. It was too good to be true – and founder Elizabeth Holmes has since been convicted of fraud.

Big data can’t make tests perfect

In the era of precision medicine and big data, it seems appealing to combine tens or hundreds of pieces of information about a patient – perhaps using machine learning or artificial intelligence – to provide highly accurate predictions. However, so far the promise has outstripped the reality.

One study[16] estimated 80,000 new prediction models were published between 1995 and 2020. That’s around 250 new models every month.

Are these models transforming healthcare? We see no sign of it – and if they really were having a big impact, surely we wouldn’t need such a steady stream of new models.

For many diseases there are data problems that no amount of sophisticated modelling can fix, such as measurement errors or missing data that make accurate predictions impossible.

Some diseases or illnesses are likely inherently random, and involve complex chains of events which a patient cannot describe and no model could predict. Examples might include injuries or previous illnesses that happened to a patient decades ago, which they cannot recall and are not in their medical notes.

Diagnostic tests will never be perfect. Acknowledging their imperfections will enable doctors and their patients to have an informed discussion about what a result means – and most importantly, what to do next.

References

  1. ^ deliberately exaggerate (bmcmedicine.biomedcentral.com)
  2. ^ creates unnecessary stress (theconversation.com)
  3. ^ Prostate cancer testing: has the bubble burst? (theconversation.com)
  4. ^ review of these tests (www.cochrane.org)
  5. ^ can be inaccurate (www.ama-assn.org)
  6. ^ handwriting could detect Parkinson’s disease (www.jpost.com)
  7. ^ detect ovarian cancer earlier (www.theguardian.com)
  8. ^ eye movements could detect schizophrenia (www.abdn.ac.uk)
  9. ^ A review (www.bmj.com)
  10. ^ perform poorly (pubmed.ncbi.nlm.nih.gov)
  11. ^ perform poorly in non-European populations (www.nature.com)
  12. ^ the accuracy of some models (bmcmedicine.biomedcentral.com)
  13. ^ predictive model of infection (www.statnews.com)
  14. ^ Elizabeth Holmes: Theranos scandal has more to it than just toxic Silicon Valley culture (theconversation.com)
  15. ^ Theranos scandal (theconversation.com)
  16. ^ study (osf.io)

Read more https://theconversation.com/tests-that-diagnose-diseases-are-less-reliable-than-youd-expect-heres-why-213359
