Article Text

Is your clinical prediction model past its sell by date?
Charles Reynard,1,2 David Jenkins,3 Glen P Martin,3 Evan Kontopantelis,3 Richard Body1,2

  1. Emergency Department, Manchester University NHS Foundation Trust, Manchester, UK
  2. Division of Cardiovascular Sciences, The University of Manchester, Manchester, UK
  3. Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK

Correspondence to Dr Charles Reynard, Emergency Department, Central Manchester University Hospitals NHS Foundation Trust, Manchester, UK M13 9WL; charlie.reynard@nhs.net


The problem

Clinical prediction models (CPMs), such as the Wells score for deep vein thrombosis and APACHE II for predicting mortality in intensive care, have become an integral part of emergency medicine.1 2

One problem with CPMs is that their predictive performance can deteriorate over time. For example, in 2013, Hickey et al3 investigated temporal changes in the prognostic performance of EuroSCORE, a CPM that predicts mortality after cardiothoracic surgery. They found that the model became increasingly mis-calibrated over time, in the sense that the observed mortality proportion progressively diverged from the predicted mortality.
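To make calibration drift concrete, here is a minimal sketch in Python (all data are simulated; the 8% baseline risk, the rate of decline and the year range are illustrative assumptions, not figures from Hickey et al). It tracks the observed-to-expected (O:E) event ratio of a static model year by year; a ratio drifting away from 1 signals growing mis-calibration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumption: a static CPM predicts a constant mean risk of 8%,
# while the true event rate falls by 0.5 percentage points per year
# (e.g. because treatments improve).
predicted_risk = 0.08

for year in range(2005, 2013):
    true_risk = 0.08 - 0.005 * (year - 2005)          # true risk declines
    outcomes = rng.binomial(1, true_risk, size=2000)  # 2000 patients per year

    observed = outcomes.mean()      # observed event proportion
    expected = predicted_risk       # model's mean predicted risk
    oe_ratio = observed / expected  # O:E ratio; 1.0 = well calibrated

    print(f"{year}: observed={observed:.3f}  expected={expected:.3f}  "
          f"O:E={oe_ratio:.2f}")
```

In practice, the expected risk would be the mean of patient-level predictions, and drift would be assessed with confidence intervals or statistical process control rather than by eye.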

This phenomenon is known as calibration drift, and it is a particular issue for models implemented in clinical practice, because a mis-calibrated model produces inaccurate risk estimates and can therefore lead to the wrong decisions. It is especially troublesome in clinical pathways with risk thresholds, where the wrong course of action may be triggered for a patient whose risk is misestimated. For example, a patient with suspected acute myocardial infarction may be predicted to be high risk and admitted for additional investigations when, because the model was mis-calibrated, they were actually low risk. Such errors expose patients to unnecessary treatments or investigations of little benefit, and waste valuable resources. At a system level, when CPMs are incorporated into national standards, calibration drift could lead to inaccurate information being provided to senior decision-makers.
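The decision-level consequence can be shown with a toy calculation (the 5% admission threshold and both risk values are hypothetical, chosen purely for illustration): a model whose predictions have drifted upwards pushes a genuinely low-risk patient over the threshold.

```python
# Hypothetical pathway: admit for further investigation if the predicted
# risk of acute myocardial infarction exceeds 5%.
ADMISSION_THRESHOLD = 0.05

def triage(predicted_risk: float) -> str:
    """Return the action triggered by a predicted risk on the pathway."""
    return "admit" if predicted_risk > ADMISSION_THRESHOLD else "discharge"

true_risk = 0.03            # the patient's actual risk (hypothetical)
drifted_prediction = 0.09   # a mis-calibrated model over-estimates it

print(triage(true_risk))           # discharge -- the appropriate action
print(triage(drifted_prediction))  # admit -- unnecessary investigations
```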

Drivers of divergence

The causes of calibration drift are complex and context specific, but key drivers include an ageing yet comparatively healthier population and medical advances (new treatments).3–5 A broader perspective is that these are all consequences of a fundamental flaw: CPMs are static. That is, the CPMs and their underlying …
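The usual counter to a static model is periodic updating. As a minimal sketch of its simplest form, intercept recalibration (an illustration of the general technique, not necessarily the approach the authors go on to propose; the data and function names are hypothetical), the snippet below shifts only the model's intercept so that the mean predicted risk matches the event rate observed in recent data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recalibrate_intercept(old_predictions, outcomes, n_iter=25):
    """Maximum-likelihood intercept-only recalibration: find the shift `a`
    such that sigmoid(a + logit(old_prediction)) best fits recent outcomes,
    with the calibration slope fixed at 1."""
    lp = np.log(old_predictions / (1.0 - old_predictions))  # linear predictor
    a = 0.0
    for _ in range(n_iter):              # Newton-Raphson on the intercept
        p = sigmoid(a + lp)
        score = np.sum(outcomes - p)     # gradient of the log-likelihood
        info = np.sum(p * (1.0 - p))     # observed information
        a += score / info
    return a

# Illustrative data: the static model predicts ~8% risk, but events now
# occur at ~4% -- the model has drifted.
rng = np.random.default_rng(0)
old_preds = np.clip(rng.normal(0.08, 0.02, 5000), 0.01, 0.99)
outcomes = rng.binomial(1, 0.04, 5000)

a = recalibrate_intercept(old_preds, outcomes)
updated = sigmoid(a + np.log(old_preds / (1.0 - old_preds)))
print(f"intercept shift = {a:.2f}")
print(f"mean risk: before {old_preds.mean():.3f}, after {updated.mean():.3f}, "
      f"observed {outcomes.mean():.3f}")
```

More elaborate updating strategies refit the calibration slope, individual coefficients or the whole model, trading stability against responsiveness to change.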


Footnotes

  • Handling editor Edward Carlton

  • Twitter @richardbody

  • Contributors CR, RB, DJ, GPM and EK contributed to the conception, drafting and review of the article.

  • Funding National Institute for Health Research (NIHR300246).

  • Competing interests RB receives funding from the National Institute for Health Research, Asthma UK and the British Lung Foundation for the COvid-19 National DiagnOstic Research and evaluation programme (CONDOR). He has consulted for Siemens, Roche, Beckman, Singulex, LumiraDx and Abbott, but not in relation to COVID-19. CR received funding from the National Institute for Health Research (UK) as a clinical doctoral research fellow. CR is employed by Pfizer Limited; the final revised manuscript was submitted prior to the start of this role.

  • Provenance and peer review Not commissioned; externally peer reviewed.