Editorials

Mortality indicators used to rank hospital performance

BMJ 2013; 347 doi: https://doi.org/10.1136/bmj.f5952 (Published 25 October 2013) Cite this as: BMJ 2013;347:f5952
  1. J Nicholl, professor of health services research,
  2. R Jacques, research fellow,
  3. M J Campbell, professor of medical statistics
  School of Health and Related Research (ScHARR), University of Sheffield, Sheffield S1 4DA, UK
  Correspondence to: j.nicholl@sheffield.ac.uk

Should include deaths that occur after discharge

There is considerable debate about the value of using hospital mortality rates adjusted for case mix as an indicator of the quality and safety of care provided by hospitals. A linked paper by Pouw and colleagues (doi:10.1136/bmj.f5913) investigates the inclusion of post-discharge deaths in these mortality indicators.1 The main doubts about their value are that standardisation for differences between hospitals in the characteristics of their patients (the case mix) doesn’t work, and that these indicators do not measure performance because they are not related to avoidable mortality. There is no doubt that the case mix adjustment is problematic. We know that different adjustment models lead to different results,2 and that important measures of case mix are missing from models based on routine data.3 We also know that these measures are at best weakly related to avoidable mortality—models show that they would begin to be useful for identifying poor quality of care only when at least 16% of hospital deaths are avoidable.4 Recent studies have shown that in the United Kingdom this figure is closer to 5%.5
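In essence, a hospital standardised mortality ratio is the number of observed deaths divided by the number expected from a case mix model fitted across all hospitals. The minimal sketch below, in Python with purely illustrative column names and a deliberately crude logistic case mix model, shows the basic calculation; the models used in practice adjust for much more, which is exactly where the difficulties described above arise.

```python
# Minimal sketch of a hospital standardised mortality ratio (SMR).
# Column names ("died", "age", "sex", "diagnosis_group", "hospital") and the
# simple logistic case mix model are illustrative assumptions only.
import pandas as pd
import statsmodels.formula.api as smf

admissions = pd.read_csv("admissions.csv")  # hypothetical linked admissions file

# Case mix model fitted on all hospitals' admissions: probability of death
# given age, sex, and diagnosis group.
model = smf.logit("died ~ age + sex + C(diagnosis_group)", data=admissions).fit()
admissions["expected"] = model.predict(admissions)

# SMR per hospital: observed deaths / expected deaths (multiplied by 100 by convention).
smr = admissions.groupby("hospital").apply(
    lambda g: 100 * g["died"].sum() / g["expected"].sum()
)
print(smr.sort_values())
```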

Nevertheless, hospital standardised mortality ratios are being used to identify failing hospitals as a result of considerable social, political, and media pressure.6 We must therefore make the measures as robust as possible. The Department of Health in England has recently introduced a revised measure, the summary hospital mortality indicator.7 The main differences are that it includes nearly all conditions and that mortality is counted not only in hospital but also for up to 30 days after discharge. Whether deaths after discharge should be included when calculating hospital mortality indicators has been debated for years. Studies that have compared the two approaches for specific clinical conditions have concluded that they give similar results overall but detect different statistical outliers.8 9 Recently, it was estimated that using mortality up to 30 days after admission, rather than in-hospital mortality, changed the quality rankings of only about 10% of hospitals, but that in-hospital measures are biased in favour of hospitals with shorter lengths of stay.10
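The competing definitions differ only in which deaths are counted against an admission. A minimal sketch, assuming a linked dataset with hypothetical admit_date, discharge_date, and death_date columns, of how each definition could be flagged:

```python
# Flag deaths under three definitions from linked admission and death records.
# The column names are assumptions about a hypothetical linked dataset.
import pandas as pd

df = pd.read_csv(
    "admissions.csv", parse_dates=["admit_date", "discharge_date", "death_date"]
)
died = df["death_date"].notna()

# In-hospital: death on or before the discharge date.
df["death_in_hospital"] = died & (df["death_date"] <= df["discharge_date"])

# Within 30 days of admission, wherever the death occurs.
df["death_30d_admission"] = died & (
    df["death_date"] <= df["admit_date"] + pd.Timedelta(days=30)
)

# In hospital or within 30 days of discharge (the summary hospital mortality
# indicator style of definition discussed above).
df["death_30d_discharge"] = died & (
    df["death_date"] <= df["discharge_date"] + pd.Timedelta(days=30)
)
```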

Pouw and colleagues examined this question using data on more than one million admissions to 60 Dutch hospitals.1 They compared in-hospital mortality with mortality at 30 days after discharge and 30 days after admission. They found that 20-30% of hospitals change their quality ranking when post-discharge deaths are included and confirmed a substantial correlation between the in-hospital measure and the average length of stay of patients in hospital. They concluded that in-hospital measures are subject to “discharge bias,” and that post-discharge mortality should be included in hospital mortality indicators. It is now clear that if post-discharge deaths are included the relative performance of some hospitals changes, and that short lengths of stay are associated with low in-hospital mortality and a discharge bias, so that it is not appropriate to use only in-hospital mortality. But this leaves at least three questions unanswered.
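Both findings—rankings that shift when post-discharge deaths are counted, and an association between the in-hospital measure and length of stay—can be checked on any linked dataset along the following lines. The sketch continues from the flags above; the crude unadjusted rates and the choice of Spearman correlation are ours for illustration and are not the authors' method.

```python
# Compare hospital rankings under two mortality definitions and look for
# discharge bias. Continues the sketch above; analysis choices are illustrative.
from scipy.stats import spearmanr

df["length_of_stay"] = (df["discharge_date"] - df["admit_date"]).dt.days

by_hospital = df.groupby("hospital").agg(
    in_hospital_rate=("death_in_hospital", "mean"),
    post_discharge_rate=("death_30d_discharge", "mean"),
    mean_length_of_stay=("length_of_stay", "mean"),
)

# Proportion of hospitals whose rank changes when post-discharge deaths are included.
rank_in = by_hospital["in_hospital_rate"].rank()
rank_post = by_hospital["post_discharge_rate"].rank()
print(f"{(rank_in != rank_post).mean():.0%} of hospitals change rank")

# Discharge bias: shorter average stays associated with lower in-hospital mortality.
rho, p = spearmanr(by_hospital["mean_length_of_stay"], by_hospital["in_hospital_rate"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```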

Firstly, should a fixed time frame after admission or after discharge be used? The Department of Health chose the post-discharge option in the summary hospital mortality indicator because part of the care of patients who stay in hospital longer than 30 days is not assessed if 30 days post-admission is used. This might also lead hospitals to focus only on the quality of the first 30 days of care. However, fewer than 5% of patients stay longer than 30 days, and using a post-discharge time frame means that there is still a bias in favour of hospitals with shorter lengths of stay, albeit a smaller and possibly negligible bias compared with an in-hospital mortality measure. Pouw and colleagues have not published the correlation between length of stay and 30 day post-discharge mortality, which might help us judge how important any bias might be.

Secondly, how long should the time frame be? All the studies we know of have used 30 days after discharge or after admission, but why 30 days? Clearly, the longer the time after discharge, the smaller the influence of the quality of hospital care and the greater the influence of community care, or of care in any subsequent hospital admission. It follows that the time frame should be as short as is needed to pick up all the effects of the quality of hospital care. English hospital episode statistics for 2005-10 show that, of all deaths occurring between admission and 30 days after discharge, 7% occur in the first week after discharge, then 5%, 4%, and 4% in weeks two, three, and four. This suggests that a two week window after discharge might be more appropriate.
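A short calculation on the weekly proportions quoted above makes the effect of the window length concrete; the figures are taken directly from the text and the arithmetic is purely illustrative.

```python
# Cumulative share of post-discharge deaths captured by windows of one to four
# weeks, using the weekly proportions quoted in the text: 7%, 5%, 4%, and 4% of
# all deaths (from admission to 30 days after discharge) fall in weeks one to
# four after discharge.
weekly_share = [0.07, 0.05, 0.04, 0.04]
post_discharge_total = sum(weekly_share)  # about 20% of all deaths occur after discharge

cumulative = 0.0
for week, share in enumerate(weekly_share, start=1):
    cumulative += share
    captured = cumulative / post_discharge_total
    print(f"{week} week window: captures {captured:.0%} of post-discharge deaths")
# A two week window already captures 0.12 / 0.20 = 60% of post-discharge deaths.
```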

A third question is whether post-discharge mortality should be combined with in-hospital mortality at all. Deaths after discharge are an indicator of the quality of care during the stay in hospital, the appropriateness of the discharge decision, and the quality of care provided by post-discharge community services. English hospital episode statistics for 2005-10 show that deaths in the 30 days after discharge varied from 12% to 30% of all deaths from admission to 30 days after discharge. This suggests that the appropriateness of discharge decisions or follow-up care may vary greatly. It might therefore be better to have two indicators of performance—an in-hospital measure and a two week post-discharge one. This would enable hospitals and commissioners to identify any problems with discharge decisions and post-discharge care.

Footnotes

  • Research, doi:10.1136/bmj.f5913
  • Competing interests: We have read and understood the BMJ Group policy on declaration of interests and declare the following interests: None.

  • Provenance and peer review: Commissioned; not externally peer reviewed.
