Abstract
Hospital mortality rates could be useful indicators of quality of care, but careful statistical analysis is required to avoid erroneously attributing variation in mortality to differences in healthcare when it is actually due to differences in case mix. The summary hospital mortality indicator, currently used by the English National Health Service (NHS), adjusts mortality rates up to 30 days after discharge for patient age, sex, type of admission, year of discharge, comorbidity, deprivation and diagnosis. Such risk-adjustment methods have been used to identify poor performance, most notably at Mid Staffordshire NHS Foundation Trust, but their use is subject to a number of limitations. Studies exploring whether variation in risk-adjusted mortality can be explained by variation in healthcare have reached conflicting conclusions. Furthermore, concerns have been raised that the proportion of preventable deaths among hospital admissions is too small to produce a reliable ‘signal’ in risk-adjusted mortality rates. This presents hospital managers, regulators and clinicians with a considerable dilemma: variation in mortality rates cannot be ignored, as it might indicate unacceptable variation in healthcare and avoidable mortality, but, on current evidence, it cannot reliably be used to judge the quality of healthcare.
- audit
- comparative system research
- emergency care systems
- research, methods
- quality assurance
What do hospital mortality rates tell us about quality of care?
High-quality care has been defined as care that is safe, effective, patient-centred, timely, efficient and equitable.1 The hospital mortality rate (the proportion of patients who die during or shortly after admission to hospital) would be expected to reflect the safety, effectiveness and, in emergency medicine, timeliness of care and would intuitively seem to be an important measure of quality. Routine administrative data from the English National Health Service (NHS) show that there is substantial interhospital variation in this measure.2 If this variation were due to differences in healthcare, such as the treatments provided, service organisation, workforce or human resource management, then higher mortality rates would reflect poor quality of care. However, mortality rates are also determined by case mix, so high mortality rates may simply reflect a sicker patient population.
Statistical methods can be used to produce risk-adjusted mortality estimates that take case mix into account, but these methods are subject to limitations that may render their estimates inadequate or inaccurate. Misinterpretation of mortality estimates could lead to misleading conclusions being drawn about the quality of care provided. Any data that suggest that patients are dying unnecessarily due to poor-quality care will inevitably attract attention. Clinicians and managers working in emergency care therefore need to understand the potential strengths and limitations of hospital mortality data. This article describes the use of hospital mortality data in the English NHS and explores whether these data can be used to judge quality of care.
Hospital mortality data in the NHS
Mortality data are increasingly being used to make inferences about hospital performance and quality of care in the English NHS. Concerns about poor care at Mid Staffordshire Hospital were initially identified as a result of high mortality rates. Following the inquiry into events at Mid Staffordshire Hospital,3 a further 14 hospitals have been investigated on the basis of having high mortality rates.4 The Dr Foster organisation uses mortality rates to inform its Hospital Guide,5 and the NHS Information Centre publishes mortality rates as a quality indicator for hospital care.6
Several steps must be taken before we can attribute variation in mortality rates to differences in healthcare and make practical use of the available information. First, we must identify and adjust for differences in case mix that may explain variation in mortality (risk adjustment). Then, we must investigate the process of risk adjustment to ensure that variation in adjusted mortality rates is not an artefact of the methods of data collection or analysis. Finally, we must identify how differences in healthcare explain variation in mortality rates and whether mortality rates can be improved by intervention. The potential explanations for variation in mortality rates are outlined in table 1 and will be explored in this article.
What is risk-adjusted mortality?
Differences in crude mortality rates may be due to differences in case mix, with hospitals serving sicker populations having higher mortality rates. Risk adjustment is used to provide estimates of mortality rates that take case mix into account. Patient characteristics that predict risk of death, such as age or number of comorbidities, are measured on all patients admitted to hospital. Statistical analysis then applies an appropriate ‘weight’ to each predictor variable to reflect the strength of its association with mortality, and estimates the expected risk of death for each patient. The expected risks of death for all patients in the admitted population are then summed to give the expected number of deaths over a specified time period, such as a year. This can then be compared with the observed number of deaths to determine how the actual hospital mortality rate compares with the expected mortality rate.
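The arithmetic can be made concrete with a minimal sketch. The weights below are invented for illustration and do not come from any published model; in practice they would be estimated by logistic regression on national admissions data, and the totals would be aggregated over a full reporting period rather than a toy population of four admissions.

```python
import numpy as np

# Minimal sketch of risk adjustment by indirect standardisation.
# Each row describes one admission: [age in decades, number of comorbidities]
patients = np.array([
    [7.2, 3],   # 72-year-old with 3 comorbidities
    [5.1, 0],
    [8.8, 5],
    [3.4, 1],
])
died = np.array([1, 0, 1, 0])        # observed outcomes (1 = death)

intercept = -6.0
weights = np.array([0.45, 0.30])     # illustrative coefficients only

# Expected risk of death for each patient from a logistic model
log_odds = intercept + patients @ weights
expected_risk = 1 / (1 + np.exp(-log_odds))

# Sum expected risks to get expected deaths, then compare with observed
expected_deaths = expected_risk.sum()
observed_deaths = died.sum()
smr = observed_deaths / expected_deaths
print(f"expected {expected_deaths:.2f}, observed {observed_deaths}, "
      f"standardised mortality ratio {smr:.2f}")
```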
The summary hospital mortality indicator (SHMI)7 is currently used by the English NHS. It adjusts mortality rates up to 30 days after discharge for patient age, sex, type of admission, year of discharge, comorbidity, deprivation and diagnosis. Between April 2005 and September 2010, there were 36 453 419 admissions to hospitals in England, of which 1 577 803 (4.3%) ended in death in hospital or within 30 days of discharge. The SHMI varied across hospitals from 50% below expected to 30% above expected, indicating considerable unexplained variation after adjustment. Emergency cases accounted for 75% of admissions and 95% of deaths, so investigation of variation in the SHMI needs to focus on emergency care.
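In other words, the SHMI for a hospital is simply the ratio of observed to expected deaths. As a purely illustrative example (invented figures): a hospital with 390 observed deaths against 300 expected from its case mix would have a SHMI of 390/300 = 1.3, that is, 30% above expected, while 150 observed deaths against 300 expected would give a SHMI of 0.5, or 50% below expected.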
Trends in risk-adjusted mortality data support the inference that some mortality is potentially avoidable. Mortality rates are higher among patients admitted to hospital at weekends compared with weekdays,8 which may reflect reduced availability of staff or services at weekends. Mortality rates have fallen in England over the last 5 years9 and in the Australian state of Victoria over the last 10 years,10 which may reflect improved care. Alternatively, these trends could reflect differences in case mix or changes in coding, such as increased use of palliative care codes.
What are the limitations of risk adjustment?
The SHMI was developed to address issues identified with a previous risk-adjustment method, the Hospital Standardised Mortality Ratio (HSMR).11,12 Alternative methods, such as the QUality and Outcomes Research Unit Measure (QUORUM), could provide better risk prediction than the SHMI, but not necessarily better discrimination between hospitals.13 However, none of these methods takes illness severity into account. A study of adult emergency hospital admissions14 showed that adding measures of illness severity, such as physiological measures or blood tests, markedly improved the predictive performance of risk-adjustment models (the c-statistic improved from 0.81 to 0.90) and changed the ranking of the nine participating hospitals based on their standardised mortality ratio (SMR). These findings suggest that variation in illness severity may explain at least some of the variation in the SHMI. However, measures of illness severity are not routinely captured by current data systems, so it is not currently possible to adjust routine mortality data for illness severity, except for specific patient groups, such as major trauma or intensive care (and even in these groups, adjustment for illness severity may be incomplete). This means that variation in mortality rates due to differences in illness severity could be wrongly attributed to differences in healthcare.
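The c-statistic quoted here measures discrimination: the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor, so 0.5 is no better than chance and 1.0 is perfect. A minimal sketch of the calculation, using synthetic risks rather than the study's model or data:

```python
import numpy as np

def c_statistic(risk, died):
    """Mann-Whitney form of the c-statistic (area under the ROC curve):
    the probability that a randomly chosen patient who died was given a
    higher predicted risk than a randomly chosen survivor. Assumes the
    predicted risks are continuous (no ties)."""
    risk = np.asarray(risk)
    died = np.asarray(died, dtype=bool)
    ranks = np.argsort(np.argsort(risk)) + 1   # 1-based ranks of risks
    n_pos, n_neg = died.sum(), (~died).sum()
    return (ranks[died].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic check: patients who died are drawn from a higher-risk distribution
rng = np.random.default_rng(0)
risk = np.concatenate([rng.beta(2, 20, 900), rng.beta(5, 15, 100)])
died = np.concatenate([np.zeros(900), np.ones(100)])
print(f"c-statistic: {c_statistic(risk, died):.2f}")
```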
Risk-adjusted mortality estimates may also be limited by problems of data coding and analysis. The process of collecting data for risk adjustment requires judgements to be made about the inclusion of cases in the analysis, the diagnostic categorisation of cases and the recording of important covariates (such as the number of comorbidities). Variation in these judgements between hospitals15 or over time16 could explain some variation in risk-adjusted mortality. For example, palliative care cases may be excluded from analysis because mortality would be an inappropriate measure of quality of care. However, after public release of HSMR data in Canada, the national HSMR declined while rates of palliative care coding increased dramatically.16 This suggests that palliative care coding can be used to exclude cases and thereby improve apparent risk-adjusted mortality. The method of analysis, for example, the way in which readmissions are handled,17 can also substantially influence estimates of risk-adjusted mortality.
The process of risk adjustment assumes that each variable in the model has a constant association with mortality, yet studies have shown significant variation between hospitals in this association for key variables used for risk adjustment.14,18 In these circumstances, the erroneous assumption of a constant association with mortality is known as the ‘constant risk fallacy’, and risk adjustment may paradoxically worsen, rather than remove, the effect of differences in case mix upon risk-adjusted mortality.19 These limitations of risk-adjustment methods have led to criticism of the use of risk-adjusted outcomes to judge quality of care.20,21
Can variation in risk-adjusted mortality be explained by variation in healthcare?
If risk-adjusted outcomes are used to make inferences about quality of care, then the ‘attributional validity’ of the outcome needs to be demonstrated. This is defined as the degree to which variation in the risk-adjusted outcome can be attributed to the quality of care provided.22 The attributional validity of risk-adjustment methods is rarely formally evaluated, and methods for doing so are poorly developed.22,23 Review methods can be used to evaluate quality of care at institutions, and the results then compared with risk-adjusted outcomes. These reviews may be explicit, comparing care against a checklist of quality criteria, or implicit, involving a more general reviewer assessment of quality of care on a scale measure, typically a 1–5 Likert rating. Similar methods can be used to evaluate the care of individual patients to determine whether discrepancies between predicted and actual outcome are explained by characteristics of the care provided. However, all these methods are subject to rater variability, requiring careful training of raters and the use of unambiguously labelled scales to reduce variability.
Most such studies have been undertaken in the USA and have produced mixed results. Thomas et al24 studied patients hospitalised for cardiac disease, acute myocardial infarction or septicaemia, and found an association between quality of care and outcome only for cardiac disease. Dubois et al25 studied patients with cerebrovascular accident, myocardial infarction and pneumonia, and Park et al26 studied patients with heart failure or myocardial infarction; both failed to show an association between quality of care and mortality rates. A study of Medicare patients27 found a weak statistical correlation, and a study of five medical conditions28 found some association between quality of care and outcomes. Other studies have focused on surgical patients, again with mixed results.29–31 A study of emergency medical admissions from the UK, Australia and Hong Kong32 found little evidence that deaths occurring in patients with a low predicted mortality from risk adjustment could be attributed to the quality of healthcare provided. Simulation studies33,34 also failed to show that risk-adjusted mortality estimates predicted quality of care.
Another way of assessing attributional validity is to explore whether mortality rates correlate with other measures of the structures, processes or outcomes of care. A recent study35 calculated a general quality rating for NHS hospitals (the MHP Health Mandate Quality Index) using routinely available data on 10 quality indicators, including staff and patient surveys, infection rates, waiting times and recorded complaints. Analysis showed no correlation between SHMI and the quality index. By contrast, a survey of human resource directors from 61 acute hospitals in England showed an association between mortality and human resource practices, such as performance appraisal, training and teamworking.36 A study from the USA showed that hospitals that were able to attract and retain good nurses, and provided opportunities for good nursing care, had lower mortality rates than comparator hospitals.37 Meanwhile, Jarman et al38 showed an association between mortality rates and the number of doctors per hospital bed in the NHS. Associations have also been shown between risk-adjusted mortality and measures of emergency department performance, such as overcrowding39 and waiting times.40
These studies provide mixed evidence to support the inference that variation in risk-adjusted mortality can be attributed to variation in quality of care. This may reflect limitations in study design, differences between study settings or problems in measuring quality of care.
Does risk-adjusted mortality change in response to intervention?
If risk-adjusted mortality changes in response to intervention then this could provide evidence of avoidable mortality that can be reduced by intervention, provided sources of bias and confounding are addressed. Sutton et al41 evaluated the implementation of a pay-for-performance programme called Advancing Quality in the northwest of England, and showed an absolute reduction in mortality of 1.3% (95% CI 0.4% to 2.1%) after implementation. Another study of 12 hospitals showed reductions in HSMRs following interventions aimed at reducing avoidable mortality.42 Meanwhile, a study specifically of patients with sepsis showed that risk-adjusted mortality improved after the introduction of sepsis bundles (a selected set of elements of care distilled from evidence-based practice guidelines).43 By contrast, Collum et al44 found no change in risk-adjusted mortality after a reduction in junior doctors’ working hours.
These studies suggest that mortality may be reduced by interventions, but they may be subject to bias. Uncontrolled before-and-after studies carry a high risk of bias because changes in patient selection that accompany the intervention can produce a post-intervention population with a different risk of death that is not fully accounted for by risk adjustment. For example, centralisation of services will lead to hospitals receiving a different patient population with a different risk of death. Selecting hospitals for intervention on the basis of a high mortality rate also creates potential for regression to the mean: if random variation is at least partly responsible for the high mortality rate that prompted intervention, then there is a high probability that subsequent random variation will produce an apparent fall in the mortality rate.
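Regression to the mean is easy to demonstrate with a minimal simulation. In the sketch below, every hospital has the same true risk of death (all numbers are invented for illustration), so the hospitals flagged as high-mortality outliers in one year are outliers by chance alone, and their rates fall back towards the average the following year without any change in care:

```python
import numpy as np

# Every hospital shares the same true death risk, so any hospital flagged
# as a high-mortality outlier is an outlier by chance alone.
rng = np.random.default_rng(42)
n_hospitals, admissions, true_risk = 200, 5000, 0.043

year1 = rng.binomial(admissions, true_risk, n_hospitals) / admissions
year2 = rng.binomial(admissions, true_risk, n_hospitals) / admissions

flagged = year1 > np.percentile(year1, 97.5)   # 'outliers' in year 1
print(f"flagged hospitals, year 1 mortality: {year1[flagged].mean():.4f}")
print(f"same hospitals, year 2 mortality:    {year2[flagged].mean():.4f}")
# Year 2 rates fall back towards 4.3% with no change in the care provided.
```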
Are there sufficient preventable deaths to explain variation in risk-adjusted mortality?
Ultimately, any attempt to attribute variation in risk-adjusted mortality to the quality of care provided depends upon assuming that a proportion of deaths are preventable and that this proportion is large enough to explain variation in mortality rates. It has been estimated that 6% of hospital deaths are preventable.45 Girling et al46 developed a model to estimate the proportion of the variation in SMRs that could be accounted for by variation in preventable mortality. They found that if only 6% of hospital deaths are preventable, then the predictive value of the SMR could be no greater than 9%; that is, if a hospital has an SMR in the highest 2.5% of all hospitals, there is only a 9% probability that its preventable mortality rate is also in the top 2.5%. This suggests that risk-adjusted mortality provides a poor means of identifying hospitals with high rates of preventable mortality, unless the proportion of preventable hospital deaths is much higher than the 6% previously estimated.
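The logic can be illustrated with a simple simulation in the same spirit (the parameters below are invented and do not reproduce the published model): each hospital has a small, variable preventable component of mortality that the risk-adjustment model cannot see, and chance variation in death counts then dominates which hospitals fall in the SMR tail.

```python
import numpy as np

# Invented parameters for illustration; not Girling et al's published model.
rng = np.random.default_rng(1)
n_hosp, admissions = 1000, 30000
death_risk = 0.043                       # overall death risk per admission
prev_mean = 0.06 * death_risk            # ~6% of deaths preventable on average
prev_risk = rng.normal(prev_mean, prev_mean / 6, n_hosp).clip(min=0)

# Observed deaths combine a fixed non-preventable risk with the hospital's
# preventable risk; the expected count assumes the average risk everywhere,
# because risk adjustment cannot see the preventable component.
deaths = rng.binomial(admissions, death_risk - prev_mean + prev_risk)
expected = admissions * death_risk
smr = deaths / expected

in_top = lambda x: x > np.quantile(x, 0.975)   # top 2.5% of hospitals
ppv = (in_top(smr) & in_top(prev_risk)).sum() / in_top(smr).sum()
print(f"P(top 2.5% preventable mortality | top 2.5% SMR) = {ppv:.2f}")
```

With a preventable component this small relative to chance variation, the printed probability is low: most hospitals flagged by the SMR are not among the true worst performers for preventable mortality, echoing the published result.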
Should hospital mortality rates be used to judge quality of care?
Mortality rates have powerful face validity that makes them an attractive choice for a quality indicator. Even if it is unlikely that high risk-adjusted mortality reflects preventable mortality, many would argue that investigation of quality of care is essential. Investigation could involve evaluating a range of measures, including process measures or staff and patient feedback, rather than relying on risk-adjusted mortality alone. However, such investigation needs to take into account the limitations of risk adjustment and the weak and conflicting evidence that variation in mortality rates can be attributed to variation in healthcare. The process of investigation may itself be subject to confirmation bias, whereby examples of poor care that can be observed in any hospital are used to explain a high mortality rate in a hospital selected for investigation. Any improvement in the mortality rate after intervention may be due to bias or regression to the mean. Failure to take these issues into account could lead to a wasteful cycle of investigation, intervention and apparent improvement that is based entirely on random variation or bias.
Risk-adjusted mortality data present hospitals and the health service with a complex and challenging problem. The investigation into Mid Staffordshire NHS Foundation Trust3 has shown that they cannot be ignored, but the available evidence suggests that they cannot be used to judge quality of care reliably. This uncertainty suggests that presentation of mortality data in the form of league tables is inappropriate, and that estimates of numbers of preventable deaths based on risk-adjusted mortality estimates are unlikely to be accurate. Further investigation of the causes of variation in risk-adjusted mortality is required, but this should not be limited to hospitals with high mortality rates, and should involve mixed research methods with in-depth exploration of context, quality of care and workforce dynamics. In the meantime, care should be taken not to label hospitals with high mortality rates as ‘failing’, as this may lead to a spiral of decline, demoralisation and a drift away from the culture of caring demanded by the Francis report.3,47
Acknowledgments
We thank Marcin Klingbajl, Suzanne Mason, Fiona Lecky, Richard Wilson, Jon Nicholl, Alicia O'Cathain, Ravi Maheswaran and Beryl Darlinson for commenting on this paper and contributing references.
References
Footnotes
- Contributors SG conceived the idea for the paper and wrote the first draft. All authors contributed to redrafting the paper and approved the final draft.
- Competing interests None.
- Provenance and peer review Not commissioned; externally peer reviewed.