It is estimated that approximately 60 000 out-of-hospital cardiac arrests (OHCA) occur in the UK each year.1 2 Resuscitation is attempted by emergency medical services (EMS) in <50% of cases, with non-resuscitation decisions being made according to national guidance.3 The Ambulance Service Association first noted variability in outcomes from cardiac arrest between 2004 and 2006, with return of spontaneous circulation rates ranging from 10% to 25%.1 Recent data from the Scottish and London Ambulance Services confirm similar variability, with survival to discharge rates of 1%4 and 8%,5 respectively.
As part of the focus on improving quality of care, the Department of Health for England introduced survival from cardiac arrest as part of the Ambulance Service National Quality Indicator set in April 2011. Return of spontaneous circulation and survival to hospital discharge rates are reported for all patients who have resuscitation (advanced or basic life support) started or continued by an NHS ambulance service after an out-of-hospital cardiac arrest.6 The first results were published in September 2011 and are summarised in figure 1. Incidence data are not reported; however, a surrogate measure comparing the number of cardiac arrests with the total number of category A 999 calls shows more than threefold differences between services (range 5.2–17.6 per 1000 category A 999 calls). Survival rates similarly show 3–5-fold variability (13.3–26.7% for return of spontaneous circulation on arrival at the emergency department and 2.2–12% for survival to discharge). This variability persists even after adjustment for the Utstein comparator group (cardiac arrest of presumed cardiac origin, where the arrest was bystander witnessed and the initial rhythm was ventricular fibrillation or ventricular tachycardia).
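To make the surrogate measure and fold-variation calculations above concrete, the short Python sketch below recomputes them for a handful of entirely hypothetical services; the service names and figures are illustrative and are not drawn from the published Quality Indicator data.

```python
# Hypothetical figures illustrating the surrogate incidence measure (cardiac
# arrests per 1000 category A 999 calls) and the fold variation in survival to
# discharge described above. None of these numbers are real service data.
services = {
    # service: (cardiac arrests attended, category A 999 calls, survivors to discharge)
    "Service A": (950, 180_000, 21),
    "Service B": (1_400, 110_000, 120),
    "Service C": (800, 95_000, 60),
}

survival_rates = []
for name, (arrests, cat_a_calls, survivors) in services.items():
    incidence_per_1000 = 1000 * arrests / cat_a_calls   # surrogate incidence measure
    survival = 100 * survivors / arrests                 # survival to discharge (%)
    survival_rates.append(survival)
    print(f"{name}: {incidence_per_1000:.1f} arrests per 1000 category A calls, "
          f"{survival:.1f}% survival to discharge")

print(f"Fold variation in survival: {max(survival_rates) / min(survival_rates):.1f}x")
```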
The finding of variation in outcomes from cardiac arrest is not new. Nichol et al identified evidence of regional variation in both incidence and outcomes from OHCA in 10 North American sites.8 There was a more than twofold difference in incidence (rates ranging from 71 to 160 per 100 000 population) and similar variability in the number of cases where the decision was taken to start resuscitation. When resuscitation was attempted there was marked variation in survival rates (range 3.0–16.3%).8 The 2010 systematic review by Berdowski et al of global incidence and outcomes from OHCA identified 67 studies. There was more than 10-fold global variation in OHCA incidence and outcome, with an average survival to discharge of 7%.2
One purpose of the Quality Indicators is to benchmark performance between ambulance services, thus serving as a tool to drive improvements in quality of care.9 However, before this can be achieved a better understanding of the sources of variability is required so that resources can be appropriately targeted. Variability may arise from differences in administrative processes: for example, the methods used for case ascertainment and outcome identification, and the strategies for managing missing data and ensuring data quality. Patient factors (eg, body mass index,10 race,11 social deprivation,12 location of cardiac arrest13) and factors related to the emergency response (eg, bystander cardiopulmonary resuscitation (CPR),14 access to automated external defibrillators,15 EMS response time16 and quality of CPR17) also contribute. Treatments provided in hospital after return of spontaneous circulation can also affect survival to hospital discharge.18 19
So how can these data be used to improve patient outcomes? First, we need to understand the source of variability. As an initial step, consistency in case and outcome identification processes is required. Although guidance is provided on the identification of cases,6 the way this is implemented in practice may vary. In the context of the PaRAMeDIC trial20 we have identified at least four routes for case identification within a single ambulance service. These include (1) crew reporting; (2) screening advance medical priority dispatch system codes (previously reported to have a sensitivity of 65%1); (3) electronic searches of patient report forms for diagnostic codes for cardiac or respiratory arrest; and (4) screening case report forms for key variables (zero pulse, zero respiration, defibrillation, intravenous epinephrine). Evaluation of the first 100 cases showed at least 20% variability depending on which data source was used for case identification. Of similar importance to case identification, consistent systems for measuring outcome are required. There are currently multiple approaches in use within ambulance services. These vary from review of entries on the case report form to contacting hospital emergency departments or central records departments, checking local registry offices, reviewing coroner reports, searching the electronic patient record (summary care record) or using the Medical Research Information Service. The reliability and comparability of these different approaches are unknown.
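One simple way to quantify disagreement between case identification routes is to treat each route as a set of incident identifiers and compare every set against their union. The sketch below illustrates the idea with hypothetical identifiers; it is not the method used in the PaRAMeDIC trial.

```python
# Hypothetical incident identifiers returned by four case identification routes.
crew_reports   = {"C001", "C002", "C003", "C005"}
dispatch_codes = {"C001", "C002", "C004"}            # eg, dispatch system cardiac arrest codes
eprf_codes     = {"C002", "C003", "C004", "C006"}    # diagnostic codes on patient report forms
key_variables  = {"C001", "C003", "C005", "C006"}    # zero pulse/respiration, shock, epinephrine

routes = {
    "crew reporting": crew_reports,
    "dispatch codes": dispatch_codes,
    "ePRF diagnostic codes": eprf_codes,
    "key variable screen": key_variables,
}

all_cases = set().union(*routes.values())
for name, cases in routes.items():
    missed = all_cases - cases
    print(f"{name}: identified {len(cases)}/{len(all_cases)} cases, "
          f"missed {len(missed)} ({100 * len(missed) / len(all_cases):.0f}%)")
```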
Once greater reliability has been introduced into case and outcome identification, one can start to search for reasons for any remaining variability in incidence. A higher incidence may prompt review of primary and secondary prevention strategies for ischaemic heart disease21 or encourage targeted interventions, for example education programmes in high-risk communities. As cases (the denominator) are defined by the decision to start CPR, looking for regional variation in the application of recognition of life extinct criteria22 may be illuminating. Examining unsuccessful resuscitation attempts may identify gaps in end-of-life care provision or failed communication between primary care, secondary care and the ambulance service with respect to do not attempt resuscitation decisions.
Exploring process variables in the emergency response to cardiac arrest may be revealing. The quality of CPR is known to determine outcome in OHCA.23 Measuring compliance with CPR guidelines is now feasible through either manual data downloads from defibrillators or remotely by telemetry.24 If gaps in performance are identified, these can inform future training strategies. Examining the frequency of cardiac arrests and the timeliness of the emergency response (EMS arrival time, time to first shock) at specific locations may identify areas where the targeted deployment of automated external defibrillators and/or community first responder schemes may be beneficial.
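As an illustration of the kind of compliance check a defibrillator download could support, the sketch below derives two commonly reported CPR quality metrics, mean compression rate and chest compression fraction, from a made-up series of compression timestamps. Real device exports differ by manufacturer, and the thresholds used here are simplified.

```python
# Hypothetical compression timestamps (seconds from the start of the episode)
# downloaded from a defibrillator; the gap after 2.2 s represents a pause in CPR.
compression_times = [0.0, 0.55, 1.1, 1.65, 2.2, 5.0, 5.55, 6.1, 6.65, 7.2]
episode_duration = 8.0  # seconds of the episode analysed

intervals = [b - a for a, b in zip(compression_times, compression_times[1:])]
# Intervals longer than 1.5 s are treated as pauses rather than compression cycles.
compressing = [i for i in intervals if i <= 1.5]

mean_rate = 60 / (sum(compressing) / len(compressing))        # compressions per minute
compression_fraction = sum(compressing) / episode_duration    # proportion of time spent compressing

print(f"Mean compression rate: {mean_rate:.0f}/min (guidelines recommend 100-120/min)")
print(f"Chest compression fraction: {compression_fraction:.0%}")
```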
The patient pathway for OHCA involves critical interventions in the pre-hospital and in-hospital settings, yet it is rare for an evaluation to take a combined systems approach. Consideration should be given to linking pre-hospital data to hospital-based audit systems (eg, MINAP, the National Cardiac Arrest Audit, the Intensive Care National Audit and Research Centre Case Mix Programme) to allow a more complete picture to be built of the process of care.
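The sketch below illustrates, with entirely fictional records and field names, a minimal deterministic linkage of ambulance service cases to a hospital audit extract on a shared identifier. Real linkage to systems such as MINAP or the National Cardiac Arrest Audit would require information governance approval and more robust, often probabilistic, matching.

```python
# Fictional pre-hospital cases and a fictional hospital audit extract; field
# names are illustrative only and do not reflect any real audit dataset.
prehospital_cases = [
    {"incident_id": "C001", "nhs_number": "9434765919", "arrest_date": "2011-09-14"},
    {"incident_id": "C002", "nhs_number": None,          "arrest_date": "2011-09-15"},
]
hospital_audit = [
    {"nhs_number": "9434765919", "admission_date": "2011-09-14", "survived_to_discharge": True},
]

# Index the hospital records by NHS number for a simple deterministic join.
by_nhs_number = {r["nhs_number"]: r for r in hospital_audit}

for case in prehospital_cases:
    match = by_nhs_number.get(case["nhs_number"])
    if match and match["admission_date"] == case["arrest_date"]:
        outcome = "survived to discharge" if match["survived_to_discharge"] else "did not survive to discharge"
        print(f'{case["incident_id"]}: linked to hospital record, {outcome}')
    else:
        print(f'{case["incident_id"]}: no hospital record linked')
```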
The strategic transition by the Department of Health from time-focused performance standards to Quality Indicators heralds an important change in direction and allows us to start to focus on outcomes that are important to patients and their families. The inclusion of cardiac arrest survival as one of the indicators marks an important step towards quality improvement in OHCA. Building on what we can learn from these early insights necessitates extending data collection to include other key variables identified by the Utstein group.25 Collating these centrally in the form of a cardiac arrest registry, similar to initiatives in the USA26 and Europe,27 would allow adjustment for case mix and provide additional insight into ways in which we can improve outcomes from cardiac arrest through continuous data collection, analysis and reporting. Improving outcomes from OHCA needs a whole-systems approach with high-quality, comparable data to allow investigation of the quality of care as well as the timeliness of its delivery.
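As a sketch of what a minimal registry record restricted to a few Utstein-style variables might look like, and of how the Utstein comparator group described earlier could be applied when comparing services, consider the following; the field names are illustrative and are not the Utstein template itself, and the records are fictional.

```python
from dataclasses import dataclass

@dataclass
class ArrestRecord:
    """Illustrative registry record holding a small subset of Utstein-style variables."""
    service: str
    presumed_cardiac_cause: bool
    bystander_witnessed: bool
    initial_rhythm: str          # eg, "VF", "VT", "PEA", "asystole"
    bystander_cpr: bool
    survived_to_discharge: bool

def utstein_comparator(records):
    """Restrict to bystander-witnessed arrests of presumed cardiac origin with a shockable initial rhythm."""
    return [r for r in records
            if r.presumed_cardiac_cause
            and r.bystander_witnessed
            and r.initial_rhythm in ("VF", "VT")]

# Fictional records for illustration only.
registry = [
    ArrestRecord("Service A", True, True, "VF", True, True),
    ArrestRecord("Service A", True, False, "asystole", False, False),
    ArrestRecord("Service B", True, True, "VT", False, False),
]

comparator = utstein_comparator(registry)
survival = sum(r.survived_to_discharge for r in comparator) / len(comparator)
print(f"Utstein comparator group survival to discharge: {survival:.0%}")
```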
References
Footnotes
Funding GDP is funded by a Clinician Scientist Award from the National Institute for Health Research.
Competing interests MWC was involved in establishing the new DH ambulance national Quality Indicators and their continuing development.
Provenance and peer review Not commissioned; internally peer reviewed.