Twenty-five years of the trauma audit and research network: a continuing evolution to drive improvement
Fiona Lecky

Correspondence to Dr Fiona Lecky, University of Sheffield, Trauma Audit and Research Network University of Manchester, Salford Royal Hospitals NHS Foundation Trust, EMRiS Group, Health Services Research, School of Health and Related Research, University of Sheffield, Sheffield, S1 4DA, UK; Fiona.lecky@manchester.ac.uk, f.e.lecky@sheffield.ac.uk


In 1988, the Royal College of Surgeons (England) Committee on Trauma—chaired by Professor Sir Miles Irving—reviewed the anonymised case notes of 1000 UK patients with major injuries who had died after reaching hospital alive. Each patient record was scrutinised by four clinical experts in trauma care with a brief to state whether or not the death could have been prevented by better care—such as that available in a late 1980s North American level 1 trauma centre. The majority of reviewers found that one-third of trauma deaths were preventable by this yardstick—two-thirds of those dying from haemorrhage. This ‘preventable death review’ has since been consigned to the 20th century as a research technique because it is highly prone to bias (it is almost impossible not to conclude that errors were contributory when death is the known outcome); it is nevertheless a seminal study—not least because the authors understood the limitations of their findings. In making their recommendations the Committee highlighted failings throughout the trauma patient pathway (scene to rehabilitation) and a general lack of evidence on institutional (hospital/network/system) performance.1 Their recognition of the need to address this gap led David Yates (DWY), Professor of Emergency Medicine, and Maralyn Woodford (MW), co-author on the RCS report—both from Hope (now Salford Royal) Hospital—to pick up the baton.

To this end Hope Hospital became the sole subscribing UK member of the then USA Major Trauma Outcome Study (US MTOS)—at that time in the vanguard of using risk-adjusted mortality rates to inform quality of care comparisons. The US MTOS, led by Howard Champion, Wayne Sacco, William Boyd and colleagues at the Washington Hospital Centre, built on the extraordinary development of the Injury Severity Score (ISS) by Susan Baker et al and the Abbreviated Injury Scale (AIS) from the Association for the Advancement of Automotive Medicine.2,3 No two injured patients are alike, so comparing hospitals in terms of crude mortality—or any crude outcome—is meaningless given the potential for infinite case-mix heterogeneity. AIS and ISS allowed quantification of the threat to life from this infinite variety and severity of injury to organs and tissues. Champion et al4 combined the ISS with measures of host frailty (age less than or greater than 55 years) and physiological derangement (Revised Trauma Score), with modification for mode of injury (blunt/penetrating), to generate ‘survival probabilities’ for each individual patient—using binary logistic regression of their large trauma patient database. Rather than stating whether each injured patient ‘should or should not survive’, a probability from 0% to 100% was given, taking account of all these characteristics. This in turn enabled an expected number of survivors to be calculated for each hospital by summing its patients' survival probabilities, which could be compared with the observed number of survivors—with appropriate confidence limits.5
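The calculation described above can be sketched as follows. The coefficients shown are the widely quoted MTOS values for blunt trauma and are illustrative only; TARN's current risk-adjustment model uses different covariates and recalibrated coefficients:

```python
import math

# Illustrative MTOS-era coefficients for blunt trauma (for this sketch only;
# TARN's current model is recalibrated and uses different covariates).
B0, B_RTS, B_ISS, B_AGE = -0.4499, 0.8085, -0.0835, -1.7430

def survival_probability(rts: float, iss: int, age: int) -> float:
    """TRISS-style probability of survival for one blunt-trauma patient."""
    age_index = 1 if age >= 55 else 0        # host frailty dichotomised at 55
    b = B0 + B_RTS * rts + B_ISS * iss + B_AGE * age_index
    return 1.0 / (1.0 + math.exp(-b))        # binary logistic model

def expected_survivors(patients: list[tuple[float, int, int]]) -> float:
    """A hospital's expected survivor count: the sum of its patients'
    individual survival probabilities."""
    return sum(survival_probability(rts, iss, age) for rts, iss, age in patients)
```

Comparing this expected count with the observed number of survivors, with appropriate confidence limits, yields the comparative statistic the text describes.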

Yates et al promoted this statistically based approach to quantifying preventable mortality and received a Department of Health grant to collect data and measure this in 13 North West hospitals in 1989. By 1990 there was a meaningful dataset; the UK Major Trauma Outcome Study (UK MTOS) was born. All agreed that measurement of outcomes without information on care processes was less than helpful; hence the dataset contained early process indicators such as seniority of clinician in the emergency department (ED) and operating theatre, and times from scene to ED arrival, imaging and surgery. By 1992, 33 hospitals had joined and the BMJ published the first report of UK MTOS, highlighting a discrepant outcome for blunt trauma patients when compared with the US-derived norm.6 By 1996, 100 trauma-receiving hospitals in England, Wales, the Republic of Ireland and Copenhagen were members, receiving quarterly reports comparing their own risk-adjusted outcomes and process indicators with those of other hospitals in a confidential, anonymised fashion. Risk adjustment had been refined by Hollis et al, moving from Z statistics to direct standardisation of each hospital's performance on the national case mix (standardised W score).7 The MTOS Steering Group of expert trauma clinicians and academics oversaw these developments and advised on how the team addressed the many and varied challenges that the project encountered over the subsequent 20 years—the first, of course, being the need to move from a ‘study’ that implied closure in the foreseeable future to an ongoing endeavour; hence the rebranding as the Trauma Audit and Research Network (TARN) with an executive led by MW with DWY as Board Chair. At this point the Department of Health made it known that it expected trusts to fund their own participation, and MW liaised very successfully with member trusts to this effect.
Laura White joined TARN over this time as a young injury coder and now—as national training manager—trains hospitals in data collection and quality assures the AIS coding of injuries by the TARN team.
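In simplified form, the W score reported to hospitals can be read as excess survivors per 100 patients treated; the direct standardisation onto the national case mix described by Hollis et al is omitted from this illustration:

```python
def w_statistic(observed_survivors: int, expected_survivors: float,
                n_patients: int) -> float:
    """Simplified W score: excess survivors per 100 patients treated.

    Positive values indicate more survivors than the risk-adjustment
    model predicts for that hospital's case mix; negative values, fewer.
    """
    return 100.0 * (observed_survivors - expected_survivors) / n_patients
```

For example, a hospital observing 95 survivors against 90 expected among 100 patients would score W = 5: five more survivors per 100 patients than its case mix predicts.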

At this point interrupted time series analyses became possible and it became clear that significant improvements in risk-adjusted mortality in England and Wales had occurred in the first years of the study (up till 1993)8—probably attributable to advanced trauma and paediatric life support (ATLS, APLS) training of clinicians and nurses. A further analysis from 1989 to 2000 suggested that a patient mortality outcome plateau was occurring from 1994 onwards, perhaps not surprising as little had changed in terms of trauma care processes and the Department of Health had decided not to move forward with the trauma centre/trauma system approach.9 Not long after I became research director (2000), I had the privilege of working with the Society of British Neurological Surgeons, who recognised TARN's high-quality case ascertainment/coding of traumatic brain injury (TBI). It became clear at that point that TBI was the major driver of mortality within TARN and that a postcode lottery applied in terms of access to specialist neuroscience care—particularly for patients with severe diffuse brain injury. It was this, rather than undiagnosed haemorrhage, that probably represented the biggest systemic trauma care failure in England at this point. The Lancet published these findings in 2005.10 The National Institute for Health and Care Excellence (NICE) based a recommendation on this finding in the 2007 updated head injury guidance, and a halving in TBI case fatality has been associated with increasing transfer to neuroscience centres over the 2003–2009 period, although NICE's other recommendations around better access to early CT imaging also probably contributed to these improved outcomes.11

TARN and the trauma community faced many challenges over the noughties; NHS acute trusts became ‘star rated’ in 2004. Trauma outcomes were not part of this rating and TARN membership declined somewhat, impacting on funding/staffing, but the faithful—including long-standing member hospitals who continued to subscribe and submit data—never lost hope. TARN was able to demonstrate its value to the then Care Quality Commission (CQC) through updating its risk adjustment model to better fit European trauma and detecting/addressing trauma care mortality outliers in a professional way that protected patients while engaging clinicians and commissioners. The CQC provided funding to move from paper-based data collection, with all its limitations, to the current electronic data collection and reporting system—which owes much to the vision and expertise of Peter Oakley—and worked with TARN/member trusts to deanonymise TARN reporting. In 2007, TARN became the second national clinical audit (NCA)—after the cardiac surgeons—to publicly report trust risk-adjusted mortality and care process figures, after MW had rolled out a staged programme of deanonymisation with the member trusts. Trusts use this TARN analysis to quality assure their trauma care at multidisciplinary trauma meetings; the public availability aids transparency and accountability. Internationally, TARN was the first trauma registry to publicly identify trust performance. During this time DWY also demitted from chairmanship of the TARN Board; this role has since 2006 been filled by Professor Tim Coats, who has had a pivotal role in promoting TARN and developing its strategy and structure over the past decade.

Although outcomes were improving, TARN research—where data were merged with those of other countries' regional trauma registries—indicated that the failure to specifically commission trauma networks was hampering progress and preserving suboptimal outcomes;12 this concurred with the findings of the 2007 NCEPOD report ‘Trauma: Who Cares?’ (which, through a more sophisticated peer review process than that of 1988, found that fewer than 50% of severely injured patients were receiving good care) and the findings of the National Audit Office (NAO)—which used TARN data—in its investigation of NHS trauma care. The NAO's findings were presented to the Parliamentary Public Accounts Committee, and Professor Keith Willett—then National Clinical Director for Trauma—was given a green light to forge ahead with developing a specification for the new NHS England trauma networks. Trauma is no longer the NHS Cinderella: we now have systems and networks with the transparency and accountability that TARN reporting provides to 100% of English and Welsh trauma-receiving hospitals. TARN continues to work with the current National Clinical Director, Professor Chris Moran, and specialist commissioning going forward. Early time series analysis of risk-adjusted outcomes is promising, particularly for the Major Trauma Centres,13 and TARN is working with Trauma Units to improve data quality. TARN's audit methodology has been endorsed by the National Office of Clinical Audit in the Republic of Ireland, where >90% of trauma-receiving hospitals now subscribe to TARN under this programme.

So TARN over its first quarter century has evolved from an interest group into part of the NHS quality assurance furniture. The exploitation of TARN data for research also continues to grow thanks to the enthusiasm of investigators (including past and present PhD students), effective academic/clinical collaboration nationally and internationally, and the continuing high-quality data collection, injury coding and analysis provided by the TARN staff in Salford—and the participating hospitals. TARN owes much to this committed constituency of members and researchers, and to outstanding leaders in Yates and Woodford. TARN is also fortunate to have the unique injury coding and training skills of Laura White, honed over two decades, Dr Omar Bouamra (and his modelling expertise) as its full-time statistician since 2001, Tom Jenks, who has been registry manager for a decade, and Antoinette Edwards (AE), who also joined in 2001 and is now deputy to MW. AE convenes the research committee, maintains the research portfolio and is leading on the NHS England-funded trauma PROMs pilot among other roles. We see in this current EMJ issue some of the potential research uses of TARN data, which range from updating risk adjustment to include meaningful measures of comorbidity, a gap analysis of the potential application of resuscitative endovascular balloon occlusion of the aorta, characterising the changing major trauma demographic, and the differences between children suffering major injury from suspected child abuse when compared with accidental injury; others are described in a separate article, ‘Top ten TARN papers’, as voted for by the TARN Research Committee.
The biggest research impact among these 10 came from the aforementioned collaboration with the Society of British Neurological Surgeons,10,11 which in 2005 provided real-world evidence for the NICE 2007 recommendation to improve access to neurocritical care for patients with diffuse brain injury—and then in 2011 closed the loop by showing that access and mortality had improved. TARN is part of an international family of trauma registries and remains in the University of Manchester, with strong links to the School of Health and Related Research at the University of Sheffield and the University of Leicester. TARN also has a thriving paediatric interest group, ‘TARNlet’, which was initially chaired by Dr Evelyn Dykes and is now co-led by Dr Ian Maconochie and Mr Ross Fisher. TARN has recently renewed its Board with increased patient and public involvement, and has appointed Directors of Clinical Audit Dhushy Kumar and Iain MacFadyen, along with members of the new Audit Committee, to enable maximum benefit from TARN reporting.

So what have we learnt from this evolution? My personal view is that it represents possibly the best NHS evidence that NCA can work for patients, clinicians, hospitals, clinical networks and funders. Effective NCA needs rigorous methodology, independence and a publicly available reporting product that is meaningful for all these contributors—one they can buy into with interest, data and resources where needed to keep updating and evolving the enterprise. Any NCA that succeeds in this regard will always provide a vital real-world research resource. The foundations of TARN were laid by giants—the current successes are a tribute to their endeavours and a legacy to be celebrated. This was recognised by our silver anniversary award to David Yates at the 2015 Trauma Care meeting. As long as TARN recognises the lessons of its evolution, it can look forward to the future with humility and purpose.

Footnotes

  • Contributors The author drafted the whole of the commentary, which was requested for this anniversary edition by the Journal Editorial Team.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
