
Uncontrolled before-after studies: discouraged by Cochrane and the EMJ
Steve Goodacre

Correspondence to Professor Steve Goodacre, Medical Care Research Unit, University of Sheffield, 23 Nairn Street, Sheffield S10 1UL, UK; s.goodacre@sheffield.ac.uk


The simplest way to evaluate an intervention is to compare outcomes before and after implementation of the intervention. This method is known as an uncontrolled before-after study. The term ‘uncontrolled’ is used to distinguish this design from a controlled before-after study in which the before-after effect of implementation in the intervention group is compared with a control group that has no intervention. With the recent emphasis on improvements in healthcare delivery in particular, the number of uncontrolled before-after studies is increasing. Unfortunately, these present a problem.

The use of historical controls in clinical trials has long been recognised to overestimate the benefit of new treatments,1 and has provided misleading information in emergency medicine.2,3 For example, uncontrolled before-after data suggested that intravenous thrombolysis could reduce mortality in cardiac arrest,2 whereas subsequent randomised data showed that this treatment increased intracranial haemorrhage and did not reduce mortality.3 Emergency physicians would not generally consider basing their clinical practice on uncontrolled before-after data.

However, many of the important research questions in emergency care relate to the delivery and organisation of services. Emergency department crowding is one of the biggest problems we face4 and there is a desperate need for carefully evaluated interventions to reduce or mitigate crowding. Controlled evaluation of health system interventions is difficult and there are theoretical reasons to anticipate that the selection biases that render before-after comparisons of individual patient treatments unreliable may be less severe for service level interventions. For example, if outcomes are measured over time across a whole population (eg, all emergency department attendees) then there is little potential for selection bias. Furthermore, it may be impossible or impractical to randomise patients to service level interventions, creating a justification for non-randomised methods in this situation.

These are reasonable points but should not go unchallenged. The risk of selection bias is only reduced if there is no opportunity for patients, providers or researchers to select into or out of the intervention group. The design cannot control for contemporaneous changes in case mix, referral patterns or other elements of care. The intervention may be accompanied by other changes in care or additional resources that confound any attempt to infer causality directly. Most before-after studies do not continue long enough to determine whether the intervention and its apparent effect are sustainable. In the light of such concerns, the Cochrane Effective Practice and Organisation of Care (EPOC) Group strongly discourages inclusion of uncontrolled before-after studies in EPOC reviews because it is difficult, if not impossible, to attribute causation from such studies.5

The justification of necessity, based on this difficulty of undertaking randomised trials, should also be questioned. Randomising individual patients to different methods of service delivery may be impractical or impossible but cluster randomised methods, such as randomising periods of time to one service or another, are feasible and have been used in emergency care.6 Controlled before-after studies can be used to adjust for changes over time by, for example, collecting data from other hospitals that did not implement the intervention.7 Interrupted time series and repeated measures studies can be used to show that a change in outcome is clearly related to implementation of the intervention rather than a general ongoing trend over time.8 These methods may take more time, resources and expertise (especially statistical) but they will provide much more robust and convincing evidence.
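To illustrate the last of these approaches: an interrupted time series is often analysed with a segmented regression model. A minimal sketch of one common formulation (not necessarily the model used in the studies cited here) is

\[ Y_t = \beta_0 + \beta_1 t + \beta_2 X_t + \beta_3 (t - t_0) X_t + \varepsilon_t \]

where Y_t is the outcome at time t, X_t is an indicator taking the value 1 after the intervention at time t_0, \beta_1 captures the pre-existing (baseline) trend, \beta_2 the immediate change in level at implementation and \beta_3 the change in trend thereafter. It is the separation of \beta_2 and \beta_3 from the baseline trend \beta_1 that allows an intervention effect to be distinguished from a general ongoing trend over time.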

The study by Borde et al,8 published online, provides an example. They used uncontrolled before-after data to evaluate the impact of an antibiotic stewardship programme upon antibiotic use in the emergency department. A simple before-after comparison would not have differentiated between the effect of the intervention upon antibiotic use and trends in antibiotic use over time. However, by using interrupted time series analysis they were able to determine the effect of the intervention while taking into account baseline trends over time. Ideally, they could have used cluster randomisation to compare hospitals implementing the programme with those not implementing it, but this would have been prohibitively difficult and expensive to undertake.
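For readers who wish to see how such an analysis can be implemented, the following is a minimal sketch of the segmented regression outlined above, fitted to simulated monthly data; the variable names and values are illustrative assumptions and are not taken from Borde et al.

# A minimal sketch of segmented regression for an interrupted time series,
# fitted to simulated monthly data (not the data of Borde et al).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_months = 48                                # 24 months before and 24 after the intervention
t = np.arange(n_months)
post = (t >= 24).astype(int)                 # indicator: 1 after the intervention
time_since = np.where(post == 1, t - 24, 0)  # months elapsed since the intervention

# Simulated outcome: a gently rising baseline trend, then a drop in level
# and a downward change in slope after the intervention, plus random noise.
use = 100 + 0.5 * t - 10 * post - 0.8 * time_since + rng.normal(0, 3, n_months)

data = pd.DataFrame({"use": use, "t": t, "post": post, "time_since": time_since})

# The coefficient on 't' estimates the baseline trend, 'post' the immediate
# change in level, and 'time_since' the change in trend after the intervention.
model = smf.ols("use ~ t + post + time_since", data=data).fit()
print(model.summary())

In a real analysis the residuals would also need to be checked for autocorrelation and seasonality, which is one reason such studies demand more statistical expertise than a simple before-after comparison.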

At the EMJ, we realise that a before-after design may be the most practical method for studying an intervention, and we will continue to publish such studies. However, priority will be given to those that use cluster randomisation, a control group or, where a control is not used, an interrupted time series analysis. Strong justification will be needed as to why these methods were not used, and, in those cases, conclusions will need to avoid inappropriately inferred causality while still convincing readers that they provide meaningful new knowledge. Publication of such uncontrolled studies is therefore likely to be limited to exceptional circumstances, such as evaluation of a national change in policy that would otherwise go unevaluated, or pilot studies of important innovations that provide evidence of proof of concept prior to robust evaluation.

References


Footnotes

  • Competing interests None.

  • Provenance and peer review Not commissioned; internally peer reviewed.
