Editorials

Please bypass the PORT

BMJ 1994; 309 doi: https://doi.org/10.1136/bmj.309.6948.142 (Published 16 July 1994) Cite this as: BMJ 1994;309:142
T A Sheldon

    Struggling to contain health care costs, the United States has devoted considerable attention to health technology assessment, producing clinical guidelines and measures of appropriateness, organising consensus conferences, and carrying out studies of practice variation and “outcomes research.” Serious reservations have been expressed about the scientific validity of some of these approaches. Controversy focuses particularly on the use of routine health records to assess the effectiveness of treatments. Such studies conflict with the British tradition of using randomised controlled trials. Last year the New York Academy of Sciences brought together the exponents of different approaches to evaluating health care. The results have now been published in a book, which also provides edited accounts of the often heated discussion.1

    When Congress created the Agency for Health Care Policy and Research in 1989 it was anxious for quick results, assuming that analysis of databases would provide technical solutions to the United States' health care crisis.2 Fifteen patient outcomes research teams (PORTs) with a planned budget of nearly $200m were established in clinical areas such as ischaemic heart disease, acute myocardial infarction, diabetes mellitus, prostatic disease, and back pain. A team works principally by combining the findings from a literature review with an analysis of routine observational data generated mainly from databases held by Medicare, insurance companies, and hospitals. Though many of the reviews and guidelines produced by the Agency for Health Care Policy and Research are excellent, the central element of the outcomes research teams' strategies is fundamentally flawed,3 and this book reveals the weak scientific foundations of the programme.

    One of the most important methods for assessing whether treatments really do more good than harm is the randomised controlled trial, made famous by Bradford Hill half a century ago. As long as they are sufficiently large, such trials are valid methods of evaluating interventions because, if patients are randomised to alternative treatment groups, differences in outcome between the groups can be more confidently attributed to the difference in treatments received.4

    In non-randomised observational studies, however, patients receiving different treatments may differ systematically with respect to any number of known and unknown factors that affect prognosis. These include the severity of the main and accompanying disease, clinical setting, and clinician. Although statistical adjustments may be made in an attempt to exclude the effects of these confounders (and thus isolate any differences due solely to the treatment), this assumes both a complete knowledge of the confounding variables and their comprehensive and accurate measurement. Neither is likely to be possible, and at least a moderate bias will remain.5 As most common treatments that interest us will probably have only moderately sized effects (though with a large absolute benefit in large populations) the ability to exclude even moderate effects of confounding is vital.
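
    A toy simulation makes the point concrete. In the hypothetical sketch below (written in Python purely for illustration; all variables and parameter values are invented), an unmeasured severity variable drives both the choice of treatment and the outcome. Adjusting for a noisy proxy of severity reduces but does not remove the bias, whereas randomisation recovers the true effect.

```python
# Hypothetical sketch: residual confounding after statistical adjustment
# versus randomisation. Every name and number here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unobserved disease severity affects both treatment choice and outcome.
severity = rng.normal(size=n)
# Clinicians preferentially treat sicker patients (confounding by indication).
treated = rng.random(n) < 1 / (1 + np.exp(-severity))
# True treatment effect: a moderate benefit of -0.2 on the outcome scale.
outcome = 0.5 * severity - 0.2 * treated + rng.normal(size=n)

# Naive observational contrast: biased by the severity difference.
naive = outcome[treated].mean() - outcome[~treated].mean()

# Adjust for a noisy proxy of severity (incomplete, inaccurate measurement).
proxy = severity + rng.normal(scale=1.0, size=n)
X = np.column_stack([np.ones(n), treated, proxy])
adjusted = np.linalg.lstsq(X, outcome, rcond=None)[0][1]

# Randomised assignment: treatment is independent of severity.
rand = rng.random(n) < 0.5
outcome_rct = 0.5 * severity - 0.2 * rand + rng.normal(size=n)
rct = outcome_rct[rand].mean() - outcome_rct[~rand].mean()

print("true effect           : -0.20")
print(f"naive observational   : {naive:+.2f}")     # sign may even flip
print(f"adjusted (noisy proxy): {adjusted:+.2f}")  # residual bias remains
print(f"randomised comparison : {rct:+.2f}")       # close to the truth
```

    Because the proxy measures severity with error, the adjusted estimate sits between the naive and true values; only randomisation breaks the link between prognosis and treatment assignment.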

    Despite its considerable cost the American programme does not seem to have made a substantial contribution to our knowledge of effectiveness in any field through the analysis of observational data. This is in stark contrast to the contribution made by large scale simple randomised controlled trials and properly conducted overviews of such trials. For example, in this book Peto and colleagues describe four major examples: the survival gains achieved with thrombolysis for acute myocardial infarction, the use of aspirin for people at high risk of thrombotic events, adjuvant treatment for early breast cancer, and the evidence for a lack of benefit with infusions of magnesium in suspected myocardial infarction. Some of the overviews in the Cochrane Pregnancy and Childbirth database have contributed to the quality of care for women using maternity services.6

    In Britain the lack of routinely collected data on health care process and outcome, though a national disgrace, has largely protected us from the mirage of quick and easy answers from analyses of databases. The government's misuse of the limited data that it collects, such as school examination results and hospital waiting times, to produce crude and meaningless league tables for comparing institutions should sound a cautionary note.7

    No short cuts exist for obtaining reliable information on effectiveness. Large multicentre simple randomised controlled trials and meta-analyses of trials can answer reliably a wide range of questions about the effectiveness of treatments. The challenges now are to design trials that provide answers to more clinically relevant questions, such as which patients stand to benefit most.8 We need to ensure that meta-analysis is used appropriately and incorporates analysis of adequate sensitivity9; we need to include the outcome measures that matter to patients; and we need to work harder to get the results of research into practice.
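
    The arithmetic underlying such an overview is straightforward, as the sketch below illustrates with invented trial data: each trial's log odds ratio is weighted by the inverse of its variance and the weighted results are pooled.

```python
# Hypothetical sketch of fixed-effect inverse-variance pooling; the trial
# figures are invented for illustration only.
import math

# Each tuple: (events_treated, n_treated, events_control, n_control)
trials = [(49, 615, 67, 624), (44, 758, 64, 771), (102, 832, 126, 850)]

weights, effects = [], []
for a, n1, c, n0 in trials:
    b, d = n1 - a, n0 - c
    log_or = math.log((a * d) / (b * c))  # log odds ratio for this trial
    var = 1/a + 1/b + 1/c + 1/d           # Woolf's variance estimate
    effects.append(log_or)
    weights.append(1 / var)

pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled odds ratio {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```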

    Last year the patient outcomes research teams seemed secure. In the face of the sort of criticism summarised in this book, however, a major shift was recently announced for the second phase, emphasising other methods such as clinical trials. This is a welcome change: diverting money from relatively cost effective trials to uninformative analyses of databases may do more harm than good.

    References

    1.
    2.
    3.
    4.
    5.
    6.
    7.
    8.
    9.