Frequently, having formulated a three-part question and searched and read the literature, we find that the evidence does not answer our original question well. Describing what was found, and how close it came to answering our question, is usually left to the authors, who may tabulate the results (such as those presented by the BETs from Manchester Royal Infirmary). An alternative is to present the results in graphic form, and this may be thought of as complementary to the tabulated method, in much the same way as forest plots have been used to present the data of meta-analyses.
The method consists of constructing four axes (fig 1). The first three relate to the question: specifically, the patient population, the intervention (and its comparison), and the outcome(s). Each of these axes is equally important, as failure along any one of them will seriously affect the usefulness of that paper. The fourth axis relates to the quality of the paper, using the grading system advocated by the Centre for Evidence Based Medicine.
Next, how can each of the axes be used to indicate how good the evidence is? Experience indicates that, having read the literature, the different categories of patient populations, interventions, and outcomes that were studied will become clear, and we will have a sense of how near to or far from our original question they are. The axes can be arbitrarily marked to indicate how close they are to our ideal study. In this issue (see page 453) the use of lignocaine (lidocaine) as a pretreatment for rapid sequence intubation in head injuries is presented both in the BET format and in this graphical method. It was clear that the patient population was patients with acute traumatic brain injury, not those with brain injury studied after 24 hours or those with other brain disorders. Similarly, the intervention was rapid sequence intubation with or without pretreatment, and not other anaesthetic techniques that did or did not include intubation. The outcome of interest was neurological status on discharge, and not the surrogate markers of intracranial pressure or spinal pressure. In this way one can get a sense of how close the literature is to our question—in other words, how good the "fit" of the literature is to the question.

The intervals we use between the different categories along each axis are arbitrary, and while most will agree with the ranking, many will not agree that the intervals between each category are equal, as presented here. However, readers can use their own experience, and that of their patients, to decide the size of these intervals. The final axis relates to the quality of the paper, and this likewise has been presented with equal intervals between the categories. This axis is as important as the others—if the design is poor, the validity of the results will be weakened.
Potential criticisms of this method could include:
(1) The categories along the axes were derived after the literature review, and therefore there is a risk of bias. While this is true, the literature clearly has to be read first! However, the importance of constructing a clearly formed question (with its ideal population, intervention, and outcome) before searching, and of sticking to it, provides some protection against this criticism.
(2) Because the interval between the categories is arbitrary, the method adds little to what already happens. However, by making the intervals explicit and applying them in the same way to all the papers, the authors make their judgements transparent, and how they have arrived at their conclusions should be clear.
It is hoped that this device will be of help. The reader must decide.