
The challenges of systematic reviews of educational research

BMJ 2005; 331 doi: https://doi.org/10.1136/bmj.331.7513.391 (Published 11 August 2005) Cite this as: BMJ 2005;331:391
  Jill Morrison, professor of general practice, University of Glasgow, General Practice and Primary Care, Glasgow G12 9LX (jmm4y@clinmed.gla.ac.uk)

    Littlewood et al present the results of a systematic review of the evidence in the medical education literature about how early experience contributes to the basic education of health professionals.1 Increasingly, emphasis is being given to basing decisions about teaching practice on evidence, because the alternative is the PHOG approach: prejudices, hunches, opinions, and guesses.2 The review was carried out under the auspices of the Best Evidence Medical Education (BEME, http://www.bemecollaboration.org/) collaboration, which aims to promote best evidence medical education through the dissemination of information, the production of systematic reviews, and the creation of an evidence based culture. The collaboration attempts to synthesise the available evidence in a format that curriculum planners and others involved in medical education can use to decide how to provide the best learning opportunities for students.

    What are readers of the BMJ to make of this review? They are accustomed to a rather different kind of systematic review, one that predominantly evaluates the results of randomised controlled trials. As Littlewood et al point out, early experience is part of a complex curriculum intervention,1 so it does not lend itself to evaluation using simple experimental designs such as randomised controlled trials. BEME recognises that systematic reviews should not be restricted to randomised controlled trials, which may have high validity from the perspective of research methods but are expensive to undertake and may not be the most appropriate type of study to answer the questions raised.3

    Norman and Schmidt go further and say that educational trials are ill founded, ill advised, and a waste of time and resources.4 They argue that in educational trials there is no such thing as a blinded intervention, a pure outcome, or a uniform intervention.

    What is needed is for “multiple lenses to look at data from different perspectives,”3 but Harden and Lilley have described the challenge of identifying and evaluating the evidence as formidable.2 The evidence may not be available; the research method, the outcomes investigated, or the replication of the evidence may not be optimal; and the conclusions may not apply to an individual teacher in his or her particular setting. Of course, much the same is true of clinical evidence: we do not know the answers to many clinical questions because the evidence is unavailable or unconvincing, and research carried out on highly selected patients often cannot be generalised to an individual patient.

    The BEME collaboration endorses the principle that medical educators should base their teaching methods and approaches on the best available evidence. Littlewood et al have identified and evaluated the evidence about early experience for us. They freely discuss the limitations of their review but point to the rigour of its methods. The evidence in this review is as good as it gets for medical educators, but, as Harden points out, it is still up to the individual teacher to evaluate the evidence and arrive at the best approximation of the truth for his or her teaching practice.2

    Footnotes

    • Competing interests None declared.

    References

    1.
    2.
    3.
    4.