Education/original research
Improved Medical Student Satisfaction and Test Performance With a Simulation-Based Emergency Medicine Curriculum: A Randomized Controlled Trial

Presented as an abstract at the ACEP 2008 Research Forum, October 2008, Chicago, IL.
https://doi.org/10.1016/j.annemergmed.2009.03.025

Study objective

We determine the effect of a simulation-based curriculum on fourth-year medical student test performance and satisfaction during an emergency medicine clerkship.

Methods

This was a randomized controlled study using a crossover design for curriculum format and an anonymous end-of-rotation satisfaction survey. Students were randomized into 2 groups: one group started the rotation with simulation and the other with group discussion, and at midrotation each group crossed over to the opposite format. All students subsequently completed the same multiple choice examination. Using each student as his or her own control, we compared paired counts of questions missed for material taught in each format. Students rated satisfaction on a 5-point Likert scale framed as attitude toward simulation compared with group discussion, with scores ranging from 5, signifying strong agreement with a statement, to 1, signifying strong disagreement.
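For readers who wish to reproduce this kind of paired, within-student analysis, the following Python sketch illustrates one standard approach, a paired t test; the excerpt does not name the exact test the authors used, and the per-student counts and variable names below are hypothetical placeholders, not study data.

    from scipy import stats

    # Hypothetical paired counts of questions missed per student for material
    # taught in each format (one entry per student; pairs share an index).
    missed_discussion = [3, 2, 3, 1, 3, 2, 2, 4, 2, 3]
    missed_simulation = [2, 1, 3, 0, 2, 1, 2, 3, 1, 2]

    # Paired t test: each student serves as his or her own control,
    # which is the rationale for the crossover design.
    t_stat, p_value = stats.ttest_rel(missed_discussion, missed_simulation)

    # Mean within-student difference (discussion minus simulation).
    diffs = [d - s for d, s in zip(missed_discussion, missed_simulation)]
    mean_diff = sum(diffs) / len(diffs)

    print(f"mean difference = {mean_diff:.2f} questions, "
          f"t = {t_stat:.2f}, P = {p_value:.4f}")

Pairing the counts within students, rather than comparing the 2 groups directly, removes between-student variability in overall test ability from the comparison of formats.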

Results

Ninety students (99%) completed the multiple choice test. Significantly fewer questions were missed for material presented in simulation format compared with group discussion, with a mean difference per student of 0.7 (95% confidence interval [CI] 0.3 to 1.0; P=.006). This corresponds to mean scores of 89.8% for simulation and 86.4% for group discussion. Eighty-eight students (97%) completed the satisfaction survey. Students rated simulation as more stressful (mean 4.1; 95% CI 3.9 to 4.3), but also more enjoyable (mean 4.5; 95% CI 4.3 to 4.6), more stimulating (mean 4.7; 95% CI 4.5 to 4.8), and closer to the actual clinical setting (mean 4.6; 95% CI 4.4 to 4.7) compared with group discussion.
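The intervals reported above are consistent with standard two-sided t-based 95% confidence intervals for a mean, although the excerpt does not specify the method used. A minimal Python sketch of that calculation, using invented Likert ratings rather than the study data:

    import numpy as np
    from scipy import stats

    # Invented 5-point Likert ratings for one survey item
    # (5 = strong agreement, 1 = strong disagreement).
    ratings = np.array([5, 4, 5, 4, 4, 5, 5, 4, 5, 4, 3, 5])

    mean = ratings.mean()
    sem = stats.sem(ratings)  # standard error of the mean

    # Two-sided 95% t interval with n - 1 degrees of freedom.
    ci_low, ci_high = stats.t.interval(0.95, len(ratings) - 1,
                                       loc=mean, scale=sem)

    print(f"mean {mean:.1f} (95% CI {ci_low:.1f} to {ci_high:.1f})")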

Conclusion

A simulation-based curriculum yielded measurable benefits. Students demonstrated a small improvement in learning and were more satisfied with the simulation-based curriculum than with group discussion.

Introduction

In a 2005 systematic review of high-fidelity medical simulations, Issenberg et al1 identified the need for additional research to demonstrate the effectiveness of simulation in education and assessment. Their review found that only 20% of high-fidelity simulation studies addressed the effectiveness of learning or reported findings “that are clear and probably true”; the remaining 80% were judged to be “equivocal at best.” They concluded that although the quality of studies conducted to address learning with high-fidelity simulation is a limitation, “it works under the right conditions.”

Several subsequent studies that demonstrated simulation to be an effective training tool for clinical and teamwork skills were not designed to rigorously compare simulation with other methods of instruction.2, 3, 4, 5, 6, 7, 8, 9, 10 Ali et al11 evaluated the effect of simulation on written test results and demonstrated a statistically significant improvement in medical student performance on a multiple choice posttest after the addition of 2 hands-on trauma resuscitation stations to a trauma evaluation and management program. However, their intervention was an addition to the standard curriculum, not a comparison of simulation with the standard curriculum.

Studies evaluating the effect of simulation on test performance compared with didactic formats have not been conclusive. Wong et al12 conducted a randomized controlled study comparing the effectiveness of instruction with a human patient simulator with that of a standard lecture for second-year medical students learning about the pathophysiology of shock. The simulator group performed slightly better on the posttest than the lecture group, but the sample size was small and the results did not achieve statistical significance.12 Gordon et al13 compared a single intervention of simulator-based teaching with traditional lecture-based instruction for third-year Harvard medical students. They found a significant difference between preintervention and postintervention performance on a short answer test in both groups, but the difference between the simulation and lecture groups was not significant.

Studies evaluating student satisfaction with human patient simulator sessions have demonstrated high satisfaction levels.14, 15, 16, 17 However, conclusive quantitative assessment of the degree of learning and student satisfaction achieved with simulation compared with didactic forms of teaching is lacking in the literature. To more comprehensively evaluate the effect of simulation on student test performance and satisfaction, we compared a simulation-based format for a medical student curriculum in emergency medicine with an established case-based group discussion format.

Section snippets

Goals of This Investigation

Our primary goal was to compare the effectiveness of a simulation-based format for a fourth-year medical student emergency medicine clerkship curriculum with a case-based group discussion format, using scores on a standardized multiple choice final examination. The independent variable was teaching format, and the dependent variable was the number of incorrect answers on the final examination. Our secondary goal was to evaluate student satisfaction with simulation compared with the case-based group discussion format.

Results

During the study period, 91 of 91 eligible subjects were enrolled and completed informed consent. Ninety students (99%) successfully completed the clerkship, providing 90 sets of scores on the multiple choice questions. Two students completing the clerkship did not complete the satisfaction survey, leaving 88 responses (97%) assessing student satisfaction with simulation compared with the case-based group discussion format.

The total number of questions answered

Limitations

This study had several important limitations. First, although the number of students enrolled in the study was adequate to demonstrate a statistically significant difference in performance on a multiple choice test, the results are representative of students from a single medical school. There were a few visiting fourth-year students from other institutions, but the number was too small to assess external validity. Second, the mean examination score for the class was 88%, which was higher than

Discussion

Both group discussion and the simulation laboratory offer settings that are controlled, recordable, and safe.21 In addition to providing an interactive method of knowledge transfer, both formats allow students to gather information, develop a differential diagnosis, and solve problems. However, the simulation laboratory offers students the opportunity to practice additional clinical, technical, and teamwork skills compared with group discussion while still providing a safe environment in which

References (27)

  • Wayne DB, Didwania A, Feinglass J, et al. Simulation based education improves quality of care during cardiac arrest...
  • Ali J, et al. The simulated trauma patient teaching module—does it improve student performance? J Trauma. 2007.
  • Wong G, et al. A trend toward improved learning of cardiovascular pathophysiology in medical students from using a human patient simulator: results of a pilot study. Adv Physiol Educ. 2007.
Provide feedback on this article at the journal's Web site, www.annemergmed.com.

Supervising editor: Peter C. Wyer, MD

Author contributions: RPTE conceived the study, designed the trial, and obtained institutional review board approval. RPTE, MT, and JMB designed the curriculum, determined the learning objectives, and created the individual session debriefing slides. RPTE wrote the simulation programs and MT conducted a pilot session. RPTE and JMB conducted subject recruitment, simulations, and some of the group discussion sessions. RPTE managed the data, including quality control, and worked with the university's statistical consulting center for analysis of the data. RPTE drafted the article, and all authors contributed significantly to revisions. RPTE takes responsibility for the paper as a whole.

Funding and support: By Annals policy, all authors are required to disclose any and all commercial, financial, and other relationships in any way related to the subject of this article that might create any potential conflict of interest. See the Manuscript Submission Agreement in this issue for examples of specific conflicts covered by this statement. No external funding was involved in completion of this study, and none of the authors have any financial interest in the product studied or the company that produced it.

Publication date: Available online April 25, 2009.
