Quantifying learning in medical students during a critical care medicine elective: a comparison of three evaluation instruments

Crit Care Med. 2001 Jun;29(6):1268-73. doi: 10.1097/00003246-200106000-00039.

Abstract

Objective: To compare three different evaluation instruments and determine which are able to measure different aspects of medical student learning.

Design: Student learning was evaluated before and after a structured critical care elective by using a written examination, an objective structured clinical examination, and a computer-controlled patient simulator, each incorporating two clinical scenarios, in a crossover design.

Participants: Twenty-four fourth-year medical students enrolled in the critical care medicine elective.

Interventions: All students took a multiple-choice written examination; evaluated a live simulated critically ill patient, requested data from a nurse, and intervened as appropriate at different stations (objective structured clinical examination); and evaluated a computer-controlled patient simulator and intervened as appropriate.

Measurements and main results: Students' knowledge was assessed by using a multiple-choice examination containing the same data incorporated into the other examinations. Student performance on the objective structured clinical examination was evaluated at five stations. Both the objective structured clinical examination and the simulator test were videotaped for subsequent scoring of responses, quality of responses, and response time. The videotapes were reviewed for specific behaviors by faculty masked to the time of examination. Students were expected to perform the following: a) assess airway, breathing, and circulation; b) prepare a mannequin for intubation; c) provide appropriate ventilator settings; d) manage hypotension; and e) request, interpret, and provide appropriate intervention for pulmonary artery catheter data. Students were expected to perform identical behaviors during the simulator examination; however, the entire examination was performed on a whole-body, computer-controlled mannequin. The primary outcome measure was the difference in examination scores before and after the rotation. The mean preelective scores were 77 ± 16%, 47 ± 15%, and 41 ± 14% for the written examination, objective structured clinical examination, and simulator, respectively, compared with 89 ± 11%, 76 ± 12%, and 62 ± 15% after the elective (p < .0001). Prerotation scores for the written examination were significantly higher than those for the objective structured clinical examination or the simulator; postrotation scores were highest for the written examination and lowest for the simulator.

Conclusion: Written examinations measure acquisition of knowledge but fail to predict whether students can apply that knowledge to problem solving, whereas both the objective structured clinical examination and the computer-controlled patient simulator can be used as effective performance evaluation tools.

Publication types

  • Comparative Study

MeSH terms

  • Adult
  • Analysis of Variance
  • Clinical Clerkship
  • Clinical Competence
  • Critical Care*
  • Education, Medical, Undergraduate*
  • Educational Measurement / methods*
  • Female
  • Humans
  • Learning*
  • Male
  • Students, Medical*