
Papers

Clinical experience, performance in final examinations, and learning style in medical students: prospective study

BMJ 1998;316:345 doi: https://doi.org/10.1136/bmj.316.7128.345 (Published 31 January 1998)
  1. I C McManus (i.mcmanus@ucl.ac.uk), professor of psychology and medical education (a)
  2. P Richards, medical director (b)
  3. B C Winder, research coordinator (a)
  4. K A Sproston, research assistant (a)
  a. Academic Department of Psychiatry, Imperial College School of Medicine at St Mary's, London W2 1PG
  b. Northwick Park and St Mark's Trust, Northwick Park Hospital, Harrow HA1 3UJ
  Correspondence to: Professor I C McManus, Centre for Health Informatics and Multiprofessional Education, University College London Medical School, London N19 5NF
  • Accepted 5 August 1997

Abstract

Objective: To assess whether the clinical experience of undergraduate medical students relates to their performance in final examinations and whether learning styles relate either to final examination performance or to the extent of clinical experience.

Design: Prospective, longitudinal study of two cohorts of medical students assessed by questionnaire at time of application to medical school and by questionnaire and university examination at the end of their final clinical year.

Subjects: Two cohorts of students who had applied to St Mary's Hospital Medical School during 1980 (n=1478) and 1985 (n=2399) for admission in 1981 and 1986 respectively. Students in these cohorts who entered any medical school in the United Kingdom were followed up in their final clinical year in 1986-7 and 1991-2.

Main outcome measures: Students' clinical experience of a range of acute medical conditions, surgical operations, and practical procedures as assessed by questionnaire in the final year, and final examination results for the students taking their examinations at the University of London.

Results: Success in the final examination was not related to a student's clinical experience. The amount of knowledge gained from clinical experience was, however, related to strategic and deep learning styles both in the final year and at the time of application, five or six years earlier. Grades in A level examinations did not relate either to study habits or to clinical experience. Success in the final examination was also related to a strategic or deep learning style in the final year (although not at the time of entry to medical school).

Conclusions: The lack of correlation between examination performance and clinical experience calls into question the validity of final examinations. How much knowledge a student gains from clinical experience can be predicted from measures of study habits made at the time of application to medical school, some six years earlier, although not from results of A level examinations. Medical schools wishing to select students who will gain the most knowledge from clinical experience cannot use the results of A level examinations alone but could assess a student's learning style.

Key messages

  • Medical students with the most clinical experience do not perform best in final undergraduate examinations, throwing some doubt on the validity of the examinations

  • The amount of knowledge a student gains from clinical experience correlates positively with deep and strategic learning styles as measured not only in the final year but also at the time of selection, five or six years earlier

  • If it is important for students to obtain as much clinical experience as possible, then final examinations require restructuring to assess and reward clinical experience, and selection should emphasise deep learning, which cannot be assessed from grades at A level

  • The use of deep and strategic learning styles in the final year of medical school predicts better performance in the final examination, but the same measures at the time of selection for admission to medical school do not predict examination performance

Introduction

“To study the phenomena of disease without books is to sail an uncharted sea, while to study books without patients is not to go to sea at all.” Sir William Osler1

Medical training—and particularly British medical training2 3 4 5—has always emphasised not merely learning from textbooks but also gaining experience directly from patients themselves. Although clinical experience is perceived as central to medical education, there has been little assessment of how and why medical students differ in the knowledge they gain from clinical experience and how clinical experience relates to success in clinical examinations.6 Educational theory predicts that clinical experience and examination success should both relate to study habits.7 There are large variations between students in the amount and range of their experience,8 9 and differences also occur in postgraduate training.10 11 Comparison of entry cohorts of medical students in 1981 and 1986 in their final year of undergraduate training in the United Kingdom confirmed that there were large individual differences in clinical experience as well as secular trends and regional differences.12

A levels and other secondary school examinations are comparatively poor predictors of students' success at university.13 Better predictors are students' approaches to their work, or their study habits and learning styles.14 15 The skills needed to do well at A level are less sophisticated than the critical analytical skills that universities instil, and the higher correlation between entry qualifications and final grades in science and medicine than in other university subjects13 suggests an undue emphasis on factual recall rather than deeper cognitive analysis. Universities generally value deep learning (table 1), which is motivated by a desire for personal understanding and vocational relevance and is demonstrated by the student's searching for principles and integrating knowledge across different domains. In contrast, surface learning is motivated principally by fear of failure; it is dominated by rote learning of material for regurgitation in examinations, after which the material is forgotten. Strategic learning is motivated by a desire for success and by competition with other students. In strategic learning students use whichever method—deep or surface learning—is more appropriate for a particular topic; the result is patchy understanding and a lack of integration across topics. Empirically, deep and strategic learning styles predict success in final examinations at university, whereas surface learning predicts failure.15 16

Table 1

Summary of the differences in motivation and study process in surface, deep, and strategic learning


This paper assesses the relation between clinical experience, examination success, and study habits in two cohorts of medical students.

Subjects and methods

Study design—This study began as two surveys of medical student selection and assessed applicants who had applied for admission to St Mary's Hospital Medical School in 198117 and 1986 (fig 1).18 All applicants with addresses in the United Kingdom in the first study and addresses in the European Community in the second study were sent a questionnaire which included measures of study habits; there was a response rate of 85% (1151/1362) in the first study and 92% (2043/2209) in the second. Of the 1478 applicants in the first study, 517 were admitted to medical schools in the United Kingdom. Of the 2399 applicants in the second study, 871 were admitted to medical schools in the United Kingdom. In 1986-7, 463 final year students from the first cohort were sent a second questionnaire, as were 761 from the second in 1991-2; the questionnaire measured study habits and clinical experience, and the response rate was 65% (301/461) for the first study and 50% (383/766) for the second.

Fig 1

Study design for survey of learning styles and clinical experience in the 1981 and 1986 cohorts of medical students (figures for the 1986 cohort in parentheses)

Study habits and learning styles—Members of the two cohorts received an abbreviated 18 item study process questionnaire developed by Biggs19 20 21 22 which assesses learning styles on surface, deep, and strategic scales.7 Reliability coefficients (α) were 0.535, 0.737, and 0.703 in final year students in the first study and 0.567, 0.715, and 0.701 for final year students in the second study. For applicants in 1986 the reliability coefficients were 0.556, 0.713, and 0.647. Applicants in 1981 were assessed on the less reliable measure of syllabus-boundness23 (α=0.497).
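
The reliability coefficients quoted above are Cronbach's α values for each scale. The sketch below illustrates how such a coefficient is calculated; the assumption of six items per scale, the scale name, and the simulated responses are hypothetical illustrations rather than the study's data.

```python
# Sketch of a Cronbach's alpha calculation for a multi-item scale.
# The item data below are simulated; the paper's scales come from Biggs's
# 18 item study process questionnaire (assumed here to have 6 items per scale).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated responses for a hypothetical six item "deep learning" subscale:
# each respondent's items share a latent trait, so the scale is internally consistent.
rng = np.random.default_rng(0)
trait = rng.normal(size=(300, 1))
deep_items = np.clip(np.round(3 + trait + rng.normal(size=(300, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(deep_items):.3f}")

# The same function would apply to the surface and strategic subscales and to
# the combined clinical experience items described later in the Methods.
```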

Performance in final examinations—Altogether, 330 of the students in the first cohort and 361 of those in the second took final examinations at the University of London. Detailed information was obtained from records in the faculty of medicine. The examination had five separate sections (medicine, which included psychiatry, paediatrics, and public health; surgery; pathology; clinical pharmacology; and obstetrics and gynaecology), all of which had multiple choice and essay questions. All subjects except pathology had an oral examination (viva voce), and all except clinical pharmacology had a practical or clinical examination or both. Principal component analysis, based on a candidate's individual scores, was used to calculate a single measure of performance, the overall finals score (first cohort: α=0.879; second cohort: α=0.868).24 Additionally, scores were calculated for the five separate subjects and the four modes of assessment (multiple choice question, essay, oral, and clinical and practical examinations).24
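
A minimal sketch of how such a composite could be derived is given below: the first principal component of the candidates' marks, rescaled to a mean of 100 and standard deviation of 15 as described for fig 2. The standardisation step, the sign convention, and the example marks are assumptions for illustration, not a description of the faculty's exact procedure.

```python
# Sketch of an overall finals score as the first principal component of a
# candidate-by-section mark matrix, rescaled to mean 100 and SD 15 (see fig 2).
import numpy as np
from sklearn.decomposition import PCA

def overall_finals_score(marks: np.ndarray) -> np.ndarray:
    """First principal component of (n_candidates, n_sections) marks, scaled to mean 100, SD 15."""
    standardised = (marks - marks.mean(axis=0)) / marks.std(axis=0, ddof=1)
    pc1 = PCA(n_components=1).fit_transform(standardised).ravel()
    # Orient the component so that higher scores correspond to better marks overall.
    if np.corrcoef(pc1, marks.sum(axis=1))[0, 1] < 0:
        pc1 = -pc1
    return 100 + 15 * (pc1 - pc1.mean()) / pc1.std(ddof=1)

# Hypothetical marks for five candidates in medicine, surgery, pathology,
# clinical pharmacology, and obstetrics and gynaecology.
marks = np.array([[62, 58, 60, 55, 61],
                  [70, 72, 68, 74, 69],
                  [50, 48, 52, 47, 51],
                  [65, 60, 63, 62, 64],
                  [58, 59, 57, 60, 56]], dtype=float)
print(np.round(overall_finals_score(marks), 1))
```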

Clinical experience—Students in the first cohort reported their experience of 15 acute medical conditions, 18 surgical operations, and 17 practical procedures; students in the second cohort reported their experience of 20 medical conditions, 18 surgical operations, and 29 practical procedures. Scores on the 50 items in the first cohort and the 67 items in the second were combined into a single total experience score (first cohort: α=0.861; second cohort: α=0.907).12

Results

Clinical experience and final examination performance

The correlation between overall performance in final examinations and total clinical experience was not significant (first cohort: r=0.048, P=0.48, n=215; second cohort: r=-0.024, P=0.75, n=176) (fig 2). There was no evidence of specific associations: correlations were non-significant between experience of acute medical conditions and performance in the medicine section of the final examination (first cohort: r=-0.041, P=0.55; second cohort: r=-0.059, P=0.44); between surgical operations seen and performance in the surgery examination (first cohort: r=0.088, P=0.20; second cohort: r=-0.118, P=0.12); and between overall experience and performance in clinical examinations (first cohort: r=0.026, P=0.70; second cohort: r=-0.042, P=0.58). These results suggest that clinical experience does not influence performance in final examinations.
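
As an illustration of the analysis, the sketch below computes a Pearson correlation and its two-tailed P value for two simulated variables standing in for total clinical experience and the overall finals score; the data are hypothetical and drawn to be uncorrelated, mirroring the null result reported above.

```python
# Sketch of the correlation analysis: Pearson r and two-tailed P value for
# simulated stand-ins for total clinical experience and overall finals score.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
experience = rng.normal(150, 20, size=215)   # hypothetical total experience scores
finals = rng.normal(100, 15, size=215)       # hypothetical overall finals scores

r, p = pearsonr(experience, finals)
print(f"r = {r:.3f}, P = {p:.2f}, n = {len(finals)}")   # close to zero by construction
```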

Fig 2

Association between performance in final examination and total amount of clinical experience in 1986 cohort (r=-0.024, P=0.75). The measure of clinical experience12 comprised 20 items scored 1 to 3 and 47 items scored 1 to 4 (total possible range 67 to 230). Performance in the final examination was the principal component of the examinations, scaled to have a mean of 100 and a standard deviation of 15

Study habits and performance in final examination

Study habits in the final year predicted overall examination performance; the correlations with surface, deep, and strategic learning were r=-0.204 (P=0.003), r=0.157 (P=0.022), and r=0.178 (P=0.009) respectively in the first cohort (n=213) and r=-0.081 (P=0.28), r=0.235 (P=0.002), and r=0.266 (P<0.001) in the second cohort (n=171). The negative association with surface learning and the positive associations with deep and strategic learning were in the expected directions, and the pattern was similar for different subjects and examination methods. Although study habits in the final year predicted performance in the final examination, in the 1986 cohort (n=344) there were non-significant correlations between examination performance and surface, deep, and strategic learning as assessed at the time of application (r=-0.068, P=0.21; r=0.031, P=0.57; r=-0.004, P=0.94). Similarly, syllabus-boundness at the time of application in the 1981 cohort (n=285) showed a non-significant correlation with examination performance (r=-0.007, P=0.90).

Measures of surface, deep, and strategic learning at application and in the final year were correlated in the 1986 cohort (n=361) (r=0.420, r=0.370, and r=0.336 respectively; all P<0.001), confirming moderate long term stability of these traits. Syllabus-boundness at the time of application in the 1981 cohort (n=307) correlated positively with final year surface learning and negatively with deep learning, but not with strategic learning (r=0.205, P<0.001; r=-0.175, P=0.002; r=-0.039, P=0.49), consistent with the conceptual overlap between syllabus-boundness and learning style.

Study habits and clinical experience

Table 2 shows the correlation of clinical experience with study habits assessed both in the final year and at the time of application. Students with higher scores on deep and strategic learning showed significantly higher levels of overall experience (fig 3), whether study habits were measured in the final year or at the time of selection, five or six years earlier. Surface learning (and syllabus-boundness) scores at application showed significant negative correlations with clinical experience.

Table 2

Correlation (r) between amount of clinical experience and measures of study habits assessed at time of application and in final year in 1981 and 1986 cohorts

Fig 3

Association between total amount of clinical experience and score on Biggs's measure of deep learning in applicants in the 1986 cohort (r=0.262, P<0.001). Clinical experience measured as in fig 2

A level examinations

Grades at A level are a potential confounder of the associations reported here. Mean A level grade17 18 showed a significant correlation with performance in final examinations (first cohort: r=0.336, P<0.001, n=329; second cohort: r=0.281, P<0.001, n=359); these values are compatible with those reported elsewhere.25 26 However, grades at A level showed almost no association with syllabus-boundness in the 1981 cohort (r=0.047, P=0.12, n=1104) or with surface, deep, and strategic learning at the time of selection in the second cohort (r=-0.021, P=0.34; r=-0.051, P=0.024; r=0.024, P=0.29; n=1932). There was no correlation with learning styles in the final year (first cohort (n=333): r=-0.076, P=0.26 for surface learning; r=0.080, P=0.15 for deep learning; r=0.099, P=0.07 for strategic learning; second cohort (n=371): r=0.091, P=0.080; r=-0.005, P=0.93; r=0.003, P=0.96), and grades at A level did not correlate with clinical experience (first cohort: r=0.023, P=0.68, n=335; second cohort: r=-0.034, P=0.51, n=370).

Non-respondents

A potential risk in a study in which only 65% of students in the first cohort and 50% in the second replied to the questionnaire during their final year is that respondents may represent a biased subset of those in the initial study. Bias at the application stage is unlikely to be a serious problem since response rates in the initial surveys were satisfactory at 85% and 92% respectively. We assessed possible bias in the final year by comparing baseline measures and the final examination performance of those who did and those who did not return our questionnaire during their final year. In neither cohort were there significant differences between respondents and non-respondents in study habits at the time of selection, performance in the final examination, mean grade at O level, or number of O levels and A levels taken. In the 1981 cohort the non-respondents had slightly lower mean grades at A level (3.81 (SD 0.77), n=177 v 3.98 (0.78); t=2.44, df=513, P=0.015); the difference was not significant in the 1986 cohort (4.16 (0.74), n=377 v 4.19 (0.64); t=0.67, df=513, P=0.51). Our respondents were therefore probably a representative sample of the students as a whole.
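
The respondent versus non-respondent comparison of mean A level grades is an independent samples t test; a minimal sketch follows. The simulated grades and the assumed group size of 338 respondents (chosen only to be consistent with df=513 alongside the 177 non-respondents) are illustrative stand-ins, not the study's data.

```python
# Sketch of the respondent v non-respondent comparison of mean A level grade
# in the 1981 cohort, using a pooled-variance independent samples t test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
respondents = rng.normal(3.98, 0.78, size=338)      # hypothetical respondent grades
non_respondents = rng.normal(3.81, 0.77, size=177)  # hypothetical non-respondent grades

t, p = ttest_ind(respondents, non_respondents)      # equal-variance (pooled) t test by default
df = len(respondents) + len(non_respondents) - 2    # 513, matching the reported degrees of freedom
print(f"t = {t:.2f}, df = {df}, P = {p:.3f}")
```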

Discussion

Medical students work hard to acquire clinical experience.27 If clinical problem solving is the key to learning for medical students and doctors,28 potential doctors should see many patients and medical students with clinical experience should be seen to benefit from it, not least by performing better in clinical examinations. If clinical experience is educationally desirable then students who are likely to learn most from their contact with patients should be selected to study medicine.

Validity of final examinations

Our study shows that students with more clinical experience do not do better in final examinations either generally or specifically in the clinical sections of examinations. This conclusion is robust, being found in two cohorts of reasonable size studied five years apart. We do not believe our results reflect any specific failing of the University of London's examinations, which are typical of those in most medical schools in the United Kingdom and have external examiners at all levels. We also recognise that the radical educational and curricular changes being introduced into medical schools since the publication of Tomorrow's Doctors by the General Medical Council29 may invalidate our findings for future generations of medical students. That, however, is an empirically testable hypothesis, and our current study provides the baseline data needed for assessing the reforms.

The lack of correlation between the amount of clinical experience and performance in final examinations calls into question the clinical validity of the examinations; our conclusions may also apply to similarly structured postgraduate examinations. Examinations may be failing to assess the skills and knowledge acquired directly from clinical experience, such as carrying out practical procedures, communicating with patients, formulating differential diagnoses, ordering tests, evaluating their results, and deciding on management. An alternative possibility is that students are failing to learn properly from their experience since “[clinical] experience without training increases confidence not competence.”30 31 Whatever the mechanism, it must be a concern that examinations send the wrong messages to undergraduates. Assessments determine what and how students choose to study. If skills and experiences acquired from patients are not seen to be relevant to success in examinations then fear of failure will drive some students to ignore clinical experience and resort instead to what is perceived as the real curriculum—that is, the information contained in textbooks. These attitudes seem unlikely to encourage the lifelong learning necessary for doctors, or to foster the development of “reflective practitioner[s]”32 who can modify their practice in relation to experience.

Since strategic and deep learning styles correlate positively with performance in final examinations and surface learning correlates negatively, the present examinations are probably encouraging a deeper understanding of medicine and medical practice. That other studies have failed to find this probably reflects their smaller sample sizes.33 That study habits at the time of selection for admission to medical school did not predict results in final examinations reflects the fact that study habits are states as much as traits. There is a well documented trend towards surface learning in conventional medical education,34 35 a process driven in part by failure in previous examinations.36 Courses structured less conventionally, perhaps using problem based learning,34 35 may find a correlation between study habits at entry and performance in final examinations.

Clinical experience and study habits

An important finding of this study is that the knowledge acquired by a clinical student can be predicted from the learning style measured at the time of application to medical school—half a decade earlier. This implies that the acquisition of knowledge from clinical experience—and the ability to continue to gain experience throughout a professional career—is a characteristic that can be identified at the time of selection. However, selection of medical students is based primarily on the results of examinations that do not correlate with the learning styles that are desirable in medical students, and these examinations do not predict the successful acquisition of clinical experience. The implication is that the greater the dependence of a selection system on grades at A level the more limited is its capacity for selecting doctors who will gain the most knowledge from clinical experience and therefore probably continue to benefit from clinical experience throughout their careers.29 Evaluation of the effectiveness of medical training should concentrate on characteristics other than the ability to pass examinations, both as an input measure for selection and as an output measure of the quality of medical education.

Acknowledgments

We thank the Faculty of Medicine of the University of London for permission to analyse the examination results of undergraduates, and we thank our many respondents for completing our questionnaires.

Funding: The survey of the 1981 cohort was funded by the Economic and Social Research Council, and the survey of the 1986 cohort by the Economic and Social Research Council and the Leverhulme Trust.

Conflict of interest: None.

Notes

Contributors: ICM and PR initiated and designed the cohort studies; ICM was responsible for management of the studies and for statistical analysis; BCW coordinated the acquisition and processing of the data; KAS and BCW collected much of the data; the manuscript was written by ICM and PR with help from BCW and KAS. ICM is guarantor for this paper.

References
