Competency in the new language of medical education
Darren A Kilroy
Stockport NHS Foundation Trust, Stockport SK2 7JE, UK; darren.kilroy{at}stockport.nhs.uk


The chances are that, even within the past week, you have been involved in some discussion of competence. As concepts and as terms, “competence” and “competency” have crept into everyday medical vocabulary to a point where it is difficult to determine how, when and why they first appeared.

This is a real problem, given the importance that has been placed upon “competency” in what I call the “New Language” of medical education. How can we begin to decide on the value of “competencies” if we do not even know what they are, what they really represent, or what they were first designed to be?

I want to take a step back to look at the roots of “competence”, discuss some very real issues in its current usage and propose a new model of terminology which better reflects what we should be aiming for in our teaching and assessment of clinical practice.

ORIGINS OF COMPETENCE

Professional thought and behaviour is an inherently complex combination of knowledge, application and reflection, based on a continuum of lifelong learning. Social convention has for many years rightly demanded public evidence of a professional’s ability to practise wherever that thought and behaviour impacts upon society. In the late 18th and early 19th centuries, this led to the development of specialist societies and associations of “properly qualified persons”. These associations developed as part of a sociological transition from a “professionalism” based solely on social standing and peer approval to one involving the acquisition of some examinable knowledge.

For their part, the Medical Royal Colleges sought evidence of such knowledge in the form of postgraduate examinations. “Competence” itself was nowhere to be seen: up until the 1930s the term had made no appearance in medical educational terminology at all.

One of the earliest examples of the use of the word is actually to be found in Carr-Saunders and Wilson’s The Professions, published in 1933.1 In this book the concept and status of “competence” are discussed, but no explicit definition is offered. By inference, the meaning they attached to the term was bound up with intellectual capability specially tailored to the profession at hand, alongside an expression of relevant procedural skills.

In the public domain, the word “competence” has a general air of adequacy, of being “good enough” without excelling. The Oxford English Reference Dictionary defines “competent” as “… adequately capable, satisfactory”.2 By implication, a competent practitioner, in any walk of life, might be considered suitable for undertaking routine, fairly undemanding tasks. A competent bricklayer might reasonably be expected to be able to build a garden wall, but might not be chosen to take on the task of constructing a reproduction period property.

The distinction between Carr-Saunders and Wilson’s early work and the common public perception of competence is a fundamental one when we come to use the term in modern professional practice. Those early authors saw competence as a formative combination of ability and intellect; these days we tend to see it as a binary phenomenon where one either is, or is not, competent.

Box 1 Model of skills acquisition

Level 1 Novice
  • Rigid adherence to taught rules or plans

  • Little situational perception

  • No discretionary judgement

Level 2 Advanced Beginner
  • Guidelines for action based on attributes or aspects

  • Situational perception still limited

  • All attributes and aspects are treated separately and given equal importance

Level 3 Competent
  • Coping with crowdedness

  • Now sees actions at least partially in terms of longer-term goals

  • Conscious deliberate planning

  • Standardised and routinised procedures

Level 4 Proficient
  • Sees situations holistically rather than in terms of aspects

  • Sees what is important in a situation

  • Perceives deviations from the normal pattern

  • Decision-making is less laboured

  • Uses maxims for guidance, whose meaning varies with the situation

Level 5 Expert
  • No longer relies on rules, guidelines or maxims

  • Intuitive grasp of situations based on deep tacit understanding

  • Analytical approaches used only in novel or problematic situations

  • Has a vision of what is possible

Carr-Saunders and Wilson are long forgotten, but what they said remains extremely important, because competence is not one-dimensional in the way it is now so often talked about. For the vast majority of everyday tasks, let alone medical ones, there is no “point of competence”. When are you “competent” to boil an egg? Can one really talk about being “competent” to manage acute asthma? What does that really mean in plain language? This binary approach to the concepts is where we start to see real problems with the way in which we now glibly handle the words.

Much is made of “competency” as a measurable index of skill proficiency. We spend a considerable length of time “signing off” competencies in the New Language of medical education. But how does competency really relate to the acquisition of a skill? Judged against the research origins of skills acquisition, the short answer is that competency alone does not tell us enough about professional practice, and we should not place much emphasis upon it. Competency in procedures was never conceived as a one-dimensional phenomenon, but rather, just as Carr-Saunders and Wilson held, as a reflection of a point along the journey of professional expertise.

To explain this, we need to look at some of the original work relating to skills acquisition in a non-medical environment. Pioneering in this field were (and are) the Dreyfus brothers, one a professor of philosophy and the other an engineering researcher, who in 1986 articulated a five-level model of skills acquisition which has gone on to become a pillar of modern educational thinking.3 They laid out an ascending ladder of proficiency, starting with the greenest of practitioners and ending with experts. Their model is reproduced in box 1. Notice where “competent” sits on the ladder.

Where do you “see” yourself? And based on the Dreyfus model, where would you like to “see” your trainees? Would you want to see them attain Level 1, 2, 3, 4 or 5? Is it appropriate to choose Level 3 as the one which we aim to “sign off” in training and, if so, why?

The model is important in its emphasis on the development of perception, decision-making and reflection rather than on aggregated, stand-alone routine actions. The concept of competence within the model involves both routines and the judgement of when to deploy them. Competence is seen as a triad of:

  • recognition of routine situations;

  • execution of routines under situational pressure; and

  • the planning of future actions.

In other words, it represents the intelligent application of rule-guided learning within crowded and pressurised contexts—places like the Emergency Department! To use an example cited by Eraut,4 Dreyfus competence depicts “… not so much the simple skill of riding a bicycle, but the more complex process of riding a bicycle through heavy traffic”. There is an intellectual element to the thinking here which is overlooked in the ways we handle the terms “competence” and “competency” in the New Language of medical education.

I mentioned competency as an intellectually rooted element of the professional journey. We are led to believe that a “competent” doctor is by default also “professional”. But the two are distinct, and the distinction has arisen because competence has been introduced as an artificial construct by virtue of its assumed measurability, whereas in reality it is merely a component of professionalism.

Wherever measurement is possible in the traditional professions, central government’s interest is awakened. And whenever its interest is awakened, the fundamental issues behind the measures at hand tend to be quite quickly forgotten or de-emphasised. This is an unstable basis for matters as important as medical teaching and learning, but it has pervaded the recent history of the professions.

MISUSE OF COMPETENCE AS A MARKER OF LEARNING

I have briefly outlined a historical context for competence, which throws up a tension between the traditional understanding of professional expertise and newer state-driven demands for measurable demonstrations of ability. It is already apparent that competence is a fluid term that goes beyond technical skill and must encompass social and intellectual domains. So how did “competency” come to be so attractive?

Medical education has a predominantly behaviourist tradition. In practical terms this has translated into a historical emphasis on the delivery of training rather than on its assessment. A similar methodology historically underpinned teacher training, but in the 1970s a new move toward “competency-based education” arose in the USA, based upon an attempt to turn the “role requirements” of teachers into “behavioural objectives”.5 6 The fundamental problem was that the methodological basis of the exercise was taken from industry, where arguably there are many situations and tasks that can be reduced to a measurable series of goals which, taken together, constitute the successful management of a given situation or task.

Taking this approach to professional education appealed to a longstanding North American tradition of state governance of qualifications, but it raised serious professional issues about how such “objectives” could really be measured. In particular, it was felt that “competency-based education” (of teachers) was a contradiction in terms which atomised the teaching profession by trying to articulate a complex practice as thousands of individual components. Teaching involves the contextual application of skills and knowledge. Trying to assess such complexity using lists of discrete objectives was widely thought to be at best impractical and at worst unwise in any professional sense.

Training based around “competencies” has been contentious in teaching ever since, yet the wide use of the term within the New Language of medical education makes little reference to this important professional dilemma. The documents of the Postgraduate Medical Education and Training Board (PMETB)7 articulate a central role for “competencies” based upon specific measures of specific tasks and procedures, and highly structured “discussions”, in an extremely atomised manner. The “professionalism” of the job is rather glossed over or left aside because of its refusal to assume a “measurable” form.

On the face of it, employing a “competency-based” approach to medicine delivers a straightforward public outcome: all doctors who “pass through” the training will be sufficiently able to perform a wide but necessarily basic range of tasks appropriate to their field. Having this team of doctors out there can only be a good thing: we have measured them, and they measure up. What’s not to like? The dilemma is that competencies are being used as a proxy for professional development in a way that is not justified by the research base. The North American example should have taught us that. Competencies may be of relevance to the assessment of industrial skills or procedures; to apply them to clinical medicine is a philosophical leap unsupported by evidence.

COMPETENCE, COMPETENCY, PERFORMANCE OR CAPABILITY?

Nowhere have I argued that competence has no place within the broad understanding of professional development. It is just that we do not seem to know what we are talking about. Assuming, though, that you do want to make some assessment of clinical ability, what do you want your trainee to demonstrate: what he could do when faced with a given situation, or what he actually does?

I have suggested that competence is an articulation of how we structure and restructure our knowledge and skills as we progress through professional life. We accommodate new situations by applying this knowledge and these skills. Doing so involves affect and behaviour more than the utilisation of a stand-alone ability to perform this or that procedure, or the recall of a particular discrete fact.4 By contrast, competencies are the simplistic, extremely task-related elements that caused so much aggravation in North American teaching when they were first mooted.

A fascinating (though largely overlooked in the UK) review of clinical competence in emergency medicine was undertaken in 1990 by the now-retired Professor Jack Maatsch at Michigan State University.8 He reviewed, for the first time, the basis of effective clinical performance and determined that it had three linked elements:

  • medical knowledge;

  • clinical problem-solving; and

  • a “general competence component” (GCC).

The GCC comprises intelligence, motivation, learning skills, general knowledge base and personality, and was derived from psychometric analyses of practising doctors. Notice how Maatsch incorporated intelligence, motivation and personality into the GCC, but made no attempt to render them measurable.

The fact that Maatsch presented his work as a performance analysis, within which lies competence, is very significant, and it underlines another issue with the New Language of competence. When we assess Doctor X’s “competency” using a Direct Observation of Procedural Skills document, we are actually assessing his performance. The two are not the same, and the difference is instructive.

“Performance” is a common term which has an arguably simple shared understanding, yet has received surprisingly little attention in the medical educational literature. When we say “He or she performs very well”, we tend to be implying an all-round sense of someone “doing well”, probably being quite self-reliant and able to cope under a degree of pressure. It is based largely upon what is “seen” directly when watching the person in action. Most of us know what we mean when we talk in this way.

When we talk of “competencies” we are often actually talking about types of “performance standards”, and this links to an important distinction made by Messick, who stated in 1984: “… Competence refers to what a person knows and can do under ideal circumstances, whereas performance refers to what is actually done under existing circumstances”.9

Messick viewed performance as the entity of interest, since it was enacted in the “real world” of clinical practice. There is an important message here for the New Language of medical education.

Personally, I would be much more comfortable with helping trainees to attain performance standards than I am with helping them attain “competencies”. There is a sense with the former of “raising your game”, of achievement and learning, but with the latter a sense of “getting to be adequate”. Which sounds more realistic: “I thought you did (performed) really well” (in that “real world” trauma resuscitation case) or “I thought you were really competent back there”?

Capability has been described as the “complement” to performance: performance is that which we can see in daily practice, whereas capability is that additional ability which we may possess but which cannot be directly derived from on-the-job observation.4 The potential for the use of “capability” rather than “competency” in medicine is interesting yet largely overlooked.

Proponents of the use of the capability concept argue that it helps us determine how well a trainee is developing by the gathering of “capability evidence”. In simple terms, its aim is:

  • to give some indication of how well equipped a trainee is to face a wider set of challenges than those which can realistically be directly observed;

  • to get a sense of their analytical reasoning skills;

  • to check their knowledge base; and, importantly,

  • to establish how well they understand their own position, and that of their profession, in society.

We have not, as a profession, thought about the role of “capability” as a concept, and have instead lumped much of its substance under the heading “competencies”. In so doing, we have paid inappropriate attention to trying to design “measurable competencies” out of what are, in reality, “capability indicators” forming part of the wider picture of true “competence”. The best ways to get a sense of capability will elude us for as long as we view them as reductive tick-box processes.

A WAY FORWARD

I have not made any mention of “excellence” in this paper, although much has been made of it in the recent and well-publicised Tooke Report.10 That report is a striking departure from recent government philosophy, and the new prominence it gives to excellence hopefully signals a shift toward clearer and more grounded approaches to medical learning. Excellence never “went away”, but it has been de-emphasised in the stampede to create measurability within education. It needs to re-establish itself as a routine aspiration within training, and it can, but only if we are allowed to re-focus our energies toward a new approach to the articulation of medical learning and away from the obsessive acquisition of competency documents.

With immediate effect, we need to revise and clarify the terminology. Our aim should be to foster a culture of training, not assessment, which develops and promotes general professional competence, and to recognise that a properly constituted idea of professional competence has professionalism (not “competencies”) at its core. Teaching common and important clinical skills, alongside the subtle experiential exposure to problem-solving and management which only time can provide, should be the re-energised principles of medical training.

There is, quite rightly, a need to demonstrate evidence of our high standards to the public. In-service evaluations and the MCEM and FCEM examinations need to reflect all the essential components of professional development, and to be seen as milestones along the journey of professional competence.

There are two simple but important principles of measurement which professional evaluation should enshrine: (1) consensus-determined performance standards and (2) imaginatively designed capability indices.

With seniority, the assessment balance between these two principles should vary such that in the junior years there is heavy emphasis on performance routines based on predetermined standards, and toward the senior years there is a significant element of capability in the assessment. Performance standards should be assessed based on real-time review of the trainee in action, in the workplace, across a defined range of clinical scenarios. They should relate to scenario management, not stand-alone demonstrations of skills acquisition. Capability indices should take real-life clinical and management problems and develop them into worked assessment exercises that allow trainees to demonstrate that they have acquired the ability to think laterally and imaginatively to find solutions.

Inherent in my thinking is a move toward “regionalisation” of teaching and learning within emergency medicine, such that trainees are from the very beginning immersed in a regionally and locally rooted ethos of high-quality learning coupled with structured but measured assessment. It may well be the case that, in the long term, the entirety of postgraduate training and assessment can be delivered on a regional or supraregional basis. Empowering local clinicians to develop learning tools, safe in the knowledge that they will have a significant role in the assessment tools that go with them, can only be a good thing.

This vision does not really incorporate “competencies” as we currently know and try to use them. This is entirely deliberate. Introduction of the term may have served a purpose in enabling scrutiny of the profession in a fairly reductive sense, but it has done nothing to foster pride in our, or anyone else’s, specialty. It is possible to map a way forward which uses competence in its purer form: that of a reflection of professional performance, capability and, ultimately, excellence. The lack of immediate “measurability” in much of this makes it an unappealing prospect for government. That should strengthen professional resolve to work out how it can be done, rather than prompt resignation to a future determined by portfolios of observed cannulations.

REFERENCES


Footnotes

  • Competing interests: DAK is a member of the Education & Examinations Committee of the College of Emergency Medicine. The content of this paper reflects personal opinion and should not be interpreted as being the view of this Committee.
