

Our better angels and black boxes
Pat Croskerry
Correspondence to Dr Pat Croskerry, Department of Emergency Medicine, Dalhousie University, Halifax Infirmary, Suite 355, 1796 Summer St, Halifax, Nova Scotia, Canada B3H 3A7; croskerry@eastlink.ca


Four recent EMJ papers address patient safety.1–4 It remains an important topic that has attracted much interest since the turn of the century. Not that quality and safety of patient care weren't always uppermost in most medical minds, but the Institute of Medicine (IOM) report To err is human: building a safer health system5 brought them into sharper focus and spawned a variety of worthwhile and, as we see here, continuing initiatives. Fifteen years later, the third of the IOM quality chasm series, Improving diagnosis in health care, has now appeared,6 with the IOM (now the National Academies of Sciences, Engineering, and Medicine) acknowledging that it had missed the obvious or, to be fair, the less than obvious in the first report. While there are a number of antecedents to diagnostic failure, chief among them must be the clinician's thinking, reasoning, problem solving and decision making. However, a major problem for patient safety has always been that these processes are not obvious; they are invisible. They are not unknown, but they cannot be seen in the way that tangible issues such as equipment failures3 can. Similarly, the major steps in medicating patients are also well known and highly visible, probably accounting for why medication error was mentioned 70 times in the first IOM report, whereas diagnostic error was mentioned only twice.7 Visibility amounts to measurability, which leads to the question: how can we make the processes that underlie clinical reasoning and decision making less opaque? This is important because the single unifying theme underlying all aspects of patient safety is human cognition, and the primary output of cognition is decision making, the engine that drives all behaviours involved in patient care. Cognition is a precious resource in the emergency department,8 and we need to know it well.

Historically, medicine has not come to terms with cognition very well. The subject was traditionally the preserve of philosophers and, more recently, cognitive scientists. For many it has been, and still remains, a black box—there was no need to know its contents, only its output. This approach characterised the behaviourists, a powerful and dominant group in psychology who flourished in the last century. They argued that we only needed to observe and understand behaviour and that consciousness and processes within the brain were not usable concepts. Although many of the principles of behaviour analysis and modification remain, behaviourism began to lose its dominance in the late 1960s. However, the black box has now taken on new significance.9 Analogies with the airline industry are widely used in the patient safety literature. It is seen as a high reliability organisation with an enviable safety record. One of the important features of such organisations is that they learn from their mistakes. The flight recorder is a different kind of black box. With access to objective, reliable data preceding the crash, investigators can develop an evidence-based account of the causes of failure. Flight data analysis has significantly reduced the number of plane crashes.

While we are a long way from having full access to the processes that underlie failures in clinical decision making, a first step would be to measure the outcomes of the diagnostic process more reliably. How often is it unreasonably delayed, missed or wrong? Isolating a particular diagnosis and exploring it in depth1 surely has some merit. Aggregate studies of the type reviewed by Ramlakhan et al3 may lose important detail, and are now known to have had significant methodological shortcomings.10 The overall adverse event rate of 10% quoted was a significant underestimate; it is currently put at closer to 30%.11 Even the Institute for Healthcare Improvement Global Trigger Tool used in the study by Classen et al11 doesn't measure diagnostic failure.12 In the absence of reliable data, it may be premature to speculate about how safe the emergency department really is. Estimates of diagnostic failure in emergency medicine are more often put at about 10–15%;13 many of these failures may be inconsequential, but some would certainly have significant adverse outcomes, as Okafor et al4 report here. A comprehensive trigger tool for adverse events in emergency medicine, one that includes diagnostic failure, would be most welcome.

A second important issue has been our failure to accept what is known about reasoning and decision making and how these processes come to fail. A plethora of studies in cognitive science over the last four decades has roundly demonstrated that human reasoning and decision making are oftentimes flawed processes. In particular, significant failures result from the unconscious intrusion of bias into our decision making. In every field of human endeavour, decisions are not nearly as valid as those who make them like to think. Yet, there has been a slow uptake of these concepts in medicine, at individual, disciplinary and professional levels. This can be attributed to a combination of recognisable factors,12 but less obvious ones are ‘blind spot bias’—the tendency to believe that we are less biased than others14—and the NIH (not invented here) syndrome, which is an ‘attitude-based bias against external knowledge’.15

While we may believe that certain groups are biased—for example, lobbyists, advertisers, politicians, pharmaceutical representatives, journalists, sports fans and others—we frequently do not accept our own vulnerability to bias. Decades of experimental work in the behavioural sciences do not seem to have crossed the inter-professional boundary into medicine to persuade us that clinicians’ thinking may be just as biased as that of others, yet we can easily demonstrate that professionals and experts are as biased in their judgements as the person in the street.16 This has been a major stumbling block. Historically, this meta-bias has deterred us from coming to terms with many cognitive and affective biases that may influence our decision making, and we have lost some time.

One major consequence of blind spot bias is that the teaching of clinical judgement and decision making has suffered. Traditionally, we have not promoted explicit training for medical trainees in decision making nor delved into the processes that distort our reasoning and decision making. A few resources such as Kassirer and Kopelman's Learning clinical reasoning17 were isolated beacons in an otherwise dark seascape. While there are encouraging signs of change,18 even today most of those in training, and indeed their instructors, would not be able to identify the principal model of decision making that has emerged over the last 30 years (dual process theory), or give an account of the properties of common biases, or logical fallacies.

We have also been thwarted by the NIH syndrome. It reflects a profound attitude-based bias against knowledge (ideas, concepts, technologies) that comes from a source considered external to, or outside of, one's usual affiliation. Not only is it a black box, it is someone else's black box. For many in medicine, cognitive science will appear irrelevant to their clinical practice. Yet, progress in areas that are crucial to optimal decision making requires overcoming the rigidity and inertia of traditional training, as well as our ‘collegial protectiveness’,19 to allow the adoption of newer, innovative approaches.

Our reluctance to accept cognitive science means that we can get stuck when we try to uncover the real determinants of failure, satisfying ourselves with proximal rather than distal causes. We feel comfortable with proximal explanations because we can describe them in our own language, whereas distal explanations may require terminology and descriptors that are not familiar to us. For example, an analysis of diagnostic failure in a clinical case might yield an explanation in terms of failures in history taking and/or physical examination,20 or ‘problem with history’ or ‘problem with physical’,1 but these are all proximal explanations, much like saying ‘the ship sank because it had a hole in its bottom’. This may be useful as a first approximation or starting point for where to look when things go wrong, but we need to go deeper to distal causes: ‘the ship sank because the captain was cognitively impaired due to sleep deprivation and steered it on to a rock that punched a hole in its hull.’ Similarly, while ‘judgement lapse’ tells us that a failure in judgement has occurred, it says very little about the particulars of the reasoning and decision making involved.

As Okafor et al4 note, more prospective methods might better explain physicians’ cognitive processes. Studies that are just beginning to explore an area and are simply looking for fruitful lines of enquiry may understandably be satisfied with proximal explanations, but there has to be recognition that, if we are to develop a sufficient understanding of the true determinants of failure, more cognitive reductionism will ultimately be necessary.

The important tool that cognitive science provides is the means for this reductionism. It allows us to develop a detailed exposition and explanation of the properties and operating characteristics of biases, which, in turn, allows us to explain and predict cognitive failures. Thus, instead of saying that a diagnosis was missed because it was an atypical presentation4 (proximal), we can say that it was missed because of representativeness error (distal). The distal explanation encourages us to take the analysis deeper by looking at the properties of the particular cognitive bias involved. The distinction is important because solutions for proximal causes may not fix distal problems. The reduction or elimination of cognitive bias, cognitive bias mitigation (CBM), cannot be performed effectively unless the actual bias is known, and it is unlikely that one generic debiasing strategy will work for several different biases.21 22 The challenge in achieving cognitive reductionism and taking things from a proximal to a distal level lies in accessing the necessary level of expertise. Do we make cognitive error analysis part of every physician's training or do we develop local experts who can provide this service?

A significant body of evidence has now made it clear that cognitive biases manifest themselves automatically and unconsciously over a wide range of human decision making. Beyond their origins in psychology and sociology, they are now acknowledged in business, marketing, the judicial system and many other domains. Events on the world stage are influenced by them. It is important for everyone to recognise just how pervasive biases are and the need to mitigate them. Although medicine was fairly quick out of the gate23 when cognitive biases were first systematically reported in the psychology literature 40 years ago, progress since has been glacially slow. However, the imperative is now well recognised: at the report release webcast for Improving diagnosis in health care in September 2015, George Thibault, one of the members of the committee, said ‘The critical thinking in understanding the common causes of cognitive errors can be and should be taught to all health professionals, particularly physicians, nurse practitioners and physician's assistants who will be in a primary diagnostic role and who will work in the diagnostic process’.24 To date, our slow uptake has led to delays in the development of CBM. We now need to start building our toolbox of CBM strategies in medicine to minimise the myriad of cognitive biases known to impact on our decision making.

Views on CBM remain polarised, as Burton notes.19 In Thinking, fast and slow,25 Daniel Kahneman provides a detailed exposition of the failings of the human mind. His overall view is a gloomy one that has been echoed by other psychologists. He appears deeply pessimistic about the possibility that we can debias ourselves against the formidable influence of cognitive biases, and suggests that we must therefore accept our inherent flaws and the ‘tragic state of the human condition’.19 In contrast, in The better angels of our nature,26 Steven Pinker provides numerous examples and considerable historical data that dispel many popular myths about human nature. He demonstrates how we have changed our behaviours for the better in many different areas. Gloom and doom give way to refreshing optimism, and recent developments in CBM suggest such optimism is warranted.21 22 27 28 The unique milieu of emergency medicine is to our advantage. We were among the first to explore the impact of cognitive bias on clinical decision making, and it continues to be one of our major interests. It would be the ideal setting to test the efficacy of CBM.

Decision making has driven our evolution as a species and remains the currency of all human activity. Gigerenzer, the German cognitive psychologist, opined that the most important decision we have to make in life is how we are going to make decisions. We can make them critically or otherwise. That the Krebs cycle is firmly entrenched in medical school entrance examinations, whereas until the last year or two candidates did not need to demonstrate any particular competence in critical thinking, says something about our historical priorities. It would be ideal to have explicit training in critical thinking in secondary education, so that good thinking habits are established before students enter medical training. In the meantime, we have an ethical obligation to provide specific training in critical thinking and decision making in undergraduate, postgraduate and continuing medical education,29 as well as training in the recognition and mitigation of common cognitive biases.

References


Footnotes

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
