
Diagnosing “vulnerable system syndrome”: an essential prerequisite to effective risk management
J T Reason, formerly professor of psychology,1 J Carthey, lecturer in human factors,2 M R de Leval, consultant cardiothoracic surgeon2

1 Psychology Department, University of Manchester, Manchester M13 9PL, UK
2 Cardiothoracic Unit, Great Ormond Street Hospital for Children NHS Trust, London WC1N 3JH and Institute of Child Health, London WC1N 1EH, UK

Correspondence to: Professor J T Reason, 6 Red Lane, Disley, Cheshire SK12 2NP, UK; reason@redlane.demon.co.uk

Abstract

Investigations of accidents in a number of hazardous domains suggest that a cluster of organisational pathologies—the “vulnerable system syndrome” (VSS)—renders some systems more liable to adverse events. This syndrome has three interacting and self-perpetuating elements: blaming front line individuals, denying the existence of systemic error-provoking weaknesses, and the blinkered pursuit of productive and financial indicators. VSS is present to some degree in all organisations, and the ability to recognise its symptoms is an essential skill in the progress towards improved patient safety. Two kinds of organisational learning are discussed: “single loop” learning that fuels and sustains VSS and “double loop” learning that is necessary to start breaking free from it.

  • vulnerable system syndrome
  • risk management
  • patient safety
  • learning


Key messages

  • Accident investigations in various hazardous domains suggest that a cluster of organisational pathologies—the “vulnerable system syndrome” (VSS)—renders some systems more liable to adverse events.

  • VSS has three interacting and self-perpetuating elements: blaming front line individuals, denying the existence of systemic error-provoking weaknesses, and the blinkered pursuit of productive and financial indicators. The need to achieve the latter targets is often cited as the reason why necessary systemic improvements cannot be made.

  • VSS is present to some degree in all organisations. Recognising its presence and taking remedial action is an essential prerequisite of effective risk management.

  • A crucial remedial step is to engage in “double loop” organisational learning that goes beyond the immediate unsafe actions to question core assumptions about human fallibility and to identify and reform the organisational conditions that provoke it.

Healthcare institutions are complex, tightly coupled systems. Their complexity derives from several factors, but perhaps the most significant is the presence of many defences, barriers, safeguards, and administrative controls designed to protect potential victims from the local hazards. As in all well defended systems, some assistance from chance is needed to bring about such a low probability event as a mishap. The greater the complexity of the system, the more likely it is that some measure of bad luck is involved in achieving the precise conjunction of defensive gaps and weaknesses necessary to permit an adverse event. This view of accident causation has been described elsewhere1 and is summarised in fig 1.

Figure 1 The “Swiss cheese” model of accident causation.

Notwithstanding this chance element, evidence gathered from the analysis of many disasters in a wide range of complex systems—particularly those such as nuclear power plants and modern commercial aircraft in which catastrophes are extensively and publicly investigated—suggests that there is a recurrent cluster of organisational pathologies that render some systems more vulnerable to adverse events than others. We have termed this the “vulnerable system syndrome” (VSS) and will apply it here to the issue of patient safety. Our argument is that there are sufficient similarities between the aetiology of adverse events in different complex systems to offer managers of healthcare institutions the chance to benefit from the organisational and cultural lessons that are being learned in these non-medical domains. The ability to recognise the symptoms of VSS is an essential skill in the progress towards improved patient safety. We conclude with a brief discussion of two kinds of organisational learning: “single loop” learning that fuels and sustains VSS and “double loop” learning that is necessary to start breaking free from it.

Core elements of the “vulnerable system syndrome”

At the heart of VSS lie three pathological entities: blame, denial, and the single minded pursuit of the wrong kind of excellence. Each of these systemic pathologies is deeply rooted in human psychology and, as a consequence, tends to be present in varying degrees in all organisational cultures. Before we can counteract their harmful consequences we need to understand why they have such a widespread influence on the way institutions deal with their hazards. Each core pathology interacts with and potentiates the other two so that, collectively, they form a self-sustaining cycle that will continue to impede and undermine any risk management programme that does not attempt to eradicate, or at least moderate, their malignant influence. They conspire to ensure that those whose business it is to manage the system and preserve patient safety will generally have their eyes firmly fixed on the wrong ball.

Blame

Of the three core pathologies, the very human tendency to blame individuals for bad outcomes—or an excessive adherence to the “person model”2—is the most tenacious and perhaps the most pervasive in its harmful effects upon organisational safety. It has its origins in a quartet of psychological factors: the fundamental attribution error, the illusion of free will, the just world hypothesis, and hindsight bias.

THE FUNDAMENTAL ATTRIBUTION ERROR

The fundamental attribution error3 is one of the main reasons why people are so ready to accept the phrase “human error” as an explanation rather than as something that needs explaining. When we see or hear of someone performing less than adequately, we tend to put it down to the individual's personality or ability. We say that he or she was careless, silly, stupid, thoughtless, irresponsible, incompetent, or reckless. But if you were to ask the people in question why they acted in that way, they would almost certainly describe how the circumstances had constrained their actions. Everyone is capable of a wide range of actions, sometimes ill judged, sometimes inspired, but mostly somewhere in between. One of the basic principles of error management is that the best people can make the worst mistakes.

THE ILLUSION OF FREE WILL

Another reason why we are so inclined to blame people rather than situations stems from the illusion of free will.4 People, especially in Western cultures, place great value on the belief that they are, in large part, the controllers of their own destinies. They can even become mentally or physically ill when deprived of this sense of personal freedom. Feeling ourselves to be capable of choice naturally leads us to assume that other people are the same. They, too, are seen as free agents, able to choose between right and wrong and between correct and erroneous courses of action. When people are presented with accident reports and asked to judge which causal factors were the most avoidable, they almost invariably pick out the human actions. These processes act in concert to drive the blame cycle (fig 2).

THE JUST WORLD HYPOTHESIS

Another factor is the just world hypothesis.4 This is the belief, shared by most children and many adults, that bad things only happen to bad people and, conversely, that good things happen to good people. In the safety context, the healthcare professionals implicated in an adverse event are seen as bad by virtue of the unhappy outcome.

HINDSIGHT BIAS

Hindsight bias—or the “knew-it-all-along” effect—is the universal human tendency to see past events as somehow more foreseeable than they actually were.5 When we look back at some salient event, our knowledge of the outcome unconsciously colours our perceptions of how and why it occurred. Those blessed with outcome knowledge see all the lines of causality homing in on some clearly defined happening, but those equipped only with foresight do not see this convergence. One of the reasons why (what appear to the retrospective observer to be) obvious signs of an impending tragedy are often ignored is that such warnings are only effective if the participants realise what kind of bad outcome they could have, and this is not always the case.

At the organisational level there are further processes at work that reinforce these psychological tendencies to regard front line practitioners as both the primary cause of mishaps and the main target for remedial efforts. The first is the principle of least effort. It is usually relatively easy to identify the proximal errors of the individual at the sharp end and to consider these to be the “cause” of the mishap. That being the case, investigation of the adverse event need proceed no further. The second is the principle of administrative convenience. By restricting the search to the actions of those directly in contact with the patient, it is possible to limit the blame accordingly and thus minimise any institutional responsibility. This response is especially compelling when the actions of the individual in question are believed to deviate from some established protocol—a view that equates non-compliance with guilt and overlooks the fact that any pre-programmed procedure can be inappropriate in certain circumstances.

PENALTIES OF A BLAME CULTURE

The attractions of the “person model” are many and obvious, so why is it so wrong? Firstly, the institution fails to learn that errors and non-compliances mark the starting point of an investigation, not its conclusion. As shown in fig 1, adverse events result from a cascade of factors at many levels of the system. Evidence from various hazardous domains shows that the same situations keep provoking the same kind of errors in a wide variety of people.2 For example, there have been 13 fatal incidents involving the intrathecal administration of medication since 1975. Analysis of these incidents has identified common factors including transportation practices between the pharmacy and wards, poor training and risk awareness among junior doctors, and the design and labelling of drug syringes.6

Secondly, the organisation also limits its remedial efforts to attempts at changing the behaviour of an individual clinician or nurse by blaming, shaming, naming, and retraining. But the fleeting psychological precursors of fallibility—for example, inattention or forgetting—are the last and the least manageable aspects of the error producing sequence. Despite all the intuitive evidence to the contrary, it is far easier to fix situations than to change people, and this is the only way to achieve institutional resilience in health care.7

Finally, in institutions where the focus is on the “person model”, the end result of an investigation into an adverse event is a maladaptive mindset in which the institution lives happily with the illusion that it has improved patient safety. Lacking any reliable information about the true nature of the dangers or the actual manner of their occurrence, those who manage the institution feel safe. There may be the occasional bad apple, they think, but the barrel itself is in good shape. Having identified and “dealt with” the “wrongdoers”, it is then a very short step to the view that it could not happen here again. And this has a corollary: the belief that anyone who says differently is a troublemaker. Blaming thus fosters denial.

The net effect of these processes is illustrated in box 1, which is based on a hypothetical institutional response to a real life incident described by Carlisle et al.8

Denial

The American social scientist Ron Westrum distinguished three kinds of safety culture: pathological, bureaucratic, and generative.9 The main distinguishing feature is the way in which an organisation handles safety related information. Generative or high reliability organisations “encourage individuals and groups to observe, to inquire, to make their conclusions known and, where observations concern important aspects of the system, actively to bring them to the attention of higher management”.9 In sharp contrast, pathological organisations muzzle, malign, or marginalise whistle blowers, shirk collective safety responsibility, punish or cover up failures, and discourage new ideas. In short, they do not want to know. Bureaucratic or calculative organisations (the large majority) lie somewhere in between. They will not necessarily shoot the messenger but new ideas often present problems. Safety management tends to be compartmentalised. Failures are isolated rather than generalised, and are treated by local fixes rather than by systemic reforms.

Box 1 Symptoms of VSS in hospital A

The incident

During a syringe changeover a nurse incorrectly recalibrated a syringe pump delivering a morphine infusion to a patient with stomach cancer, resulting in a fatal overdose.

The immediate response

The institution suspended the nurse pending an investigation. She was subsequently given a formal written warning, reinstated, and retrained in the use of syringe pumps.

The incident investigation

The incident investigation showed that a Graseby MS26 syringe driver was being used. Whereas this pump is calibrated in millimetres per hour, a second widely used pump in the institution, the Graseby MS16A, is calibrated in millimetres per day. During the syringe changeover the nurse applied the calibration principles for the MS16A to an MS26 pump.8 Such errors of transference, where the principles for operating one type of device are incorrectly applied to another, have been identified as common failure modes in other domains.3
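To make the mechanism of this error concrete, the short sketch below (not part of the incident report; the figures are hypothetical) illustrates how a rate worked out in millimetres per day, if entered on a pump that interprets its setting as millimetres per hour, delivers roughly 24 times the intended dose.

```python
# Hypothetical illustration of the unit confusion described above.
# The numbers are invented for the example and do not come from the incident report.

intended_travel_mm_per_day = 48  # plunger travel intended over 24 hours

# Correct setting on a pump calibrated in mm per hour
# (as described above for the pump in use):
correct_setting_mm_per_hour = intended_travel_mm_per_day / 24   # 2 mm/h

# Erroneous setting: the mm-per-day figure is entered unchanged,
# but the pump interprets it as mm per hour.
erroneous_setting_mm_per_hour = intended_travel_mm_per_day      # 48 mm/h

overdose_factor = erroneous_setting_mm_per_hour / correct_setting_mm_per_hour
print(f"Drug delivered at {overdose_factor:.0f} times the intended rate")  # 24
```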

Early warning signs

Data from previous incident reports showed that two similar errors had recently been reported. On the first occasion the nurse quickly realised her error and corrected it. On the second occasion the recalibration error was spotted by a ward sister who immediately corrected it. Following these errors, the chief pharmacist and two consultants wrote to management and asked for a single standard pump to be used throughout the Trust (as far as possible). This scheme was not implemented because the high cost would have made it impossible for the institution to remain within the financial targets set by the regional health authority. Management also felt that retraining the nurses involved was a more appropriate solution. They did, however, send a memorandum to all senior nursing staff to warn them of the differences between the pumps with instructions to pass this information on to their teams.

Recurrent problems of the system

In all three cases (the fatal overdose and the two warning events) the nurses had been working on understaffed shifts. Ward sisters had complained to management about the increased workload. No action was taken because management accepted nursing shortages as a sad fact of working life. Hence, key situational factors, including differences in equipment design between syringe drivers, heavy workload, and staff shortages, were not considered relevant during the incident investigations. The focus was solely on the individual nurses involved and the institution lived with the illusion that it had created safety by naming, blaming, and retraining nursing staff who made errors.

Having thus dispelled any nagging concerns about the institution's stance on patient safety, the top managers of a pathological—or sometimes even a bureaucratic—organisation are now free to pursue the efficiency and cost saving targets that feature so prominently in the delivery of modern health care. Managing by such objectives is what professional managers have been trained for and, not unreasonably, they feel that their performance will be judged primarily by the extent to which they achieve these goals. This opens the way to the single minded pursuit of the wrong kind of excellence.

The wrong kind of excellence

Even those bureaucratic organisations with their eyes firmly on the “safety ball” can pursue the wrong kind of excellence. In industry there are many companies engaged in hazardous operations that still measure their plant safety by the lost time injury frequency rate. Unfortunately, this relates specifically to personal injury accidents and provides little or no indication of a system's liability to a major disaster. The road to organisational catastrophes is paved with falling or very low lost time injury rates.10

The counterpart in healthcare institutions is a narrow focus on numerical performance indices. Hospital managers live by numbers but they do not always appreciate their limitations. A myopic focus on manipulating specific indicators—such as waiting times/lists for clinics and surgery, number of operations carried out, percentage bed occupancy rates, frequency of cancelled procedures—does not readily lead to detection of the subtle systemic interactions that could end up as adverse events.

Dietrich Doerner, a German psychologist, has spent many years studying the strengths and weaknesses of human cognition when managing richly interconnected dynamic systems.11 His findings throw considerable light on the mental origins of this blinkered pursuit of excellence. When dealing with complex systems, people have a tendency to think in linear sequences. They reason in causal series rather than in causal networks. They are sensitive to the main effects of their actions upon the progress towards an immediate goal, but frequently remain unaware of their side effects upon the rest of the system. In a highly interactive, tightly coupled system, the knock-on effects of interventions radiate outwards like ripples in a pool, but people can only “see” their influences within the narrow sector of their current concern. Similarly, people are not good at controlling processes that develop in an exponential or a non-linear fashion. They almost invariably underestimate the rate of change and are constantly surprised by the outcomes.

Box 2 Pursuing the wrong type of excellence in health care

Performing to numerical indices

Institution B worked hard to meet Government targets to reduce waiting lists for clinical procedures and outpatient appointments. It maximised the occupancy of ward/intensive care beds and reduced average waiting times for treatment in its accident and emergency department. On these and other numerical indices, institution B was regarded as a good performer by the regional health authority.

Early warning signs

Senior surgeons from various specialities at institution B had, over time, become increasingly worried about the high labour turnover, particularly amongst nursing staff, blood bank technicians, and operating theatre assistants. They were concerned that unless action was taken the continuous loss of experienced team members would eventually paralyse the system.

The institution's response

The organisation's myopic focus on meeting efficiency targets meant that warnings about the long term effects of high staff turnover were not acted upon.

The consequences

Staff turnover eventually reached such a level that the institution could neither meet its efficiency targets nor operate safely. Over time an over-reliance on agency nurses led to decreased nursing experience on the wards. Many of the agency nurses were unfamiliar with the institution's policies, culture, communication interfaces, and team practices. An audit by the infection control team showed decreased compliance with infection control procedures among nursing staff and an increase in the rate of nosocomial infections. Resource shortages in the blood bank and among operating theatre assistants led to operations being cancelled at short notice because the blood could not be cross matched in time or because insufficient technical support was available to carry out the case. There was also an increase in the frequency of cross matching errors, which was linked to the long hours and poor shift patterns worked by laboratory staff.

A hypothetical example of how the single minded pursuit of the wrong type of excellence manifests itself in health care is shown in box 2.

Conclusion

If there is one set of characteristics that distinguishes the robust organisation from those more vulnerable, it is a preoccupation with the possibility of failure, a conviction that today is going to be another bad day, and a shared awareness of all the many and varied ways in which Sod, Murphy, and human fallibility can combine to cause unintended harm. The net result of the three interlinked pathologies described here is quite the opposite. Seriously sick institutions forget to be afraid or they never learn to be afraid; either way, they remain firmly and fatally attached to the comfort zone in the matter of safety.

Recent developments in organisational learning theory also suggest a way of breaking the vicious VSS cycle.12 Two modes of learning have been distinguished: single loop and double loop learning. These are summarised in fig 3.

Figure 3 Single loop versus double loop learning.

When there is a discrepancy between desired and actual results, as in a patient mishap, single loop learners look only to the immediately preceding actions for an explanation and the lesson. Since this usually involves an error on the part of a “sharp end” professional, it leads inexorably to narrowly targeted efforts to change that person's behaviour—blaming, shaming and retraining as shown in the case study summarised in box 1. Such “learning” serves only to drive the VSS cycle. In contrast, double loop learning looks beyond the immediate actions to the basic assumptions and conditions that gave rise to them. Such “deep learning” leads enlightened (or sufficiently frightened) managers to question their core beliefs, and to recognise that errors are almost always systemic consequences rather than isolated causes. They then go on to make global (rather than merely local) reforms of the system as a whole, accepting that a more resilient organisation is better able to achieve financial as well as safety goals.

Finally, a word of encouragement: we know of no organisation involved in hazardous work in any domain that is entirely free from the VSS. Some symptoms are to be expected everywhere: after all, complex systems are designed, built, managed, operated, and maintained by human beings. Nonetheless, the presence within a healthcare institution of the full blown syndrome bodes ill for both its patients and the front line staff. The ability to detect the incipient indicators and the collective will to implement wide ranging corrective measures are essential prerequisites for an effective risk management programme.

Acknowledgments

Research at the Institute of Child Health and Great Ormond Street Hospital for Children NHS Trust benefits from Research and Development funding received from the NHS Executive.
