Objectives: To develop a graded classification system for risks in emergency medicine. To test the inter-user reliability of this classification system.
Design: Prospective collection of data involving emergency department (ED) critical incidents. Derivation of classification system using the collected critical incidents. Comparison of results of classification of a sample of the critical incidents between different users of the system.
Setting: EDs in two teaching hospitals and two district general hospitals (DGHs) in the north west of England.
Interventions: Observational study.
Main outcomes: Classification system itself. Results of classification of same critical incidents by different users.
Results: 816 critical incidents were identified and used to derive a typology. This typology was found to have an inter-user reliability score of 86% (95% confidence interval 76.4% to 95.6%).
Conclusions: The typology that has been derived is a reliable tool for the classification of risks in emergency medicine.
- ED, emergency department
- DGH, district general hospital
In recent years, there has been much attention paid to the need to minimise the number, and the impact, of risks to patients in all branches of medicine, with emergency medicine being no exception. To facilitate this, it is first necessary that the nature of these risks be understood. A consistent means of classifying events entailing risk, or critical incidents, is required if the nature of risk is to be investigated. The need for such a classification has been highlighted by the recommendation by the chief medical officer’s expert group on learning from adverse events that a mandatory reporting scheme for adverse healthcare events and specified near misses be introduced.1 The group recommended that a manual be produced, which would describe which events should be reported. As part of such a reporting scheme, a “filter” would be required, “so that only certain categories of event and near miss [would] be reported nationally or regionally”. In addition to its use as a research tool, it is anticipated that an emergency medicine risk classification system, or typology, would also have uses at a local level, for comparison between local emergency departments (EDs) and to facilitate internal audits.
Studies focusing on other branches of medicine2–4 divided adverse events in various ways, including by location, specialty, or clinical category (operative, diagnosis, therapy, drug, fall, etc). The classifications used in these and similar studies, however, did not include a grading system to indicate the severity of the adverse events. In addition, none is readily applicable to EDs, and they do not include adverse events relating to nursing care or to ancillary staff. Guly5 described a method of scoring diagnostic errors on a scale of one to seven, based on what additional treatment, and what follow up, the patient would have received had the diagnosis been made correctly. This scale, however, is only applicable to failures of diagnosis. No typology has previously been described that is inclusive, relates specifically to emergency medicine, and incorporates some grading of the potential harmful consequences of critical incidents. The main aims of this study were to produce a reliable, broad, simple to use critical incident typology for use by clinicians in emergency medicine, and to test the reliability of this typology.
The study was conducted in the EDs of four hospitals, in the north west of England. Two of these were teaching hospitals, and the other two were district general hospitals (DGHs).
A database of critical incidents was created, from which the typology could be developed. For the purposes of the study, critical incidents were defined as any incidents that had an actual or potentially harmful effect on the outcome for a patient or group of patients. The definition thus included near misses as well as incidents resulting in actual harm to patients.
Data regarding critical incidents were collected in all the departments involved for 12 months, from February 1999 to February 2000. Critical incidents were identified in the following ways:
Each department was requested to identify a middle grade doctor to act as a liaison between the department and the investigator. These doctors were asked to continuously collect basic details of critical incidents, on pre-prepared forms.
The middle grade doctors were also requested to start an anonymous incident reporting scheme within their department.
In one department, all patient records were already being checked daily by the emergency medicine consultants. They were asked to record any critical incidents that they encountered.
The main investigator (MT) spent six weeks in each department during the course of the 12 months of the study. During this time, critical incident data were collected by means of daily review of patients’ records, direct observation of the departments, and by direct reporting of critical incidents to the main investigator. Attempts were made to identify any critical incidents that had occurred outside of these six weeks by means of discussion with nursing and medical staff.
Some critical incidents were already being recorded in each of the departments. Where appropriate, these critical incidents were included in the database.
Derivation of typology
The broad categories into which critical incidents were seen to fall by studying the critical incident database were used to derive a basic first draft of the typology. Batches of critical incidents in the database were classified according to these broad categories. These batches were then assigned a level of severity. The reasoning behind each grading was incorporated into the typology. In this way, the typology came to consist of a collection of broad categories, each subdivided into levels of severity, with explanations at each level of what type of incident should be placed at that level.
Reiterative discussions were held between MT and KMJ. The overall structure of the typology, the level of severity that each type of incident had been given, and the descriptions of each incident type were all discussed. Various adjustments were made to the typology as a result of these reiterative discussions, until it was felt to fulfil its requirements.
Further critical incidents from the database were classified according to the typology. Minor modifications and adjustments were made to the typology as necessary as the classification of incidents proceeded. Each critical incident in the database was classified according to the finalised typology.
An emergency medicine consultant, not previously involved in the study, was provided with a copy of the completed typology and an explanation of its use. Fifty critical incidents were selected from the database by means of a computerised random number generator. The consultant was asked to classify these critical incidents, given written descriptions of them, according to the typology. Comparison was made between the consultant's classification of the 50 critical incidents and that of MT. The results of this comparison were expressed mathematically by means of a percentage figure, with calculated 95% confidence intervals. A formal study of interobserver reliability—that is, a κ study—was not possible because of the large number of possible categories into which critical incidents could potentially be placed.
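The percentage agreement and its 95% confidence interval reported later in the paper can be reproduced with a standard Wald interval for a proportion. The following minimal Python sketch (function and variable names are illustrative, not from the study) shows the calculation for 43 identical classifications out of 50:

```python
import math

def agreement_ci(agreed, total, z=1.96):
    """Percentage agreement with a Wald 95% confidence interval."""
    p = agreed / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of a proportion
    return p * 100, (p - z * se) * 100, (p + z * se) * 100

pct, lower, upper = agreement_ci(43, 50)
print(f"{pct:.1f}% (95% CI {lower:.1f}% to {upper:.1f}%)")
# → 86.0% (95% CI 76.4% to 95.6%)
```

With 43 of 50 agreements this gives 86% (95% CI 76.4% to 95.6%), matching the figures reported in the results.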
Collected critical incidents
Information regarding 816 critical incidents was collected from all departments over the 12 months of the study. Anonymous voluntary reporting made only a small contribution to the database of collected critical incidents. Most critical incidents were identified by consultant ED record review and by the main investigator during visits to the departments.
The completed typology
Selection of category
Table 1 shows the completed typology. The intention is that the user progresses down the left hand column until the general description of the incident for classification is reached. “Patient assessment” is the first category to be considered. This category is intended for all incidents involving delay or failure in the assessment of a patient or patients. Thus, it includes failures attributable to inappropriate triage, delays attributable to excessive waiting times, the requesting of inappropriate investigations, and failures of diagnosis. The next category is “Treatment”, for failures and delays in the treatment given to patients. “External disposal” is intended for any failures in the disposal of patients from the department. It therefore includes inappropriate discharges, inappropriate admissions, failures to arrange appropriate follow up, and failures in giving necessary information to patients and to their general practitioners. The hierarchies for “Patient assessment”, “Treatment”, and “External disposal” can be further subdivided into failures of omission and those of commission. This division is less applicable to the remaining categories.
“Internal disposal” involves situations in which patients are placed in inappropriate locations in the department. This may occur by error, or because of lack of available space within the department. The “Equipment” category is intended for situations where an incident occurs because of equipment failures or shortage of supplies. “Documentation” refers to failures occurring as a result of inadequate, illegible, inaccurate, or misplaced documentation. Those incidents that occur solely because of the actions of patients are categorised next. Finally, there is a “General” category for incidents not classified under any of the preceding definitions.
The order in which the user proceeds down the table is specified, to help the user to select the appropriate category, thus helping to improve consistency of results between different users. Consider, for example, a critical incident in which a patient attends the ED with chest pain and is discharged, his ECG having wrongly been interpreted as normal when it in fact showed an acute myocardial infarction. Without assistance, the user may consider this as either a failure of “Patient assessment” or as an “External disposal” failure. Using the natural progression down the table, however, the user is directed towards categorising the incident under “Patient assessment”, this coming before “External disposal”. Specifying the order in which categories are considered also ensures that patient actions are only considered when they are the genuine cause of an incident, and any cause of that incident that could be the department’s responsibility has been considered.
Selection of level of severity
Having selected the broad type of the critical incident for classification, the user then looks across the table until an appropriate description for their incident is reached. The level of severity is thus obtained. The levels range from 1, generally for a life threatening situation, to 5, generally corresponding to failures where no harm occurs.
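The two-step lookup described above (first matching category in a fixed order, then severity level from the written descriptions) can be sketched as follows. The category names and their order follow table 1, but the matching predicates and the `classify` helper are hypothetical stand-ins for the typology's written definitions, not part of the published tool:

```python
# Categories are checked in the fixed order given in table 1; the first
# matching definition wins, so "Patient assessment" is always considered
# before "External disposal", and "Patient action" is reached only when
# no departmental cause applies.
CATEGORY_ORDER = [
    "Patient assessment",
    "Treatment",
    "External disposal",
    "Internal disposal",
    "Equipment",
    "Documentation",
    "Patient action",
    "General",
]

def classify(incident, definitions):
    """Return (category, severity) for a critical incident description.

    `definitions` maps each category to a list of (predicate, level)
    pairs standing in for the written descriptions at each severity
    level; these predicates are illustrative assumptions only.
    """
    for category in CATEGORY_ORDER:
        for matches, level in definitions.get(category, []):
            if matches(incident):
                return category, level
    return "General", 5  # fall-back: no harm, not classified elsewhere

# Example: a missed myocardial infarction leading to discharge matches
# the "Patient assessment" rules before "External disposal" is reached.
defs = {
    "Patient assessment": [(lambda i: "missed diagnosis" in i, 1)],
    "External disposal": [(lambda i: "discharged" in i, 2)],
}
print(classify("missed diagnosis, discharged", defs))
# → ('Patient assessment', 1)
```

The fixed iteration order is what resolves the chest pain example above: both category definitions match, but "Patient assessment" is encountered first.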
Results of validation study
Of the 50 randomly selected critical incidents that were classified by the independent consultant as well as by MT, classification was identical in 43 (86% (95% CI 76.4% to 95.6%)). In three of the remaining cases, delays in patient assessment were classified as “General”, rather than as “Patient assessment”, because this particular part of the patient assessment category definition had not been noted. Agreement would have been achieved in 46 (92%) of cases if this had not occurred. The remaining four cases of disagreement were: inability to admit patients to a ward, classified as “Internal disposal” by one rater and “External disposal” by the other; a patient refusing examination, classified as “Patient assessment” by one rater and “Patient action” by the other; failure to take an adequate history and obtain an ECG, classified as a level 1 patient assessment failure by one rater and a level 3 patient assessment failure by the other; and discharge of a child who was drowsy and vomiting after a head injury, following a normal skull radiograph, classified as a level 3 external disposal failure by one rater and a level 2 external disposal failure by the other.
Critical incident collection
A total of 816 critical incidents were collected and used to derive the typology. Most of these critical incidents were identified either directly by MT or by consultant record review. There was a failure to successfully institute a voluntary incident reporting scheme. Reasons for failure of such schemes have been previously described,1 and include lack of trust leading to fear of retribution, lack of belief in the benefits of reporting, and reluctance to carry out additional work. In this study, it seems that additional factors were important. Firstly, the middle grade doctors did not have a strong incentive to make the schemes work. Successful schemes require effort and regular encouragement of workers to participate, particularly in the early stages, before any benefits can be seen. The lack of incentive for the doctors selected in each ED, combined with their already considerable workload, meant that such efforts may not always have been made. However, voluntary incident reporting did not succeed as a means of incident identification in department D, where the main investigator was based, any more than in the other departments. The main investigator clearly had an incentive to make the scheme work. Lack of incentive was therefore not the sole cause of failure. From comments received from doctors and nurses, it would seem that the time required for completing reports was a significant disincentive to reporting, despite the information required on the form being very basic. The doctors and nurses also said that by the time they had a free moment, they had forgotten about the incident. It may be that the nature of work in the emergency medicine environment, in which there is constantly something that needs to be done, makes incident reporting more problematic than in other fields.
In anaesthesia, for example, where incident reporting schemes have been described, there is often time after an incident has passed when a report can be completed before the anaesthetist next needs to provide any active care for the anaesthetised patient.
Strengths and weaknesses of the study
Despite the lack of data from successful voluntary reporting schemes, a large number of critical incidents were collected from all the participating EDs, more than have been identified in any previous studies of such incidents in emergency medicine. The critical incidents varied widely in terms of both their type and severity. The typology that was derived using these critical incidents should therefore be comprehensive, and it is anticipated that it will have the power to classify critical incidents occurring in most EDs.
In the internal validation section of the study, it was found that exact agreement was achieved in 86% of cases (95% CI 76.4% to 95.6%). Examination of the cases of disagreement showed that three of these were attributable to failure to categorise delays in patient assessment appropriately in the “Patient assessment” category, instead categorising them as “General” failures. The description of the typology makes it clear that delays in patient assessment are to be categorised under “Patient assessment”, but it may be that this does not seem intuitive to others, leading to disagreements. Of the remaining four cases of disagreement, in only one case was there a difference of more than one level of severity. In this case, the consultant classified a failure to take an adequate history and request an ECG as a level 1 failure, when the main investigator had classified it as level 3. It should be noted that for this validation study, the consultant was provided with only a minimal amount of explanation before performing the classification. It may be that good agreement is achieved with minimal explanation, as in this case, but that excellent agreement could be achieved with more active training and a system of giving feedback after a number of incidents have been classified. This is an area requiring further study before this typology can be widely used.
Comparisons with other studies
As far as is known, no other system has been described for the hierarchical classification of a wide range of adverse incidents in emergency medicine, whether these include only full blown accidents, or also incorporate near misses. In addition, no study has been described that compares different EDs in terms of the types of critical incidents occurring there. Previous studies2–4 have used similar broad categorisation of failures, such as diagnosis, therapy, drug related, etc, but the typology derived for this study differs from these in two main ways. Firstly, it incorporates an assessment of how serious, or potentially serious, an incident is. This has obvious benefits for anyone investigating ED failures who may not be concerned with trivial failures. Failures classified below a specified level can simply be disregarded. Secondly, the typology has been designed specifically for use in EDs, as a comprehensive, clinically driven tool. It therefore includes a broad range of incidents that occur in the field of emergency medicine, but is not made complicated by including incidents that do not occur in this area.
In comparison with Guly’s failure severity scoring system,5 the typology presented here has been designed to look at a much broader range of potential failures occurring within EDs. It therefore is a more comprehensive tool for the investigation of failure in these departments.
Future application of typology and future research
Prospective testing of the typology on a new set of critical incidents, by different users, is required before its true reliability can be assessed. In addition, a survey of experienced clinicians in the field of emergency medicine, to discover to what extent they agree with the typology's definitions of the levels of severity, may be of benefit.
It is of note that most critical incident reports were obtained through daily record review (by the departmental consultants in department D or by the researcher). Further work is warranted to determine the role of this method as a risk management tool.
After further validation, and refinement if necessary, the typology could be used as a means of continuous assessment of the types of incidents occurring in EDs. It may also be considered as a tool to assist in the selection of critical incidents for reporting to a hospital reporting system, or to the regional and national reporting schemes that have recently been recommended.1
The typology presented is a logical and rapid method of classifying a wide range of critical incidents, according to both their type and their level of severity. It is anticipated that it will prove to be a useful tool in emergency medicine risk management, being clinically relevant and comprehensive. Further studies of the validity, reliability, and clinical usefulness of the typology are indicated.