Abstract
Objectives: The aim of this study was to identify performance indicators thought to reflect the quality of patient care in the emergency department.
Methods: A three-round accelerated expert Delphi study was conducted by email or fax. A panel of 33 experts drawn from emergency medicine, emergency nursing, professional service users, and patients was consulted. Participants were initially asked to propose performance indicators reflecting the quality of care given in the emergency department setting in the United Kingdom. In the second round these proposals were collated and scored on a 9 point Likert scale; in the third round those that had not reached consensus were returned for reconsideration in the light of group opinion. Statements reaching a pre-defined consensus were identified.
Results: 224 performance indicators were proposed. Altogether 36 indicators reached consensus as reflecting good departmental performance after round three; 24 of these were process measures.
Conclusions: 36 potential indicators of good quality of care in the emergency department in the UK have been identified.
Keywords: performance indicator; Delphi study
Within the United Kingdom the specialty of emergency medicine is facing pressures from increasing patient numbers1 and the need to reduce waiting times.2,3 At the same time the medical profession is being urged to be more accountable, and outcome indicators are being developed across a broad range of specialties.4 These indicators can have many purposes, ranging from informing patients of the quality of service they can expect from their local hospital to allowing purchasers of health care to see that they are getting value for money.
The concept of measuring performance in health care is not new. Ernest Codman, a Boston surgeon, was recording the outcomes of his care and disseminating the information as early as 1914.5 Later Donabedian proposed the division of health care into structure, process, and outcome, which are causally linked.6
- Structure—the human, physical, and financial resources available to provide health care
- Process—the care or health service provided to the patient
- Outcome—the resulting effect on the health of the patient or population
Aspects of each of these can be measured or quantified, although there is an emphasis on measuring outcomes—the end results of patient care. Outcome indicators are aggregated statistical measurements that describe the outcomes of health care for a group of patients or a whole population.7 Examples of outcomes measured include mortality, morbidity, physiological parameters, or more subjective patient-based assessments of health. Outcome indicators are rare in emergency medicine. Performance indicators measure quality of care and may encompass process, outcome, and effectiveness.7 The Joint Commission on Accreditation of Healthcare Organisations in the United States uses measures of process, such as the timing of computed tomography in head injury, as proxies for outcome when evaluating the quality of care in emergency departments.8 An example of performance assessment that encompasses emergency department care is the United Kingdom Trauma Audit and Research Network—a large database recording the outcome of trauma care in terms of mortality.9 This links outcome with both structure and process.
The only current recorded indicator that directly addresses emergency department performance relates to waiting time.10 Although emergency departments contribute to other NHS indicators, such as mortality from skull fractures and suicide rates, this contribution is difficult to quantify. There is a pressing need to develop indicators that reflect the quality of care delivered to the whole range of patients presenting to the emergency department.
The aim of this Delphi study was to identify aspects of practice thought to be indicative of the quality of emergency department care.
METHODS
An accelerated Delphi study was conducted between January and March 2001 using a panel of 33 experts. These included emergency physicians from a broad range of departments, senior emergency nurses, inpatient specialists with an interest in emergency care or outcomes research, and a patients’ representative from the Community Health Council. Thus a wide range of views was represented—including those of service providers, professional service users, and patients. The views of all participating experts were given equal weight. A list of the members of the Delphi panel is given in the appendix (available to view on the journal web site, http://www.emjonline.com/supplemental).
The first round of the Delphi asked the panellists to consider the aspects of emergency department care that might represent the quality of patient management. They were asked to propose indicators under the broad specialty subheadings shown in the box.
Subheadings for proposed indicators in round one
- Surgery/orthopaedics/trauma
- Paediatrics
- Psychiatry
- Anaesthesia/analgesia
- Obstetrics and gynaecology/ENT/ophthalmology
- Primary care
- Minor injury
- Radiology/imaging/investigations
- Cardiac arrest
- Bereavement
- Major incidents
- Other
The replies were collated into a series of proposed indicators covering a wide range of topics. In the second round these were returned to the panel members as a series of statements, and each panellist was asked to express their level of agreement with the use of each proposed measure as a performance indicator on a 9 point Likert scale.11 After this round the results were analysed for frequency of response using the SPSS for Windows statistical package. Statements that had reached consensus as either good or bad indicators of quality of care were identified. Positive consensus was defined as 80% or more of respondents scoring 6 or above; negative consensus was defined as 80% or more of respondents scoring 4 or below. In the third round the remaining statements were returned to the panellists in a similar format to round two. In addition, the scores from round two were summarised and the respondent's own score was underlined, as shown in figure 1, allowing group members to change their response in the light of group opinion. Comments or concerns expressed by panellists in the second round were also added to the round three questionnaires.
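To make the scoring rules above concrete, the minimal sketch below (written in Python rather than the SPSS package actually used for the analysis) classifies a single statement from a set of round two Likert scores. The function name, the threshold parameter, and the example scores are illustrative assumptions, not data from the study.

```python
# Minimal sketch (not the authors' SPSS analysis) of the consensus rules:
# positive consensus if >=80% of respondents score a statement 6 or above,
# negative consensus if >=80% score it 4 or below, otherwise no consensus.

def classify_statement(scores, threshold=0.8):
    """Classify one statement from its 1-9 Likert scores (one per respondent)."""
    n = len(scores)
    high = sum(1 for s in scores if s >= 6)  # scoring 6 and above
    low = sum(1 for s in scores if s <= 4)   # scoring 4 and below
    if high / n >= threshold:
        return "positive consensus"          # good potential indicator
    if low / n >= threshold:
        return "negative consensus"          # poor potential indicator
    return "no consensus"                    # returned for the next round

# Hypothetical scores from 31 respondents for an illustrative statement
example_scores = [7, 8, 6, 9, 7, 6, 8, 7, 6, 9, 7, 8, 6, 7, 8,
                  6, 7, 9, 8, 6, 7, 5, 4, 8, 7, 6, 9, 7, 8, 6, 7]
print(classify_statement(example_scores))    # -> positive consensus
```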
RESULTS
Of the 33 panellists, 28 completed round one, 31 round two, and 30 round three. Three of the 31 replies in round two were received after analysis, so those responses were not included until the completion of round three. Round one produced a series of 224 statements that were then returned to the panellists. After round two, 43 indicators had reached consensus as either good or bad indicators and were removed from the process. A further 13 indicators reached consensus after round three. The 36 measures reaching consensus as having good potential for use as departmental performance indicators are shown in table 1, with the 20 measures thought to be poor performance indicators in table 2. The remaining statements are not presented here.
DISCUSSION
This study has identified potential indicators thought to reflect the quality of care given to the whole range of patients presenting to emergency departments. These can provide a starting point for further research and implementation, and with adequate input of resources they could be incorporated into a national framework for quality control. Clearly, while healthcare providers are being urged to measure outcomes, there are problems in assessing the outcomes of emergency care. One current performance indicator for emergency departments relates to waiting times—a measure of process.10 In this study “time to be seen” and “total time in the department for patients with minor injury” were thought to represent good departmental performance, whereas “total time in the department for admitted patients” depends on many other factors and did not reach consensus. Health Services Accreditation published performance indicators for accident and emergency services in 1997, and some of the indicators from this Delphi study were similar to those best practice standards.16 These need to be developed and disseminated as measurable standards that are suitable for audit.
The panellists were chosen to represent a wide range of viewpoints, and the use of the Delphi technique allowed the panel to express their views anonymously. In this way consensus could be sought without bias being introduced by prejudice or interpersonal relationships. Panellists were also able to change their minds once they had seen the spread of opinion from the rest of the group and any relevant comments. Nevertheless, the Delphi process does have some limitations.12 The selection of the panel depended on the subjective opinion of the researchers and the availability of the experts within the allocated time period. The wide variety of indicators proposed in round one meant that the focus necessarily had to change from outcome indicators to performance indicators; this was felt to reflect the difficulty of measuring outcomes within emergency medicine. Although 224 indicators were proposed in round one, it is possible that important issues have been overlooked. Consensus as defined above does not imply unanimous agreement.
Although initially aiming to identify outcome indicators for emergency medicine, the proposed indicators in the first round reflected structure, process, and outcome. Most measured processes occurring within the emergency department. This may be because the specialty is process driven, with the timings of procedures being related to the urgency of the presentation. There are many problems with the assessment of outcomes in emergency medicine. For many patients the care received in the emergency department constitutes only a small proportion of their overall health care with other specialists responsible for their ongoing management. For these patients the timeliness and appropriateness of the diagnostic and therapeutic processes are the most important aspects of emergency department care. The outcomes of these processes may not be apparent when the patient leaves the department and it is not clear when the outcomes of emergency care for these patients should be assessed. For many other patients, treated solely by the emergency department, the outcome of their episode of care is again unclear. Only a small proportion of these are followed up in review or fracture clinic and the outcome for the others is usually unknown. The lack of data collection in many departments increases this problem. It must be remembered that from the perspective of both the patient and other healthcare providers, satisfaction with the processes of care should be considered an outcome. This was emphasised in the replies from the representative of the Community Health Council.
Two of the 36 indicators reaching positive consensus in this study were thought by the authors to be direct measures of outcome. One of these proposed that the survival rate from cardiac arrests within the emergency department should be assessed using the Utstein criteria.13 Survival from out of hospital arrests is evaluated in this way by the ambulance service in some regions. Indicators reflecting the structure of emergency departments, including staffing levels, also reached consensus. Many of these have been considered elsewhere.14,15
Many of the indicators reaching consensus as a poor reflection of the quality of patient care are not under the direct control of emergency medicine. These include the “proportion of patients sent by their GP” and “time from cardiac arrest to presentation at A&E”. Although these may represent important quality issues for other healthcare providers, they were thought not to be relevant to care in the emergency department. Others, such as the proportion of patients admitted to the intensive care unit from the emergency department, were highly case-mix dependent.
The results of this study can only be used as a basis for further work. Before any performance indicator can be adopted it needs to be clearly defined and tested for reliability (the ability to give repeatable results), validity (the degree to which the measure reflects actual performance), and responsiveness (the ability to detect a significant change in performance). A widespread and accurate system for data collection also needs to be in place, and significant investment in continuous quality monitoring is required for this to occur. More practically, many of these measures could be audited locally, regionally, or even nationally.
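The study does not prescribe how such testing should be carried out. As a purely illustrative sketch, the Python fragment below (using hypothetical data for an imaginary timing indicator) shows one simple way reliability and responsiveness might be examined; the variable names, sample sizes, and statistical tests are assumptions, not methods drawn from the paper.

```python
# Illustrative only: hypothetical data for an imaginary indicator measured
# in minutes (for example, time to analgesia). The study does not specify
# these tests; they simply show what "reliability" and "responsiveness"
# checks might look like in practice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
first_audit = rng.normal(35, 10, 50)                    # initial measurement
repeat_audit = first_audit + rng.normal(0, 3, 50)       # re-abstracted records

# Reliability: do repeated measurements of the same cases agree?
r, _ = stats.pearsonr(first_audit, repeat_audit)
print(f"test-retest correlation: {r:.2f}")

# Responsiveness: can the indicator detect a real change in performance?
after_change = first_audit - 8 + rng.normal(0, 4, 50)   # hypothetical improvement
t, p = stats.ttest_rel(first_audit, after_change)
print(f"paired t test for change: t = {t:.2f}, p = {p:.3g}")
```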
This Delphi study provides a starting point for the development of indicators within the specialty of emergency medicine. It has identified areas where further research is needed to explore the outcomes of emergency care and to link measurable processes with these outcomes. This is essential to define the role of emergency departments and to monitor the standard of care from within the specialty.
CONTRIBUTORS
E Beattie coordinated the project, undertook the analysis, and wrote the first draft of the paper. K Mackway-Jones initiated the study, identified the panel members, and contributed to data interpretation and the subsequent drafting of the paper.
Acknowledgments
The authors would like to thank Mr Mike Clancy and Professor Andrew Long of Salford University for their support and advice.
Supplementary materials
Web-only Appendix
The appendix is available as a downloadable PDF (printer friendly file).
- Appendix 1: Participants in Delphi Study (PDF)
Footnotes
- Funding: this research was funded by a research grant from the Faculty of Accident and Emergency Medicine.
- Conflicts of interest: none declared.