Abstract
The importance of the presentation and evaluation of economic data regarding the cost effectiveness of a health care intervention is discussed.
- economic evaluation
The recognition that the health services, like all other activities, have limited resources available has led to the widespread adoption of cost effectiveness criteria in the assessment of new health technologies.1–4 Evidence is required, not only of safety, efficacy, and effectiveness, in the form of clinical trials, but also of efficiency, or cost effectiveness. The NHS Centre for Reviews and Dissemination has identified over 4000 published economic evaluations,5 the majority of which have appeared over the past 10 years. Some innovations are promoted on the basis of cost effectiveness alone.
Given this trend, it is increasingly important for clinicians to be able to critically appraise articles that report economic data. Excellent textbooks6 and articles7–9 exist to educate those interested in health economics and guide the appraisal of health economic papers. However, busy clinicians may have little time available for such study and, faced with appraisal guidelines based on unfamiliar concepts, may be tempted to accept published data at face value.
The purpose of this paper is not to attempt to teach health economics to accident and emergency (A&E) specialists but to introduce a few important concepts and demonstrate how health economic data, like clinical data, can be presented in a misleading fashion. It is built around a series of examples of poor practice that might be used by a naive or unscrupulous researcher to make an innovation appear cost effective. These examples are entirely fictitious; none are based on genuine research, and any similarity to published data is unintentional.
EXAMPLE 1 DO NOT PRESENT ANY DATA
A study shows no difference in outcome between nurse practitioner and senior house officer (SHO) management of minor injuries. The results section concludes by stating: “economic evaluation showed that on average £1.17 was saved per patient attended by a nurse practitioner as opposed to the SHO (p<0.001)”.
The inadequacy of this type of statement should not require any elaboration. Nevertheless, it is surprising to see how often it appears in published articles. We would not accept clinical data presented in such a brief fashion. The same should apply to economic data. Likewise, the apparent statistical analysis is meaningless unless more data are presented.
In fairness to authors, papers reporting primary clinical trials can spare few words for a detailed account of the economic evaluation, even though the results of that evaluation are crucial to the policy implications of the research. However, where such brief statements are made, references to reports or papers giving fuller details of the economic evaluation are a minimum requirement. Where such references are provided, it is incumbent upon the reader to follow the paper chase, as a host of errors of commission and omission remain possible.
EXAMPLE 2 PRESENT COST DATA ONLY AND IGNORE OUTCOMES
Introduction of a protocol to guide the management of minor head injuries significantly reduced the number of skull radiographs performed and resulted in impressive cost savings.
An economic evaluation compares the costs and outcomes of two or more interventions10 (see table 1). Anything less is a partial evaluation: in this case, a cost comparison.
Table 1 What is an economic evaluation? An economic evaluation should compare both the costs and the outcomes of two alternatives; if it does not, it is not a true economic evaluation and the terms set out in the table should be used.
The protocol in the example may well save costs, but how do we know that outcomes, such as missed disease or patient satisfaction, are not affected? While it is perfectly acceptable for the outcome data to be derived from other sources, such as a literature review or a separate clinical trial, the evidence on outcomes should be presented in sufficient detail to support any presumption of equal effectiveness.
EXAMPLE 3 ONLY MEASURE THE OUTCOMES THAT SUPPORT YOUR CASE
A study comparing the “multiple nerve block” technique for manipulation of wrist fractures with general anaesthesia found that considerable cost savings were achieved with the nerve block and yet radiological outcome at six months was identical for the two strategies.
It often surprises clinicians, who think that health economics is all about counting costs, that much more attention seems to be paid to measuring outcomes.11 This may reflect an excessive reliance in clinical research upon outcome measures that are more important to clinicians than patients. Measurement of peak flow rate, blood pressure, serum haemoglobin concentrations, and radiological degrees of angulation may all help our chances of getting statistically significant results, but do they translate into any benefit for the patient; that is, does the patient feel any better?
In the case above, it would be more useful to have measured pain scores or patient satisfaction. These may have revealed that patients detested the multiple nerve block. If so, we now have a problem because the cheaper alternative is the least effective and interpretation of the results is more complicated. Rather than one alternative being dominant by virtue of achieving lower costs with the same or better outcomes (or better outcomes at the same or lower costs), we now have an alternative that is less effective but also costs less. In this case results can be presented in the form of a cost effectiveness ratio; that is, the cost per unit change in outcome. The outcome(s) measured and whether any differences are found will determine what type of economic evaluation should have been performed12–14 (see box 1).
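To make the ratio concrete, here is a minimal sketch in Python using entirely invented figures for the fictitious nerve block example above; it simply divides the extra cost of one strategy by the extra effect that cost buys.

```python
# Minimal sketch with invented figures for the fictitious example above.
# A cost effectiveness ratio divides the extra cost of one strategy by the
# extra effect (here, mean patient satisfaction on a 0-10 scale) it buys.

def cost_effectiveness_ratio(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost per unit change in effect of strategy A over B."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Hypothetical values: general anaesthesia (GA) vs multiple nerve block (NB).
cost_ga, satisfaction_ga = 250.0, 8.0
cost_nb, satisfaction_nb = 90.0, 6.5

ratio = cost_effectiveness_ratio(cost_ga, satisfaction_ga,
                                 cost_nb, satisfaction_nb)
print(f"GA costs £{ratio:.2f} per extra point of patient satisfaction")
```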
Box 1 Types of economic evaluation
Cost minimisation analysis
The effectiveness of the two alternatives is, or has been, demonstrated to be equivalent. Analysis simply determines which option costs less. If no data are presented to establish equivalent effectiveness then this is simply a cost analysis and not a true economic evaluation.
Cost effectiveness analysis
The effectiveness of the two alternatives differs and is measured in a single, common unit of effect, such as lives saved, change in peak flow, or change in pain score. If the more effective alternative is also more expensive, the results may be expressed as a ratio of cost per unit change in effect.
Cost utility analysis
This is a form of cost effectiveness analysis in which the measure of effectiveness is a measure of utility: a generic measure, such as the quality adjusted life year (QALY), that can be used to value health status across any type of illness.
Cost consequences analysis
If a cost effectiveness analysis has measured a variety of outcomes then results may be presented as a cost consequences analysis. This has the disadvantage of not permitting combination of data in a single ratio of cost effectiveness.
Cost-benefit analysis
Outcomes are valued in monetary units to provide information on the absolute benefit of an intervention: that is, does the benefit outweigh the cost? Although attractive in principle, the difficulty of obtaining valid monetary valuations of outcomes has limited the acceptance of this approach.
The choice of outcome measure is therefore of key importance in economic evaluation and is an area of growing research.15 As mentioned above, outcome measures should preferably reflect values important to the patient. Using several outcome measures may complicate the presentation of results, as it is clearly difficult to derive a single cost effectiveness ratio from them. A single outcome measure that combines all aspects of patients' perception of their health could therefore be considered ideal, and the development of such measures has been the subject of much recent research.15–17
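As an illustration of how such a combined measure feeds into a cost utility analysis, the sketch below computes a cost per QALY gained; every number in it is invented.

```python
# Illustrative sketch, all numbers invented: cost per QALY gained, where a QALY
# weights each year of survival by a 0-1 utility (quality of life) score.

def qalys(periods):
    """Sum of (years lived * utility weight during those years)."""
    return sum(years * utility for years, utility in periods)

qaly_new = qalys([(10, 0.9)])   # hypothetical new treatment: 10 years at 0.9
qaly_std = qalys([(10, 0.7)])   # hypothetical standard care: 10 years at 0.7

extra_cost = 6000.0             # assumed additional cost of the new treatment (£)
print(f"Cost per QALY gained: £{extra_cost / (qaly_new - qaly_std):.0f}")
# £6000 / 2.0 QALYs = £3000 per QALY gained
```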
EXAMPLE 4 CHOOSE A POOR COMPARATOR
A study found that massage therapy for acute cervical sprain was cost effective when compared with a routine regimen of soft cervical collar, physiotherapy, and orthopaedic outpatient referral. Outcomes were identical but massage therapy cost less.
In clinical trials, a new intervention should be compared with the best available alternative, and the same principle applies to economic evaluations. The challenge lies in defining the best comparator: issues such as efficacy, side effect profile, and utilisation all need to be taken into account, and for the purposes of economic evaluation, cost must be considered as well.
If it is unclear whether any alternative strategy is effective, the best alternative may be to do nothing. In this case, the alternative regimen has little evidence to suggest it will be any more effective than no intervention18–20 and would be expected to incur significant costs. A more relevant comparison may be to physiotherapy alone21 or to no follow up.
As a general rule, the rationale for the choice of comparator must be explicit, so that readers can judge whether they agree with it and assess whether it applies in the context in which they work.
The choice of comparator is particularly important when one wishes to extrapolate the results of economic evaluations carried out in other countries, or other health care systems, to one's own service. Many economic evaluations come from the United States (US). Given that the US spends 13.7% of gross domestic product on health care compared with 5.8% in the United Kingdom (UK),22 it is unsurprising that perfectly valid economic evaluations in the US may have little relevance here.
EXAMPLE 5 IGNORE POTENTIAL KNOCK-ON EFFECTS FROM YOUR INNOVATION
An intensive diagnostic testing regimen for patients presenting to A&E with chest pain reduced the admission rate with this complaint from 60% to 40%. The cost savings associated with reducing the admission rate more than compensated for the increased cost of diagnostic testing in A&E. Outcomes were unchanged, so the diagnostic testing regimen was cost effective.
Any economic evaluation must state the viewpoint taken6,7 (see box 2). There is no “correct” viewpoint, but if a restricted viewpoint is taken then the possibility that costs are simply being moved on to another budget must be considered. If that is your intention, fair enough, but it is dishonest to pretend that costs simply disappear.
Box 2 The viewpoint of an economic evaluation
The viewpoint of an economic evaluation should always be specified. This will determine what costs and outcomes should be measured. Possible viewpoints include departmental (for example, A&E), institutional (for example, the hospital), health services or society as a whole. If a restricted viewpoint is taken cost effectiveness may only be relevant to the budget considered. The intervention may not be cost effective when another viewpoint is taken. Taking a broad viewpoint may overcome this problem but the difficulties in measuring costs and outcomes will increase.
In this case the viewpoint was that of the A&E department, but the knock-on effects of diagnostic testing are likely to be felt elsewhere. Intensive diagnostic testing, for example, is likely to produce a number of positive results. True positives may benefit from treatment, but this should be demonstrated. False positives will not, and the cost of ultimately arriving at a definitive diagnosis may fall on another department. What, for example, happened to the angiography rate when intensive diagnostic testing was introduced?
A&E is particularly susceptible to both creating and receiving knock-on effects. Health service costing will be discussed further below, but it is important to note at this stage that it often creates an economic incentive to reduce admissions and expedite discharge. If this simply transfers costs to outpatient services, then it will only be cost saving from a restricted viewpoint. Likewise, if accelerated discharge results in increased reattendance, then A&E departments will bear the cost of the knock-on effects.
Again, the rationale for the perspective adopted needs to be made explicit.
EXAMPLE 6 KEEP THE COSTING METHOD AS SIMPLE/CRUDE AS POSSIBLE
“Cellulomycin” is a new antibiotic for treating soft tissue infections. Although it is much more expensive than those in routine use and does not improve overall cure or complication rates, it controls infection much more quickly and therefore reduces length of stay. An economic evaluation has demonstrated that, by reducing length of stay, Cellulomycin is actually more cost effective than its rivals.
Length of stay is typically used by hospital financial data systems as a marker for resource use. The total costs of medical, nursing, diagnostic, and therapeutic interventions are combined to give a per diem, or daily, cost of inpatient hospital stay. The per diem cost is therefore only an average across a wide range of possible inpatient costs. For a typical hospital stay, most costs are concentrated in the first few hours after admission, during which the actual cost may be much greater than the per diem cost. At the end of a hospital stay the opposite is true: the actual cost may be less than the per diem cost.
Patients treated with Cellulomycin still incur all the early costs of medical and nursing assessment and diagnostic testing; the reduction in length of stay comes from the latter part of the stay. Per diem costs will therefore overestimate the cost saving associated with a reduced length of stay.
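A toy calculation, with invented figures, shows the size of the distortion: because costs are front loaded, removing the last day of a stay saves far less than the per diem average suggests.

```python
# Toy illustration with invented figures. Costs of a five day stay are front
# loaded, so cutting the final day saves much less than the per diem average.

daily_costs = [800, 300, 150, 100, 100]          # actual cost of each day (£)
per_diem = sum(daily_costs) / len(daily_costs)   # average daily cost = £290

saving_claimed = per_diem          # saving claimed for one fewer day
saving_actual = daily_costs[-1]    # cost of the day actually removed

print(f"Per diem estimate of saving: £{saving_claimed:.0f}")   # £290
print(f"Actual saving (final day):   £{saving_actual:.0f}")    # £100
```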
This is not the only way in which data from hospital financial returns may fail to reflect the true cost of a service. Cost data may be unavailable, leaving only charges to be used, and charges may be simplified, or set to cross-subsidise other services, and may therefore overestimate the true cost.
What we really want to measure when we say “true cost” is the opportunity cost, described in box 3. Actually measuring the opportunity cost is very difficult, and simplifications of the costing process may have to be accepted. Where simplifications are made, however, it is the responsibility of the analyst to convince the reader that they cannot change the conclusions of the evaluation; their potential impact on the final conclusions should never simply be ignored.
Box 3 Opportunity cost
The opportunity cost of a programme is the benefit foregone by using resources on that programme instead of the most attractive alternative. As such it represents the “true cost” of a programme by assuming the resources required would otherwise be used to their maximal potential.
EXAMPLE 7 IGNORE UNCERTAINTY AND RANDOM VARIATION
Two antibiotics (Cheapomycin and Costacillin) used for prophylactic treatment of dog bite wounds were compared in a randomised controlled trial. No significant difference in outcomes was recorded between the 50 patients in each arm of the trial. The authors concluded that Cheapomycin, being half the price of Costacillin, is the preferred antibiotic for this condition.
The importance of considering the role of chance, or random variation, in interpreting clinical data is familiar to all clinicians. Cost data can be equally subject to random variation. Resource use may depend upon, for example, length of stay, time taken off work, number of clinic appointments, or use of intensive care, and in each case random variation may be important. Simply demonstrating that one group uses fewer resources than another is not sufficient; we must apply some statistical analysis.
Cost data are often highly skewed.23 A few cases often consume a disproportionate amount of resources, and if these few cases happen by chance to fall into one group, a false impression of increased resource use may be created. In the example described above, the basic cost of antibiotic prophylaxis may well be trivial compared with the cost of treating an infected wound. Unless the study is powered to detect differences in complication rates, we cannot consider it powerful enough to detect a significant difference in cost.
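One common way of handling such skewed cost data (not the only one) is a non-parametric bootstrap confidence interval for the difference in mean costs. The sketch below uses simulated data for the fictitious dog bite trial; all the cost figures are invented.

```python
# Minimal sketch with simulated, invented data: a bootstrap confidence interval
# for the difference in mean cost per patient between two trial arms. The
# bootstrap avoids assuming costs are normally distributed, which skewed cost
# data rarely are.
import random

random.seed(1)

# Hypothetical per patient costs (£): mostly cheap, a few costly infected wounds.
cheapomycin = [5] * 46 + [400, 600, 800, 1200]   # 50 patients
costacillin = [10] * 48 + [500, 900]             # 50 patients

def mean(xs):
    return sum(xs) / len(xs)

diffs = []
for _ in range(10_000):
    a = [random.choice(cheapomycin) for _ in cheapomycin]  # resample arm A
    b = [random.choice(costacillin) for _ in costacillin]  # resample arm B
    diffs.append(mean(a) - mean(b))

diffs.sort()
low, high = diffs[249], diffs[9749]  # 95% percentile interval
print(f"Observed mean difference: £{mean(cheapomycin) - mean(costacillin):.2f}")
print(f"95% bootstrap CI: £{low:.2f} to £{high:.2f}")
```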
Uncertainty in economic evaluation may result from more than just random variation. It is not unusual to have to make assumptions when extrapolating data in an economic analysis, for example in attributing per diem costs to length of stay. There is often no easy way to avoid such assumptions; instead, they must be tested by performing a sensitivity analysis.24 If the results of an analysis are sensitive to variation of key parameters or assumptions within a credible range, then they should be viewed with caution.
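A minimal sketch of a one way sensitivity analysis, reusing the chest pain scenario from example 5 with wholly hypothetical parameters: the assumed saving per avoided admission is varied across a credible range to see whether the conclusion survives.

```python
# Minimal one way sensitivity analysis with hypothetical parameters, based on
# the chest pain scenario in example 5: does intensive testing stay cost saving
# as the assumed saving per avoided admission varies over a credible range?

test_cost_per_patient = 120.0   # assumed extra cost of intensive testing (£)
admissions_avoided = 0.20       # admission rate fell from 60% to 40%

for saving_per_admission in (300, 500, 700, 900):   # credible range (£)
    net = admissions_avoided * saving_per_admission - test_cost_per_patient
    verdict = "cost saving" if net > 0 else "NOT cost saving"
    print(f"£{saving_per_admission} per avoided admission: "
          f"net £{net:+.0f} per patient -> {verdict}")
```

Here the verdict flips within the credible range, which is exactly the situation in which the results should be viewed with caution.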
SUMMARY
This article has attempted to illustrate how shortcomings in the presentation and interpretation of economic data can lead to misplaced conclusions regarding the cost effectiveness of an intervention. Economic evaluations are as subject to error from bias and random variation as clinical trials are. Reporting of objectives, methodology, and results should be comprehensive and subject to statistical analysis. Readers interested in developing critical appraisal skills in this area are encouraged to consult the texts listed under Further reading. Those with less time and enthusiasm are encouraged to view uncritical claims of cost effectiveness with scepticism and to consider whether the flaws we have described may be responsible.
FURTHER READING
Mooney G. Economics, medicine and health care.25 A readable introduction to the principles behind health economics.
Drummond MF, et al. Methods for the economic evaluation of health care programmes. 2nd edn.6 Essential reading for any clinician collaborating in an economic evaluation.
Gold MR, Siegel JE, Russell LB, et al, eds. Cost effectiveness in health and medicine.27 Provides a detailed discussion of the theory behind the practical application and recommendations for good practice.
Drummond MF, et al. Guidelines for authors and peer reviewers of economic submissions to the BMJ.7 A comprehensive check-list for critical appraisal.
Series of short BMJ articles by Ray Robinson.10,12–14,26 A concise introduction to the basic concepts behind economic evaluation.