On the philosophy of diagnosis: is doing more good than harm better than “primum non nocere”?
  1. R Body,
  2. B Foex
  1. Emergency Medicine Research Group, Research Office, Emergency Department, Manchester Royal Infirmary, Manchester, UK
  1. Dr R Body, Emergency Medicine Research Group, Research Office, Emergency Department, Manchester Royal Infirmary, Oxford Road, Manchester M13 9WL, UK; rbody{at}doctors.org.uk

Abstract

Diagnosis is arguably the cornerstone of medicine. Without at least some form of diagnosis the practice of medicine would not be possible. This narrative review explores common philosophical assumptions and challenges the notion that a certain diagnosis can ever be made. The idealistic concept of “primum non nocere” is discussed, and whether the utilitarian goal of achieving “the greatest happiness for the greatest number” is a feasible or preferable alternative is considered. It is concluded that utilitarianism is inescapably intertwined with modern medical practice. Suggestions are presented to further the understanding of diagnostic medicine by embracing its principles.

WHAT DO WE MEAN BY DIAGNOSIS?

Diagnosis is defined as “the identification of the nature of an illness or other problem by examination of the symptoms”.1 Its origin is from the Greek words “dia” meaning “apart” and “gnosis” meaning “knowledge”. There are, in fact, three distinct terms for “knowledge” in Ancient Greek. “Skene” is the kind of knowledge derived from observation and is the origin of the English word “science”. “Mathein” is the kind of knowledge derived from calculation as with “mathematics”. “Gnosis”, meanwhile, is a deeper inherent knowledge. It yields, for example, the English word “recognise”.

Through diagnosis we provide patients with labels to identify illness or disease. We infer an understanding of the anatomical and pathophysiological mechanisms behind their disease process. It is only through diagnosis that we are able to prescribe treatments and develop management plans for our patients. Without applying at least some form of diagnosis, the practice of medicine would not be possible.

TRADITIONAL CONCEPTS OF DIAGNOSIS

Our traditional concept of diagnosis derives from a philosophical approach or paradigm known as “positivism”. Positivists believe that we can find out “how things really are” or “how things really work”.2 Reality exists, and it is driven by immutable natural laws. It is possible for us to appreciate that reality by adopting a neutral and distant position and making objective observations about it. We develop hypotheses and subject them to verification. By observing constant relationships between variables, we can derive “laws” and “rules” that we can confidently state as “facts”.

Incorporation of the word “gnosis” into “diagnosis” is entirely consistent with the positivistic framework. Thus, physicians can recognise illness and be sure of its nature. Indeed, positivism would hold that, having objectively researched a particular investigation and proven its diagnostic use, we may implement it and know whether our patient has the disease under investigation. As such, we know whether a patient is anaemic by his or her haemoglobin concentration or whether an injured patient has a fractured bone from the radiograph. Ultimately, a positivist viewpoint holds that a patient either has or does not have a particular illness. It is possible to investigate this objectively and to categorically present our diagnosis as a fact.

Under scrutiny, however, the positivistic paradigm becomes untenable. According to it, every scientist who observes the same piece of reality will see the same thing. Intuitively we know this to be untrue. It has been aptly demonstrated that our observations are not shaped by reality alone, but by the imperfections of our sensory and cognitive mechanisms and by our individual characteristics and perspective. Thus, multiple studies have demonstrated suboptimal interobserver reliability when two independent physicians undertake such apparently basic techniques as taking a patient history or eliciting clinical signs.3 5 The same is true even among experts (there is, for example, a high rate of interobserver variation between radiologists reporting chest radiographs).6 Absolute objectivity is therefore merely an ideal. While it can never be truly achieved, we can approximate it reasonably closely by striving to eliminate the effects of our imperfect perceptions and predispositions.

It is impossible for us to truly “know” reality. As such, though we may observe an apparently constant correlation between two variables, we cannot be entirely certain of its validity. We therefore express our understanding of reality in terms of probabilities. This is the post-positivistic paradigm that informs the majority of contemporary quantitative scientific inquiry. It underpins the evaluation of diagnostic tests as we currently see them in the medical journals. Sensitivities, specificities, positive and negative predictive values and likelihood ratios are presented together with 95% confidence intervals. Receiver operating characteristic (ROC) curves help us to make the best use of imperfect diagnostic tests.
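
As a concrete illustration of how these probabilistic measures arise, the following sketch computes sensitivity, specificity, predictive values and likelihood ratios, with simple 95% confidence intervals, from a hypothetical 2×2 contingency table. The counts and the use of a basic Wald interval are illustrative assumptions only, not a recommended analysis.

```python
import math

# Illustrative only: hypothetical 2x2 counts for a test against a reference standard.
tp, fp, fn, tn = 90, 40, 10, 160

sensitivity = tp / (tp + fn)               # P(test positive | disease present)
specificity = tn / (tn + fp)               # P(test negative | disease absent)
ppv = tp / (tp + fp)                       # positive predictive value
npv = tn / (tn + fn)                       # negative predictive value
lr_positive = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity

def wald_ci(successes, n, z=1.96):
    """Simple (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

print(f"Sensitivity {sensitivity:.2f}, 95% CI {wald_ci(tp, tp + fn)}")
print(f"Specificity {specificity:.2f}, 95% CI {wald_ci(tn, tn + fp)}")
print(f"PPV {ppv:.2f}, NPV {npv:.2f}, LR+ {lr_positive:.2f}, LR- {lr_negative:.2f}")
```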

As the esteemed diagnostician William Osler once said: “Medicine is a science of uncertainty and an art of probability”. We seem to have come to appreciate that there is no such thing as the perfect diagnostic test. While the quest goes on, it is clear that we hunt in vain for the “square ROC curve”. Our practice must therefore be guided by less than ideal investigations that have only a “reasonable accuracy” for our intended purposes. Thus, we demand high sensitivities for investigations that we would rely upon to exclude important diagnoses, even if that means compromising the specificity (take, for example, the use of D-dimer in suspected venous thromboembolism). Similarly, we need investigations with high specificities if we are to use them to recommend potentially risky or unpleasant treatment regimes, even if the sensitivity is suboptimal (a good example is the use of the ECG as a marker of ST elevation myocardial infarction and for guiding the prescription of thrombolytic therapy).

While today’s medical journals are brimming with similar examples, a hugely important question remains largely unanswered and, perhaps surprisingly, somewhat neglected. Although we are clearly willing to accept the use of imperfect investigations, how do we define an acceptable level of inaccuracy? Many researchers seem happy to ignore this question altogether. Others may state arbitrary cut-off points within their methods. They may, for example, be content with demonstrating a sensitivity of at least 99% or a negative predictive value whose lower 95% confidence interval is at least 90%. Even then it is rare for researchers to articulate what they would consider to be a reasonable specificity in order to achieve those subjective targets.
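
To make the arithmetic behind such targets explicit, the sketch below checks whether a hypothetical dataset would satisfy a criterion of "a negative predictive value whose lower 95% confidence bound is at least 90%", using the Wilson score interval. The counts and the choice of interval are assumptions for illustration only.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denominator = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denominator
    half_width = (z / denominator) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# Hypothetical: of 240 patients with a negative test, 235 were disease free.
true_negatives, test_negatives = 235, 240
npv = true_negatives / test_negatives
lower, upper = wilson_interval(true_negatives, test_negatives)
target_met = lower >= 0.90   # the (arbitrary) target discussed above
print(f"NPV {npv:.3f}, 95% CI ({lower:.3f} to {upper:.3f}), lower bound >= 0.90: {target_met}")
```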

We very rarely consider the expected consequences of implementing a diagnostic test or choosing a particular diagnostic cut-off. There must be potential benefits associated with true positive diagnosis for the investigation to be worth appraising in the first place. On the other hand, there are inherent risks associated with false negative and false positive results. How do we choose an appropriate balance between the potential benefits and risks?

While we have a responsibility to minimise the risk to patients, it is not clear how we should define an acceptable level of risk. Further, there are not only risks inherent in false negative (or missed) diagnosis but also in false positive diagnosis, which all too often leads to invasive or potentially risky treatment or to unnecessary hospital admission. Perhaps it is time for diagnostic research to move on and consider scientific ways of minimising the potential risk while maximising benefit as an integral part of the appraisal of new diagnostic technologies. Subjective statements regarding acceptable levels of sensitivity or specificity should be abandoned in favour of a more objective and pragmatic endeavour to ensure the greatest benefit and least risk for the greatest possible number of patients.

WHAT IS UTILITARIANISM?

Utilitarianism is a consequentialist philosophical theory that was originally described with regard to the legal system by Jeremy Bentham in 1781.7 Bentham derived a simple hedonic calculus by which we can calculate the “right” thing to do in any situation by weighing the pleasure that would result against the pain and choosing whichever course of action leads to the “greatest happiness for the greatest number”.

As a moral theory, utilitarianism has many critics. According to this model, our main goal in life is the pursuit of pleasure and the avoidance of pain. There are no moral absolutes or boundaries. Thus, by way of an example, even the brutal antics of violent gangs could be justified: although one unfortunate person may be tortured and beaten, causing significant pain and distress, each member of a large crowd may derive immense pleasure from it and so, according to the calculus, the beating would be justified.

This and similar scenarios have led to utilitarianism being considered something of a “dirty word” and, to some extent, disregarded as a philosophical construct upon which to base medical practice. Instead, we often hold to the standard that a moral wrong is absolute. In medicine this has yielded the traditional notion of “primum non nocere” (above all, do no harm), which is widely and perhaps somewhat idealistically proclaimed to be our abiding principle.

UTILITARIANISM AND MODERN MEDICINE

While it is intuitive that, as physicians, we should not intentionally harm our patients, this dogma of “above all, do no harm” is actually untenable in real-life medical practice. There are many situations where we must inflict a degree of harm on our patients in order to achieve a net benefit (venous cannulation and surgery are good examples). Further, there must be an intrinsic moral difference between intentionally inflicting harm and prescribing treatment that carries a risk of causing harm but with beneficent intentions.

Contemporary study of medical ethics is largely influenced by four key principles: autonomy, beneficence, non-maleficence and justice.8 Diagnosis relies upon the selection of an appropriate balance between two of these principles—beneficence and non-maleficence. How we achieve that remains to be decided. However, if we seek simply to minimise risks, we may lose out on tremendous potential net benefits. Perhaps utilitarianism, which seeks to achieve the maximum net benefit, does have something to offer in this regard.

Indeed, utilitarian principles are already employed widely in medical practice. In research, one particular area where utilitarian principles have been widely adopted is in health economics. The concept of quality adjusted life years (QALYs) supposes that health is a function of both length and quality of life and attempts to aggregate the two. If the objective of healthcare policy is to maximise health, then finite resources should be allocated so as to maximise the number of QALYs generated.
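
A minimal sketch of the QALY logic follows, using entirely hypothetical programmes and costs: each programme's QALY yield is length of life multiplied by a quality-of-life weight, and the utilitarian allocation rule is to prefer the lowest cost per QALY.

```python
# Hypothetical programmes competing for a fixed budget. QALYs gained are
# life-years multiplied by a 0-1 quality-of-life weight; the utilitarian
# rule is to fund the lowest cost per QALY first.
programmes = [
    # (name, cost per patient, life-years gained, quality weight)
    ("Programme A", 2000, 5.0, 0.70),
    ("Programme B", 10000, 2.0, 0.90),
    ("Programme C", 500, 0.5, 0.60),
]

ranked = sorted(programmes, key=lambda p: p[1] / (p[2] * p[3]))
for name, cost, years, quality in ranked:
    qalys = years * quality
    print(f"{name}: {qalys:.2f} QALYs gained, {cost / qalys:,.0f} per QALY")
```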

Even away from the academic arena, there are countless examples of utilitarianism in medicine. Consider the case of thrombolysis for ST elevation myocardial infarction. A vast quantity of high-level evidence suggests that thrombolysis reduces the mortality of the condition, preventing around three deaths for every 100 patients treated within 12 h of symptom onset.9 However, the treatment carries a risk of causing intracranial haemorrhage in 0.5–1% of patients.10 Under a rule of “above all, do no harm”, exposing our patients to such risk would be unacceptable, precluding the use of thrombolysis. Nonetheless, for many years doctors have accepted these risks, using the potential net gains as an unequivocally utilitarian justification for recommending the use of thrombolysis.
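
The utilitarian arithmetic can be made explicit with the figures quoted above. Note that the weighting of an intracranial haemorrhage against a death averted in this sketch is purely an illustrative assumption, not a clinical claim.

```python
# Expected outcomes per 100 patients thrombolysed, using the figures quoted
# above. Weighting one intracranial haemorrhage as half as bad as one death
# averted is purely an illustrative assumption.
patients = 100
deaths_prevented = 3.0                      # per 100 treated within 12 h
harm_weight = 0.5                           # assumed disutility of ICH relative to a death

for ich_risk in (0.005, 0.01):              # the quoted 0.5-1% risk of ICH
    haemorrhages = patients * ich_risk
    net_benefit = deaths_prevented - harm_weight * haemorrhages
    print(f"ICH risk {ich_risk:.1%}: ~{haemorrhages:.1f} haemorrhages, "
          f"net benefit {net_benefit:+.2f} death-equivalents per 100 treated")
```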

A glimpse at any surgical consent form reveals similar examples. Surgery is undertaken because of potential and probable net benefits, accepting that there are often serious and significant risks to the intended procedure. Furthermore, the prescription of any medication is made on the understanding that it is likely to yield a net benefit over and above the risks of unwanted side effects and allergic reactions. Indeed, every aspect of clinical medicine is associated with risks for the patient and clinician and carries inherent costs.

Under these circumstances, the notion of “primum non nocere” is clearly an unattainable ideal. We do not and should not abandon our profession in response. Rather, we should progress from that positivistic and utopian concept to a more sophisticated post-positivistic paradigm that must inevitably incorporate some utilitarian values.

HOW CAN UTILITARIANISM INFLUENCE DIAGNOSTIC MEDICINE?

The influences of utilitarian theory on diagnostic research have been apparent in the medical literature for over 30 years. Decision analysis is a quantitative and probabilistic method for identifying the optimal course of action from a set of well-defined alternatives, particularly with reference to diagnostic and therapeutic medicine. It has been used, for example, to evaluate whether a particular diagnostic test ought to be used in clinical practice.11 13 The technique involves mapping all possible courses of action and their consequences using a decision tree. The probability of the occurrence of each outcome is calculated and assigned a utility (or value). Subsequently, the probabilities and utilities are combined and the optimal course of action is indicated by the highest combined value. Finally, sensitivity analyses are carried out to examine how reasonable changes in disease prevalence or treatment efficacy would affect the results.11
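
The sketch below shows the "fold-back" calculation at the core of decision analysis for a simplified two-strategy problem ("test then treat" versus "no treatment"), together with a one-way sensitivity analysis on prevalence. All probabilities and utilities are hypothetical.

```python
# Fold-back calculation for a simplified decision tree: "test then treat"
# versus "no treatment". All probabilities and utilities are hypothetical.

def eu_test_then_treat(prevalence, sensitivity, specificity, u_tp, u_fp, u_tn, u_fn):
    """Expected utility of testing and treating only test-positive patients."""
    p_tp = prevalence * sensitivity
    p_fn = prevalence * (1 - sensitivity)
    p_tn = (1 - prevalence) * specificity
    p_fp = (1 - prevalence) * (1 - specificity)
    return p_tp * u_tp + p_fn * u_fn + p_tn * u_tn + p_fp * u_fp

def eu_no_treatment(prevalence, u_untreated_disease, u_healthy):
    """Expected utility of treating nobody."""
    return prevalence * u_untreated_disease + (1 - prevalence) * u_healthy

utilities = dict(u_tp=0.90, u_fp=0.80, u_tn=1.00, u_fn=0.40)   # 0 = worst, 1 = best

# One-way sensitivity analysis on disease prevalence.
for prevalence in (0.05, 0.20, 0.50):
    eu_test = eu_test_then_treat(prevalence, 0.95, 0.70, **utilities)
    eu_none = eu_no_treatment(prevalence, u_untreated_disease=0.40, u_healthy=1.00)
    better = "test then treat" if eu_test > eu_none else "no treatment"
    print(f"Prevalence {prevalence:.0%}: {eu_test:.3f} vs {eu_none:.3f} -> {better}")
```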

The incorporation of decision analysis into all novel diagnostic research would undoubtedly be a step in the right direction. It is, however, a somewhat labour-intensive and time-consuming process. The same goal can be achieved without the formal construction of a decision tree by mapping the projected costs associated with a particular diagnostic investigation. For example, the total cost (C) inherent in using an investigation is estimated by the equation:

C = C0 + (P(TP) × CTP) + (P(FP) × CFP) + (P(TN) × CTN) + (P(FN) × CFN)

where C0 is the cost of performing the test, P(TP) is the probability of a true positive result, CTP is the cost of a true positive result, P(FP) is the probability of a false positive result, CFP is the cost of a false positive result, P(TN) is the probability of a true negative result, CTN is the cost of a true negative result, P(FN) is the probability of a false negative result and CFN is the cost of a false negative result.14 When comparing investigations, the superior investigation is the one associated with lower total costs. In situations when it may be difficult to accurately project costs or where it is more appropriate to use a single simple measure to reflect risk/benefit ratios, utility may be used as a proxy.
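
Applied directly, the equation above allows two hypothetical investigations to be compared. All costs, the prevalence and the test characteristics in the sketch below are assumptions chosen only to show the mechanics.

```python
# Direct application of the total-cost equation above to two hypothetical
# tests. Costs (arbitrary units), prevalence and test characteristics are
# assumptions used only to illustrate the comparison.

def total_cost(prevalence, sensitivity, specificity, c0, c_tp, c_fp, c_tn, c_fn):
    """C = C0 + P(TP)xCTP + P(FP)xCFP + P(TN)xCTN + P(FN)xCFN"""
    p_tp = prevalence * sensitivity
    p_fn = prevalence * (1 - sensitivity)
    p_tn = (1 - prevalence) * specificity
    p_fp = (1 - prevalence) * (1 - specificity)
    return c0 + p_tp * c_tp + p_fp * c_fp + p_tn * c_tn + p_fn * c_fn

outcome_costs = dict(c_tp=500, c_fp=800, c_tn=0, c_fn=5000)
prevalence = 0.10

test_a = total_cost(prevalence, 0.98, 0.60, c0=20, **outcome_costs)    # sensitive, cheap
test_b = total_cost(prevalence, 0.90, 0.90, c0=150, **outcome_costs)   # specific, dearer

print(f"Expected cost per patient: test A {test_a:.0f}, test B {test_b:.0f}")
print("Prefer test", "A" if test_a < test_b else "B")
```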

Further, in the appraisal of any one novel diagnostic investigation with ordinal or continuous data, the reporting of ROC curves is well established. However, the selection of an appropriate diagnostic cut-off by ROC analysis is not always rigorous and scientific and is often subjective. Using equations similar to the above, it is possible to map the point on the ROC curve at which the benefit/cost ratio is maximised. While this process may itself be time-consuming and at present demands a degree of statistical expertise, we believe that the adoption of scientific approaches similar to those described here has the potential to improve the quality of diagnostic research in emergency medicine. Utilitarian principles are firmly rooted at the heart of the methodology.
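
In the same spirit, a cut-off can be selected from an ROC curve by minimising expected misclassification cost rather than by inspection, as in the sketch below. The candidate (sensitivity, specificity) pairs, the prevalence and the outcome costs are all hypothetical.

```python
# Choosing a diagnostic cut-off from an ROC curve by minimising expected
# misclassification cost rather than by eye. The candidate points,
# prevalence and costs are hypothetical.

roc_points = [
    # (cut-off, sensitivity, specificity)
    (1, 0.99, 0.30),
    (2, 0.97, 0.50),
    (3, 0.92, 0.70),
    (4, 0.80, 0.85),
    (5, 0.60, 0.95),
]
prevalence = 0.15
cost_fn, cost_fp = 10000, 700    # missed diagnosis vs false alarm (arbitrary units)

def expected_cost(sensitivity, specificity):
    return (prevalence * (1 - sensitivity) * cost_fn
            + (1 - prevalence) * (1 - specificity) * cost_fp)

for cut_off, sens, spec in roc_points:
    print(f"cut-off {cut_off}: expected cost {expected_cost(sens, spec):.0f}")

best = min(roc_points, key=lambda point: expected_cost(point[1], point[2]))
print(f"Cost-minimising cut-off on this hypothetical curve: {best[0]}")
```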

We would urge the manufacturers of statistical software packages to incorporate such methods into their programs. We also urge those undertaking diagnostic research to clarify their philosophical stance and to be pragmatic and systematic when comparing and evaluating diagnostic investigations.

CONCLUSIONS

While it may be undesirable as a societal moral framework, utilitarianism is inescapably intertwined with modern medicine. Although the notion of “primum non nocere” would clearly be preferable in an ideal world, it is untenable in reality. Utilitarian values underpin the very principles of medical practice. By embracing them we can hope to move towards enhanced understanding of the true value of diagnostic investigations and to improve the science behind the art of diagnosis.

REFERENCES

Footnotes

  • Funding: None.

  • Competing interests: None.
