
Multi-institutional intervention to improve patient perception of physician empathy in emergency care
  1. Katie Pettit1,
  2. Anne Messman2,
  3. Nathaniel Scott3,
  4. Michael Puskarich3,
  5. Hao Wang4,
  6. Naomi Alanis5,6,
  7. Erin Dehon7,
  8. Sara Konrath8,
  9. Robert D Welch2,
  10. Jeffrey Kline2
  1. Department of Emergency Medicine, Indiana University School of Medicine, Indianapolis, Indiana, USA
  2. Department of Emergency Medicine, Wayne State University School of Medicine, Detroit, Michigan, USA
  3. Hennepin County Medical Center, Minneapolis, Minnesota, USA
  4. Department of Emergency Medicine, JPS Health Network, Fort Worth, Texas, USA
  5. Department of Emergency Medicine, John Peter Smith Hospital, Fort Worth, Texas, USA
  6. Department of Emergency Medicine, Integrative and Computational Neurosciences Research Unit, Dallas, Texas, USA
  7. University of Mississippi Medical Center, Jackson, Mississippi, USA
  8. Indiana University, Purdue University at Indianapolis Lilly Family School of Philanthropy, Indianapolis, Indiana, USA
  Correspondence to Dr Jeffrey Kline, Emergency Medicine, Wayne State University School of Medicine, 4201 St. Antoine Boulevard, Detroit, MI 48201, USA; jkline@wayne.edu

Abstract

Background Physician empathy has been linked to increased patient satisfaction, improved patient outcomes and reduced provider burnout. Our objective was to test the effectiveness of an educational intervention to improve physician empathy and trust in the ED setting.

Methods Physician participants from six emergency medicine residencies in the US were studied from 2018 to 2019 using a pre–post, quasi-experimental non-equivalent control group design with randomisation at the site level. Intervention participants at three hospitals received an educational intervention, guided by a cognitive map (the ‘empathy circle’). This intervention was further emphasised by the use of motivational texts delivered to participants throughout the course of the study. The primary outcome was the change in patient perception of resident empathy (Jefferson Scale of Patient Perception of Physician Empathy (JSPPPE)) and trust (Trust in Physicians Scale (TIPS)) between baseline (T1) and 3–6 months later (T2).

Results Data were collected for 221 residents (postgraduate year 1–4). In controls, the mean (SD) JSPPPE scores at T1 and T2 were 29 (3.8) and 29 (4.0), respectively (mean difference 0.8, 95% CI: −0.7 to 2.4, p=0.20, paired t-test). In the intervention group, the JSPPPE scores at T1 and T2 were 28 (4.4) and 30 (4.0), respectively (mean difference 1.4, 95% CI: 0.0 to 2.8, p=0.08). In controls, the TIPS at T1 was 65 (6.3) and at T2 was 66 (5.8) (mean difference −0.1, 95% CI: −3.8 to 3.6, p=0.35). In the intervention group, the TIPS at T1 was 63 (6.9) and at T2 was 66 (6.3) (mean difference 2.4, 95% CI: 0.2 to 4.5, p=0.007). Hierarchical regression revealed no effect of the time×group interaction for either JSPPPE (p=0.71) or TIPS (p=0.16).

Conclusion An educational intervention with the addition of text reminders designed to increase empathic behaviour was not associated with a change in patient-perceived empathy, but was associated with a modest improvement in trust in physicians.

  • emergency department
  • education
  • teaching
  • interpersonal



Key messages

What is already known on this subject

  • Clinician empathy has been positively associated with multiple improved patient and provider outcomes.

  • The ability to display empathy in the emergency setting is broadly applicable in medical training as most medical students and virtually all residents in patient-oriented training programmes provide care for patients in the ED.

  • These facts suggest the need for an emergency care-specific tool to teach empathy.

What this study adds

  • A 3-hour educational intervention on clinician empathy showed no improvement in an empathy assessment tool and, at best, a modest improvement in a trust in physicians scale as assessed by patients.

Introduction

Clinician empathy has been positively associated with higher patient satisfaction, improved compliance and better outcomes in multiple health settings.1 Empathy has also been linked to decreased medical errors,2 3 increased adherence to treatment plans,4 reduced anxiety and better overall health outcomes.5 6 Empathy may also reduce provider burnout and personal distress.7 8 Accordingly, several teaching courses have been validated to increase empathic communication in medical school for the clinic and office setting.9 One study found that simulating the experience of being an emergency department (ED) patient enhanced emergency residents’ empathy towards patients.10 Another single-centre, randomised trial found that a 3-hour didactic session based on an acronym emphasising non-verbal communication was associated with improved patient perceptions of empathy 1–2 months later.11 However, that study included residents from six specialties, none of which was emergency medicine (EM).

While the ability to display empathy in the emergency setting is broadly applicable in medical training, the ED poses multiple challenges to physician empathy including rapid pace, lack of privacy, frequent interruptions, the absence of pre-existing relationships between patients and providers, and the possibility of patients having a group-based distrust of providers.12–14 These facts suggest the need for an emergency care-specific tool to teach empathy. Recently, the authors developed an emergency care-specific teaching tool (the ‘empathy circle’) and didactic workshop using the input from a geographically and demographically diverse sample of ED patients.15 To test the effectiveness of this emergency care empathy workshop, we conducted a multi-institutional, quasi-experimental cluster controlled trial comparing the patient-perceived empathic behaviour of EM residents from three hospitals that participated in the empathy workshop to EM residents from three control hospitals.

Methods

Design

This study tested the effectiveness of an educational intervention designed to increase patient perception of empathy and trust in emergency physicians using a quasi-experimental non-equivalent control group design.16 The work was done at six EM residencies between May 2018 and June 2019. With this pre/post-treatment–control design, three residencies at ‘intervention’ hospitals received the didactic workshop (Indiana University, Indianapolis, Indiana, USA; Hennepin County, Minneapolis, Minnesota, USA; Sinai-Grace, Detroit, Michigan, USA). At the three control hospitals, residents received their usual didactic which did not contain any specific training to enhance empathy (University of Mississippi, Jackson, Mississippi, USA; John Peter Smith Hospital, Fort Worth, Texas, USA; Detroit Receiving Hospital, Detroit, Michigan, USA). Patient perception of resident empathy was assessed prior to intervention or control. Then, 3–6 months after intervention or control, patient perceptions of resident empathy were reassessed using the Jefferson Scale of Patient Perception of Physician Empathy (JSPPPE).17 Patient trust was also assessed using the Trust in Physicians Scale (TIPS).18

We also administered the Jefferson Scale of Empathy (JSE) to participating residents to assess physician self-perception of empathy.19 The JSE was given to resident participants at scheduled didactic conferences at T1 and T2. At intervention hospitals, the T1 JSE was given prior to the didactic session. We retrospectively registered our study (https://aspredicted.org/b44eh.pdf).

Participants

EM residents of all postgraduate years (PGYs) at all study institutions were eligible for enrolment. We made no attempt to modify their decision to attend the intervention. Residents were enrolled on a voluntary basis during their EM shifts and gave consent for the psychometric surveys to be obtained from their patients. They did not receive any incentive for participation.

Patient participation

Patients were participants to the extent that they completed the JSPPPE and TIPS surveys about the participating residents. Patients were enrolled between 07:00 and 23:00 on all days. Patients were eligible if they were >18 years old, did not have a traumatic mechanism of injury, were either undergoing or being considered for CT imaging, and the physician anticipated a ‘very low’ probability of finding an emergent condition. Patients completed the surveys near the time of discharge and were shown a photograph of the resident to confirm his or her identity. Patients were excluded if the physician had a high suspicion of a medical emergency or for any of the following: CT scanning performed for multisystem trauma, haemodynamic instability, respiratory distress, dementia, intoxication, psychiatric instability, threatening or hostile behaviour, severe uncontrolled pain, prior participation or expressed intention to leave against medical advice. This specific group was selected to capture patients who were well enough to complete the surveys yet presented with a complaint carrying some degree of diagnostic uncertainty, representing a population in which empathy is vital to the provider–patient relationship.

Intervention

The long-term, pragmatic goal of this work is to test whether increased empathy can reduce low-value CT imaging in emergency care. Because no curriculum or tool existed to teach empathy to emergency care workers, the authors derived a novel tool based on a combination of expert opinion, incorporation of previous work4 11 15 and the results of a multicentre survey of ED patients undergoing low-value CT imaging.20 The authors refer to this tool as the ‘empathy circle’, shown in figure 1, which summarises anchoring behaviours that enhance patient perception of empathy and trust.

Figure 1

Empathy circle: the cognitive map of the training course (used with permission from SAEM and Wiley Publishing company).

Several authors, including emergency care practitioners, an expert in medical anthropology and an expert in empathy, developed a 3-hour interactive workshop based on the ‘empathy circle’ using didactic presentations, videos and small group interaction (available from the following public URL: https://iu.box.com/s/9vqvsfli10my9g2enruy7manz3t71syo). In addition to the components of the empathy circle, the didactic reinforced the definition of empathy as ‘the ability to understand and share the thoughts and feelings of another’.21 The intervention was presented to the residents in the intervention groups between August and October 2018 during normally scheduled residency didactic time. The intervention was not otherwise mandatory.

At the intervention hospitals, residents were additionally offered the opportunity to be randomised to either receive or not receive 1 of 34 different motivational texts (eg, ‘make the patient the centre of your attention’ and ‘try to sit down and listen to your patients on this shift’) intended to reinforce empathic thoughts and behaviours (see full list in online supplemental table 1) prior to working clinically. We obtained the residents’ shift schedules in the ED over the subsequent 2 months. The texts were sent at the beginning of shifts using a commercial application (Twilio), operating from the REDCap data archiving platform.22
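For readers wishing to picture the texting workflow, the sketch below shows how a shift-start message might be sent with the Twilio Python client. It is illustrative only: the credentials, phone numbers and helper function are hypothetical placeholders and do not reproduce the study’s actual REDCap-driven implementation.

```python
# Illustrative only: sending one motivational text at the start of a shift via Twilio.
# Credentials, numbers and the helper name are hypothetical, not the study's code.
import random
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder credential
AUTH_TOKEN = "your_auth_token"                       # placeholder credential

MOTIVATIONAL_TEXTS = [
    "Make the patient the centre of your attention.",
    "Try to sit down and listen to your patients on this shift.",
    # ...remaining messages from online supplemental table 1
]

def send_shift_start_text(resident_phone: str) -> str:
    """Send one randomly chosen motivational text to a resident's phone."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    message = client.messages.create(
        body=random.choice(MOTIVATIONAL_TEXTS),
        from_="+15555550100",  # sending number (placeholder)
        to=resident_phone,
    )
    return message.sid
```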

Supplemental material

Outcome measures

The primary outcomes were the comparison of the change in patient-reported JSPPPE and TIPS scores for residents between T1 and T2. These surveys were administered to a convenience sample of ED patients using previously published methodology.20 The protocol specified intent to obtain JSPPPE and TIPS surveys from up to six unique patients seen by the same resident at both T1 and T2. Minimum adequate data for analysis required one or more surveys from a patient at both T1 and T2 for an individual resident. We planned to assess the covariate effect of resident gender and age on the JSPPPE and TIPS, but we made these data optional for the resident to provide.
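As a minimal sketch of how the ‘minimum adequate data’ rule could be applied, the snippet below averages up to six patient surveys per resident at each time point and keeps only residents with at least one survey at both T1 and T2. The column names (resident_id, time, jspppe, tips) are assumptions for illustration, not the study’s actual data schema.

```python
# Minimal sketch, assuming a long-format table of patient surveys with hypothetical
# columns resident_id, time ("T1"/"T2"), jspppe and tips.
import pandas as pd

def paired_resident_means(surveys: pd.DataFrame) -> pd.DataFrame:
    """Mean of up to six patient surveys per resident at each time point,
    keeping only residents with at least one survey at both T1 and T2."""
    per_resident = (
        surveys
        .groupby(["resident_id", "time"])[["jspppe", "tips"]]
        .mean()
        .unstack("time")          # columns become (measure, T1/T2)
    )
    return per_resident.dropna()  # drop residents missing either time point

# paired = paired_resident_means(survey_df)
# change_jspppe = paired[("jspppe", "T2")] - paired[("jspppe", "T1")]
```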

Data analysis

We hypothesised that residents with adequate data at intervention hospitals would have a larger increase in JSPPPE and TIPS than residents at control hospitals. We tested data for normality and equality of variances using the D’Agostino-Pearson test and Bartlett’s test, respectively (p<0.1 to reject null hypothesis), and then compared change in means using a paired t-test and production of dot-plots for all residents with adequate data.
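A brief sketch of these checks using SciPy equivalents is shown below (scipy.stats.normaltest implements the D’Agostino-Pearson test); the input arrays of per-resident mean scores are assumed, and the study’s actual analyses were run in GraphPad Prism.

```python
# Sketch of the pre-specified assumption checks and paired comparison, using SciPy
# equivalents of the tests named in the text; t1 and t2 are per-resident mean scores.
from scipy import stats

def compare_paired_scores(t1, t2, assumption_alpha=0.1):
    """Return normality/variance checks and the paired t-test for T1 vs T2."""
    diffs = [b - a for a, b in zip(t1, t2)]
    _, p_norm = stats.normaltest(diffs)         # D'Agostino-Pearson normality test
    _, p_var = stats.bartlett(t1, t2)           # Bartlett's test for equal variances
    t_stat, p_paired = stats.ttest_rel(t2, t1)  # paired t-test on the change
    return {
        "p_normality": p_norm,
        "p_equal_variance": p_var,
        "assumptions_ok": p_norm >= assumption_alpha and p_var >= assumption_alpha,
        "t": t_stat,
        "p": p_paired,
    }
```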

To evaluate the change in scores between pretest and post-test evaluations, random effects hierarchical linear models (HLMs) were fit for both TIPS and JSPPPE scores. In addition to the study intervention group, we considered the clinician’s age and gender to be of theoretical importance, and these characteristics were included in the model building phase. Residents may have been evaluated up to 12 times (6 before and 6 after). However, not all residents had results completed for both the pre-evaluation and post-evaluation. Residents were excluded if they did not have at least one pre-evaluation and one post-evaluation, to ensure that included residents contributed to both sets of results.

Since the change between pretest and post-test evaluations for the control and intervention groups was of primary interest, study group, pre-evaluation and post-evaluation (time), and the group×time interaction effects were mandatory. The main effects of time (T1 relative to T2) and group (intervention vs control) on the outcomes measured were included in the models. Study site (random intercept) and time (pre and post; random slope) were considered random effects, and time was also considered a fixed effect. The models took into account that each resident could contribute more than one data point. The resident was the repeated effect and was used to define the covariance structures for repeated measurements of each individual resident. For the fixed effects, the SEs were derived using the empirical (‘sandwich’) estimator. Models and covariance matrices were evaluated using both theoretical considerations and the Bayesian information criterion. Models were fit using PROC GLIMMIX in SAS V.9.4. To test the effect of texting, we used a mixed-effects, repeated measures analysis of variance (split-plot ANOVA), reporting the p value for the time×group effect (SPSS V.26.0.0, IBM Corporation).
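The models described above were fit in SAS (PROC GLIMMIX) and SPSS; the snippet below is only a rough Python analogue of the hierarchical structure, fitting group, time and their interaction as fixed effects with a random intercept and time slope by site. It does not reproduce the resident-level repeated-measures covariance structure or the empirical (‘sandwich’) standard errors, and the variable names are illustrative.

```python
# Rough analogue (not the study's SAS/SPSS models) of the hierarchical linear model:
# fixed effects for group, time and group x time; random intercept and time slope by site.
# Variable names (tips, group, time, site) are illustrative.
import statsmodels.formula.api as smf

def fit_hlm(long_df):
    model = smf.mixedlm(
        "tips ~ C(group) * C(time)",  # the group x time term tests the intervention effect
        data=long_df,
        groups="site",                # random intercept for study site
        re_formula="~C(time)",        # random slope for time within site
    )
    return model.fit(reml=True)

# result = fit_hlm(long_df)
# print(result.summary())
```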

Practical limitations precluded obtaining six surveys at T1 and T2 for all resident participants. There is no precedent literature to determine the stability of either the JSPPPE or TIPS based on the number of patients surveyed. Accordingly, we plotted the incremental mean JSPPPE and TIPS for all residents based on the means at T1 from the first patient, followed by the pooled means of the second to sixth surveys. We also analysed for a clustering effect by hospital using graphical analysis of means and one-way ANOVA with Tukey’s post-hoc test. Statistical analyses were performed and figures were produced using GraphPad Prism V.8.0.0 for Windows (GraphPad Software, San Diego, California, USA). Data will be available from a publicly available URL. A minimum sample size of 49 in each group was predicated on 90% power to detect a 10% difference in JSPPPE, assuming an SD of 15% and alpha=0.05.
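The stated sample size can be checked with a standard two-sample power calculation: a 10% difference with an SD of 15% corresponds to an effect size of 10/15 ≈ 0.67, which at 90% power and two-sided alpha of 0.05 gives roughly 48–49 per group, consistent with the minimum of 49 reported. A short sketch:

```python
# Worked check of the sample size rationale: 10% difference, SD 15% (Cohen's d = 10/15),
# two-sided alpha 0.05, power 0.90.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=10 / 15, alpha=0.05, power=0.90, alternative="two-sided"
)
print(n_per_group)  # about 48.4, rounded up to 49 residents per group
```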

Patient and public involvement

Patients informed this project as described above (patient participation and intervention section) by use of patient focus groups to develop the empathy circle and therefore the intervention.

Results

Clinician characteristics

We obtained consent to participate from 223 residents: 2 voluntarily withdrew, leaving 221 for initial analysis. Of these, 135 were at intervention hospitals and 86 were at control hospitals. The distribution of residents by training level was PGY 1, n=62 (28%); PGY 2, n=81 (37%); PGY 3, n=69 (31%); and PGY 4, n=5 (2%). The mean age of residents was 30 (SD 4.4) years, with 183 (83%) reporting Caucasian race and 131 (57%) reporting male gender. Online supplemental file 1 shows the flow of resident participants. We obtained both the JSPPPE and TIPS at T1 and T2 from one or more patients cared for by 76 residents at intervention hospitals and 53 residents at control hospitals, representing 59% of enrolled residents with the minimum data required for the HLM. We had paired patient data, resident gender and age for n=49 and n=70 in the control and intervention groups, respectively. The primary reason for loss of residents was that they were not scheduled in the ED during both sample periods. Table 1 compares the demographics and training levels of residents with adequate data versus those excluded for inadequate data. Only residents with adequate data were included in the analysis.

Table 1

Comparison of residents with and without adequate data*

Patient characteristics

We obtained JSPPPE and TIPS from 1007 ED patients. The clinical characteristics of patients included a mean age of 53 (SD 17.3) years, with 67% reporting non-Caucasian race, and 60% female gender. Additional patient characteristics are presented in online supplemental tables 2 and 3. Figure 2 shows the main results of the study for patient perceptions of resident physicians with complete data.

Figure 2

Dot-plot of the mean Jefferson Scale of Patient Perception of Physician Empathy (JSPPPE) scores and Trust in Physicians Scale (TIPS) scores from at least one patient for N=76 residents in the intervention group and N=53 residents in the control group. P values from paired t-test. Bars show the mean and 1 SD.

Table 2

Results of hierarchical linear modelling including resident gender and age

Patient perception of empathy (JSPPPE)

As shown in figure 2, in the control group, the mean aggregate (SD) JSPPPE scores at T1 and T2 were 29 (3.8) and 30 (4.0), respectively (mean difference 0.8, 95% CI: −0.7 to 2.4, p=0.20, paired t-test), and in the intervention group, the mean aggregate (SD) JSPPPE scores at T1 and T2 were 28 (4.4) and 30 (4.0), respectively (mean difference 1.4, 95% CI: 0.0 to 2.8, p=0.08). When the paired data from 129 residents were analysed with HLM, we found no significant difference for the time×group effect on JSPPPE score (p=0.71). When the HLM was repeated for the 119 residents with age and gender information (table 2), we found no effect of physician female gender (p=0.23) or of clinician age (p=0.64).

Trust in physician (TIPS)

As shown in figure 2, in the control group, the mean TIPS at T1 was 65 (6.3) and T2 was 66 (5.8) (mean difference −0.1, 95% CI: −3.8 to 3.6, p=0.35). In the intervention group, the mean TIPS at T1 was 63 (6.9) and T2 was 66 (6.3) (mean difference 2.4, 95% CI: 0.2 to 4.5, p=0.007). Paired data analysed with HLM also found no significant difference for time×group effect for TIPS (p=0.16). There was no effect of physician female gender (p=0.54) or for clinician age (p=0.60).

Self-assessed empathy (JSE)

We found no significant difference in the change in physician self-assessment of empathy (the JSE) between T1 and T2 in either the control group (T1=109 (14), T2=111 (10), p=0.39) or the intervention group (T1=107 (19), T2=112 (12), p=0.38).

Lack of effect of motivational texts

At the intervention hospitals, text messages to residents were not associated with a change in mean JSPPPE (T1=30 (4.0), T2=30 (4.2)) compared with the change in providers who did not receive texts (T1=29 (6.9), T2=30 (5.5)), with a time×group p=0.602 from mixed-effects repeated measures ANOVA. The TIPS values increased to a similar degree in residents who received texts (T1=64 (9.9), T2=67 (10.4)) and in physicians who did not (T1=65 (11.0), T2=68 (9.0); time×group p=0.17).

Lack of effect of patient sample size on JSPPPE and TIPS

To determine the stability of the JSPPPE and TIPS across repeated observations by unique patients, figure 3 plots the mean aggregate score of the JSPPPE obtained from the first patient, up to the mean aggregate score from six unique patients evaluating each resident, stratified by group and T1 versus T2.

Figure 3

Effect of increasing patient sample size on mean Jefferson Scale of Patient Perception of Physician Empathy (JSPPPE) scores. The figure plots the mean JSPPPE from one to six patients for control and intervention emergency residents at both time points. P value is from one-way analysis of variance. Data show means and SDs.

Each of the surveys within these samples of six came from a unique patient and therefore represents an independent sample. The data show relative stability of the mean score from one to six patients surveyed, with no difference in means detected with a one-way ANOVA for either the control (p=0.216) or intervention group (p=0.56). We found similar results with the TIPS survey (data not shown). These data are consistent with the inference that the mean score of a JSPPPE or TIPS survey obtained from one patient per physician for a sample of physicians is similar to the mean obtained from six patients per physician.
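One plausible way to reproduce this stability check (under the assumption of a long-format survey table with hypothetical column names) is sketched below: compute each resident’s mean score over their first k surveys for k=1 to 6, then compare the columns with a one-way ANOVA.

```python
# Sketch of the stability check in figure 3, under assumed column names
# (resident_id, survey_order, jspppe): incremental means over the first k surveys.
import pandas as pd
from scipy.stats import f_oneway

def incremental_means(surveys: pd.DataFrame, max_n: int = 6) -> pd.DataFrame:
    """For k = 1..max_n, each resident's mean JSPPPE over their first k surveys."""
    by_resident = surveys.sort_values("survey_order").groupby("resident_id")["jspppe"]
    return pd.DataFrame(
        {k: by_resident.apply(lambda s: s.head(k).mean()) for k in range(1, max_n + 1)}
    )

# inc = incremental_means(t1_control_surveys)
# f_oneway(*[inc[k].dropna() for k in inc.columns])  # no drift expected across k
```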

Effect of clustering by hospital

We tested for heterogeneity from clustering by hospital by plotting the mean value of the JSPPPE and TIPS for residents at T1 for all six hospitals (online supplemental figure 2). Only one hospital was an outlier for the TIPS, with a significantly lower mean value. Subgroup analysis showed that this hospital did not overcontribute to the overall significant increase in TIPS, given that its mean TIPS score was 62 at T2 vs 63 at T1.

Discussion

This study suggests that a 3-hour didactic session had no detectable effect on patient perceptions of physician empathy, assessed using the JSPPPE, but had a modest effect on perception of trust, based on the TIPS measured 3–6 months later. These findings support at least two inferences specific to the emergency care setting: (1) our 3-hour intervention had a weak effect on improving empathic behaviours and (2) the JSPPPE was insensitive to changes in these behaviours. Our lack of effect on the JSPPPE differs somewhat from a randomised trial by Riess et al using a neurobiological approach to teaching empathy.11 In that study, the authors found a significant increase in another psychometric scale of empathy (the Consultation and Relational Empathy (CARE) measure). However, similar to the study by Riess et al, we also found no between-group or within-group change in the physician-reported JSE.11 Our study adds to previous literature because it is the first to test an empathy intervention designed specifically for the emergency care setting, and is the first to demonstrate improved patient-perceived trust in physicians.

The fitness of the psychometric tests chosen as outcome measures in emergency care warrants discussion. The JSPPPE may be better suited to primary care and the clinic environment than to the emergency care environment. For example, two of the five JSPPPE questions ask for agreement with the statements ‘My physician asks about what is happening in my daily life’ and ‘My physician seems concerned about me and my family’, which could be interpreted as having limited importance in emergency care. From the TIPS, we believed in advance that one question was particularly relevant to the reduction of repeated low-value imaging: ‘I sometimes distrust my doctor’s opinion and would like a second one’. When reverse coded, this item increased significantly in the intervention group (from 2.5 (SD 2.0) to 3.2 (SD 2.0); p=0.001, paired t-test), compared with a non-significant decline from 2.5 (SD 2.2) to 2.3 (SD 2.0) in controls. This secondary observation (which was preplanned) suggests some value of the intervention for reducing the desire of patients to seek repeated opinions, and possibly, repeated CT imaging from multiple ED visits. In retrospect, the CARE survey questions may be more relevant to the emergency care setting than the JSPPPE, as the CARE questions ask about immediate impressions and observations of behaviours as opposed to opinions, which may take longer to form.
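For clarity, reverse coding simply mirrors the response around the scale midpoint; assuming a 1–5 Likert item (consistent with the reported item means of 2.3–3.2), a minimal sketch is:

```python
# Reverse coding a negatively worded Likert item, assuming a 1-5 response scale
# (an assumption; the published TIPS item scaling should be consulted).
def reverse_code(score: float, scale_min: int = 1, scale_max: int = 5) -> float:
    """Higher reversed scores then indicate greater trust."""
    return scale_min + scale_max - score

# reverse_code(1) -> 5, reverse_code(4) -> 2
```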

We considered in advance that a single 3-hour interactive didactic, no matter how well conceived or delivered, might not produce lasting and measurable effects on empathic behaviours. Accordingly, we tested a subhypothesis that ‘reminder texts’ might motivate residents, refresh their recall of the session and remind them of the behaviours that comprise the components of the empathy circle. However, we found no improvement in this subgroup within the intervention group. Prior work has suggested that text messaging can motivate patients to improve multiple health behaviours, but to our knowledge, no controlled study has tested the effectiveness of text messages on patient-reported perceptions of physician behaviour.23 Possible reasons for the failure of texting include that the text content was written as ‘inspirational’ rather than directional and instructive, and therefore did not produce behaviours that reflected the components of the empathy circle. Alternatively, the texts may have provoked empathic behaviour change, but the JSPPPE and TIPS did not reflect these changes. Additionally, residents commented that they would have preferred to receive the texts in the middle of the shift, when they are more fatigued. The authors were regularly in contact with residents who received the texts and heard numerous unsolicited positive comments. This anecdotal experience suggests that many residents found the texts appealing and valuable.

Limitations

The first limitation is that data were incomplete for 41% of enrolled residents because of a lack of T2 measurements. This loss was primarily because not all residents present during T1 data collection were scheduled in the ED during the T2 data collection. We saw unexpectedly high scores in JSPPPE and TIPS during T1 which limited our ability to show a difference at T2. This could be explained by our convenience sampling which was done largely during daytime hours, patient knowledge that they were evaluating doctors in training, or the loss of patient attention span because of mental fatigue (they were surveyed at the end of their ED visit) or so-called ‘survey fatigue’ (desire to just get the survey done as fast as possible, leading to superficial responses). In addition, the construct validity of the JSPPPE and TIPS in the emergency care setting remains uncertain. Additionally, as residents were given this educational intervention as part of their normal didactic time, not all residents at the intervention institutions actually attended the intervention in person, but all had video access. Lastly, we did not deploy a study associate to objectively assess and record physician behaviours reflective of the empathy circle. Therefore, we do not know if the modest performance was because the empathy circle components are ineffective, or if the doctors did not change their behaviours.

Conclusion

A 3-hour educational intervention on clinician empathy showed no improvement in an empathy assessment tool and, at best, a modest improvement in a trust in physicians scale as assessed by patients. Texted reminders to providers did not improve either outcome. These findings raise questions about the effectiveness of a single didactic session to increase empathy, even if augmented by text reminders, and about the construct validity of existing assessment tools for clinician empathy in the emergency care setting.

Supplemental material

Data availability statement

Data are available upon reasonable request. All data relevant to the study are included in the article or uploaded as supplemental information. Deidentified participant data are available upon reasonable request.

Ethics statements

Patient consent for publication

Ethics approval

The study was first approved by the Indiana University School of Medicine Institutional Review Board (IRB) and subsequently approved by the IRBs of all participating hospitals.

References

Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Handling editor Richard Body

  • Twitter @KatiePettitMD, @klinelab

  • Contributors KP—study concept and design, acquisition of data, analysis and interpretation of data, and drafting of the manuscript. AM—study concept and design, acquisition of data, analysis and interpretation of data, and critical revision of the manuscript for important intellectual content. NS—acquisition of data and critical revision of the manuscript for important intellectual content. MP—acquisition of data and critical revision of the manuscript for important intellectual content. HW—acquisition of data and critical revision of the manuscript for important intellectual content. NA—acquisition of data and critical revision of the manuscript for important intellectual content. ED—acquisition of data and critical revision of the manuscript for important intellectual content. SK—study concept and design, analysis and interpretation of data, and critical revision of the manuscript for important intellectual content. RDW—analysis and interpretation of data, and critical revision of the manuscript for important intellectual content. JK—study concept and design, acquisition of data, analysis and interpretation of data, drafting of the manuscript, critical revision of the manuscript for important intellectual content and acquisition of funding.

  • Funding Work was funded by the Physician Scientist Initiative from the Lilly Endowment Foundation to JK.

  • Disclaimer The Lilly Endowment Foundation had no role in the conception, design, conduct or production of this work.

  • Competing interests Author JK reports grant money to Indiana University School of Medicine to conduct research conceived and written by JK from Bristol Meyer Squibb and Janssen Pharmaceuticals.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
