Cross-cultural adaptation and its impact on research in emergency care
  1. Tom Roberts1,2,
  2. Edward Carlton2,3,
  3. Matthew Booker4,
  4. Sarah Voss2,
  5. Samuel Vaillancourt5,6,
  6. Anisa Jabeen Nasir Jafar7,
  7. Jonathan Benger2,8
  1. Doctoral Fellow, The Royal College of Emergency Medicine, London, UK
  2. Faculty of Health and Life Sciences, University of the West of England, Bristol, UK
  3. Emergency Department, North Bristol NHS Trust, Westbury on Trym, UK
  4. School of Social and Community Medicine, University of Bristol, Bristol, UK
  5. St Michael's Hospital, Li Ka Shing Knowledge Institute, Toronto, Ontario, Canada
  6. Emergency Department, St Michael's Hospital, Toronto, Ontario, Canada
  7. Humanitarian and Conflict Response Institute, The University of Manchester, Manchester, UK
  8. University of the West of England, Bristol, UK
  1. Correspondence to Dr Tom Roberts, The Royal College of Emergency Medicine, London SE1 1EU, UK; tomkieranroberts{at}


The perspective of patients is increasingly recognised as important to care improvement and innovation. Patient questionnaires such as patient-reported outcome measures may often require cross-cultural adaptation (CCA) to gather their intended information most effectively when used in cultures and languages different to those in which they were developed. The use of CCA could be seen as a practical step in addressing the known problems of inclusion, diversity and access in medical research.

An example of the recent adaptation of a patient-reported outcome measure for use with ED patients is used to explore some key features of CCA, introduce the importance of CCA to emergency care practitioners and highlight the limitations of CCA.

  • methods
  • emergency department
  • research design



To better understand health, we need culturally adapted tools that allow comparison of outcomes between diverse populations. A process called cross-cultural adaptation (CCA) allows the updating of previously validated research instruments to new populations.1 This allows a more accurate comparison of health outcomes between diverse groups and could facilitate improved research inclusion, diversity and access—a current health research priority.2

Do you know what ‘keeping house’ means?

Every elderly patient admitted from our ED has a Rockwood Clinical Frailty Score calculated.3 Derived and validated in Canada, it is used extensively worldwide to assess frailty and is correlated with clinical outcomes.4 One of the main descriptors for classifying someone as ‘moderately frail’ is needing help with ‘keeping house’.3

We can make assumptions about the meaning of ‘keeping house’ to our diverse populations and apply these assumptions to Rockwood calculations. However, these assumptions have the potential to alter the emphasis of one or more elements of a clinical score, and may be significant enough to invalidate the score outcome. One way to mitigate this is to use CCA.

In this paper, we explain the imperative to adapt research instruments so they are equally valid in another culture, context or language. We provide an overview of the process and the different aspects of equivalence the process seeks. We include a worked example of adapting the patient-reported outcome measure-ED (PROM-ED) from Canada to the UK. Although the cultural difference between Canada and the UK may seem minimal, the example illustrates the importance of CCA in emergency care (EC) research and further explores how CCA could improve inclusion, diversity and access in research.

The methodological need to adapt

Using instruments with previous data supporting their validity and reliability makes results comparable across populations and countries, increasing the potential use of research findings. It is often prohibitively costly and time-consuming to create bespoke questionnaires for a particular project and, by virtue of their novelty, these can never be supported by a comparable body of evidence.5

A research instrument’s reliability and validity can be compromised when used in another time, culture or context.6 For questionnaires in a language foreign to that of the person completing them, the need for translation is obvious. However, from the standpoint of British English-speaking researchers based in the UK, most research tools have been derived and validated in the English language, making the need for CCA less obvious.

However, there is more to translation than just words. Eighteen countries worldwide have English as a first language7 and the British Library estimates that >50 contemporary dialects are spoken within the UK.8 Using research instruments in patients with dialects and/or cultures that differ from those in which the instrument was derived may compromise the reliability and validity of the instrument. Considering the CCA of a research instrument helps avoid the assumption that a tool can automatically be used with equivalence in countries or regions that speak the same language.

Similarly, even when a tool is translated perfectly to the target language, there may be concepts that are not equivalent. An example of this is when researchers adapted the Health Assessment Questionnaire from English to Thai.9 When adapting the question ‘Take a tub bath?’, the translation to Thai was straightforward. However, it was apparent that bathtubs are not routinely used in Thailand. The authors therefore had to seek an equivalent action that would assess similar psychomotor functions to having a bath, and chose sitting to pay homage to a sacred object.10

What is cross-cultural adaptation?

The process of CCA aims to seek equivalence between a version of an instrument that has undergone validation in one setting and the new target version of the tool.1 This seeks to ensure the tool measures concepts that are as similar as possible across languages, cultures and populations. Practically, this means ensuring the content of the questions and answers remains the same.

While the process is designed to preserve previous evidence supporting validity and reliability of the original tool, significant cultural differences may be uncovered preventing an individual item in an instrument from being adapted adequately and further psychometric testing of the adapted item or domain may be required.1

There are several widely accepted methods for conducting a CCA that allow for a coherent process, although none has been found to be superior to the others.11 Methodologies seek equivalence in four key domains: semantic, idiomatic, experiential and conceptual.1

Semantic equivalence would describe a substitution such as ‘sidewalk’ for ‘pavement’. Idiomatic equivalence describes the changes needed for phrases or words whose meaning is not deducible from the individual words (eg, over the moon, cry your eyes out, both used in the UK). This step is vital for tools which will be used where the language of the tool is not the first language of the target population. In the UK, this represents 8% of the overall population, but a more significant proportion of certain UK ethnic groups (figure 1).12 In the USA, census data show that 21.6% of the population speak a foreign language at home.13

Figure 1

Percentages of people in each level of English language skills by ethnicity. Adapted from the 2011 UK census. Contains public sector information licensed under the Open Government Licence v.3.0.

Experiential equivalence is important to consider when thinking about functional assessments or mechanisms of injury. Is a ‘recreational motorised vehicle’ collision (as described in the Canadian C-spine Rule) the same in Canada (when compared with other English-speaking countries) where all-terrain vehicles are popular and there are over half a million registered snowmobiles?14 15

Finally, conceptual equivalence requires an awareness of both original and target languages and cultures. For example, a question about access to emergency healthcare would differ between countries and cultures as definitions of ‘emergency’, ‘access’ and ‘healthcare’ will vary. Similarly, questions about family support would be interpreted differently as the social norms of family support structures, and rates of multigenerational living, vary across different cultural groups within the same country.


There is no defined threshold for when to undertake a CCA, especially when adapting tools derived in the same language as the target version. This is because the key aim of the process is to identify areas where reliability and validity are compromised. If the process is not conducted, these areas cannot be identified. This suggests a CCA should be done for all new populations, as without an assessment, it cannot be certain if the tool is valid in that population.

This raises questions about the viability of CCA. It is not feasible to adapt an instrument from a host population to every subculture within a new population. But it is reasonable to be sceptical about an instrument’s ability to perform as expected across all questions in a new population. In the future, a CCA threshold could be defined by a saturation point, that is, a CCA has been performed in a number of different populations and no areas of compromise to reliability and validity have been identified. However, in the absence of any evidence-based thresholds, the decision to undertake a CCA is subjective.

Worked example: the PROM-ED

The PROM-ED is a newly validated tool, developed in Canada, for measuring outcomes in patients discharged from the ED. It has four key domains that assess ‘symptom relief’, ‘understanding my health concern’, ‘reassurance’ and ‘having a plan I can follow’.16 Our research group intended to apply it to a cohort of patients discharged from the ED with acute headache in the UK. Although the original tool went through rigorous development and validation, including the consideration of underserved populations and ethnic and language variation, we hypothesised there could be subtle differences that might have limited understanding in a UK population.

Methods: the stages of CCA

We used the most widely cited method, proposed by Beaton et al.1 A description of the CCA process is outlined in figure 2, adapted from Beaton et al. Other methods exist and there is no consensus on how CCA should be done.11 The full CCA report is available in the online supplement.


Figure 2

The six stages of cross-cultural adaptation. Adapted from Beaton et al. 1

Stage 1: initial translation

The PROM-ED was reviewed to identify any words, phrases or concepts that would not adequately translate from the host (Canadian English) to target (British English) language. Guidance states that one translator should be non-clinical, with experience of both the host and target countries and cultures, to allow a focus on language and culture. The second translator should be clinical, to allow a focus on the specific elements of clinical equivalence.1

For the PROM-ED, DJ and TR performed two independent translations, named translation 1 (T1) and translation 2 (T2). DJ was a White Canadian male with a non-clinical background, born in Canada, who had been living in the UK for a decade. TR was a White British male born in the UK, with a clinical background.

As this CCA was being adapted to a general UK setting, where 82% of the population (England and Wales) are White British, using a predominantly White British CCA team was deemed broadly representative.12 When being adapted to other specific populations, the CCA team should be broadly representative of the target population.

Stage 2: translation synthesis

This stage brings together the two independent translators and their independent translations developed at stage 1 (T1 and T2). The aim is to produce a third translation (T3), which is an agreed synthesis of the original PROM-ED and the two translated versions. This synthesis is agreed during a meeting between the two translators involved at stage 1 and a ‘recording observer’.1

For the PROM-ED, a virtual meeting was held between TR, DJ and a third ‘recording observer’ (EC). A new synthesised version of the PROM-ED was produced (T3), along with an associated report highlighting the relevant discussions and changes.

Stage 3: back translation

This stage involves translating T3 back into the original language, to check the proposed translations. For the PROM-ED, this stage was not performed due to increasing evidence that it does not add value, especially in the context of this adaptation where baseline language will stay the same.11

Stage 4: expert committee

This stage involves a review of the original PROM-ED, T1, T2, T3 and the associated report. The review is conducted by an expert committee composed of translators already involved at stages 1 and 2, relevant health professionals, methodologists, language professionals and the original authors of the PROM-ED.1

The aim is to produce a consensus for a final adapted PROM-ED that can be progressed to pretesting. The committee is advised that the tool should be understood by a 12-year-old and all relevant areas of compromise to reliability and validity should be addressed for the UK context. Practically, this meant each item highlighted for change was examined and discussed in reference to semantic, idiomatic, experiential and conceptual equivalence.

For the PROM-ED adaptation, the members of the expert committee are listed below:

  • Translators already involved at stages 1 and 2 (TR, DJ, EC)

  • Health professionals (NM)

  • Methodologists (JvO)

  • Language professionals, linguist (LR)

  • Original authors of the PROM-ED (SVa)

A full report was written, which outlined the key discussions and decisions. The five agreed changes are outlined in table 1.

Table 1

Changes made from Canadian PROM-ED to UK PROM-ED

The final two stages of the CCA according to Beaton et al involve ‘pretesting’ (stage 5) and ‘submission’ (stage 6).1 The purpose of pretesting is to ensure equivalence when applied to the new population, and to examine areas of missing or single responses. The purpose of submission to original authors is to ensure a central collection point for all versions of the research instrument.1

There remains debate about exactly how pretesting should be done.11 Beaton et al recommend interviewing 30–40 individuals, and while most guidelines advocate testing with patients, there is precedent for approving an adaptation using input from the expert committee or focus groups.11 17 This approach makes CCA more achievable and, in the context of same-language CCAs with minimal changes, may be appropriate given the resources available.

For the PROM-ED, there were four minor semantic and one grammatical change. The content validity, the key factor that the CCA is attempting to maintain, was felt to be preserved by the expert committee. For these reasons, the final version of the culturally adapted PROM-ED UK was submitted (as per stage 6 of the CCA) to the original PROM-ED authors without pretesting. However, future large-scale testing of the UK PROM-ED is planned.

Pretesting and large-scale testing of the tool are different.1 Pretesting, sufficient for the CCA process, focuses on content validity. Large-scale testing aims to ensure the new instrument retains the full psychometric parameters of the original instrument. This includes item-to-scale correlations, internal consistency, reliability, construct validity and responsiveness.1 This process has been acknowledged to be beyond the remit of a CCA.
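Two of the psychometric parameters named above, internal consistency and item-to-scale correlation, are routinely quantified in large-scale testing. As a minimal sketch (not part of the PROM-ED validation itself, and using entirely hypothetical Likert-style data), Cronbach's alpha and corrected item-total correlations could be computed as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    (k / (k-1)) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation: each item against the sum
    of the remaining items (so the item is not correlated with itself)."""
    return np.array([
        np.corrcoef(items[:, i], np.delete(items, i, axis=1).sum(axis=1))[0, 1]
        for i in range(items.shape[1])
    ])

# Hypothetical responses (1-5 Likert scale) from 8 respondents to 4 items
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
    [2, 1, 2, 1],
    [5, 5, 4, 4],
])

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
print("Item-total correlations:", item_total_correlations(scores).round(2))
```

An alpha of ≥0.7 is conventionally taken as acceptable internal consistency; an adapted item with a markedly low item-total correlation would flag exactly the kind of compromised domain that, as noted above, may require further psychometric testing.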

Improving inclusion and diversity in research

The sections above outlined why CCA is important for maintaining the validity of a tool, and provided a worked example of the CCA process. Beyond the maintenance of an instrument’s validity, CCA provides an opportunity to use well-validated tools in diverse, hard-to-reach and underserved populations. These populations are known to be under-represented in research.18

Research studies are not inclusive and do not serve the diverse populations of patients in which we aim to use them.19–21 From a purely methodological standpoint, this raises concerns about the psychometric properties of a tool when used in a new, untested and culturally different population. From an ethical and moral perspective, the lack of culturally adapted tools is an example of the work still required for research to be inclusive and diverse in design, recruitment and analysis.

Under-represented groups are less likely to be involved in research but more likely to suffer the adverse consequences of ill health. Although funding bodies are increasingly aware of this issue, many trials exclude patients who are ‘unable’ to consent.22–24 While this is a generic term covering many situations, in practice it often represents patients who do not understand the information provided. It is incumbent on researchers, not patients, to ensure research is accessible to all. Furthermore, improving the availability of culturally adapted research instruments could improve the accuracy of data collection and analysis.

The UK version of the PROM-ED is more broadly valid and usable in the UK context than the Canadian original, but this is only one step towards increasing its usability and validity in a pluralistic society. The adaptations between Canada and the UK are probably insignificant compared with those that would be required to bring the tool closer to areas of the UK with high concentrations of linguistic and cultural diversity. Encouraging, collaborating with and supporting colleagues from a wide range of cultural backgrounds to lead future CCA projects would both produce adapted research instruments and promote a more diverse research workforce. These observations should not discourage us from adapting existing tools simply because perfection is unattainable; a rigorous and humble process will nonetheless reduce some of the many barriers to tool usage.25


To understand the context and limitations of any study, the rationale behind its conception (reflexivity) and the identity and perspective of the authors (positionality) are important.26 As an author group, we work in British English (n=6) and Canadian English (n=1); two authors are bilingual (Canadian French n=1, Spanish n=1); one author has professional working proficiency in French and elementary proficiency in Arabic and Pashto. Five authors work clinically in emergency medicine, one as a general practitioner, and one is solely academic.

Our positionality and lived experience as English-speaking clinicians and researchers, some in predominantly White British regions of the UK, will also have influenced how the CCA was conducted. A different approach would have been to consider the populations with the poorest health, and to make a CCA accommodating the largest group of non-English primary language (NEPL) patients in the UK. This would have made more difference, both through the magnitude of change required during CCA and through the subsequent health impact on an already less healthy, although much smaller, population.

The main methodological limitation is the lack of pretesting, a consequence of the minor changes identified. Future planned iterations in more diverse NEPL populations may benefit from further psychometric testing if significant areas of compromise to reliability and validity are identified.


Instruments will always fall short of eliciting exactly the same concepts from an increasingly diverse population, but this obvious fact about the diversity of life experiences, languages and cultures should encourage, rather than discourage, the use of CCA to broaden the validity of research instruments. The very concept of CCA is a reminder that traditionally qualitative research concepts such as positionality and reflexivity have their place in ensuring that readers can digest and implement research findings with a deeper understanding of who generated them and why, allowing a more informed judgement of their nuanced limitations.

Ultimately, we can still only assume what ‘keeping house’ means for each person; however, using CCA we have the potential to craft language that more accurately reflects what we aim to measure.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.


We would like to thank Dan James (DJ), Dr Luke Rudge (LR), Dr Nicholas Moore (NM) and Dr James Van Oppen (JvO) for their valuable contributions during the cross-cultural adaptation process.



  • Handling editor Richard Body

  • Twitter @DrTomRoberts, @eddcarlton

  • Contributors TR and EC conceived the idea for this article. The manuscript was drafted by TR with revision of subsequent drafts by EC, JB, SVo, MB and SVa. All authors approved the final submitted version. Following initial review a further author (AJNJ) was invited to provide specific expertise in positionality and reflexivity.

  • Funding TR is funded by the Royal College of Emergency Medicine. EC, MB and JB receive funding from the National Institute for Health Research.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.