Strategies for implementing implementation science: a methodological overview
Margaret A Handley¹, Anuradha Gorukanti², Adithya Cattamanchi³

¹Department of Epidemiology and Biostatistics, University of California San Francisco, San Francisco, California, USA
²Saint Louis University School of Medicine, St Louis, Missouri, USA
³Department of Medicine, University of California San Francisco, San Francisco, California, USA

Correspondence to Dr Margaret A Handley, Department of Epidemiology and Biostatistics, University of California San Francisco, 550 16th St, San Francisco, CA 94158, USA; Margaret.Handley{at}ucsf.edu

Abstract

A key reason for the persistent gaps between evidence and practice across all areas of medicine is that there has been little attempt to identify or target factors critical for successful implementation of an evidence-based intervention. There is either no explicit implementation strategy or the strategy is based on a best guess rather than on a systematic assessment of crucial barriers and enablers. A different approach is needed to close the evidence–practice gap and thereby achieve the triple aim of improved health, improved patient experience and reduced healthcare costs. We present three fundamental principles of implementation science, a methodology that offers a systematic and comprehensive approach to improving healthcare practice, together with a series of ‘how to’ steps for conducting implementation science research. An accompanying article presents a scoping review of the types of implementation science research conducted in emergency medicine and discusses several of these principles in relation to its findings.

  • research, operational
  • performance improvement
  • emergency department management


“All breakthrough, no follow through”—S Woolf Washington Post editorial 2006 on the need to close the gaps in the US health care delivery system.1

What is implementation science?

The gap between care that is effective and care that is delivered reflects, in large measure, the paucity of evidence about implementation.2 Implementation science is the systematic study of how to design and evaluate a set of activities to facilitate successful uptake of an evidence-based health intervention. ‘Evidence-based’ refers to interventions that have undergone sufficient scientific evaluation to be considered effective and/or are recommended by respected public health or professional organisations. As noted by Madon et al,3 ‘Scientists have been slow to view implementation as a dynamic, adaptive, multiscale phenomenon that can be addressed through a research agenda’.

But the tide is changing, with funding agencies increasingly recognising the need to support research to guide implementation. Implementation science seeks to understand factors that determine why an evidence-based intervention may or may not be adopted within specific healthcare or public health settings and uses this information to develop and test strategies to improve the speed, quantity and quality of uptake.4 Other terms—such as knowledge translation—are also used to describe research to understand factors important to evidence uptake. The journal Implementation Science defines implementation research as ‘the scientific study of methods to promote the systematic uptake of proven clinical treatments, practices, organizational, and management interventions into routine practice, and hence to improve health. In this context, it includes the study of influences on patient, healthcare professional, and organizational behavior in either healthcare or population settings’. (http://www.implementationscience.com/about).

In the field of emergency medicine, as indicated by the scoping review by Tavender et al,5 a growing number of studies characterise evidence–practice gaps in areas such as head trauma, management of sepsis and acute pulmonary care, primarily asthma. A key reason for the persistent gaps between evidence and practice across all areas of medicine is that there have been few attempts to identify or target factors critical for successful implementation of an evidence-based intervention. There is either no explicit implementation strategy or the strategy is based on a best guess rather than on a systematic assessment of crucial barriers and enablers. A different approach is needed to close the evidence–practice gap and thereby achieve the triple aim of improved health, improved patient experience and reduced healthcare costs. Uptake of evidence-based practices in emergency medicine, as in many other disciplines in medicine, calls for an increased focus on implementation science research. Such studies identify barriers and facilitators to implementation of evidence-based practice, use behavioural theory to guide development of implementation strategies and employ rigorous evaluation designs to determine whether—and importantly, why—strategies to reverse the gap are effective. These are the cornerstones of implementation science.

What are the key aspects of implementation science?

Although it is a relatively new field, implementation science explicitly focuses on mechanisms of change in order to understand and improve the process of implementation. We believe that research to close the evidence–practice gap should be guided by the following three key principles:

(1) Behaviour change is inherent to the translation of evidence into practice, policy and public health improvements. To effectively engage in implementation science research, it is necessary to understand the role of behaviour change in developing and evaluating an implementation strategy. In most situations, an evidence–practice gap exists because individuals or organisations are not doing something that is recommended. Strategies that encourage providers to follow clinical practice guidelines, patients to improve medication adherence or communities to increase the uptake of screening programmes can all be considered ‘behaviour change interventions’, as they are coordinated sets of activities designed to change specific behaviours. Behavioural theory is therefore helpful to understand the determinants of current behaviours and to design and evaluate targeted implementation strategies to achieve the desired change.6,7

One example of how behavioural theory is used to structure understanding of barriers and develop implementation strategies is the COM-B model (capability, opportunity and motivation) and the related Behaviour Change Wheel (BCW).6 The COM-B model specifies that changing the occurrence of any behaviour requires changing Capability and/or Opportunity and/or Motivation. ‘Capability’ refers to the ability to engage in the thought or physical processes necessary for the behaviour, ‘Opportunity’ relates to factors in the environment or social setting that influence behaviour and ‘Motivation’ comprises the conscious beliefs as well as unconsciously based emotions/impulses that direct behaviour.6 Thus, the COM-B model can be used to ‘diagnose’ why the desired behaviour is not occurring. Once a behaviour is understood in terms of these three domains, the BCW can be used to identify functions that an effective intervention could deliver to overcome barriers or enhance enablers within each domain (eg, functions such as education or training to increase ‘Capability’). The BCW goes on to identify evidence-based behaviour change techniques that can be used to enact different intervention functions (eg, counselling or health coaching to deliver education). In doing so, the BCW provides a common language to understand, describe and target behaviour change across different contexts and health problems. Implementation science approaches such as the COM-B diagnosis make explicit the thinking about behavioural barriers related to an evidence–practice gap. This explicitness is thought to improve the relevance (and therefore effect) of interventions in their specific settings as well as the generalisability of behaviour change interventions across settings.
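To make the COM-B diagnosis step more concrete, the sketch below shows one way such a diagnosis could be recorded as structured data, using the ICS prescribing example developed later in this article. It is a minimal illustration of our own construction (written here in Python); the field names and the barrier/enabler entries are hypothetical and do not come from a published instrument.

```python
# A minimal, hypothetical sketch of a COM-B behavioural 'diagnosis' recorded
# as structured data, so each barrier or enabler is tied to the domain it
# affects. Field names and entries are illustrative only.

comb_diagnosis = {
    "target_behaviour": ("ED physician prescribes ICS at discharge "
                         "for children with persistent asthma"),
    "capability": {
        "barriers": ["unfamiliarity with paediatric ICS dosing"],
        "enablers": ["existing knowledge of asthma guidelines"],
    },
    "opportunity": {
        "barriers": ["no cue for ICS in the discharge workflow"],
        "enablers": ["discharge checklist already in routine use"],
    },
    "motivation": {
        "barriers": ["perception of too many competing demands in the ED"],
        "enablers": ["desire to prevent repeat ED visits"],
    },
}

# Summarise the diagnosis domain by domain.
for domain in ("capability", "opportunity", "motivation"):
    entry = comb_diagnosis[domain]
    print(f"{domain}: {len(entry['barriers'])} barrier(s), "
          f"{len(entry['enablers'])} enabler(s)")
```

Recording the diagnosis in this form makes it easy to verify that each domain has been considered and, later, to map each barrier to a candidate intervention function from the BCW.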

(2) Engagement with a range of individuals and stakeholder organisations is imperative to achieve effective translation and sustained improvement in implementation outcomes. Historically, many initiatives to promote healthy behaviours and improve the quality of healthcare delivery have been implemented without direct input from targeted individuals/communities.8,9 In contrast, in community-engaged research, community input is incorporated into the development of the question, execution of the project, analysis of the results and/or dissemination of the findings.10 A fundamental premise of community-engaged research is that community stakeholders have credible, intimate and necessary understandings of the concerns, values, assets and activities of their communities.

The initial steps in starting a community-engaged research project are to identify groups or relationships relevant to your area of research, to connect with them and to start a conversation about the evidence–practice gaps or health topics you care about, gauging whether these are also important or of interest to them. Stakeholders will vary depending on the research question and can include individuals (patients, providers, community members and so on), delivery systems (clinics, hospitals) and others (payers, government agencies, funders and so on).10 Community-engaged research can occur on a spectrum from ‘more intensive’ to ‘less intensive’. A ‘more intensive’ degree of community-engaged research would involve stakeholder collaboration in all aspects of the research, whereas a ‘less intensive’ approach would seek stakeholder input for specific steps of the study. By incorporating stakeholder input and participation in research, the results generated are more likely to be useful and applicable for the intended communities.

(3) Implementation science research benefits from flexibility and often non-linear approaches in order to fit within real-world situations. In practice, this means that a cyclical, rather than linear, approach and long-term view are necessary. This is because translating evidence into practice requires attention to real-world settings in which many contextual variables will influence the implementation process and require revisiting earlier steps in the process. For example, new barriers can become apparent over time or reflect changes in the environment, such as the addition of new guidelines or technologies that impact the processes involved in the behaviour.11

What steps are involved in implementation science research?

In this section, we describe a step-wise approach to conducting implementation science research across three phases: (1) preimplementation planning: engaging stakeholders and making the case for evidence translation; (2) designing the implementation strategy: using behavioural theory/frameworks to identify barriers and facilitators to implementation and guide development of implementation strategies and (3) evaluating the implementation strategy: employing rigorous evaluation designs to determine whether strategies to reverse the gap are effective and why or why not (box 1). We describe the activities involved in each phase and provide an example related to the prescription of controller medications to children presenting to the ED with an asthma exacerbation that links these activities with the three principles outlined above.

Box 1

Steps for conducting implementation science and preintervention planning research

  • Preintervention planning steps

    1. Describe the evidence to be translated and its relation to a health problem. Steps 1 and 2 can occur concurrently.

      1. What evidence (health-related behaviour, test, procedure, treatment, intervention, programme) will be translated?

      2. Justify that the evidence is ready to be translated (including in the local context).

      3. What health problem will translation of the evidence improve? Justify selection of this health problem as a priority in the setting you plan to work.

    2. Identify stakeholder communities and conduct outreach to work with them (if not completed in step 1).

      1. List key communities/stakeholders involved in translating your evidence

      2. Consider vested interests of key communities/stakeholders

      3. Describe plan for engaging communities/stakeholders

    3. Describe the evidence–practice gap

      1. Performance gap: What is the difference between current and ideal practice and behaviours? What are the underlying conditions and context?

      2. Outcome gap: How much improvement in health outcomes (safety, effectiveness, efficiency, patient-centredness, timeliness and/or eliminating disparities in care) could be achieved if the performance gap was eliminated?

      3. Could unintended consequences result from attempts to change practices or conditions contributing to the performance gap?

    4. Determine the population, organisation and/or stakeholder readiness for change

      1. Strategic: Is addressing the problem area part of strategic priorities?

      2. Structural: Are there local programmes or resources that will facilitate implementation and sustain the improvement activity after the project team is done?

  • Intervention design steps

    1. Describe evidence–practice gap in behavioural terms (Who needs to do what differently?)

    2. Select behaviours upon which to frame the implementation strategy

    3. Identify barriers and enablers of selected behaviours using a theoretical framework

    4. Select evidence-based strategies for behaviour change (using the chosen theory or framework)

  • Implementation strategy evaluation steps

    1. Identify and measure mediators of change. Note: some of the steps above may be repeated, or institutional/community behaviour change theories and frameworks consulted.

    2. Select process, implementation and health outcomes

    3. Select appropriate and feasible study designs

Preimplementation planning

Preimplementation planning begins with identifying the evidence to be translated (health-related behaviour, test, procedure and so on) and its relation to a health problem. The case for translation is strongest when the effectiveness of the practice change has been clearly demonstrated in clinical trials and/or the practice is recommended by professional societies or other respected organisations. To make the case for translation, it is helpful to describe the evidence–practice gap in terms of performance and outcome gaps.

The performance gap is the difference between current and ideal practice/behaviour, ideally measured in the setting in which the research is taking place. An example of how the theory-based behavioural components of the COM-B model might be used to define the performance gap in the provision of inhaled corticosteroids (ICS) by ED physicians to paediatric patients with persistent asthma is found in figure 1. Existing literature and internal data sources can be used to identify the evidence–practice gap, in this case that paediatric ED providers do not prescribe ICS for patients being seen for asthma exacerbations despite evidence-based recommendations and guidelines. An initial version of the COM-B model, also based on existing literature, attempts to understand the behaviour of non-prescribing of ICS in context. In the figure, likely capability-, motivation- and opportunity-related barriers and enablers are shown. For example, a motivational barrier frequently identified in the literature is that competing demands in the hectic ED environment make it difficult to add any practice change intervention. However, additional barriers that may not have been considered can only be understood from a careful study of the ED physicians’ behaviour in their particular setting, meaning that physicians must be engaged in completing this step. Approaches that can be used to complete this COM-B diagnosis for a particular setting include observation, surveys, administrative data reviews and in-depth interviews, among others.

Figure 1

Applying COM-B to the provision of inhaled corticosteroids (ICS) to paediatric patients with persistent asthma by Emergency Department (ED) physicians.
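As an illustration of how internal data sources might be used to quantify a performance gap of this kind, the short Python sketch below computes the proportion of eligible ED discharges at which ICS was prescribed. The record structure, field names and example values are hypothetical placeholders of our own devising; in practice, the records would come from an administrative data extract.

```python
# Hypothetical sketch: estimating the performance gap from administrative
# records. Field names and example records are illustrative only.

discharges = [
    {"persistent_asthma": True, "ics_prescribed": False},
    {"persistent_asthma": True, "ics_prescribed": True},
    {"persistent_asthma": False, "ics_prescribed": False},
    # ...in practice, loaded from an administrative data extract
]

# Restrict to discharges where guidelines recommend ICS prescribing.
eligible = [d for d in discharges if d["persistent_asthma"]]
prescribed = sum(d["ics_prescribed"] for d in eligible)

current_rate = prescribed / len(eligible)  # observed practice
ideal_rate = 1.0                           # guideline-concordant prescribing
performance_gap = ideal_rate - current_rate

print(f"ICS prescribed at {current_rate:.0%} of eligible discharges; "
      f"performance gap: {performance_gap:.0%}")
```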

The outcome gap is defined as the difference between current health outcomes and those that would be expected if the recommended practice were followed. In our example, when children with moderate-to-severe asthma are not prescribed ICS at ED discharge, patient-centred care is compromised: patients have poorly controlled asthma and decreased quality of life, and the likelihood of unscheduled ED visits and their accompanying costs rises. The outcome gap represents the potential improvements in healthcare quality (safety, effectiveness, efficiency, patient-centredness and equity) or healthcare costs that could be achieved if the practice variation was reduced. The case for translation can be further strengthened by indicating why the health problem the intervention seeks to improve is a priority in the setting in which the research will take place or to the funder of the evidence translation project.
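The outcome gap can be projected in a similarly simple way once the performance gap is known. The back-of-the-envelope Python sketch below estimates how many unscheduled return visits might be avoided if the gap were closed; every number in it is a hypothetical placeholder, and real estimates would be drawn from the literature and local data.

```python
# Hypothetical back-of-the-envelope projection of an outcome gap.
# All inputs are illustrative placeholders, not real estimates.

eligible_children_per_year = 400   # hypothetical local ED volume
current_ics_rate = 0.25            # from the performance-gap analysis
revisit_risk_without_ics = 0.30    # hypothetical risk if ICS not prescribed
revisit_risk_with_ics = 0.20       # hypothetical risk if ICS prescribed

untreated = eligible_children_per_year * (1 - current_ics_rate)
avoidable_revisits = untreated * (revisit_risk_without_ics -
                                  revisit_risk_with_ics)

print(f"~{avoidable_revisits:.0f} unscheduled return visits per year might "
      "be avoided if the performance gap were closed")
```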

In undertaking an implementation-focused research endeavour, it is essential to identify and engage all potential stakeholders across levels (providers, patients, community systems and policy makers) to assess readiness for change and whether there is adequate consensus that the evidence is ready to be translated and that the health problem is part of strategic priorities. For the example of improving prescription of ICS at discharge from the ED, it is clear that you would need to engage ED physicians to understand the barriers they face to prescribing, what ICS prescribing means to them and whether they feel well trained to prescribe. Patients and their families, as well as ED staff, are likely to be other important stakeholder groups, and it could also be important to include pharmacists as stakeholders to identify any barriers relevant to the prescription process itself. Stakeholder engagement can also involve identifying and reaching out to local programmes or resources that can be used to facilitate implementation and ensure sustainability after the research is completed. It is also important to consider and acknowledge any potential unintended negative consequences that may arise as a result of changing current practice conditions.

Designing the implementation strategy

Implementation science promotes a systematic approach to designing a strategy to facilitate uptake of an evidence-based intervention. The systematic approach includes (1) identifying behaviours contributing to the evidence–practice gap; (2) identifying key determinants of current behaviour and the desired behaviour change using a theoretical framework and (3) selecting components of the implementation strategy that target the key determinants (using the chosen theory or framework). Designing an implementation strategy begins with listing the specific behaviours that need to occur to facilitate uptake of an evidence-based intervention and then selecting one or more target behaviours to focus on. The target behaviours should be specified in as detailed a manner as possible (who needs to do what differently, when, where, how, with whom?).6,12 This specification enables assessment of key barriers and enablers of the target behaviour(s), a step that should be guided by a behavioural theory or framework and frequently involves qualitative research. Next, intervention functions that can address the diagnosed barriers are selected, such as education, training or persuasion. Last, potential behaviour change techniques (including those from other fields such as Quality Improvement (QI)) can be mapped to key barriers and enablers, again using behaviour change theory or frameworks. Michie et al6 have identified 93 distinct behaviour change techniques and described their usefulness for targeting specific determinants of behaviour change. Again at this stage, consultation with local stakeholders is critical to ensure that selected techniques and their delivery are feasible, relevant and acceptable. When possible, policy or structural changes that can enable behaviour change should also be considered.

Following the COM-B diagnosis (figure 1), the BCW framework can be employed to determine which intervention function(s) might best address the barriers to ICS prescribing. For example, the motivational barrier of competing demands might be addressed in several ways, including: restructuring the environment (to free up time to discuss future medication planning for selected asthma encounters and thereby change the perception that there are too many competing demands), persuading physicians that their professional identity includes prevention (to increase a sense of ownership over prevention of asthma) or promoting goal setting about ICS prescribing (to improve confidence in the ability to integrate ICS prescribing into ED practice settings). Once the choice of intervention function is made, the final step is to select a delivery strategy, the ‘behaviour change technique’. For example, if the intervention function of persuasion is selected, respected role models in the environment could be identified and trained to provide information and serve as credible resources about the importance of prescribing ICS, and physicians could receive feedback on their own ICS prescribing behaviour. If an implementation strategy involved increasing motivation by providing prompts or cues, an electronic medical record could be used to deliver prompts, particularly if these were guided by information in the medical record suggesting ICS for appropriate patients. (On the other hand, rapid proliferation of electronic prompts might render a prompt related to ICS prescription less effective.) Being able to incorporate flexible strategies to deliver intervention content may be important for adapting to broader changes in the setting for which the intervention is planned.
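The logic of moving from a diagnosed barrier to an intervention function and then to a delivery technique can also be expressed as a simple mapping, as in the Python sketch below. The mapping shown is a simplified illustration of the BCW reasoning applied to our example; it is not the published BCW taxonomy, and the entries are our own paraphrases of the options discussed above.

```python
# Illustrative mapping from diagnosed barriers to candidate BCW intervention
# functions and delivery techniques. Entries paraphrase the ICS example and
# are not drawn from the published taxonomy.

candidate_strategies = [
    {
        "barrier": "perception of too many competing demands (motivation)",
        "intervention_function": "environmental restructuring",
        "technique": "free up time to discuss medication planning",
    },
    {
        "barrier": "prevention not seen as part of professional role "
                   "(motivation)",
        "intervention_function": "persuasion",
        "technique": "trained local role models provide credible information",
    },
    {
        "barrier": "no cue for ICS in the discharge workflow (opportunity)",
        "intervention_function": "enablement",
        "technique": "electronic medical record prompt for eligible patients",
    },
]

for s in candidate_strategies:
    print(f"{s['barrier']}\n  -> {s['intervention_function']}: "
          f"{s['technique']}")
```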

Evaluating the implementation strategy

The evaluation of an implementation strategy should focus on (1) process—how components of the strategy were delivered or adapted and the fidelity to intervention components and principles; (2) mediators of change—whether the components modified targeted barriers or enhanced targeted enablers and (3) outcomes—frequently, whether uptake of the evidence-based intervention increased (or decreased if deimplementation is the goal). These forms of evaluation can be summarised as ‘process evaluation’—how well the intervention is being implemented (ie, fidelity)—and ‘summative evaluation’—whether change occurred as a result of the intervention (ie, mediators and outcomes). Health outcomes can also include those related to the quality of healthcare (safety, effectiveness, efficiency, patient-centredness and equity) when feasible, but in general it should already be known that the uptake of the evidence-based intervention improves healthcare quality.

It is important to examine measures of reach, in addition to effectiveness. In the clinical example we are using, we propose the use of persuasion by credible sources as one intervention to increase ICS prescribing to paediatric asthma patients in urban ED settings. To examine reach, we would determine to what extent the intervention reached specific populations of providers in the ED setting (eg, different ages or genders) or different patient populations. Adoption might be assessed in a multicentre study with different types of ED settings. There may be some settings where the intervention practices become part of practice expectations, whereas in others they are not adopted so readily. This could provide guidance for ongoing training and reinforcement unique to particular environments.

Evaluation frameworks such as the RE-AIM (Reach, Effectiveness, Adoption, Implementation and Maintenance) framework can be helpful to guide the selection of process and health outcomes to assess.13 In addition to using frameworks in the evaluation process, it can be important to consider pros and cons of experimental versus quasi-experimental designs, and whether or not qualitative research would enhance the ability to fully understand the effects of the implementation, such as how well the persuasion efforts of peer role models reflect trainings or protocols, or what factors may have led to unintended consequences or spillover effects.
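For readers who wish to see how RE-AIM-style process measures might be tallied, the Python sketch below summarises reach and adoption across hypothetical study sites. The record structure and values are invented for illustration; a real evaluation would define these measures prospectively and draw them from study data.

```python
# Hypothetical sketch of summarising RE-AIM reach and adoption measures
# across study sites. Site records and values are illustrative only.

sites = [
    {"name": "ED-A", "providers": 30, "providers_exposed": 24,
     "adopted": True},
    {"name": "ED-B", "providers": 22, "providers_exposed": 10,
     "adopted": False},
]

# Reach: proportion of providers at each site exposed to the intervention.
for site in sites:
    reach = site["providers_exposed"] / site["providers"]
    print(f"{site['name']}: reach {reach:.0%}, "
          f"adopted: {'yes' if site['adopted'] else 'no'}")

# Adoption: proportion of sites where the practice took hold.
adoption = sum(s["adopted"] for s in sites) / len(sites)
print(f"Adoption across sites: {adoption:.0%}")
```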

Implementation science versus QI

There are many commonalities between implementation science and both QI and Monitoring and Evaluation (M&E), but there are some important differences as well. According to current implementation science thinking, and as shown in the COM-B examples, the behavioural diagnosis and the steps to address barriers to critical behaviours that affect the implementation process are central to implementation science, whereas in QI and M&E they often are not. Additionally, the goals of QI research are often less focused on creating generalisable knowledge than on addressing the QI problem at hand. Implementation science focuses more on understanding the aetiology of gaps between expected results and observed outcomes, in ways that can be relevant beyond a given situation, whereas QI and M&E research may stop once the barriers related to the performance of specific projects have been identified. Despite these differences, many QI and M&E-related research studies are aligned with implementation science principles, and these disciplinary distinctions are not always relevant.

Summary

As in other disciplines, there are wide gaps in the uptake of a range of evidence-based interventions in emergency medicine. Studies are now needed that employ theory-based approaches to understand key behavioural determinants and to design, evaluate and adapt implementation strategies that target those determinants. These studies should be conducted with broad involvement from multiple relevant stakeholders, should engage multiple disciplinary perspectives and should be facilitated by research designs and selection of outcomes that best enable implementation research questions to be addressed. Moving forward will require increasing knowledge about implementation science among trainees and practitioners as well as sustained efforts to expand the capacity of emergency medicine researchers to address the implementation research questions that merit focused attention.

Acknowledgments

We thank Emily Larimer for providing a case study from her coursework in the UCSF Implementation Science Training Program.

References

Footnotes

  • Contributors Each author contributed to the writing and concepts for the paper. MAH was responsible for the final version.

  • Funding This work was supported by the following funding sources: National Institutes of Health, National Center on Minority Health and Health Disparities P60MD006902 (MAH and AG) and National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF-CTSI Grant Number UL1 TR000004 (MAH, AG and AC).

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
