
Journal of Prevention & Intervention in the Community, 42:315–321, 2014
Copyright © Taylor & Francis Group, LLC
ISSN: 1085-2352 print/1540-7330 online
DOI: 10.1080/10852352.2014.943637

The Importance of Improving Implementation Research for Successful Interventions and Adaptations

LISA M. DORNER

Educational Leadership and Policy Analysis, University of Missouri-Columbia, Columbia, Missouri, USA

EBONI C. HOWARD
American Institutes for Research, Chicago, Illinois, USA

ALINA SLAPAC
Department of Educator Preparation, Innovation, and Research, University of Missouri–St. Louis, St. Louis, Missouri, USA

KATHERINE MATHEWS
Department of Obstetrics, Gynecology and Women’s Health, Saint Louis University, St. Louis, Missouri, USA

This special issue explores the theoretical underpinnings, triumphs, and challenges of implementing four early childhood education interventions. In doing so, each article highlights the importance of studying the implementation context as part of the evaluation process. This commentary reflects on the entire issue, ultimately arguing that future evaluations must continue to conduct—and improve on—implementation research. Specifically, to understand evaluation findings and scale up or adapt interventions effectively, researchers must examine implementation processes systematically, using both quantitative and qualitative methods. This includes: explaining how interventions were designed, theorizing the relationships between implementation processes and outcomes, defining the implementation phase under study, examining the validity and reliability of implementation measures, and using accessible language in reports.

KEYWORDS early childhood, evaluation, implementation research, interventions, mixed methods

Address correspondence to Lisa M. Dorner, University of Missouri-Columbia, 202 Hill Hall, Columbia, MO 65211, USA. E-mail: [email protected]


In education and other social service settings, there is an ever-increasing push to ensure that interventions, programs, and practices work. By "work," policy makers, agency staff, funders, and even the general public usually mean that programs or practices have an evidence base; they have been proven effective through empirical research, the scientific gold standard of which is the randomized experiment (Cook, 2002; Howard, 2012; Shavelson & Towne, 2002). However, this concept of "work" actually conflates two different processes. Randomized experiments or clinical trials may isolate the impact of an intervention, but assessing this impact in isolation does not tell us whether the program will work in real-life contexts, with other populations and communities (Weisz, Sandler, Durlak, & Anton, 2005). Agency personnel have long recognized that context matters; as Meredith I. Honig states (2009, p. 333), "what works depends" (emphasis in original). We must always examine "what works for whom, where, when, and why," a mantra echoed by John Easton (2010), the director of the Institute of Education Sciences. That is, we must examine how interventions are designed and implemented, as well as the relationships between implementation and outcomes, if we wish to understand how programs really will work outside of constrained study environments (Durlak & DuPre, 2008).

This special issue is an important step forward in the way we typically conduct and communicate research, providing insight into the implementation processes of four interventions in early childhood: Autism Spectrum Disorders Nest (ASD Nest), Chicago School Readiness Project (CSRP), Foundations of Learning (FOL), and Getting Ready Conjoint Behavioral Consultation (CBC). Here, we comment briefly on these reports, ultimately arguing that research on "evidence-based practices" must recognize the importance of studying implementation processes and then study them systematically, clearly defining and measuring implementation factors using both quantitative and qualitative data. This way, we will not only know whether programs work, but also how to re-create their successes (and avoid their failures).

This commentary is informed by our experiences working as researchers and agency personnel who have engaged in community–research partnerships. With prior experience as a program director at a non-profit educational organization, Dr. Lisa Dorner specializes in applied, interdisciplinary research and evaluation. She and Dr. Alina Slapac have worked closely with practitioners to analyze the development of new schools, teachers' behavior management skills, and family engagement in education. Dr. Eboni Howard brings her expertise in mixed-method research for evaluating early childhood and human service programs, as an active participant in the Office of Planning, Research and Evaluation's Application of Implementation Science to Early Care and Education Research working group (for information on this group's inception, see http://www.researchconnections.org/childcare/collaboration.jsp). Dr. Katherine Mathews has worked for over a decade at the university–community research interface and now brings her clinical/agency experience to the foreground, as we explore what these articles mean for the individuals and organizations that must ultimately implement them.


IMPLEMENTATION RESEARCH: FROM EXPLORATION TO ACTIVE REALIZATION

What do we mean by implementation? The implementation of an intervention is often understood as the final stage of a long process, occurring once an activity has been defined and people have started to put it into practice (Durlak, 2010). However, consider what we have learned from the papers in this special issue, and what a generation of interpretive analysts from various fields have taught us (e.g., Lipsky, 1980; Spillane, 2000; Stone, 2002; Yanow, 1996): implementation starts before one actually enacts a new program, as individuals come to understand what the intervention is, and it continues through the activation of new practices. Metz and Bartley (2012) define four stages of implementation in detail—exploration, installation, initial implementation, and active implementation—a heuristic that is useful for implementation researchers and that we found helpful in analyzing the articles in this special issue.

Most of the projects reviewed here examined earlier stages of the implementation process. Lloyd et al. discussed important lessons learned during FOL's exploration phase, as they examined the degree to which their approach met community needs and whether implementation of the intervention was even feasible. Getting "buy-in" from stakeholders (FOL), providing extensive training to program staff (CSRP), and understanding how participants make sense of interventions (ASD Nest) are all critical in the earlier stages of implementation. In contrast, Clarke, Sheridan, and Woods (this issue) provided information on important factors from the active implementation stage, discussing how the CBC program was put into practice by teachers and parents.

Thus, we learn from these four projects that studying implementation entails careful examination of what comes before, during, and after an intervention. At the start, we must ask, as the ASD Nest team did: How do people make sense of this intervention? Does the intervention address a perceived need, with high priority? What relationships exist and may be built upon? What are the organizational contexts, their capacity/resources, and the factors that may contribute to the successes or challenges of implementation? (Spillane, Gomez, & Mesler, 2009). Then, during and after the intervention, researchers must examine, as most of the projects here did: dosage (How much of the intervention was offered? What level of training or technical support was provided?), participation rates and responsiveness (How much of the intervention was received, and by whom?), program modification/fidelity (What did stakeholders change relative to the creators' intentions?), and the conditions of the study itself (How did the randomized trial maintain fidelity? Did the control condition adequately account for changes that come merely from intense focus on an issue?) (Durlak & DuPre, 2008; Spillane et al., 2010).


WHERE SHOULD WE GO FROM HERE?

We applaud this issue and the programs described here for collecting data and reporting on implementation processes in early childhood education. The authors all lend important insights into how much implementation matters. But this is just a start. In this section, we recommend that future implementation studies systematically (1) define what is meant by implementation, and which stage one is studying; (2) use accessible language to articulate measures, findings, and how implementation factors theoretically relate to outcomes; (3) improve the measurements used; and (4) collect and analyze both quantitative and qualitative data on implementation.

Examining information about implementation is challenging due to a lack of consensus on key concepts: "Science cannot study what it cannot measure accurately and cannot measure what it does not define" (Durlak & DuPre, 2008, p. 342). In these articles, authors used the term "implementation" without clearly defining what they meant. There was also variation in which implementation phase researchers analyzed, and why. Moving forward, it is important for researchers to define the implementation factors they examine and to articulate conceptually why those factors are important for program outcomes. We also need to take care in our reporting. Practitioners—arguably, the central audience of implementation research—may not understand the language used in reports like these. Results need to be translated into ideas useful for agency staff; for instance, teachers and families of the CBC project need to know exactly what an "effect size" might mean for them (e.g., four fewer discipline incidents each year, on average), and whether such statistics are even meaningful given sample sizes.
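To illustrate the kind of translation we have in mind, the sketch below converts a standardized effect size into raw units. The numbers are purely hypothetical (a standardized effect of d = 0.40 on discipline incidents and a control-group standard deviation of 10 incidents per year); they are not estimates from the CBC evaluation or any other study in this issue.

\[
\Delta_{\text{raw}} \;\approx\; d \times SD_{\text{control}} \;=\; 0.40 \times 10\ \text{incidents per year} \;=\; 4\ \text{fewer incidents per year, on average.}
\]

Reported in this form, and accompanied by a note on the uncertainty implied by the sample size, the same statistic gives teachers and families a concrete sense of what a program might change in their own classrooms.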



Next, in measuring whether a program is actively implemented, researchers must apply the same procedures to determine construct and psychometric validity as they do in studies that assess program outcomes. As we embrace implementation research as part of effectiveness studies, we need to embrace high standards of measurement development and assessments of construct validity and reliability. We also need to provide explicit audit trails in qualitative work in order to ensure trustworthiness and transferability, the qualitative equivalents of validity and reliability (Merriam, 2009). In general, the reports in this special issue lacked detail on their research methods for studying implementation (e.g., we are unsure exactly what observational and interview data were collected, for how long, and by whom). Future studies must be as explicit about their research methods for studying implementation as they are about the evaluation details.

Moreover, although these projects described the setting and participants of each study, rich descriptions of the cultural contexts were not provided. To know how to "scale up" interventions, and even to consider whether they will work in a new context, agency personnel need to see, hear, and understand the possibilities. Qualitative research excels at producing such insights by seeking out participants' perspectives, analyzing narratives, and understanding process (Creswell, 2009; Patton, 2002). Of course, agency personnel also want to know the numbers, and randomized trials help us understand: How often, and for how many, did an intervention work? Thus, we suggest that intervention studies must collect both qualitative and quantitative data on implementation processes, with qualitative data collection and analysis completed "in tandem," not secondarily (Spillane et al., 2010). Systematic, mixed-method data collection will help researchers and practitioners know what program was actually delivered, to what extent fidelity was achieved, whether and how the program was adapted, which program components mattered most, and, in the end, what is necessary to implement a similar project in the future.

Studying and reporting adaptations systematically will help researchers understand how to improve interventions by learning from program staff on the ground, as pointed out by CSRP. Improved data collection and analysis will lend insights into what might have prevented a successful, or more successful, outcome (e.g., particular histories between agencies, researchers, and research institutions). For instance, researchers working with Dr. Mathews believed that African Americans did not want to participate in their project because of (warranted) mistrust of medical and research systems. A qualitative interview study found something much more complicated (Scharff et al., 2010). There was strong buy-in and belief in the power of research, but potential participants wanted to understand the goals better; they needed to know which communities would benefit from the interventions, and how. Developing trust, in culturally competent ways, among program designers, practitioners, and researchers is an important part of implementation that must be documented, as also suggested by CSRP.

CONCLUSION

Practitioners and researchers alike will "benefit from employing . . . designs and methods that provide as much evidence as possible so that the informed reader can agree or disagree with the conclusions drawn" (Spillane et al., 2010, p. 22). Ultimately, findings from intervention research are introduced into new contexts by a range of stakeholders. Mixed-method research, stronger definitions of implementation factors, and theories linking implementation to outcomes must be shared widely and in accessible terms. In short, process is as important as outcome. Collaborative decision making regarding the planning, adaptation, implementation, and evaluation of interventions should ultimately better inform researchers, practitioners, and the general public, empowering us all to continue to problem-solve, create, and contribute to innovative, adaptable, evidence-based practices.


REFERENCES

Cook, T. D. (2002). Randomized experiments in educational policy research: A critical examination of the reasons the educational evaluation community has offered for not doing them. Educational Evaluation and Policy Analysis, 24(3), 175–199.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Los Angeles, CA: Sage.
Durlak, J. A. (2010). The importance of doing well in whatever you do: A commentary on the special section, "Implementation research in early childhood education." Early Childhood Research Quarterly, 25, 348–357.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350.
Easton, J. Q. (2010). New research initiatives for the Institute of Education Sciences. IES Research Conference Keynote Address, National Harbor, Maryland. Retrieved from http://ies.ed.gov/director/pdf/easton062910.pdf
Honig, M. I. (2009). What works in defining "what works" in educational improvement: Lessons from education policy implementation research. In D. Plank, G. Sykes, & B. Schneider (Eds.), Handbook of education policy research (pp. 333–347). New York, NY: Routledge.
Howard, E. (2012). Statewide implementation of child and family evidence-based practices: Challenges and promising practices. Washington, DC: Technical Assistance Partnership for Child and Family Mental Health.
Lipsky, M. (1980). Street-level bureaucracy: Dilemmas of the individual in public services. New York, NY: Russell Sage Foundation.
Merriam, S. B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.
Metz, A., & Bartley, L. (2012). Active implementation frameworks for program success: How to use implementation science to improve outcomes for children. Zero to Three, 32, 11–17.
Patton, M. Q. (2002). Qualitative evaluation and research methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
Scharff, D. P., Mathews, K., Williams, M., Hoffsuemmer, J., Martine, E., & Edwards, D. (2010). More than Tuskegee: Understanding mistrust about research participation. Journal of Health Care for the Poor and Underserved, 21(3), 879–897.
Shavelson, R. J., & Towne, L. (Eds.). (2002). Scientific research in education. Washington, DC: National Academies Press.
Spillane, J. P. (2000). Cognition and policy implementation: District policymakers and the reform of mathematics education. Cognition and Instruction, 18(2), 141–179.
Spillane, J. P., Gomez, L. M., & Mesler, L. (2009). Notes on reframing the role of organizations in policy implementation. In D. Plank, G. Sykes, & B. Schneider (Eds.), Handbook of education policy research (pp. 409–425). New York, NY: Routledge.
Spillane, J. P., Pareja, A. S., Dorner, L. M., Barnes, C., May, H., Huff, J., & Camburn, E. (2010). Mixed methods in randomized controlled trials (RCTs): Validation, contextualization, triangulation, and control. Educational Assessment, Evaluation, and Accountability, 22(1), 5–28.
Stone, D. A. (2002). Policy paradox: The art of political decision making (Revised ed.). New York, NY: Norton.
Weisz, J. R., Sandler, I. N., Durlak, J. A., & Anton, B. S. (2005). Promoting and protecting youth mental health through evidence-based prevention and treatment. American Psychologist, 60(6), 628–648.
Yanow, D. (1996). How does a policy mean? Interpreting policy and organizational actions. Washington, DC: Georgetown University Press.
