Evaluation and Program Planning 45 (2014) 151–156


Right timing in formative program evaluation

Jori Hall*, Melissa Freeman, Kathy Roulston

University of Georgia, United States

Abstract

Since many educational researchers and program developers have limited knowledge of formative evaluation, formative data may be underutilized during the development and implementation of an educational program. The purpose of this article is to explain how participatory, responsive, educative, and qualitative approaches to formative evaluation can facilitate a partnership between evaluators, educational researchers, and program managers to generate data useful for informing program implementation and improvement. This partnership is critical, we argue, because it enables an awareness of when to take appropriate action to ensure successful educational programs, or "kairos." To illustrate, we use examples from our own evaluation work to highlight how formative evaluation may facilitate opportune moments to (1) define the substance and purpose of a program, (2) develop understanding and awareness of the cultural interpretations of program participants, and (3) show the relevance of stakeholder experiences to program goals.

Keywords: Program evaluation; Collaboration; Decision making; Social context; Qualitative research

Article history: Received 1 August 2013; Received in revised form 17 April 2014; Accepted 22 April 2014; Available online 30 April 2014.

* Corresponding author at: Department of Lifelong Education, Administration and Policy, University of Georgia, Athens, GA 30602, United States.

http://dx.doi.org/10.1016/j.evalprogplan.2014.04.007
© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Educational researchers who design programs, as well as managers responsible for implementation, are typically required by funding agencies to submit summative evaluations to determine the extent to which program objectives were met. While summative evaluations supported by quantified evidence are important, they are inadequate to provide information about program implementation decisions or how outcomes transpired (Patton, 2002). In contrast, formative evaluations, conducted during the course of development or program delivery (Mathison, 2005), provide information on how a program is unfolding so that midcourse corrections can be made. Although formative evaluations provide process information that can illuminate potential and actual implementation progress (Stetler et al., 2006), formative data are frequently underutilized. In this paper we argue for the value of incorporating formative evaluation practices throughout the life of a program, rather than simply as a procedural checkpoint after the program has been designed.


That is, we argue for the importance of deliberately fostering opportunities for educational researchers, program managers, and evaluators to work collaboratively in ways that position formative data as central to program and evaluation design and implementation. However, we realize that even when such opportunities are encouraged, formative evaluations are often challenged by program design, context, and implementation. When faced with these challenges, we have found ourselves taking action and then reflecting on how our evaluations could have been done differently. These events and wonderings brought to our attention the intersections of formative program evaluation theory and practice. Our reflections on our actions during program evaluations have reaffirmed the importance of right timing, as well as how participatory, responsive, educative, and qualitative approaches can be operationalized in formative evaluation.

Accordingly, this paper aims to reinvigorate the value and potential of formative evaluation by reporting on what we have learned. To do this, we provide empirical examples from our formative evaluation work. These examples serve three main purposes. The first is to highlight the intersections of formative program evaluation theory and practice with attention to the aforementioned evaluation approaches. The second purpose of the examples is to introduce the notion of right timing or kairos. We find Aaron Hess's (2011) description of the Greek term kairos as "timeliness of speech" (p. 138) helpful in thinking about formative program evaluation's potential. In our application of kairos to the evaluation process, evaluators become keenly attuned to identifying precise moments in which decisions about actions must be taken to foster success.


In other words, formative evaluators have the opportunity to focus on decision making concerning appropriate action at the right time, in a specific context, across the life of a project. By examining these opportune moments in detail, evaluators seek to elicit information on how both program implementation and evaluation are affected by and need to respond to context-specific issues as they become visible. We acknowledge that the approaches to evaluation we highlight in this article, including kairos, do not necessarily guarantee desired outcomes. That being the case, the third purpose of our examples is to describe how the formative evaluations described could have been strengthened had we thought more clearly about the relationship between the program design, its implementation, and the formative evaluation. Before reflecting on examples from our evaluation work to show possible ways formative evaluators might use kairos to inform the development and implementation of educational programs, we first describe the literature on formative evaluation, highlighting its essential purposes and interdependent approaches.

2. Formative program evaluation

Formative program evaluation is premised on the assumption that both programs and evaluations are designs in and of themselves and that attention should be paid to their components. That is, they need to be developed, re-examined, and assessed. It is the process of thoughtfully considering and reconsidering how program and evaluation design elements function and relate to each other that is at the heart of effective formative program evaluation. Like Maxwell (2005), we believe formative evaluation does not assume any particular ordering of design components. Rather, it focuses on the ways in which the evaluation and program designs interact and evolve as a result of the partnership established between the evaluator, educational researchers or managers, and other relevant stakeholders. However, we believe that for this partnership to be beneficial, evaluators need to be mindful of multiple interdependent features of the evaluation–program interaction in a way that fosters opportunities to communicate or intervene at opportune moments. The four approaches we describe below (participatory, responsive, educative, and qualitative) support each other while also focusing on specific aspects of the evaluator–stakeholder relationship. Table 1 provides an overview of these interacting formative program evaluation approaches.

First, and most importantly, we argue, a participatory approach to formative program evaluation is imperative. Participatory-based evaluations assume that some form of collaboration between evaluators, program managers, and other program stakeholders will advance a project's practical or transformative aims (Cousins & Whitmore, 1998).

For example, theorists argue that such partnerships can promote the utilization of evaluation findings (Patton, 1997), contribute to better alignment between program design and the contextual realities of program implementation (Huebner, 2000), and aid in consensus building among diverse stakeholder groups for effective program planning (Chacón-Moscoso, Anguera-Argilaga, Pérez-Gil, & Holgado-Tello, 2002). We believe that formative evaluation should build on these assumptions while also fostering a particular kind of collaborative relationship. Like Rolfsen and Torvatn (2005), we understand that relationship-building and effective communication go hand in hand. As multiple parties gain an understanding of what each contributes to program and evaluation development, they are more able to identify and communicate to each other the kinds of information that are mutually beneficial to the collaboration (Chacón-Moscoso et al., 2002; see Table 1, Participatory).

Second, a responsive approach to formative evaluation is needed to demonstrate and deepen cultural and contextual receptivity. While creating a program design and a plan to systematically document the program are valuable activities, not all aspects of an evaluation design can be predetermined, as a "consequence of contextual realities" (Chatterji, 2005, p. 17); the design therefore must be responsive, that is, open to change as the program design unfolds (Patton, 2002). Furthermore, Weston (2004) noted that contextual dimensions (i.e., policies, geographical settings, etc.) and cultural dimensions (i.e., values, customs, etc.) can interact with program designs in unforeseen ways, revealing points of disconnection between a local context and the design of a program (AEA, 2011). The formative data generated are central to being culturally and contextually responsive, since they can be used to attend to and make valid inferences about unanticipated conflicts and events, and the ways in which program objectives are articulated and accomplished within and across stakeholder groups and program activities (Patton, 2002; Stetler et al., 2006). Thus, formative evaluators seek ways to bring understanding and awareness to how contextual and cultural dimensions interact with program and evaluation designs (see Table 1, Responsive).

Third, because formative evaluation is based on a partnership, educative opportunities can serve to help educational researchers, program managers, and evaluators critically reflect on and learn from the interactions between program and evaluation designs (Preskill, 2008; Torres & Preskill, 2001). Thus our notion of educative reflects two related but distinct ideas. First, the "evaluator accepts that a significant part of her or his role is to promote greater understanding of the program [design] and its context among program staff, participants, and other stakeholders" (Greene, DeStefano, Burgon, & Hall, 2006, p. 56). Second, through the process of learning about and communicating understandings of the program, the evaluator increases her or his capacity to design and implement an evaluation that yields contextually relevant, actionable data (i.e., for mid-course corrections, voicing diverse perspectives, dealing with sensitive topics, etc.) in culturally responsive ways (AEA, 2011; see Table 1, Educative).

Table 1
Essential approaches to formative program evaluation.

Participatory
  Purpose: To build rapport, collaborative decision making, and common ground.
  Characteristics: Collaborative; dialogical; mutually beneficial; engaged.
  Desired outcome: Genuine, authentic partnership.

Responsive
  Purpose: To demonstrate cultural and contextual receptivity.
  Characteristics: Reflexive; attentive; aware.
  Desired outcome: Methodological revision, adaptation, and flexibility to respond to emergent issues.

Educative
  Purpose: To foster communicative exchanges that promote learning about the program and evaluation and that inform development and implementation.
  Characteristics: Analytic; illuminative; judicial; transformative.
  Desired outcome: Mutual understanding of program and evaluation that fosters capacity building.

Qualitative
  Purpose: To participate in discovery and meaning making as it is occurring.
  Characteristics: Naturalistic; emergent; experiential; interpretive.
  Desired outcome: Thick descriptions of program and evaluation implementation and practice that yield complex understandings of phenomena.

Note: This table presents the evaluation approaches needed to strengthen kairos in formative evaluation.


Finally, we believe that qualitative methods are especially beneficial to formative evaluation because (a) understanding process requires rich descriptions of events as they occur; (b) elucidating differing perspectives on program practices requires gathering those perspectives from individuals in their own words; and (c) revealing implementation issues or inequities requires qualitative analysis of multiple data sources (Chatterji, 2005; Patton, 2002). While qualitative methods do not exclude the use of quantitative measures, the emphasis is on the way meaning develops in use, in specific contexts, by particular people. This kind of analysis further supports partnership development since it requires ongoing conversations that foster critical reflection on how program components are understood to work, how those involved in the program, whether facilitator or recipient, seem to be affected, and which types of formative data will be useful for better understanding program effects.

We are not suggesting that what we are saying is new. The participatory nature, responsiveness to contextual and cultural dimensions, educative focus, and qualitative methods are common approaches found in many types of evaluation models: process evaluation (Stufflebeam, 2007), participatory evaluation (Cousins & Whitmore, 1998), responsive evaluation (Greene & Abma, 2001), and evaluative capacity (Preskill, 2008). Rather, our goal is to reposition formative program evaluation not merely as providing feedback on a program's effects or developmental process, but as a way of more effectively providing input during all stages of a program's life. From our perspective, evaluation is not simply a method to determine the merit or worth of a program but a way of intelligently and actively intervening for the purpose of effective educational program development and implementation.

Thus, we do not think of the approaches and how they are operationalized in a formative evaluation as discrete elements. Rather, we consider these approaches as a way of being-in-relation. When we think of formative evaluation as an embodied relation, these qualities assist evaluators in becoming keenly attuned to identifying precise moments in which decisions about actions must be taken to promote successful program outcomes. In other words, we perceive these approaches as central to the timely identification of possible points of intervention and to the building of mutually beneficial, authentic (i.e., open and honest) partnerships. Ideally, identification of these qualities helps researchers and evaluators focus on a range of possible issues, including identifying cultural and contextual problems; effectively communicating about them; and then developing intervening responses both deliberately and thoughtfully. This is why we argue for the use of these approaches: they promote attention to partnership interactions, stressing the continuous monitoring of how the partnership gets put into practice and the effects those practices have on the evaluation and the program design. The three examples that follow draw on our field experiences as qualitative evaluators and are used to illustrate these ideas.

3. Formative evaluation in the field: lessons learned

These examples illustrate formative evaluation in practice, focusing on kairos, or how evaluators intervened at opportune moments to impart or collect information deemed useful to support program development.
Specifically, they illustrate how formative evaluation helps (1) define the substance and purpose of a program, (2) develop understanding and awareness of the cultural interpretations of program participants, and (3) show the relevance of stakeholder experiences to program goals.


3.1. Defining the substance and purpose of a program

The first example is drawn from an evaluation of a planning retreat for a summer program intended to teach high school students about religious diversity and empower them to become engaged citizens. An advisory group hired two curriculum consultants to help design the youth program and invited the second author and her evaluation team to provide formative feedback on the group's developmental process. This example focuses on the way the evaluation team altered their plan in order to provide information to stakeholders during the retreat.

The retreat occurred in 2006 in the Southeast U.S., where the inaugural summer camp was to be held. The advisory group and curriculum developers convened on Friday evening along with the three evaluators, who began documenting the planning process. They held meetings Saturday morning and, that afternoon, welcomed 13 participating young people to provide feedback. After observing the meetings with the youth, the evaluators facilitated focus groups with the youth and interviewed members of the advisory group. On Monday the advisory group met to debrief and plan. The evaluation team was scheduled later that day to seek input on how the advisory group wanted them to focus the evaluation design for the summer camp.

This was an ideal situation for formative evaluation. The program planners had not only involved evaluators from the beginning, they had also invited young people who fit the youth camp target group to trial ideas for the camp. During the weekend, however, the evaluation team witnessed contradictory interpretations of the aims and substance of the youth camp. As a result, the team deemed it important to provide this information during the group's debriefing rather than wait for the in-depth analysis they planned to send the planners the following month.

The opportunity to intervene occurred Monday morning, when the program planners and curriculum developers debriefed about the successes and challenges of the weekend experience. One challenge concerned how the program should be taught. A curriculum developer relayed some of the youths' comments:

  What I heard was, I'd say, really a challenge. . . We don't want history the same way that it would be in a formal class, and we don't want religion the same way it would be in a formal religion class. We want the things that enable us to bring in and see the relevance of the history and of an understanding of the broader issues of religion to our own personal religious experience, to our own personal concern about these things. I think the challenge for us really is to find a way of imparting that enthusiasm in a way that rends it into the history . . . without making it start out looking academic. . .

This prompted another advisory group member to wonder if the evaluation team could report on issues they had identified. However, the advisory group leader reminded everyone that the evaluation team had a scheduled meeting that evening and implied that the feedback should be tabled until then. Not wanting to lose the opening they were given, the evaluation team took a moment to explain the different purposes these two meetings had for the overall evaluation: the feedback they could give that morning could inform the planners' curriculum development process, while the evening meeting was intended to seek the planners' input on focusing the evaluation design for the summer program. With that, the evaluators were invited to share their feedback.

The evaluators did this by first pointing out that the planners had already touched on many of the issues they had observed, but that one in particular needed elaboration. The evaluators believed that there was a disconnection between the way the program planners and the youth were talking about the program.


In detailing how they understood the youth perspective, the evaluation team introduced the planners to a way of thinking about teaching history and religion by appealing to a well-established educational theory: democratic pedagogy. That is, the evaluation team sought to help the program planners shift their conceptualization of the youth program from one of content and curriculum to a pedagogy focused on participation and process. This feedback was intended to be responsive to the program context. That is, the feedback was intended to inform the planners' program design by including both what mattered to the planners in terms of content and what would work pedagogically. One of the evaluation team members provided an example:

  One of the things we talked about was, given an idea like free speech, asking youngsters before they come to identify their big free speech faith issue. So they come bringing the matter that they are struggling with in their own everyday life. . . That is one way. A democratic pedagogy must not just be teaching the democratic way of life, but teaching it democratically.

Although many of the program planners seemed enthusiastic about the idea of "teaching democracy democratically," this example raised other issues, such as whether a young person "would actually welcome being asked that [question] in advance," and the conversation quickly turned to whether it was feasible to ask youth for their ideas prior to attending the camp. In other words, although the evaluation team had been responsive, they lost their place on the podium, and the debriefing session continued with the planners sharing feedback and discussing ideas, including those of the evaluators, about how to integrate action plans into the forthcoming summer program. As a result, the evaluation team was left uncertain about what the planners had taken away from their recommendation. They could only hope that the report they would write would clarify this message.

Although this was a formative evaluation with the intent of providing feedback to inform program design, the relationship of the program evaluators to the program designers had not been discussed or established at the outset. The evaluators were therefore uncertain of how much or how little to intervene in the planners' deliberations and had not established these parameters when negotiating the contract. In this example, deliberate efforts to foster participatory relationships among stakeholders could have helped establish a closer collaboration between the curriculum developers and the evaluation team, since both were working as consultants for the advisory group for the purposes of program development. Having a recognized partnership would have helped the evaluation team work with the curriculum consultants and plan a more pedagogically sound delivery, rather than the expert-driven lecture provided. In a more collaborative role, the evaluators and curriculum consultants might have helped surface some of the assumptions they heard lurking behind the questions and concerns, perhaps employing, for example, a strategy suggested by Joyce (1989) in which the evaluator works to facilitate the planners' own understanding of the issue, rather than supply them with an interpretation and hope that they assimilate it into their design. Clearly, communicating to educate requires paying attention to the pedagogical effectiveness of our communication strategies.

3.2. Developing awareness and understanding of the cultural interpretations of program participants

The second example draws on the third author's evaluation of a curriculum program in Mind-Body Medicine and Spirituality (MB/S) implemented at a family medicine residency from 2007 to 2010.

Central to the funding proposal for this program from the Health Resources and Services Administration (HRSA), an agency of the U.S. Department of Health and Human Services, was attention to preparing health care professionals to work with underserved and minority patients. It became apparent from data analysis during the first year that the ethnically and racially diverse residents preparing for careers as family care professionals oriented to questions concerning the application of MB/S interventions with minority and underserved populations as sensitive. Data analysis showed that while some residents thought that approaches covered in the MB/S program would be particularly useful in treating underserved populations because they were "low or no-cost" alternatives to drug therapies, easily taught, and could be used to empower patients in self-care, others did not. Some residents focused on the barriers to implementing MB/S interventions with minority and underserved populations (e.g., lack of education, lack of finances, non-compliance). Still other residents oriented to the question as a challenge to their equitable treatment of patients and were adamant that they did not treat minority and underserved populations any differently from other populations. Finally, some residents did not respond to the intent of the question. It was unclear whether all residents interviewed shared common understandings of the terms "underserved" and "minority," and this topic was identified as one that required clarification in further rounds of interviews.

Having identified this as an issue, and one that had a direct impact on her ability to reasonably interpret other related issues emerging in the data, the evaluator intervened by adjusting her interview protocol. In successive rounds of interviews she prompted residents to discuss their understandings of the topic of underserved populations and the applicability of MB/S modalities to these groups. Typically, the evaluator provided an overview of findings from earlier reports and asked participants to comment on or add to her interpretations. For example:

  Third Author: Now, HRSA funds projects which aim to assist or to deal with disparities in healthcare options and in particular working with underserved populations. And so, one of the questions that I've discussed with people over the last couple of years, patients who might be identified as underserved. Now, when I've talked to people, it seems that there's a couple of ways of thinking about that. . . [provides several examples]. So, is there anything I've missed in terms of how people think about what an underserved population might mean?

This process of asking questions and checking understanding of participants' answers was educative, as it yielded rich data for more nuanced interpretation of participants' viewpoints. This encompassed both how residents defined the term "underserved" and their perceptions of the applicability of MB/S in working with underserved populations. The process also allowed residents the opportunity to learn about views of others, presented anonymously, and many expressed strong opinions about these. Over time, it became apparent that residents' definitions of which populations count as "underserved" varied, as did their perceptions of how these populations would benefit from MB/S modalities.

In this example, preliminary data analysis showed that a crucial program area—attention to preparing health care professionals to work with underserved and minority patients—was not understood in the same way by stakeholders.
Because the evaluator responded to the inconsistencies identified in the preliminary analysis by incorporating specific interview questions to examine this issue in successive rounds of data collection, she not only strengthened her evaluation design but also developed more nuanced interpretations of participants' understandings and increased her cultural competence to develop meaningful understandings of the program (AEA, 2011).


This evaluation example underscores the notion that a program's design and an evaluator's response to it are neither value- nor culture-free. Exploration of the cultural dimensions of a program can expose disconnections between the assumptions upon which program designs are based and participants' everyday understandings and practices. Although evaluators are trained to identify emerging issues and adjust data collection practices accordingly, having an understanding of the multiple components guiding formative evaluation helps establish these decisions as part of the collaborative decision-making process between the evaluator and program stakeholders.

In this evaluation, variations in participants' definitions, viewpoints, and rationales concerning a variety of topics—of which cultural dimensions was but one—were reported to program implementers and residency faculty in annual evaluation reports, and to the participants themselves during successive rounds of interviewing, thus serving an educative role within the program. Indeed, program faculty adjusted the design of the program over the course of the 3-year implementation in ways that were acknowledged by participants in later interviews. In this particular program, these changes tended to focus on aspects of the program that were seen as most pressing (e.g., allocation of time to the program within the overall curriculum) and did not take up cultural interpretations as a central issue. To make optimum use of formative evaluation, the evaluator would have needed to sort out communication networks and levels of stakeholder participation in decision making among the various groups involved in the program in order to determine the feasibility and effects that collaborative decision making could have had in this context. Formative program evaluation, therefore, requires an evaluator to consider the positionality of others in a given context while interacting respectfully with stakeholders who may not share similar values (AEA, 2011; Greene et al., 2006; Hall, Ahn, & Greene, 2012).

3.3. Showing the relevance of stakeholder experiences to program goals

The third example is drawn from a formative evaluation conducted in 2009–2011 by the first two authors during the inaugural years of Synergy Elementary Professional Development School (PDS), a pseudonym. Decision making related to goals was informed by committee representatives from Synergy school, the school district, and a local university. A key decision made by this committee, and an important component of the Synergy PDS, was the implementation of a School-Wide Enrichment Model (SEM), which included high-engagement learning activities known as "enrichment clusters" (i.e., groups of children with similar interests—irrespective of grade level—learn to investigate topics that are not traditionally part of the school curriculum).

While observing PDS committee meetings during the first year of the school's operation, the authors noticed that the partners often asked questions about student engagement with the enrichment activities. The authors therefore proposed a formative evaluation that included focus groups with 4th graders to better understand their experiences with SEM. Although some PDS committee members were more interested in a summative evaluation of how teachers had implemented SEM, in an attempt to be educative, the authors reminded the group about prior conversations concerning how students experience enrichment activities. After the authors explained that the focus group data would provide insight into how different parts of the curriculum were perceived by children, their evaluation design was approved.

The design for the focus groups aligned with Kushner's (2000) stance on personalizing evaluation; that is, attending to how a program component such as SEM "showed up" in the lives of Synergy students.


To do this, the authors constructed broad questions designed to elicit what was important to students when they came to school, such as "What excites you about coming to school?" The researchers' assumption, guided by what they had heard school personnel discuss, was that students would highlight the "enrichment clusters" in their responses, or specials like art and gym. Although clusters, art, and gym were discussed by students, they also identified math as an activity that they connected to meaningfully. The reasons for this were consistent across the five focus groups: their teacher's love of math, his approach to teaching, and the confidence they gained from learning.

The authors were invited to share findings at two different PDS partner meetings. Although not the only or most significant finding, the authors' highlighting of students' positive responses about mathematics generated considerable discussion. At the time, the evaluators believed this intervention was necessary to help demonstrate the usefulness of formative data to the PDS partners. It showed the partners how the evaluation design allowed students to consider all school activities—not just enrichment clusters—when describing their engagement. By highlighting participants' perspectives, program developers and evaluators could better understand how participants—in this case children—were understanding the program design, which is critical to interpreting program outcomes (Kushner, 2000).

Had a stronger partnership between the evaluators and school personnel been established from the beginning, conversations about the differences between what summative and formative evaluations could offer this developing program would have been ongoing. Furthermore, although the evaluators and program planners were working collaboratively in the sense of sharing interests and goals and reporting progress at intervals, decisions such as these, about implementing enrichment clusters and about the student focus groups, were being made independently of each other. And although both produced positive and useful results, a closer collaboration, perhaps among teachers leading the enrichment clusters, administrators overseeing the implementation, and the evaluators, could have resulted in more deliberate mid-course feedback, thereby improving alignment between the evaluation design and the SEM program. In complex organizations such as schools, where a variety of research and evaluation projects compete for resources, we might have offered a way to map out these overlapping or competing designs in a way that would benefit everyone involved.

4. Conclusion

This article engages the argument that working in partnership with program managers requires both a strong framework that engages the intersecting evaluation approaches (participatory, responsive, educative, and qualitative) and opportune moments for communication or intervention to have effect. Our examples provide illustrations of the complexities and challenges of evaluation and program design and implementation, and of how formative evaluation can promote a more balanced collaboration between program managers and evaluators. Each of these examples shows how opportune moments for feedback are a crucial part of formative evaluation practice. The first illustrates kairos by taking advantage of an opportune time to respond to gaps in the development of the program.
The second illustrates a timely methodological intervention to educate stakeholders regarding differences in understanding central program concepts during program implementation. The third exhibits kairos by demonstrating the value of formative evaluation in providing information about how a program accomplishes its goals.


However, each formative evaluation was weakened by the lack of an established understanding of the relationship between evaluator and program manager and of their mutual responsibilities regarding program design. Here we see the potential and the pitfalls of evaluation theory and practice. While we have learned of the potential of evaluation approaches to inform program design and the direction a program might take, we have also, and perhaps more importantly, learned that even with a framework involving planning and partnership, we still cannot predict the effects and outcomes of our actions. Patton (2011), among others, understands the challenges of connecting evaluation theory and practice all too well. In a similar vein, what we are proposing is more attention to evaluation principles and methods, and more documentation of their effects on the ground, as a way to inform discussions of the relationships between evaluation theory and practice over time (Chelimsky, 2012). It is critically important to study both the evaluation process and our roles as evaluators in order to contribute to theory and practice. Specifically, formative evaluators should actively seek to understand how evaluation theories and programs interact with each other, and how these interactions inform decision making about the design and implementation of both educational programs and evaluation studies (Chelimsky, 2012). The notion of evaluation approaches and kairos put forward in this article contributes to an enhanced understanding of theory and practice. Ultimately, our hope is not only to promote the idea of "right timing," or kairos, as a normal part of a formative evaluation process but also to embrace what we see as essential approaches (participatory, responsive, educative, and qualitative) to formative program evaluation and to learn to integrate these effectively throughout the lifecycle of a program.

References

American Evaluation Association, Task Force of the American Evaluation Association's Diversity Committee. (2011). Statement on cultural competence in evaluation. Retrieved from http://www.eval.org/ccstatement.asp
Chacón-Moscoso, S., Anguera-Argilaga, M. T., Pérez-Gil, J. A., & Holgado-Tello, F. P. (2002). A mutual catalytic model of formative evaluation: The interdependent roles of evaluators and local programme practitioners. Evaluation, 8(4), 413–432.
Chatterji, M. (2005). Evidence on "what works": An argument for extended-term mixed-method (ETMM) evaluation designs. Educational Researcher, 34(5), 14–24.
Chelimsky, E. (2012). Balancing evaluation theory and practice in the real world. American Journal of Evaluation, 34(1), 91–98.
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 80, 5–23.
Greene, J. C., & Abma, T. A. (2001). Responsive evaluation. New Directions for Evaluation, 92, 31–43.
Greene, J., DeStefano, L., Burgon, H., & Hall, J. (2006). An educative, values-engaged approach to evaluating STEM educational programs. New Directions for Evaluation, 109, 53–72.

Hall, J. N., Ahn, J., & Greene, J. C. (2012). Values-engagement in evaluation: Ideas, illustrations, and implications. American Journal of Evaluation, 33(2), 195–207.
Hess, A. (2011). Critical-rhetorical ethnography: Rethinking the place and process of rhetoric. Communication Studies, 62(2), 127–152.
Huebner, T. A. (2000). Theory-based evaluation: Gaining a shared understanding between school staff and evaluators. New Directions for Evaluation, 87, 79–89.
Joyce, L. (1989). Giving feedback in formative evaluation: A nondirective strategy. New Directions for Program Planning, 42, 111–118.
Kushner, S. (2000). Personalizing evaluation. Thousand Oaks, CA: Sage Publications.
Mathison, S. (Ed.). (2005). Encyclopedia of evaluation. Thousand Oaks, CA: Sage Publications.
Maxwell, J. (2005). Qualitative research design: An interactive approach (2nd ed.). Thousand Oaks, CA: Sage Publications.
Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage Publications.
Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: The Guilford Press.
Preskill, H. (2008). Evaluation's second act: A spotlight on learning. American Journal of Evaluation, 29(2), 127–138.
Rolfsen, M., & Torvatn, H. (2005). How to "get through": Communication challenges in formative evaluation. Evaluation, 11(3), 297–309.
Stetler, C. B., Legro, M. W., Wallace, C. M., Bowman, C., Guihan, M., Hagedorn, H., et al. (2006). The role of formative evaluation in implementation research and the QUERI experience. Journal of General Internal Medicine, 21, S1–S8. http://dx.doi.org/10.1111/j.1525-1497.2006.00355.x
Stufflebeam, D. (2007). Evaluation theory, models & application. San Francisco: Jossey-Bass.
Torres, R. T., & Preskill, H. (2001). Evaluation and organizational learning: Past, present, and future. American Journal of Evaluation, 22(3), 387–395.
Weston, T. (2004). Formative evaluation for implementation: Evaluating educational technology applications and lessons. American Journal of Evaluation, 25(1), 51–64.

Jori N. Hall is an Associate Professor in the Department of Lifelong Education, Administration and Policy in the College of Education at the University of Georgia. She teaches graduate courses advancing theory and practice related to mixed methods inquiry and qualitative research. Her other work includes providing evaluative feedback for science, technology, engineering, and mathematics (STEM) educational programs. Her research interests center on the intersections of educational accountability policy and responsive evaluation approaches.

Melissa Freeman is an Associate Professor in the Department of Lifelong Education, Administration and Policy in the College of Education at the University of Georgia. She teaches graduate courses related to program evaluation theory and practice; qualitative research design and traditions; hermeneutics in the social sciences; and participant observation. Her research interests include qualitative research and evaluation methods, especially ethnographic and participatory approaches.

Kathy Roulston is a Professor in the Department of Lifelong Education, Administration and Policy in the College of Education at the University of Georgia.
She teaches graduate courses related to qualitative interviewing; qualitative research traditions and design; and ethnomethodology and conversation analysis. Her research interests include qualitative research methods; ethnomethodology and conversation analytic studies; teachers' work; and topics in music education.
