Evaluation and Program Planning 45 (2014) 119–126


Applying complexity theory: A review to inform evaluation design

Mat Walton*
School of Health and Social Services, Massey University, New Zealand

A R T I C L E  I N F O

Article history:
Received 12 July 2013
Received in revised form 4 April 2014
Accepted 6 April 2014
Available online 13 April 2014

Keywords:
Complexity theory
Methods
Evaluation design

A B S T R A C T

Complexity theory has increasingly been discussed and applied within evaluation literature over the past decade. This article reviews the discussion and use of complexity theory within academic journal literature. The aim is to identify the issues to be considered when applying complexity theory to evaluation. Reviewing 46 articles, two groups of themes are identified. The first group considers implications of applying complexity theory concepts for defining evaluation purpose, scope and units of analysis. The second group of themes considers methodology and method. Results provide a starting point for a configuration of an evaluation approach consistent with complexity theory, whilst also identifying a number of design considerations to be resolved within evaluation planning.

© 2014 Elsevier Ltd. All rights reserved.

Over the last decade, an increasing literature has considered the implications of complexity theory or the theory of Complex Adaptive Systems (CAS) perspectives in development, health and social service policy, implementation and evaluation (Barnes, Matka, & Sullivan, 2003; Forss, Marra, & Schwartz, 2011; Haynes, 2008; Patton, 2011; Plsek & Greenhalgh, 2001; Sanderson, 2000, 2009; Stern et al., 2012; Vincent, 2012). Complexity theory is not a single coherent body of thought. Whilst complex interventions are often considered to be those with multiple objectives, strategies and components, implemented across multiple sites by multiple actors, the use of complexity in this paper refers to understanding the social systems within which interventions are implemented as complex (Shiell, Hawe, & Gold, 2008). This is what Byrne refers to as a ‘complexity theory frame of reference’ (2011, p. 12). A focus on the complexity of systems implies that apparently simple interventions, as well as complicated interventions, may be candidates for evaluation from a complexity perspective. The basics of a complexity theory frame of reference are now well described in multiple publications (Byrne & Callaghan, 2014; Eppel, Matheson, & Walton, 2011; Patton, 2011; Rickles, Hawe, & Shiell, 2007; Room, 2011). Briefly, a complex system is comprised of multiple interacting actors, objects and processes defined as a system based on interest or function (Gare, 2000). Complex systems are nested, which means that some elements of a complex system may themselves be complex systems, or some elements


may be shared between multiple complex systems (Byrne & Callaghan, 2014). An example could be viewing a school as a complex system, interacting with other complex systems of households, communities and the wider education sector. The interaction of components in a complex system gives rise to 'emergent' properties, which cannot be understood by examining the individual system components (Goldstein, 1999). Instead, to understand an emergent phenomenon, the system from which it emerged must be understood as a whole (Anderson, Crabtree, Steele, & McDaniel, 2005), including identifying both the elements within a system and their interaction over time. The interactions within a complex system are non-linear, with the implication that change in one component of the system may have a negligible or large effect on the system as a whole (Byrne & Callaghan, 2014). Non-linearity also means that small differences between systems may, over time, lead to quite different emergent whole system properties (Room, 2011). While schools may appear similar, their education results might be quite different. The implication of non-linear relationships is a difficulty in predicting the type and scale of system adaptations to interventions (Morçöl, 2012). A complex system is also open to feedback from the wider environment within which it operates, meaning that systems may differ between time, social and geographic contexts (Room, 2011). A complex system may show stability of emergent properties over time, with change suggesting a system has moved from one 'attractor state' to another. When the attractor state of a system changes, at the point of change, there are a number of possible attractor states the system could move to, within a 'phase space' (Capra, 2005; Room, 2011). While it is difficult to predict if and how a system will change in response to interventions,


one target may be to understand the phase space of possible attractor states. For example, a change of government administration will often bring with it a change in ideology, which will in turn define the range of intervention options available for responding. Again using schools as an example, evidence of unhealthy diets of children within schools impacting upon education achievement may be addressed by focussing on individual student behaviour or the school food environment. The degree to which the school environment is regulated, such as allowing or banning competitive and less healthy food options, will be partly determined by the perceived role of state versus market held by decision makers (Fleischhacker, 2007; Walton, Signal, & Thomson, 2013). However, previous decisions that may limit government action in regulating products, such as international trade agreements, will also play a role in defining possible interventions and hence the phase space of the school health system. The potentially unintended impacts of outcomes from one complex system (e.g. trade) on other complex systems (e.g. schools) result from the open boundaries of systems.

There are several challenges for evaluation implied by the understanding of complex systems described above. To summarise, the challenges posed by complex social systems for evaluation relate to uncertainty in the nature and timing of impacts arising from interventions, due to the non-linear interactions within complex systems and the 'emergent' nature of system outcomes (Dyson & Todd, 2010). There are also likely to be differing values and valuations of outcomes from actors across different parts of a complex system, making judgements of 'what worked' contested (Barnes et al., 2003). Due to the open boundaries of complex systems, there are always multiple interventions operating and interacting, creating difficulties in identifying the effects of one intervention over another (Schwartz & Garcia, 2011).

Across the existing complexity informed literature, there is little consensus regarding what the key characteristics of a complexity informed policy or programme evaluation approach should be. Questions relating to the purpose of evaluation, how evaluation questions are defined, which concepts from complexity theory are most relevant, and broad evaluation design principles need to be considered before looking at detailed method considerations. To advance consideration of these broad evaluation design considerations, this paper reviews both practical examples and theoretical discussion of evaluation approaches using a complexity theory frame of reference. The aim of the review is to identify themes to be considered in applying a complexity frame of reference to evaluation.

1. Methods

This study provides a narrative thematic review of identified academic journal literature (Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005; Mays, Pope, & Popay, 2005) related to complexity theory and evaluation. This review draws upon 46 articles in peer-reviewed journals identified from a search of bibliographic databases (including Scopus, Web of Knowledge, Social Service Abstracts, Sociological Abstracts), limited to English language. Search terms were: complexity theory or complex adaptive system or CAS or soft system or eco* system; and policy eval* or prog* eval* or policy analysis or formative eval* or process eval* or outcome eval* or impact eval* or context eval*. This search identified 214 articles.
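For readers wanting to adapt the search, the strategy can be read as two OR-groups of terms joined by AND. The short sketch below simply assembles that Boolean string; it uses generic OR/AND syntax rather than the exact query language of any of the databases named above.

```python
# Illustrative only: assembling the Boolean search string described in the text,
# using generic syntax rather than a specific database's query language.
complexity_terms = ["complexity theory", "complex adaptive system", "CAS",
                    "soft system", "eco* system"]
evaluation_terms = ["policy eval*", "prog* eval*", "policy analysis",
                    "formative eval*", "process eval*", "outcome eval*",
                    "impact eval*", "context eval*"]

def or_group(terms):
    # Quote each phrase and join the group with OR.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = or_group(complexity_terms) + " AND " + or_group(evaluation_terms)
print(query)
```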
Upon review of titles and keywords, 76 articles were selected for full review. Abstracts of papers citing these 76 articles were also reviewed for inclusion. In addition, reviewers of an earlier draft of this manuscript suggested a number of journals and articles for potential inclusion, which were hand searched. Forty-six articles were included in the full review. The most common

reasons for exclusion were: no discussion of evaluation methods; and not explicitly informed by complexity theory or a related systems theory.

As with complex systems themselves, the boundaries of the relevant literature are open and boundary judgements can always be contested. Search terms were selected to focus attention on articles explicitly identifying with complexity theory or CAS, rather than wider application of 'systems thinking'. The search terms also limited articles to those with an evaluation component, rather than a more general policy, organisational or social science focus. Within the focus on complexity theory, an overlap between complexity and certain system theory fields is acknowledged (Midgley, 2008; Richardson, Gregory, & Midgley, 2007). For this reason, two 'systems' rather than complexity terms were included in the search strategy. Soft systems and ecological systems were considered terms referring to specific systems approaches, but also used in a broader way to distinguish from 'hard' systems theories (Maani & Cavana, 2000). Inclusion of these two terms doubled the number of articles identified. Despite this approach, search results indicate that much of the literature informed by social-ecological models in health promotion and psychology, or systems informed operational research, has been excluded but may usefully contribute ideas to a complexity informed evaluation practice. Other terms relevant to complexity theory, such as 'context', were trialled but captured many articles well outside complexity and systems fields.

A feature of evaluation literature is the large volume of work published in books, conference proceedings or project reports. These are obviously not captured in this review, which is limited to peer-reviewed journal articles. With a focus restricted to peer-reviewed journals and explicit reference to both complexity and evaluation, there is no claim that the current review provides a definitive statement of the issues and methods associated with complexity theory in evaluation practice. However, the aim of the review is to identify common themes in the application of complexity theory, and not to provide a definitive 'state of play'.

Notes from each paper were made under the following headings: where has complexity theory been applied to policy/programme evaluation; what design and methods are associated with complexity theory; what are reported advantages/limitations of design and methods; if an opinion or theoretical paper, what are the suggested advantages or limitations of methods; what assumptions are being made about the nature of interventions; and what (if any) impacts on the policy process are discussed? The notes grouped under each question were compared to identify what the characteristics of a complexity informed evaluation approach are, when and where such an approach is appropriate, and implications for the policy process in which the approach is applied.

Twenty-three of the 46 papers were theoretical or opinion in nature, while 23 were focussed on describing or reflecting upon application of methods to a particular policy or programme. Table 1 shows the distribution of papers by year. It can be seen that the volume of peer-reviewed journal publications that consider a complexity theory frame of reference increased from

Table 1
Publication year of articles included in review.

Year of publication    Number
2012–2013              11
2010–2011              13
2008–2009              11
2006–2007              6
2005                   5


2008. The type of article has also changed. Earlier articles included in this review tended to be of a type that considered the potential of a complexity frame of reference. More recent articles are providing examples of where complexity concepts have been applied and providing more detailed consideration of complexity consistent methods. The included papers are briefly described in Table 2.

2. Results

The following themes were identified from the reviewed literature and are discussed in detail below:

- Developing an understanding of the system.
- Attractors, emergence and other complexity concerns.
- Defining the appropriate level and unit of analysis.
- Timing of evaluations.
- Participatory methods.
- Case study and comparison designs.
- Multiple and mixed methods.
- Layering theory to guide evaluation.


2.1. Developing an understanding of the system

A complex system is made up of many parts, the interaction of which creates emergent outcomes of interest to evaluation. Several authors discuss the need to develop a picture of the system operating to aid analysis both of interaction and of changes in the system parts. Techniques such as system dynamics (Levy et al., 2010), social network analysis (Hawe, Shiell, Riley, & Gold, 2004) and agent-based modelling (Hoffer, Bobashev, & Morris, 2009; Morell, Hilscher, Magura, & Ford, 2010) have been used to examine interactions between system elements. In developing a picture of the system, both in and of itself and to guide modelling, authors have drawn upon detailed ethnographic data (Hoffer et al., 2009) and variations of a theories of change approach developed by participants to identify elements within the systems under study (Blackman, Wistow, & Byrne, 2013; Mason & Barnes, 2007). Such theories of change can be utilised to inform decisions regarding system boundaries (Verweij & Gerrits, 2013), which interactions between system elements to focus upon within agent-based models (Hoffer et al., 2009; Morell et al., 2010), or which case conditions to include within a Qualitative Comparative Analysis (Blackman et al., 2013).
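As a purely illustrative sketch, not drawn from any of the reviewed papers, the fragment below shows one way a participatory system description could be held as a network so that interactions between elements can be examined with social network analysis; the element names are hypothetical and would in practice come from theory of change or ethnographic work of the kind cited above.

```python
# A minimal sketch of representing a system description as a network so that
# interactions between elements can be examined. Element names are hypothetical.
import networkx as nx

G = nx.Graph()
# Edges represent reported working relationships between system elements,
# e.g. as elicited through a participatory theory-of-change exercise.
G.add_edges_from([
    ("school board", "principal"),
    ("principal", "teachers"),
    ("principal", "canteen provider"),
    ("teachers", "students"),
    ("students", "households"),
    ("canteen provider", "food suppliers"),
    ("ministry of education", "school board"),
])

# Simple structural measures can flag elements whose position may give them
# system-wide influence, a question raised by several of the reviewed studies.
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))
```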

Table 2
Papers included in review.

Adam et al. (2012): Reviews recent evaluations of health system strengthening initiatives in low and middle income countries. Concludes that there is a need for more comprehensive evaluations of a wider range of effects, with no evaluation exploring system effects that reflect complex adaptive systems.
Barnes et al. (2003): Describes the challenges complexity adds to evaluation, utilising the Health Action Zone evaluation as an example. Discusses the use and limitations of the Theory of Change method in the face of complexity.
Blackman et al. (2010): Discusses whether the comparison of health inequalities targets and policies between England, Wales and Scotland can be considered evaluation, as the variability between sites makes it difficult to say 'what works'. Questions limitations of a systems perspective, highlights discourses, and suggests New Institutional Theory could be useful.
Blackman et al. (2011): Utilises Qualitative Comparative Analysis (QCA) to examine efforts to reduce health inequalities. The authors explicitly draw upon complexity theory to view QCA as a method to compare cases for identifying combinations of conditions that contribute to an outcome.
Blackman et al. (2013): Discusses use of Qualitative Comparative Analysis (QCA) as a complexity theory consistent evaluation methodology for understanding causal combinations of conditions. Provides an example of applying QCA to evaluate local authority based interventions to reduce teenage conceptions.
Boustani et al. (2010): Considers how to design, implement and evaluate a health care service innovation. Discusses the RAP process for design, a collaborative qualitative process. Identifies N of 1 RCT and time-series analysis to test applicability of RCT findings in specific contexts. Outlines an evaluative framework designed for a particular intervention that: (1) defines borders of complex adaptive systems; (2) determines outcome measures of the entire system; (3) sets pre and post intervention periods; and (4) sets frequency of performance reports.
Bristow et al. (2009): Outlines a multi-method evaluation framework that integrates qualitative and quantitative, top-down and bottom-up. The paper has a useful introduction on trends in evaluation (including complexity of goals and design) that require pluralist approaches.
Brousselle and Lessard (2011): Discusses challenges with current economic evaluation methods, including limitations of the methods themselves and the degree to which methods are used and useful to decision makers. Suggests some innovations in economic evaluation drawing upon complexity theory.
Burgoyne (2010): Argues for a critical realist informed complex systems approach to evaluation in action learning. Action learning is concerned with development within organisations and, within a complexity frame, with developing the skills to read complex systems and sensibly generalise to other situations.
Byrne (2013): Provides a summary of Byrne's critical realist complexity theory and outlines implications for understanding complex causality through case-comparison methods.
Cabrera, Colosi, and Lobdell (2008): Discusses systems thinking in evaluation and proposes four rules to foster 'systems thinking'. Argues that rather than learn new methods for evaluation, we can apply systems thinking to existing methods.
Callaghan (2008): Argues for a theory driven approach to evaluation utilising complexity theory and the idea of negotiated order. Focusses on the role of agency in creating change in a complex system; negotiated order is used to understand how policies are implemented at the local level and their interaction with existing organisational structures, policies, power relations, professional agendas, etc.
Dattée and Barlow (2010): Evaluation of attempts to meet the Accident and Emergency target in Scotland, using a case comparison approach. Focuses upon scale in complex systems and suggests the target did not take into account the whole system level, but instead only the A&E subsystem, which meant the out of hospital change needed was difficult to achieve.
Dickinson (2006): Focuses on evaluation of partnerships in health and social services, identifying partnerships as important to tackling wicked issues. Suggests that both Theory of Change and Realist Evaluation would be useful to evaluate partnerships, and that Critical Realism provides the ontological position to combine these approaches and address their weaknesses.
Dyson and Todd (2010): Discusses the use of a Theory of Change approach to a complex intervention in a complex setting. Advantages and disadvantages of the method are considered, with implicit implications for the policy process, the role of evaluators, and what counts as 'evidence'.
Hawe et al. (2009): Considers advantages a complexity frame could offer (Table 7.2, p. 98): a shift of focus from knowledge to capability, with consequences for measurement; less structured 'dose monitoring' as the means of evaluating implementation and more use of qualitative and narrative approaches; a focus on the structures in which knowledge is embedded (for example, through the use of social network analysis); long time frames that incorporate the possibility of system phase transitions; measurement at multiple levels; more observation and analysis of the pre-intervention context and the natural change processes within it; and no reason to abandon cluster randomised trial designs, as long as interventions adhere to a recognisable theory of action that remains replicable across the sites.


Hawe et al. (2004): Discusses methods used for context and process evaluation within the PRISM trial in Victoria, Australia. Informed by system theory and concerned with complex interventions. Outlines using multiple methods to assess changes in context over time, feedback and intervention adaptations.
Haynes (2008): Provides an example of a three stage method for evaluating policy system change over time. The method utilises both qualitative and quantitative data, but is essentially a qualitative analysis. Identifies phase shifts in policy systems over a long time period.
Hoffer et al. (2009): Develops an agent-based model of a heroin market which draws upon an ethnographic case study. The model is used to answer two research questions that seek to determine how two groups within the market affect the market's overall operation.
Israel and Wolf-Branigin (2011): Suggests that agent-based models are a useful addition to decision making regarding future programme directions, but that evaluation is needed prior to ABM development to provide data for the model.
Kania et al. (2013): Developed indicators of a complex adaptive systems evaluation approach against which 54 health promotion evaluations were compared. No explicit complex adaptive system evaluation was identified, although aspects of complexity were incorporated in the majority of evaluations.
Lessard (2007): Argues that economic evaluation should draw upon complexity theory to deal with the related issues of complexity and reflexivity.
Levy et al. (2010): Applies the SimSmoke system dynamics model to evaluate the effect of tobacco control policies in the Republic of Korea on smoking prevalence and deaths.
Marchal et al. (2013): An example of applying complexity theory concepts to evaluating fee exemption policies for obstetric care. Argues that an evaluation of a complex intervention should assess programme effectiveness and uncover causal mechanisms.
Mason and Barnes (2007): Considers how to construct Theories of Change in complex interventions and suggests they should be narrative (rather than logic diagrams) and used to promote learning and intervention refinement, rather than giving evidence of 'what works'.
Matheson et al. (2009): Comparative study of two community based interventions explicitly using a complexity lens to understand outcomes. The study evaluates effectiveness at achieving goals, and focuses on mechanisms to understand variable success.
McLean et al. (2009): Provides detail of the method to evaluate the Healthy Eating–Healthy Action Strategy. Not specifically informed by complexity theory, but references papers that are (e.g. Barnes et al., 2003).
Morell et al. (2010): Proposes an evaluation approach in complex systems that utilises agent-based modelling alongside traditional evaluation approaches. The modelling can be used to identify areas for evaluative activity, while the evaluation data may also be used to inform the modelling. Suggests that the agent-based model can become a monitoring tool.
Munda (2004): Focuses on policy development more than evaluation. Identifies issues of competing values and scales (local, regional, etc.) in defining policy problems, and suggests that these competing views of the system/policy need to be transparently identified and considered within analysis, which requires participatory processes. Use of multi-criteria decision making tools is appropriate.
Nordtveit (2010): Focusses on development programme design and evaluation. Argues against pre-packaged programmes. Suggests a three phase approach: (1) locally define programme focus and approach; (2) analyse the complex systems within which the programme will run; (3) use ideas from New Institutional Economics to evaluate cost-effectiveness of the programme.
Olney (2005): Focuses on evaluation of library community outreach. Uses evaluation as an intervention planning tool (formative), as well as considering outcomes.
Parsons (2007): Briefly discusses systems and complexity theories and tools and methods for studying complex systems. Suggests that methods need to focus on relationships, how people recognise new emergent phenomena, and the values and beliefs of those within the system under study.
Radej (2011): Focuses on the problem of aggregation of evaluation findings in complex systems. Aggregation of micro-variables is not appropriate, as it will not capture system wide outcomes; macro-level aggregation will not tell us much about what is leading to outcomes. Argues for meso level aggregation.
Rametsteiner and Weiss (2006): Describes a method for evaluating implementation of a policy across levels (national, local, organisation), and the interaction between levels, with a worked example of forestry innovation policy in Austria. Also emphasises participatory methods, as the system is defined through interviews/surveys to build up a picture of the policy network.
Rog (2012): Discusses the role of context in evaluation and proposes a framework that highlights five types of context to consider in evaluation. Not informed explicitly by complexity theory, but similar notions of context are used within the framework.
Rogers (2008): Discusses the use of intervention logic for evaluation. Distinguishes between complicated and complex interventions. Does not take a complexity theory perspective as such, but draws upon social-ecological theory and does refer to complex adaptive systems.
Rothwell et al. (2010): Evaluates the implementation of the Welsh Network of Healthy School Schemes.
Sanderson (2009): Theoretical paper that proposes combining insights from complexity theory and Deweyan pragmatism to develop a practical rationality approach to evidence based policy. This approach blurs the distinction between policy making and evaluation, suggesting a learning process utilising trials, pilots and experimentation of policies within a deliberative process that combines the wisdom of social scientists, implementers and those impacted by policy.
Schensul (2009): Argues that combining modelling of complex systems with local knowledge is required for implementation and evaluation of multilevel dynamic systems interventions. Highlights the use of ethnography and participatory action research combined with network analysis and qualitative comparative analysis.
Simpson et al. (2013): Describes development and application of a complexity theory informed tool to evaluate implementation challenges. The tool uses complexity theory concepts as a lens to view data and a group 'sense-making' process to consider analysis and possible solutions.
Ssengooba, McPake, and Palmer (2012): Drawing upon complex adaptive systems and expectancy theories, details an evaluation approach that supports implementation of a performance-based contracting initiative in Uganda. The evaluation approach was a 'theory-based' evaluation, utilising multiple methods across multiple levels of the system under study.
Stewart and Ayres (2001): Focuses on systems thinking for policy design. Not informed explicitly by complexity theory, but relevant. Not considering evaluation, but policy design with implied knock-on effects for how evaluation might be conducted.
Trenholm and Ferlie (2013): Utilised complexity theory within a qualitative case study design to examine the organisational response to the TB epidemic across London. Findings highlighted the large number of organisations involved in the TB 'system', with different dynamics and influences at different levels within the system (pan-London vs local authority). Concludes that complexity theory is a useful perspective to consider organisational response, but also requires understanding of the wider organisational and policy context.
Verweij and Gerrits (2013): Argues that a complex systems approach is required for evaluating infrastructure projects and draws upon Byrne and Ragin to highlight the use of case-comparison and Qualitative Comparative Analysis as the basis for understanding combinations of causal conditions in complex systems.
Walker (2007): Provides a thought experiment of how a UK evaluation of a local employment initiative would have been evaluated if informed by complexity ideas, and contrasts this with the actual evaluation.
Westhorp (2012): Argues that realist and complexity theory traditions in evaluation methodology are compatible, with complexity theory providing a theoretical lens about how complex systems behave, upon which evaluation subject relevant substantive theories can be layered.


Because of the open nature of complex systems, boundaries are constructs, with decisions of inclusion and exclusion reflecting positions of actors involved in boundary definitions (Lessard, 2007; Munda, 2004). For example, both Rametsteiner and Weiss (2006) and Matheson, Dew, and Cumming (2009) included national level policy makers within the systems under study and identified connections between local actors and national level policy makers as important to understanding the ability of some individuals to have system wide impact. A narrower focus on the local system would not have allowed for interaction with the national level to be identified, and likely reflects the interest of evaluators and others defining evaluation scope.

2.2. Attractors, emergence and other complexity concerns

Complexity theory includes a number of concepts that describe how complex systems develop and behave over time. Two concepts of central concern are emergence and attractor states (Byrne & Callaghan, 2014). Emergent properties of systems are generated through the operation of the system as a whole and cannot be identified through examining individual system parts. Attractor states depict a pattern of system behaviour and represent stability, with a change in attractor state representing a qualitative shift in the system, with likely impacts on emergent phenomena (Room, 2011). Complexity terms including emergence and attractors stand out in their absence from many of the papers that discuss evaluation from a complexity frame. In a review of health promotion evaluation applying a CAS perspective, Kania et al. (2013) found few evaluations that incorporated all nine indicators of a CAS approach they developed, and suggest that opening an evaluation to emergence and adaptation challenges the role of evaluating predetermined goals. Morell et al. (2010) and Marchal, Van Belle, De Brouwere, and Witter (2013) both suggest that the concepts of 'attractors' and 'emergence' provide a framework to understand both stability and change within complex systems. Of the papers that do consider attractors and emergence, keeping a holistic view of the system over long time periods is seen as important (Hawe, Bond, & Butler, 2009; Haynes, 2008). For example, Hawe et al. (2009, p. 97) state that 'emergent properties of change processes might be lost if evaluators localise their attention to lower-level micro-phenomena and fail to see the bigger picture'.
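Although none of the reviewed papers present it this way, the behaviour that the terms 'attractor' and 'non-linearity' describe can be illustrated with a standard toy model, the logistic map. The sketch below is purely illustrative; the parameter values are chosen only to show how a small change in one parameter can shift a system between qualitatively different long-run patterns.

```python
# A toy illustration of attractor states and non-linearity using the logistic
# map x[t+1] = r * x[t] * (1 - x[t]). Small changes in r move the system
# between a stable point, a repeating cycle, and apparently erratic behaviour.
def trajectory(r, x0=0.4, burn_in=200, keep=6):
    x = x0
    for _ in range(burn_in):          # let the system settle
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):             # record the long-run pattern
        x = r * x * (1 - x)
        tail.append(round(x, 3))
    return tail

for r in (2.8, 3.2, 3.9):
    # 2.8: settles to a single value; 3.2: a two-value cycle; 3.9: no simple pattern
    print(r, trajectory(r))
```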


2.3. Defining the appropriate level and unit of analysis

Complex systems are made up of a diverse range of components, including individuals, organisations, physical resources and other complex systems (Byrne, 2011). Complex systems also show repeated patterns at different levels (O'Sullivan, 2004; Room, 2011). Whilst emergence is a system level property, to understand the system changes that lead to emergence, a view of the changes within the system, and a shifting focus between micro- and macro-views of the system, are required (Buijs, Eshuis, & Byrne, 2009). Within the reviewed literature there is a clear call for evaluation to focus upon multiple levels, whilst also noting the practical challenges this creates, which often mean a pragmatic focus upon a single level. Practically, Blackman et al. (2013) suggest that utilising the system level contained within the policy intervention, such as local authority areas, is legitimate. Barnes et al. (2003) discuss theories of change as useful when evaluating local initiatives, yet limited for drawing evaluative conclusions at national level from multiple local initiatives. Developing multiple local evaluative system descriptions does not identify or explain emergent outcomes at higher levels of aggregation. Byrne (2013) and Blackman et al. (2013) would suggest that comparing multiple cases would allow for understanding of the mechanism and context interaction that provides indications of macro-level influences. In contrast, Haynes (2008) outlines a method focused on identifying emergent outcomes at an aggregated level by interpreting quantitative monitoring data over time against a storyline of policy changes. This method does not consider emergent outcomes at sub-national level, for example whether there has been a change in inequality for certain groups, or local adaptation within the aggregate policy storyline. Trenholm and Ferlie (2013) examined the management of TB within London, explicitly captured both pan-London and local authority levels, and found quite different dynamics influencing system responses at each level. Understanding the dynamics between levels of a system is suggested by Schensul (2009) as the focus of analysis, whilst Callaghan (2008, p. 404) recognises the need to 'understand both the dynamics of system change over time, at the macro- and micro-level, and also to explain how that occurs through the meaningful action of individuals in the local setting'.

Radej (2011) argues that assessment of the overall positive or negative impact of an intervention can only be made through aggregation of micro-level evaluative assessments. That is, through aggregation a more holistic understanding of the system and emergent system behaviour can be derived. However, a holistic understanding cannot be gained through simple aggregation of micro-level variables, as emergent system outcomes are greater than the sum of system parts. Radej (2011) proposes a meso-level analysis, where two or more domains that are weakly incommensurate are combined. An example here may be considering employment numbers and occupation trends together as a way of considering labour policies. In a hypothetical example, an increase in numbers employed can be assessed in relation to the types of jobs (sector, manual, skilled, part-time, etc.). While simple growth in the number of jobs may indicate only a transition within a stable cyclical attractor state (job numbers can go up and down over time without changing the nature of the employment system), a change in the type of jobs being created may signal an emergent property. If a macro-indicator is used, such as labour force participation rates, then the nature of the attractor state shift may not be identifiable. In discussing economic evaluation, Brousselle and Lessard (2011) suggest that the complexity of findings may be better presented to decision-makers as disaggregated findings, rather than single cost-benefit or value-for-money calculations. They identify cost-consequence tables as a useful method.
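The hypothetical labour market example above can be made concrete with a small calculation. The sketch below uses invented figures to show how a macro indicator (total jobs) can remain flat while the meso-level composition of employment shifts, which is the kind of qualitative change Radej's meso-level aggregation is intended to surface.

```python
# Illustrative sketch with hypothetical figures: a macro indicator (total jobs)
# can hide a compositional (meso-level) shift in the employment system.
import pandas as pd

jobs = pd.DataFrame(
    {"year":     [2010, 2010, 2013, 2013],
     "job_type": ["manual", "skilled", "manual", "skilled"],
     "count":    [600, 400, 400, 600]})   # invented figures

macro = jobs.groupby("year")["count"].sum()                       # total employment per year
meso = jobs.pivot(index="year", columns="job_type", values="count")
share = meso.div(meso.sum(axis=1), axis=0)                        # composition of employment

print(macro)   # identical totals (1000 and 1000): the macro view suggests stability
print(share)   # the manual share falls from 0.6 to 0.4: a qualitative shift is visible
```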

2.4. Timing of evaluations

Two implications for the timing of evaluations are evident. First, non-linear interactions and the potential for sudden system transformation suggest we cannot predict when the effects of an intervention will present. Therefore long evaluative time frames may be required (Dattée & Barlow, 2010; Hawe et al., 2009; Haynes, 2008; Marchal et al., 2013). Second, evaluation should, if possible, occur concurrently alongside programme development and implementation (Adam et al., 2012; Barnes et al., 2003). Within his macro-focussed method, Haynes (2008) used a 27 year period to identify system level changes linked to a corresponding 'storyline' of policy developments. Long timeframes pose a challenge to the question of what should be evaluated. Long timeframes also suggest that evaluative activity needs to be ongoing and that the line between evaluation and monitoring may be blurred. Attribution of outcomes to specific interventions becomes more complicated over time, with the number and variation of local adaptations, national level policy changes, and social and economic contextual changes likely to increase. Several authors point to the role of evaluation for understanding local adaptations and feeding back into implementation processes (Barnes et al., 2003; Burns,


2006; Olney, 2005; Simpson et al., 2013). Given the increased complication of attributing outcomes to interventions as timeframes lengthen, a focus on local adaptations may be more immediate and relevant to current implementation decisions and therefore provide a more tangible focus for evaluation.

2.5. Participatory methods

From a complexity frame, participatory methods have been used to: gather perspectives of actors across the system to develop system descriptions (Mason & Barnes, 2007; Rothwell et al., 2010; Verweij & Gerrits, 2013); understand how interventions are adapted at the local level (Barnes et al., 2003; Callaghan, 2008); and make explicit different value claims of actors across the system (Callaghan, 2008; Lessard, 2007; Nordtveit, 2010; Olney, 2005). Weiss (1998) identified a continuum of participatory approaches, from stakeholder evaluation through to empowerment evaluation. Stakeholder evaluation utilises stakeholders to help shape the evaluation questions and interpret data. At the empowerment evaluation end of the continuum, those involved with a programme are supported to conduct the evaluation, aiming to empower the individuals and organisations through organisational learning. Methods associated with a complexity frame span this continuum. For example, in studying attempts to address health inequalities in the UK National Health Service, Blackman, Wistow, and Byrne (2011) applied Qualitative Comparative Analysis (QCA). Stakeholders from across the NHS sites studied were involved in designing data collection and defining variables that went into the QCA analysis, as well as interpreting analysis results. Towards the empowerment end, Burns (2006) utilises an action research approach to evaluating system change through the Communities First programme in Wales.

2.6. Case study and comparison designs

Various forms of case study design are discussed within the complexity literature (Anderson et al., 2005; Barnes et al., 2003; Blackman et al., 2013; Burgoyne, 2010; Byrne, 2005; Dyson & Todd, 2010; Matheson et al., 2009). The advantages of a case study design are identified as the ability to develop a detailed understanding of a system (or limited number of systems), in line with complexity theory concepts. For example: collecting information on the system history and initial conditions at the time of intervention; understanding horizontal complexity (organisations and actors that make up the system at a local level), as well as the interaction between horizontal and vertical complexities (the place of national level actors within local systems); and identifying local adaptation. Matheson et al. (2009) identify initial conditions (the level of health and social inequality within case study communities) and differences in access to central government actors between case study communities as important for understanding different experiences and activities between cases. Ethnographic methods within case study designs are utilised by Schensul (2009) and Hoffer et al. (2009), providing a detailed study of the components within the systems under study and their interaction. While Hoffer et al. (2009) use the ethnographic study to develop an agent-based model to test the effect of possible mechanisms on emergent outcomes, Schensul (2009) suggests the use of case-comparison methods, including QCA, also discussed by Byrne (2013), Blackman et al. (2013) and Verweij and Gerrits (2013).
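For readers unfamiliar with QCA, the deliberately simplified sketch below conveys the underlying case-comparison logic: cases are coded on a small set of conditions and an outcome, and configurations of conditions are examined for how consistently they co-occur with the outcome. It is not the formal QCA minimisation procedure described in the cited papers, and the condition names and data are hypothetical.

```python
# A simplified, crisp-set illustration of case-comparison logic (not the full
# QCA minimisation procedure). Condition names and data are hypothetical.
import pandas as pd

cases = pd.DataFrame(
    {"case":             ["A", "B", "C", "D", "E", "F"],
     "dedicated_team":   [1, 1, 0, 1, 0, 0],
     "national_support": [1, 0, 1, 1, 0, 1],
     "outcome_achieved": [1, 0, 1, 1, 0, 0]})

conditions = ["dedicated_team", "national_support"]
truth_table = (cases.groupby(conditions)["outcome_achieved"]
                    .agg(n_cases="size", consistency="mean")
                    .reset_index())

# Each row is a configuration of conditions; 'consistency' shows how often
# cases with that configuration achieved the outcome.
print(truth_table)
```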
2.7. Multiple and mixed methods

Also widely used from a complexity frame is the use of multiple and mixed methods and data types. Bristow, Farrington, Shaw, and

Richardson (2009) identify multi-method evaluation approaches as a logical response to the challenge of providing contextualised information on what works, while Rog (2012) suggests the variety of context issues evaluators are likely to face requires a portfolio of methodological strategies. In their review of health promotion evaluation and CAS, Kania et al. (2013) identified designs that mixed quantitative with qualitative methods as more able to identify why interventions worked. Complexity suggests no one piece of data will be able to provide a complete view of the system under study. An example is the evaluation of PRISM (Hawe et al., 2009), where time-trend comparison, network analysis, and narratives of practice accessed through interviews and activity diaries of key informants were all used. A drawback of multiple and mixed methods may be the volume of data generated and the requirement for interdisciplinary teams, with associated communication and coordination requirements, which can all push up the scale and cost of evaluations (Marchal et al., 2013).

2.8. Layering theory to guide evaluation

A notable feature of several articles reviewed was the utilisation of a complexity frame of reference as only one of multiple theoretical strands. A number of authors are also looking at integrating wider social science and evaluation theory to provide the bridge between a complexity theory ontological position and epistemological certainty. Approaches such as realism (Westhorp, 2012), new institutional theories (Blackman et al., 2010), Deweyan pragmatism (Sanderson, 2009) and Strauss's concept of negotiated order (Callaghan, 2008) are all discussed. What these have in common is a way of guiding consideration of different levels of a system and micro–micro interactions. Westhorp (2012) provides a set of complexity theory concepts that can be used to identify theories to guide evaluation and provide greater explanatory power. She suggests that multiple theories can be nested for explanation at multiple levels of a system. Examples of theories used this way in other articles reviewed include Trenholm and Ferlie's (2013) identification of New Public Management as having an influence within the TB system they studied, while Morell et al. (2010) drew upon Rogers' theory of diffusion of innovation to help organise data and create parameters for an agent-based model. The use of theory can aid in decisions regarding the appropriate level of analysis, defining system boundaries and timing of evaluations.

3. Conclusion

This paper has sought to identify a range of implications of a complexity frame of reference for policy and programme evaluation. Though restricted to a small number of papers in the academic literature, some common themes have been identified. The themes can be broken into two groups. The first group identifies implications of applying complexity theory for defining the purpose, scope and relevant units of analysis for evaluation activity. The second group of themes considers method implications of applying a complexity frame of reference, including the use of case studies, mixed and participatory methods. Across the reviewed literature, there was more consistency in terms of the second group of method related themes, with variable explicit application of complexity theory concepts.

In part, the variable uptake of complexity theory concepts could reflect that complexity theory is not one coherent theoretical body of thought. Richardson and Cilliers (2001) identify three broad applications of complexity: new reductionism; soft complexity; and complexity thinking. They describe the new reductionism school as attempting to mathematically model complex systems to find the few simple rules that govern system behaviour. This has also been termed restricted complexity (Buijs et al., 2009). While a few articles


included non-linear modelling of complex systems (e.g. system dynamics and agent-based models), they were not of a type strictly concerned with identifying system rules. The second complexity school described by Richardson and Cilliers is soft complexity, or the uncritical use of a complexity metaphor. Several of the reviewed articles that have discussed complexity theory or CAS without detailing how complexity concepts are being utilised may fall into this school. The third school, labelled complexity thinking, takes the understanding of how complex systems behave over time as a series of ontological positions upon which epistemological approaches are considered. The reviewed articles that do explicitly discuss complexity concepts may be coming from this school. The reviewed literature, however, showed that the epistemological implications of a complexity thinking ontology are not yet fully resolved, with no indication of a widely preferred and established way to work through issues such as defining the appropriate unit of analysis for evaluation. The commonality amongst the second group of themes suggests that evaluation practice may be usefully progressing with broad categories of method in spite of wider epistemological debates. These evaluations may well be complexity theory consistent, even if a number of aspects are left ill defined.

The themes identified in this article may provide a basic checklist for an evaluator interested in applying a complexity frame of reference. Whilst providing a broad configuration of method (case study design, mixed and participatory methods, developing a view of the system over time), the themes also point to a number of design considerations to be resolved (defining system level and unit of analysis, appropriate timeframe for evaluation and explicit application of complexity theory concepts).

As noted above, much of the evaluation literature is somewhat hidden in working papers, conference presentations and client reports. For this reason it is to be expected that a range of examples of complexity theory informed evaluations have been missed from this review. The themes identified should not be set in stone, but be subject to critical reflection and development and used as a starting point for discussion. For example, three books, outside the scope of this review, have recently been published that consider use of complexity theory and Complex Adaptive Systems science for evaluation and the social sciences (Byrne & Callaghan, 2014; Morell, 2010; Patton, 2011). The themes identified in this review can be seen within these books. For example, all three books layer complexity theory with other theoretical perspectives. Byrne and Callaghan (2014) utilise complexity theory alongside critical realism as well as the relational sociology of Bourdieu, Strauss's notion of negotiated order and Freire's ideas of praxis. Patton explicitly utilises CAS as 'sensitising concepts' and situates Developmental Evaluation within his broader approach of utilisation focussed evaluation (Patton, 2011). Morell (2010) draws upon CAS in thinking about surprise in evaluation, but also applies ideas regarding innovation in organisational settings and the life-cycle of interventions.

All three books also provide guidance on developing an understanding of the system under study. Byrne and Callaghan (2014) place importance on case-comparison methods, where the system under study is considered a case. While aspects of programmes help define the boundaries of the case (system) under study (e.g. schools, households, communities), the open nature of complex systems also requires an active process of 'casing'. Relevant questions for understanding systems under study from a Developmental Evaluation perspective include asking about the nature of interrelationships within systems, their structure and processes, as well as patterns of outcomes (Patton, 2011). Complex system boundaries are socially constructed, so we should be asking about what systems are being targeted for change


and what 'change' means to the various people involved. For Morell (2010), understanding the system is for the purpose of anticipating the likelihood of 'unforeseeable' surprises cropping up in evaluation. Informed by several CAS concepts, Morell suggests working through a set of questions at the start of the evaluation with stakeholders to work out how much surprise to expect and to manage expectations of the intervention and the evaluation. Thus participation is built into the evaluation from the start, and a close relationship with stakeholders throughout the evaluation life-cycle is part of an 'agile' evaluation. In a similar way, developmental evaluators (ideally) are part of the design and implementation team of the evaluation. While Byrne and Callaghan get quite specific about the range of methods they see as consistent with a case-comparison approach to understanding causality in complex systems, neither Morell nor Patton places emphasis on case comparison, and both steer away from specific method guidance, with Patton advocating 'situational responsiveness'.

As with the three books discussed above, specific frameworks and guidance for applying a complexity frame of reference are available, and perhaps more can be expected given the apparent interest in the field shown by the increasing number of applied journal articles. Such frameworks provide guidance in navigating options and tensions in the eight themes identified here, while the themes may be useful in comparing complexity framed approaches.

Acknowledgements

This project is supported by the Marsden Fund Council from Government funding, administered by the Royal Society of New Zealand (MAU1107).

References

Adam, T., Hsu, J., De Savigny, D., Lavis, J. N., Rottingen, J. A., & Bennett, S. (2012). Evaluating health systems strengthening interventions in low-income and middle-income countries: Are we asking the right questions? Health Policy and Planning, 27(Suppl. 4), iv9–iv19.
Anderson, R. A., Crabtree, B. F., Steele, D. J., & McDaniel, R. R., Jr. (2005). Case study research: The view from complexity science. Qualitative Health Research, 15(5), 669–685. http://dx.doi.org/10.1177/1049732305275208
Barnes, M., Matka, E., & Sullivan, H. (2003). Evidence, understanding and complexity: Evaluation in non-linear systems. Evaluation, 9(3), 265–284.
Blackman, T., Hunter, D., Marks, L., Harrington, B., Elliott, E., Williams, G., et al. (2010). Wicked comparisons: Reflections on cross-national research about health inequalities in the UK. Evaluation, 16(1), 43–57.
Blackman, T., Wistow, J., & Byrne, D. (2011). A qualitative comparative analysis of factors associated with trends in narrowing health inequalities in England. Social Science and Medicine, 72(12), 1965–1974.
Blackman, T., Wistow, J., & Byrne, D. (2013). Using qualitative comparative analysis to understand complex policy problems. Evaluation, 19(2), 126–140.
Boustani, M. A., Munger, S., Gulati, R., Vogel, M., Beck, R. A., & Callahan, C. M. (2010). Selecting a change and evaluating its impact on the performance of a complex adaptive health care delivery system. Clinical Interventions in Aging, 5, 141–148.
Bristow, G., Farrington, J., Shaw, J., & Richardson, T. (2009). Developing an evaluation framework for crosscutting policy goals: The accessibility policy assessment tool. Environment and Planning A, 41(1), 48–62.
Brousselle, A., & Lessard, C. (2011). Economic evaluation to inform health care decision-making: Promise, pitfalls and a proposal for an alternative path. Social Science and Medicine, 72(6), 832–839.
Buijs, J.-M., Eshuis, J., & Byrne, D. (2009). Approaches to researching complexity in public management. In G. A. Teisman, van Buuren, & L. Gerrits (Eds.), Managing complex governance systems (pp. 37–55). New York: Routledge.
Burgoyne, J. G. (2010). Evaluating action learning: A critical realist complex network theory approach. Action Learning: Research and Practice, 7(3), 239–251.
Burns, D. (2006). Evaluation in complex governance arenas: The potential of large system action research. In B. Williams & I. Imam (Eds.), Systems concepts in evaluation: An expert anthology (pp. 181–196). Point Reyes, CA: American Evaluation Association.
Byrne, D. (2005). Complexity, configurations and cases. Theory, Culture & Society, 22(5), 95–111.
Byrne, D. (2011). Applying social science: The role of social research in politics, policy and practice. Bristol: The Policy Press.
Byrne, D. (2013). Evaluating complex social interventions in a complex world. Evaluation, 19(3), 217–228. http://dx.doi.org/10.1177/1356389013495617


Byrne, D., & Callaghan, G. (2014). Complexity theory and the social sciences: The state of the art. Oxon: Routledge. Cabrera, D., Colosi, L., & Lobdell, C. (2008). Systems thinking. Evaluation and Program Planning, 31(3), 299–310. Callaghan, G. (2008). Evaluation and negotiated order: Developing the application of complexity theory. Evaluation, 14(4), 399–411. Capra, F. (2005). Complexity and life. Theory Culture Society, 22(5), 33–44 http:// dx.doi.org/10.1177/0263276405057046 Datte´e, B., & Barlow, J. (2010). Complexity and whole-system change programmes. Journal of Health Services Research and Policy, 15(Suppl. 2), 19–25. Dickinson, H. (2006). The evaluation of health and social care partnerships: An analysis of approaches and synthesis for the future. Health and Social Care in the Community, 14(5), 375–383. Dixon-Woods, M., Agarwal, S., Jones, D., Young, B., & Sutton, A. (2005). Synthesising qualitative and quantitative evidence: A review of possible methods. Journal of Health Services Research and Policy, 10(1), 45–53. Dyson, A., & Todd, L. (2010). Dealing with complexity: Theory of change evaluation and the full service extended schools initiative. International Journal of Research and Method in Education, 33(2), 119–134. Eppel, E., Matheson, A., & Walton, M. (2011). Applying complexity theory to New Zealand public policy: Principles for practice. Policy Quarterly, 7(1), 48–55. Fleischhacker, S. (2007). Food fight: The battle over redefining competitive foods. The Journal of School Health, 77(3), 147. Forss, K., Marra, M., & Schwartz, R. (Eds.), Evaluating the complex: Attribution, contribution, and beyond (Vol. 18,). New Brunswick: Transaction Publishers. Gare, A. (2000). Systems theory and complexity introduction. Democracy and Nature, 6(3), 327–339. Goldstein, J. (1999). Emergence as a construct: History and issues. Emergence, 1(1), 49– 72. Hawe, P., Bond, L., & Butler, H. (2009). Knowledge theories can inform evaluation practice: What can a complexity lens add? New Directions for Evaluation, 2009(124), 89–100 http://dx.doi.org/10.1002/ev.316 Hawe, P., Shiell, A., Riley, T., & Gold, L. (2004). Methods for exploring implementation variation and local context within a cluster randomised community intervention trial. Journal of Epidemiology and Community Health, 58(9), 788–793. Haynes, P. (2008). Complexity theory and evaluation in public management. Public Management Review, 10(3), 401–419. Hoffer, L. D., Bobashev, G., & Morris, R. J. (2009). Researching a local heroin market as a complex adaptive system. American Journal of Community Psychology, 44(3), 273– 286. Israel, N., & Wolf-Branigin, M. (2011). Nonlinearity in social service evaluation: A primer on agent-based modeling. Social Work Research, 35(1), 20–24. Kania, A., Patel, A. B., Roy, A., Yelland, G. S., Nguyen, D. T. K., & Verhoef, M. J. (2013). Capturing the complexity of evaluations of health promotion interventions: A scoping review. Canadian Journal of Program Evaluation, 27(1), 65–91. Lessard, C. (2007). Complexity and reflexivity: Two important issues for economic evaluation in health care. Social Science and Medicine, 64(8), 1754–1765. Levy, D. T., Cho, S. I., Kim, Y. M., Park, S., Suh, M. K., & Kam, S. (2010). SimSmoke model evaluation of the effect of tobacco control policies in Korea: The unknown success story. American Journal of Public Health, 100(7), 1267–1273. Maani, K. E., & Cavana, R. Y. (2000). Systems thinking and modelling: Understanding change and complexity. 
Auckland: Pearson Education New Zealand Limited. Marchal, B., Van Belle, S., De Brouwere, V., & Witter, S. (2013). Studying complex interventions: Reflections from the FEMHealth project on evaluating fee exemption policies in West Africa and Morocco. BMC Health Services Research, 469. Mason, P., & Barnes, M. (2007). Constructing theories of change: Methods and sources. Evaluation, 13(2), 151–170. Matheson, A., Dew, K., & Cumming, J. (2009). Complexity, evaluation and the effectiveness of community-based interventions to reduce health inequalities. Health Promotion Journal of Australia, 20(3), 221–226. Mays, N., Pope, C., & Popay, J. (2005). Systematically reviewing qualitative and quantitative evidence to inform management and policy-making in the health field. Journal of Health Services Research & Policy, 10, S6. McLean, R. M., Hoek, J. A., Buckley, S., Croxson, B., Cumming, J., Ehau, T. H., et al. (2009). ‘‘Healthy eating – Healthy action’’: Evaluating New Zealand’s obesity prevention strategy. BMC Public Health, 9. http://www.biomedcentral.com/1471-2458/9/452 Midgley, G. (2008). Systems thinking, complexity and the philosophy of science. E:CO Emergence: Complexity and Organization, 10(4), 55–73. Morc¸o¨l, G. (2012). A complexity theory for public policy. New York: Routledge. Morell, J. A. (2010). Evaluation in the face of uncertainty: Anticipating surprise and responding to the inevitable. New York: Guildford Press. Morell, J. A., Hilscher, R., Magura, S., & Ford, J. (2010). Integrating evaluation and agentbased modeling: Rationale and an example for adopting evidence-based practices. Journal of MultiDisciplinary Evaluation, 6(14), 32–57. Munda, G. (2004). Social multi-criteria evaluation: Methodological foundations and operational consequences. European Journal of Operational Research, 158(3), 662–677. Nordtveit, B. H. (2010). Development as a complex process of change: Conception and analysis of projects, programs and policies. International Journal of Educational Development, 30(1), 110–117.

Olney, C. A. (2005). Using evaluation to adapt health information outreach to the complex environments of community-based organizations. Journal of the Medical Library Association, 93(4 Suppl.), S57–S67.
O'Sullivan, D. (2004). Complexity science and human geography. Transactions of the Institute of British Geographers, 29(1), 282–295.
Parsons, B. A. (2007). The state of methods and tools for social systems change. American Journal of Community Psychology, 39(3–4), 405–409.
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: The Guilford Press.
Plsek, P. E., & Greenhalgh, T. (2001). Complexity science: The challenge of complexity in health care. BMJ, 323(7313), 625–628. http://dx.doi.org/10.1136/bmj.323.7313.625
Radej, B. (2011). Synthesis in policy impact assessment. Evaluation, 17(2), 133–150.
Rametsteiner, E., & Weiss, G. (2006). Assessing policies from a systems perspective – Experiences with applied innovation systems analysis and implications for policy evaluation. Forest Policy and Economics, 8(5), 564–576.
Richardson, K. A., & Cilliers, P. (2001). What is complexity science? A view from different directions. Emergence, 3(1), 5–23.
Richardson, K. A., Gregory, W. J., & Midgley, G. (2007). Editorial introduction to the special double issue on complexity thinking and systems theory. Emergence: Complexity & Organization, 9(1/2), vi–viii.
Rickles, D., Hawe, P., & Shiell, A. (2007). A simple guide to chaos and complexity. Journal of Epidemiology & Community Health, 61, 933–937.
Rog, D. J. (2012). When background becomes foreground: Toward context-sensitive evaluation practice. New Directions for Evaluation, 2012(135), 25–40.
Rogers, P. J. (2008). Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation, 14(1), 29–48.
Room, G. (2011). Complexity, institutions and public policy. Cheltenham: Edward Elgar Publishing.
Rothwell, H., Shepherd, M., Murphy, S., Burgess, S., Townsend, N., & Pimm, C. (2010). Implementing a social-ecological model of health in Wales. Health Education, 110(6), 471–489.
Sanderson, I. (2000). Evaluation in complex policy systems. Evaluation, 6(4), 433.
Sanderson, I. (2009). Intelligent policy making for a complex world: Pragmatism, evidence and learning. Political Studies, 57(4), 699–719.
Schensul, J. J. (2009). Community, culture and sustainability in multilevel dynamic systems intervention science. American Journal of Community Psychology, 43(3–4), 241–256.
Schwartz, R., & Garcia, J. (2011). Intervention Path Contribution Analysis (IPCA) for complex strategy evaluation: Evaluating the smoke-free Ontario strategy. In K. Forss, M. Marra, & R. Schwartz (Eds.), Evaluating the complex: Attribution, contribution and beyond (pp. 187–207). New Brunswick: Transaction Publishers.
Shiell, A., Hawe, P., & Gold, L. (2008). Complex interventions or complex systems? Implications for health economic evaluation. BMJ, 336(7656), 1281–1283. http://dx.doi.org/10.1136/bmj.39569.510521.AD
Simpson, K. M., Porter, K., McConnell, E. S., Colón-Emeric, C., Daily, K. A., Stalzer, A., et al. (2013). Tool for evaluating research implementation challenges: A sense-making protocol for addressing implementation challenges in complex research settings. Implementation Science, 8(1).
Ssengooba, F., McPake, B., & Palmer, N. (2012). Why performance-based contracting failed in Uganda – An "open-box" evaluation of a complex health system intervention. Social Science and Medicine, 75(2), 377–383.
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the range of designs and methods for impact evaluations (Working Paper No. 38). London: Department for International Development. Retrieved from http://www.dfid.gov.uk/Documents/publications1/design-method-impact-eval.pdf
Stewart, J., & Ayres, R. (2001). Systems theory and policy practice: An exploration. Policy Sciences, 34, 94–97.
Trenholm, S., & Ferlie, E. (2013). Using complexity theory to analyse the organisational response to resurgent tuberculosis across London. Social Science and Medicine, 93, 229–237.
Verweij, S., & Gerrits, L. M. (2013). Understanding and researching complexity with qualitative comparative analysis: Evaluating transportation infrastructure projects. Evaluation, 19(1), 40–55. http://dx.doi.org/10.1177/1356389012470682
Vincent, R. (2012). Insights from complexity theory for evaluation of development action: Recognising the two faces of complexity. London: PANOS/IKM Emergent Research Programme. Retrieved from http://wiki.ikmemergent.net/files/1203-IKM_Emergent_Working_Paper_14-Complexity_Theory-March_2012.pdf
Walker, R. (2007). Entropy and the evaluation of labour market interventions. Evaluation, 13(2), 193–219.
Walton, M., Signal, L., & Thomson, G. (2013). Public policy to promote healthy nutrition in schools: Views of policymakers. Health Education Journal, 72(3), 283–291. http://dx.doi.org/10.1177/0017896912442950
Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River: Prentice Hall.
Westhorp, G. (2012). Using complexity-consistent theory for evaluating complex systems. Evaluation, 18(4), 405–420. http://dx.doi.org/10.1177/1356389012460963

Mat Walton is a lecturer in the School of Health and Social Services, Massey University, New Zealand. His research focuses on public health policy design and evaluation, the application of complexity theory, and policy interaction.
