Evaluation and Program Planning 48 (2015) 31–46


The development of education indicators for measuring quality in the English-speaking Caribbean: How far have we come?

Anica G. Bowe*
Oakland University, Rochester MI 48309, United States

* Tel.: +1 6128020144. E-mail address: [email protected]
http://dx.doi.org/10.1016/j.evalprogplan.2014.08.008

Article history: Received 21 December 2013; Received in revised form 8 August 2014; Accepted 17 August 2014; Available online 16 September 2014.

Keywords: Education evaluation; Indicators; Special interest groups; English-speaking Caribbean

Abstract

Education evaluation has become increasingly important in the English-speaking Caribbean, in response to the need to assess the progress of four regional initiatives aimed at improving the equity, efficiency, and quality of education. Both special interest groups and local evaluators have been responsible for assessing the progress of education and providing an overall synthesis and summary of what is taking place in the English-speaking Caribbean. This study employed content analysis to examine the indicators used in these education evaluation studies since the declaration of the Caribbean Plan for Action 2000–2015 to determine the indicators' appropriateness to the Caribbean context in measuring education progress. Findings demonstrate that the English-speaking Caribbean has made strides in operationalizing quality input, process, and output indicators; however, quality outcome indicators beyond test scores are yet to be realized in a systematic manner. This study also compared the types of collaborative partnerships that special interest groups and local evaluators used in conducting evaluation studies and pinpointed the one that appears most suitable for special interest groups in this region. © 2014 Elsevier Ltd. All rights reserved.

1. Introduction

1.1. The rise of education evaluation in the English-speaking Caribbean

Education evaluation has become increasingly important in the English-speaking Caribbean over the past 20 years. This rise in education evaluation studies has been due to Caribbean governments and scholars desiring to assess the outcomes of four key education initiatives that have taken place (Miller, 2000). These initiatives are the 1940s Universal Secondary Education (USE) reform; the 1990 Education for All (EFA) initiative; the 1997 Caribbean Plan of Action for Early Childhood Education, Care and Development initiative; and the 2001 Organization of Eastern Caribbean States (OECS) education development program. The 1940s USE began as part of the adult suffrage movement for equity in education. The remaining three initiatives were responses to the Framework for Action goals outlined by the United Nations conferences in Jomtien, Thailand, in 1990 and later in Dakar, Senegal, in 2000. All four initiatives in various capacities focus on improving access to education, quality of education, human capital, and institutional capacity.


They have also given rise to successive initiatives such as Foundations for the Future 1991–2000, Pillars for Partnership and Progress 2000–2010, and the OECS Education Sector Strategy 2012–2021 (www.oecs.org), among others. More background on these initiatives can be found in the evaluation reports and studies of Leacock (2009), Miller (2000, 2009), the World Bank (2002), and at www.oecs.org.

The evaluations of these initiatives have been overseen by two main entities: special interest groups and local evaluators. Special interest groups provide financial support to this region and work collaboratively with task force units housed in local government ministries to conduct evaluations of these initiatives (Caribbean Community Secretariat, n.d.; Miller, 2000). The most visible of these special interest groups are the World Bank, the United Nations Educational, Scientific, and Cultural Organization (UNESCO), the United Nations Children's Fund (UNICEF), and the Inter-American Development Bank (IDB). Others include, but are not limited to, the US Agency for International Development (USAID) and the Department for International Development (DFID).

Local evaluators also partner with the governments to evaluate these initiatives. The most visible of these are the Caribbean Development Bank and scholars associated with local universities. The Caribbean Development Bank conducts evaluations through partnership with task force units, local field researchers, and consultant teams (as noted in its Country Assessment Reports, caribank.org). Conversely, scholars generally work in partnership with other local or international scholars and evaluation firms.


With the rise in evaluation studies, their sponsorship, and the government expenditure dedicated toward them, it is of interest to examine the extent to which these evaluation studies appropriately measure education progress in this region and, consequently, accurately represent the state of education. This study is designed to address this gap in knowledge regarding the validity and utility of the indicators used in these evaluation studies. By doing so, this study addresses the conclusiveness of what we currently know about education progress in this region.

Miller (2000) is a key evaluation report that summarizes education progress in the English-speaking Caribbean since the 1990 United Nations conference in Jomtien, Thailand. In response to that conference, the Caribbean member states had decided to improve the quality of basic education and increase access to early education and secondary education (Miller, 2000). Miller pointed out that a major limitation of evaluation studies up to that point (that is, between 1990 and 1999) was that the indicators used to measure education progress were primarily quantitative and focused more on access to education at all levels than on the qualitative aspects of education, such as the systematic evaluation of interventions, the physical conditions of primary schools, and teacher professional training. This was a limitation because during that time the English-speaking Caribbean's focus was on quality as well as access. Therefore, according to Miller, the indicators up to that time did not fully measure the priorities of the region.

Just a few months before the 2000 United Nations conference in Dakar, Senegal, the Caribbean Community (CARICOM), which includes the English-speaking Caribbean, updated national and regional education goals to be achieved by 2015; these can be found in the Caribbean Plan for Action 2000–2015 (Caribbean Community Secretariat, n.d.). According to this plan, individual countries were responsible for: establishing early childhood care and education; improving teacher quality; improving technology use in the classroom and establishing a national education monitoring system; tracking the performance and accountability of stakeholders, national investments, and resources; involving civil society in education processes and management; providing inclusive and relevant secondary, tertiary, and life skills education to youth and adults; promoting attitudes and behaviors characteristic of the ideal Caribbean person; and improving the quality of basic education (Caribbean Community Secretariat, n.d.).

The English-speaking Caribbean had also endorsed the six millennium developmental education goals set forth by the Dakar conference, which occurred a few months later (Education Planning Division, n.d.). These goals included early childhood education, equity in education for disadvantaged groups, equitable access to education and life skills for youth and adults, increasing adult literacy, eliminating gender disparities, and improving education quality. Notably, the majority of the goals from the Caribbean Plan for Action and the Dakar conference overlap, which speaks to the congruency of these goals for national strategies.
At the regional level, CARICOM was responsible for establishing a regional education monitoring system, defining measurable benchmarks for literacy, and assisting in developing valid and reliable quantitative and qualitative indicators to measure education progress (Caribbean Community Secretariat, n.d.). It is also important to recognize that a sub-regional community of CARICOM, namely the OECS, developed its own goals for education reform to be met by 2010. These can be found in the document Pillars for Partnership and Progress 2000–2010 (www.oecs.org). These goals were adopted by member states of the OECS and not by the entire CARICOM; therefore, they are not spelled out here.

Thus, since 2000, there has been a need for evaluators to delineate and utilize indicators that measure the above goals. The extent to which evaluators demonstrated awareness of, aligned with, and measured the national and regional goals adopted by CARICOM in the year 2000 forms the crux of this study.

The identification of appropriate indicators for the English-speaking Caribbean hinges partly upon a clear understanding of its education context. More specifically, Neirotti (2012) asserts that an understanding of the sociopolitical context of a nation's development is necessary to fully understand the function of the evaluation taking place. Neirotti describes the sociopolitical context as "trends in the development of a nation, forms in which the state works, the conditions of civil society and its relationship with the state as well as the shaping of public policy" (Neirotti, 2012, p. 9).

In general, the sociopolitical context of Caribbean nations cannot be understood without an understanding of small state theory and post-colonial theory, because these two heavily influence whole system reform there (De Lisle, 2012). Evaluators there ought to consider how small state theory and post-colonial theory apply to this developing region and be prepared to embrace or overcome factors that either promote or impede their work. Within the realm of small state theory and post-colonial theory lie more obscure factors that shape the sociopolitical context of the English-speaking Caribbean. Because of the minuscule sizes of the English-speaking Caribbean nations, these factors remain obscure: this region is often grouped into the larger Latin America and Caribbean community, which often faces very different issues and challenges. Two of these more obscure factors, trends in this region's education history and the condition of civil society, are critical to the practice of evaluation in this region. These factors are presented below to better acquaint the international community, in particular special interest groups who might not be as familiar with the English-speaking Caribbean context, with other aspects of the sociopolitical context of education in this region and to alert it to common misunderstandings about this region that threaten the validity and use of evaluation studies.

1.2. The historical, developmental and theoretical context of education in this region

The English-speaking Caribbean has a rather unique history for a developing region because it provided universal primary education to its citizens decades before the 1990 United Nations Jomtien, Thailand conference (Miller, 2000; Warrican, 2009). It had also achieved gender parity in primary education enrollment by the time of that conference. Only a handful of other developing countries had similar results at that time (Clarke, 2011), but what separates the English-speaking Caribbean from these is that its achievement was a regional accomplishment. By the follow-up United Nations conference in Dakar, Senegal, in 2000, at least half of the countries that make up the English-speaking Caribbean had achieved universal secondary education, and nine years later, in 2009, only one third of them had yet to accomplish this (Miller, 2009; Warrican, 2009).
Finally, the gap in basic education enrollment between the richest and poorest quintiles of citizens, even in the poorer countries of this group, is almost absent (World Bank, 2008, figure 6.1), which again is rare for developing nations. Taken together, this region has a track record of being a forerunner in accomplishing education milestones compared to other developing regions.

The classification scheme imposed upon the English-speaking Caribbean by the World Bank and UNESCO is a potential obstacle to evaluation studies conducted by special interest groups in this region. The World Bank and UNESCO lump the English-speaking Caribbean together with Latin America and the Caribbean.


The Latin America and the Caribbean region is classified as developing, but unlike many developing nations, the English-speaking Caribbean's public spending allocations are closer to those of industrialized countries, and its resource allocation for health and education is higher on average than that of non-Caribbean developing countries (Swaroop, 1996). There is, however, great variation among these countries in terms of expenditure on education, the proportion of students who successfully complete schooling in the expected timeframe (Schrouder, 2008), economy size, and development status (Kouame and Reyes, 2011).

Because the developmental status and education accomplishments of this region are often overlooked or misunderstood, evaluation studies conducted by special interest groups might use indicators that do not sufficiently match the sociopolitical context of education in this region. This was the case for evaluations conducted in the English-speaking Caribbean between 1991 and 1999 (Miller, 2000). For example, by convention, judgments about education systems in developing countries are usually based upon indicators of coverage and efficiency, as evidenced by World Bank and UNESCO reports. These indicators typically include expansion, enrollment, repetition, transition, and school completion rates as well as calculations for expenditures and cost. Beyond coverage and efficiency, there also exist indicators of quality. These are typically input indicators; examples are pupil–teacher ratios, learning resources, classroom environment, and teacher qualifications (Marshall et al., 2012). Miller noted that up to that point, coverage and efficiency indicators, but not quality indicators, were being used. This study contends that continuing to use coverage and efficiency indicators while neglecting quality indicators will do little to advance education progress in the English-speaking Caribbean because the majority of the goals lie beyond these rudimentary measures.

Another challenge facing evaluation studies up to that point was that the culture of monitoring and evaluation in this region was weak. Miller (2000) noted that there were errors in the statistics collected, some public schools did not engage in the mandatory reporting, and interventions were not evaluated. In addition, there was, and still remains, a lack of financial resources to carry out large scale evaluations. Although the English-speaking Caribbean's spending allocations are similar to those of industrialized countries, the majority of these countries have very limited budgets; have economies that are reliant on exports, tourism, foreign investment, and remittance inflows; are currently experiencing high debt; and are particularly susceptible to natural disasters (Kouame and Reyes, 2011).

Taken together, it is arguable that the indicators used in the English-speaking Caribbean up to that point did not match the sociopolitical climate, because the special interest groups who developed these indicators were not aware of past educational accomplishments, nor did governments up to that time have the capacity to conduct self-initiated large scale evaluations. This suggests that up to the time of the Miller (2000) report, this region was especially vulnerable to adopting indicators that were outlined in borrower–lender contracts and did not have the expertise to adequately measure ones that might have been more important to its context.
Whether or not this vulnerability is still present after the Miller report will be addressed in the conclusion of this paper. Vedder's (1994) work further explicates this vulnerability of developing countries and highlights potential concerns when special interest groups define and impose measures of education quality that do not match the goals of that particular developing country. His work also suggests that, compared to special interest groups, local evaluations conducted at the country level tend to be more consistent with the education goals and perceptions of education quality held by the people. Thus, evaluators can paint very different pictures of a region's education status depending on whether the evaluator is local and using measures consistent with local values, or external and imposing its own ideas regarding indicators of progress and quality.


Therefore, depending on who is overseeing the evaluation study, special interest groups or local evaluators, it is possible to get very different pictures of education progress in the English-speaking Caribbean region. It is important, then, that evaluation studies utilize both types of evaluators to determine the appropriateness of claims. The extent to which evaluation reports used both types of evaluators is presented in the findings section.

In light of the sociopolitical context of education in the English-speaking Caribbean, this study draws upon literature examining education indicators in developing and industrialized nations to identify appropriate indicators to measure education progress for the English-speaking Caribbean. This study posits that, since this region has already achieved certain milestones in education development common to industrialized countries and has shifted much of its focus to measuring quality, it is best to draw upon the literature that describes indicators for quality and not just coverage and efficiency.

1.3. Conceptual framework: possible indicators for the English-speaking Caribbean

Various researchers have examined the use of indicators for developing and industrialized countries. In particular, this study draws upon the work of Hanushek (1995), Walberg and Zhang (1998), and Miller, Sen, and Malley (2007) to make a case for possible indicators for the English-speaking Caribbean. A summary of their work is presented, followed by a table portraying their recommendations (Table 1).

Hanushek's (1995) meta-synthesis of indicators for developing and industrialized countries found that many of these indicators display similar effects (positive, negative, or none) regardless of country status. Specifically, teacher education, condition of facilities, and expenditure per pupil more often than not had a statistically significant and positive effect on student outcomes in developing countries. Conversely, other factors such as teacher–pupil ratio, teacher experience, and teacher salary were not as influential. Hanushek's findings for the United States were similar except for the effect associated with facilities. This might be because there is less variation in facilities among public schools in the United States than among public schools in developing regions (Hanushek, 1995). In line with Hanushek's findings, Fuller's (1987) meta-synthesis of the effects of school inputs for developing countries also found that teacher level of tertiary education and various instructional materials (e.g., textbooks, desks, school libraries) were consistently related to student achievement, but others like class size, teacher experience, and teacher salary were not. Thus, Hanushek's findings support the argument that education evaluation studies in the English-speaking Caribbean ought to include indicators for teacher education, condition of facilities, and expenditure per pupil (Table 1). Further, his work suggests that it might serve this region well to draw upon suggested indicators from industrialized countries to guide its efforts in measuring quality.

Much of the work in developing indicators for industrialized countries has been conducted by the Organization for Economic Co-operation and Development (OECD) since the 1980s (Walberg and Zhang, 1998).
By 1995, this organization had developed a total of 635 indicators and categorized them into three blocks: the context of education; the cost, school processes, and resources for education; and the results of education. Walberg and Zhang applied data reduction techniques to the 635 items, which resulted in twelve indicator items (Table 1). A more recent revision of the 635 indicators can be found at http://www.oecd.org/.
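Reducing 635 indicators to twelve summary items is an instance of statistical data reduction. As a rough illustration only, the sketch below uses principal component analysis on synthetic data; the choice of PCA, the scikit-learn calls, and all variable names are this sketch's assumptions, not a reconstruction of Walberg and Zhang's actual procedure.

```python
# Illustrative sketch: reduce a wide country-by-indicator matrix to a handful
# of components, then keep the indicator loading most heavily on each one.
# Data are synthetic; only the dimensions (635 indicators, 12 targets) follow
# the OECD example discussed in the text.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_countries, n_indicators = 30, 635
X = rng.normal(size=(n_countries, n_indicators))  # stand-in for real indicator data

X_std = StandardScaler().fit_transform(X)   # put all indicators on a common scale
pca = PCA(n_components=12).fit(X_std)       # target twelve summary dimensions

# For each component, pick the single indicator with the largest absolute loading.
representative = {
    f"component_{k + 1}": int(np.argmax(np.abs(pca.components_[k])))
    for k in range(pca.n_components_)
}
print(representative)  # column indices of 12 representative indicators
```

The design idea is simply that many correlated indicators can be summarized by a few dimensions, with one concrete indicator retained per dimension for reporting.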


Table 1
Recommended indicators to measure quality.

Hanushek (1995)
- School inputs: condition of facilities; expenditure per pupil; teacher education.

Walberg and Zhang (1998)
- Context of education: perceptions about education and its budget; proportion of 35 to 44 year olds who attended upper secondary school (high school); amount of time spent on teaching lower secondary math (middle school math); enrollment rate of 20 year olds in tertiary education.
- Cost, resources, & school process: expenditure per secondary student; initial central funds of each education level; starting salary of primary school teachers*.
- Results of education/student outcomes: achievement gains in reading between 9 and 14 years of age; reading level of 9 year olds; education level of workers in chemical manufacturing; percentage of population with engineering and architecture degrees; proportion of unemployed youth and young adults.

Miller et al. (2007)
- Population and school enrollment: enrollment in K-12 education; foreign students in postsecondary education.
- Academic performance: fourth grade math and science literacy; fifteen year old math literacy.
- Context for learning: class size*; teacher–pupil ratio*; teacher-reported professional development in math and science; how principals use assessment reports.
- Expenditure for education and education returns: distribution of population by education and income; employment rates; per pupil expenditure; percent of adults who completed higher education by age group and sex; public school teacher salaries*; time spent on math learning.

Note: * corresponds to contradictions in recommended indicators among these researchers.

Almost a decade later, Miller, Sen, and Malley (2007) delineated four categories of education indicators for industrialized countries: population and school enrollment, academic performance, context for learning, and expenditure for education and education returns. The indicators for these categories are also found in Table 1. Interestingly, three of their suggestions (and one by Walberg and Zhang (1998)) contradict Hanushek's findings for class size, teacher salary, and teacher–pupil ratio. That is, Hanushek's work found that, in general, teacher–pupil ratio and teacher salary have nonsignificant effects in developing countries, and he contends that there is little evidence supporting the argument that smaller class sizes are better than larger class sizes in industrialized countries. Due to these contradictions, this study places asterisks next to class size, teacher salary, and teacher–pupil ratio in Table 1 to inform the reader that observable relationships for these three factors ought to be interpreted more cautiously.

In summary, this study uses the recommendations of these researchers in a comparative framework to judge the extent to which these indicators, or ones similar to them, are represented in education evaluation studies in the English-speaking Caribbean after the Miller (2000) report.

1.4. Purpose

This study takes a closer look at the indicators used in education evaluation studies in the English-speaking Caribbean since the Miller (2000) report. It uses the Miller (2000) study as a pivotal landmark and examines whether there was a shift toward the delineation of quality indicators that were aligned with the goals of CARICOM, aligned with suggestions from the literature, and measurable, and that therefore informed on the education progress made. By examining these indicators, this study extends the conversation beyond classifying the indicators as quantitative or qualitative and onto examining whether they were appropriate to the English-speaking Caribbean context. Lastly, this study reports on challenges encountered by the evaluators in carrying out their work, to distinguish between gaps in knowledge due to limitations of the indicators used and gaps due to design flaws in the evaluation studies.

The main research questions that guided this study were:

1. What were the indicators used in these evaluation reports?
   a. To what extent do the indicators align with the goals of the region?
   b. To what extent are the indicators measurable?
2. To what extent were local evaluators and special interest groups consistent in their use of these indicators?
3. To what extent are the indicators aligned with suggestions from the literature summarized in Table 1?

A secondary question that guided this study was:

4. What were the challenges encountered by evaluators that prevented them from answering their research questions?

2. Methodology

2.1. Selection criteria for the evaluation studies
There were three selection criteria for the evaluation studies. The studies had to have been conducted in the English-speaking Caribbean, published during the years 2000–2014, and have had as a focal point the goal of measuring education progress or reform. This date range was selected because such studies followed the Miller (2000) report and thereby captured evaluation efforts from that time to the present. This study defines the English-speaking Caribbean as Anguilla, Antigua & Barbuda, Bahamas, Barbados, Belize, British Virgin Islands, Cayman Islands, Dominica, Grenada, Guyana, Jamaica, Montserrat, St Kitts & Nevis, St Lucia, St Vincent & The Grenadines, Trinidad & Tobago, and Turks & Caicos Islands, because English is the official language of these countries and territories.


The databases searched were Academic Search Premier, Google Scholar, ERIC, and JSTOR, as well as the University of the West Indies, World Bank, UNESCO, UNICEF, Caribbean Development Bank, and IDB websites. Keywords used to identify these studies were various combinations of the words evaluation, education, reports, West Indies, Caribbean, Anglo-Caribbean, English-speaking Caribbean, Eastern Caribbean, as well as the names of individual English-speaking Caribbean countries. These criteria identified 48 evaluation studies; 16 were overseen by the World Bank, UNESCO, UNICEF, and IDB. The remaining 32 were overseen by local entities such as the Caribbean Development Bank and local scholars.

2.2. Analysis

Content analysis was used as the method of analysis. According to Hsieh and Shannon (2005), content analysis is a family of analytical approaches to qualitative data, and these approaches can be classified as conventional, direct, or summative. This study adopted a directed approach to analyzing the data. The goal of a directed approach is to "validate or extend conceptually a theoretical framework or theory. Existing theory or research can help focus the research question. It can provide predictions about the variables of interest or about the relationships among variables, thus helping to determine the initial coding scheme or relationships between codes" (Hsieh & Shannon, 2005, p. 1281).

Content analysis was performed on each evaluation report to identify the type of education indicators used and the challenges encountered in the studies. Hsieh and Shannon (2005) outlined two coding strategies for directed content analysis. The first is to highlight passages containing instances of the particular phenomenon and then code the data using predetermined codes. The second is to begin coding right away without highlighting passages. The second strategy was adopted because the researcher felt confident that relevant text describing indicators and challenges could only be found in certain sections of the evaluation reports (methodology and findings). First, codes for indicators were developed. Next, the indicators were classified into categories suggested by Hanushek (1995), Walberg and Zhang (1998), and Miller, Sen, and Malley (2007). Due to the overlap in these researchers' categories, this study adopted a combination of their suggestions. The resulting categories were Population & School Enrollment; Academic Performance; Context of Education/Context of Learning; Expenditure/Cost and Education Returns; School Processes & Resources; and Community Factors. Second, codes for challenges encountered during the evaluation studies were developed.

3. Findings and discussion

Since 2000, two major efforts have been undertaken to outline quality indicators for the English-speaking Caribbean. The first was conducted in 2000 by a collaboration of local bodies comprising Ministries of Education and the OECS Education Reform Unit (OECS Education Reform Unit, 2002). This collaboration created a Performance Management handbook that instructed schools on how to go about creating education management and information systems (EMIS), and identified 41 indicators to be included in these databases. Unfortunately, the implementation of EMIS in this region remains a struggle. As a result, evaluation studies which purposively include these indicators have yet to be conducted (Cassidy, 2006).
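To make concrete what such an EMIS database might hold, the sketch below shows a minimal, hypothetical indicator table and a monitoring query. The schema, field names, and sample values are invented for illustration; they are not taken from the Performance Management handbook or its 41 indicators.

```python
# Hypothetical sketch of an EMIS record store; the schema and field names are
# invented for illustration and do not come from the OECS handbook.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE emis_indicator (
        country        TEXT,     -- e.g. 'St Lucia'
        school_id      TEXT,     -- national school identifier
        year           INTEGER,  -- academic year
        indicator_name TEXT,     -- one of the agreed indicators
        value          REAL      -- the measured value for that year
    )
""")
conn.execute(
    "INSERT INTO emis_indicator VALUES (?, ?, ?, ?, ?)",
    ("St Lucia", "SL-0042", 2003, "pupil_teacher_ratio", 21.5),
)

# A monitoring query a ministry task force unit might run over the years:
for row in conn.execute(
    "SELECT year, value FROM emis_indicator "
    "WHERE indicator_name = 'pupil_teacher_ratio' ORDER BY year"
):
    print(row)
```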
The second major effort was undertaken through a two-stage process involving UNESCO (Jules & Panneflek, 2000) and the World Bank (di Gropello, 2003). Although they were successful in outlining and measuring quantitative indicators for the region, these authors noted that they struggled to operationalize quality outcome indicators in a manner that could be measured.


Taken together, these two independent efforts demonstrate that the English-speaking Caribbean and special interest groups recognize that quality indicators are a top priority for this region and are in pursuit of this goal.

As noted above, this study identified 48 education evaluation reports (including the di Gropello (2003) and Jules and Panneflek (2000) studies) conducted in this region since the Miller (2000) report. The findings regarding the indicators used in these studies are summarized below. The tables and figures present the types of indicators used, their alignment with regional goals, the extent to which they were measurable, the consistency of use between special interest groups and local evaluators, and the extent of their alignment with suggestions from the literature. The examination of these indicators is followed by the challenges encountered by the evaluators, if any, that prevented them from answering their questions.

3.1.1. Research question 1: indicators used

Tables 2a and 2b outline the evaluation studies, the evaluand, the indicators used, the country and regional goals these indicators measured, and the specific challenges, if any, encountered by each of the studies. Findings demonstrate that the majority of these indicators were indeed measures of quality.

An important aspect to consider when examining quality indicators is whether they represent input, process, output, or outcome indicators. This study adopts the definitions of Fitzpatrick, Sanders, and Worthen (2004): inputs are the facilities, equipment, materials, personnel, etc. needed to run a program; processes are the activities that make up the program; outputs are the direct effects of program activities; and outcomes are the immediate or long term goals of the program. Based on these descriptions, the majority of the quality indicators were input, process, or output indicators. A few quality outcome indicators were outlined, for example, adult illiteracy reduction, assessment of policy to develop adult literacy, and expansion of training in essential skills (di Gropello, 2003), but these were noted by the authors as not measurable. Other quality outcome indicators presently being measured are literacy rates, highest certificate attained by heads of households, and enrollment in tertiary education, as measured by the Caribbean Development Bank (e.g., see KAIRI Consultants Limited, 2000, 2001, 2007a). More recently, the World Bank as well as the IDB have called attention to employer and labor market needs (World Bank, 2008; Office of Evaluation & Oversight, 2014) and school-to-labor-market transition rates (Office of Evaluation & Oversight, 2014). Employer needs, labor market needs, and labor market transition rates are important outcome indicators because they inform on education returns. Further, although the relationship between education attainment and economic viability in this region appears linear and positive (Blom & Hobbs, 2008), the shape and size of this relationship might vary for some countries (e.g., see Näslund-Hadley, Alonzo, & Martin, 2013). Overall, we see that after the Miller (2000) report this region made large strides in outlining quality input, process, and output indicators, though it still has some way to go in outlining quality outcome indicators and putting in place the processes that allow them to be measured.
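The classification and tally behind these findings, labeling each coded indicator with a Fitzpatrick et al. type and counting labels, can be expressed as a simple counting routine. The sketch below is a minimal illustration; the sample entries and labels are invented examples, not the study's actual codebook.

```python
# Minimal sketch of the tally behind Section 3.1.1: each coded indicator
# carries an input/process/output/outcome label, and labels are counted.
# Sample rows are illustrative only.
from collections import Counter

coded_indicators = [
    ("pupil-teacher ratio",           "input"),
    ("teacher training took place",   "process"),
    ("number of schools refurbished", "output"),
    ("literacy rate",                 "outcome"),
    ("enrollment rate",               "output"),
]

tally = Counter(label for _, label in coded_indicators)
for label in ("input", "process", "output", "outcome"):
    print(f"{label:>8}: {tally[label]}")
```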
3.1.2. Research question 1a: alignment with regional goals

Table 2a demonstrates that many of the indicators were aligned with the goals of this region. Fig. 1 further demonstrates that while all 48 studies amply measured both sets of goals, more studies measured millennium developmental goals than Caribbean Plan for Action goals.

[Fig. 1. Percent of studies that had at least one indicator measuring each goal (N = 48 studies).]

The goals that were prioritized in this region were equity in education for disadvantaged groups; early childhood care and development; access to secondary and/or adult education and life skills that was inclusive and relevant; improving teacher quality; and improving literacy. The least measured goals were developing indicators to measure progress, developing benchmarks for literacy, and promoting the ideal Caribbean person. In fact, no study had indicators that measured methods of promoting the ideal Caribbean person, which speaks to the elusiveness of this goal.

Fig. 2 demonstrates that studies conducted by local evaluators were more aligned with millennium developmental goals, whereas those conducted by special interest groups were more aligned with the Caribbean Plan for Action 2000–2015 goals. This is a surprising finding and contradicts Vedder's (1994) argument that special interest groups might be less attuned to the values and goals held by the people.

3.1.3. Research question 1b: to what extent were the indicators measurable?

Table 2a demonstrates that the majority of these quality indicators were defined in a manner that was quantifiable. That is, they were measured as categorical items, numbers, proportions, or percentages. Examples of categorical items were as follows: were school buildings refurbished (yes/no) (World Bank, 2009a, 2009b, 2011); were literacy and numeracy programs established (yes/no) (World Bank, 2009a); did teacher training take place (yes/no) (Dye, Jennings, Lambert, Hunt, & Wein, 2002; Rodrigues, 2000); and does teacher education policy align with teacher education (yes/no) (Wideen, Kanevsky, & Northey, 2007). Examples of numbers, proportions, and percentages were: the number of new schools built, the number of schools refurbished, the proportion of teachers in rural areas with graduate degrees, the number of teachers trained, the proportion of high school graduates employed, the percent of students who used various modes of transport to school, and the disability rate of students (Blom and Hobbs, 2008; Education Evaluation Center, 2012; KAIRI Consultants Limited, 2007a; Miske Witt & Associates, 2008; World Bank, 2009a, 2009b).

In contrast, as mentioned above, a few indicators were outlined but were not measurable. Similarly, a study conducted by the IDB reported that it was not able to measure improvements in education quality because quality was not well defined (Office of Evaluation & Oversight, 2002).

3.1.4. Research question 2: to what extent were local evaluators and special interest groups consistent in their use of these indicators?

Table 3 illustrates comparisons for the type of indicator used, how often the indicators were used, and which indicators were of highest priority, by evaluator type. Table 3 shows that twice as many evaluation studies were overseen by local evaluators as by special interest groups. This indicates that the region has expertise in evaluation research and does not have to rely primarily on external organizations to carry out evaluation work. It also suggests that the picture of evaluation progress painted by these studies should be more credible to citizens, since the majority of the studies were conducted by locals who are aware of the Caribbean context.

Of the six categories of indicators, the one that was most represented was School Processes & Resources (14 indicators), followed closely by Context of Education/Context of Learning (13 indicators); Population & School Enrollment (13 indicators); Community Factors (9 indicators); Expenditure/Cost and Education Returns (8 indicators); and then Academic Performance (3 indicators). Note that both local evaluators and special interest groups made use of all categories, thereby refuting the argument that special interest groups might not be using indicators instrumental to measuring progress. The heavy focus on School Processes & Resources and Context of Education/Context of Learning aligns well with the fact that this region is in pursuit of quality. Notwithstanding, Population & School Enrollment indicators are also high. From previous research we know that this region has already achieved universal primary/basic education. Thus, the heavy focus on disparities by demographics by both local evaluators and special interest groups, and the fact that local evaluators are also heavily focused on mode of transport to school, together suggest that access to education at either the early education or secondary levels might still be a challenge for certain student groups. In a different vein, this might instead be a reflection of pressure from special interest groups to use certain monitoring indicators.


Table 2a
Specific indicators used in evaluation studies. For each study: who initiated the evaluation; focus; indicators used; country/regional goals measured (CPA and MDG numbering follows Table 2b); and challenges, if any.

Special Interest Groups

World Bank (n = 6)

World Bank (2002); World Bank (2009a, 2009b, 2011)
Focus: Preliminary analyses for OECS education projects; OECS education development project appraisals for St Kitts & Nevis, St Lucia, and Grenada.
Indicators used: whether or not new schools were constructed; rehabilitation of schools; upgraded science labs; whether resource centers were established; the share of non-salary recurrent education expenditure; completion rates; enrollment rates; transition rates; proportion of students passing at least 5 CXC subjects; number of qualified teachers in schools; pupil–teacher ratio; use of new curriculum; number of teachers trained in special education/guidance/counseling; whether or not in-service training was provided for primary, secondary, and technical teachers; whether or not the literacy and numeracy program was established; the implementation of school-based improvement projects; whether or not the database was established; and whether or not district officers received training in secondary school management.
Goals measured: CPA 2, 3, 8, 9; MDG 2, 4, 6.
Challenges: Expected data collection did not take place because personnel inside Ministries/Departments of Education in individual countries responsible for data collection and preliminary analyses did not do so (World Bank, 2011). The quality and/or management indicators were not well defined nor clearly linked to the outlined activities of the project; in general there was a poor link between indicators, objectives, and activities (World Bank, 2009b, 2011).

di Gropello (2003)*
Focus: Education indicators for the region (World Bank report).
Indicators used: enrollment rates; completion rates; repetition, transition, and survival rates; years to graduate; expenditure as a percent of gross domestic product (GDP); expenditure per student; expenditure per graduate; expenditure allocation ratios; CXC achievement scores.
Goals measured: CPA 4, 9, 11; MDG 2, 3.
Challenges: Expected data collection did not take place because personnel inside Ministries/Departments of Education in individual countries responsible for data collection and preliminary analyses did not do so. Difficulty in operationalizing quality indicators in a manner that could be measured.

Blom and Hobbs (2008)*
Focus: Education system and the global economy (World Bank report).
Indicators used: trends in employment rates for various skilled professions; unemployment rates; education level of citizens; internal efficiency; income; academic achievement scores; employer needs; funding sources at various levels of education; and enrollment into tertiary education.
Goals measured: CPA 6, 8; MDG 6.

UNESCO (n = 1)

Jules and Panneflek (2000)*
Focus: Education indicators.
Indicators used: young children's participation in early childhood programs; proportion of first graders who received early childhood education; enrolment ratios; intake rates; public expenditure on education; teacher qualifications; teacher–pupil ratio; repetition rates; survival rates; coefficient of efficiency; reading and math achievement scores of certain school-age cohorts; literacy rates of 12 through 24 year olds; adult illiteracy reduction; assessment of policy to develop adult literacy; expansion of training in essential skills; whether or not there was an increase in education for better living; and improved dissemination of knowledge for better living.
Goals measured: CPA 1, 4, 6, 11; MDG 1, 3, 4, 5.
Challenges: Difficulty in operationalizing quality outcome indicators in a manner that could be measured. These were adult illiteracy reduction, assessment of policy to develop adult literacy, expansion of training in essential skills, whether or not there was an increase in education for better living, and improved dissemination of knowledge for better living.

UNICEF (n = 3)

Rodrigues (2000)
Focus: Smooth transition from nursery to primary school in Guyana.
Indicators used: perceptions about the curriculum and school environments; perceptions about relations between nursery and primary schools; whether or not workshops took place; dropout and repetition rates; and parent involvement.
Goals measured: CPA 1, 2, 5; MDG 1, 2.

Dongen (2002)
Focus: Escuela Nueva index in Guyana.
Indicators used: enrollment; child friendliness of the school; level of student government involvement in the school; instructional methods used; quality of facilities; education level of teachers; classroom resources; school and community opinions about the school context; and teacher professional development.
Goals measured: CPA 2, 3, 5, 8, 11; MDG 2, 6.

Iyo (2001)
Focus: SHAPES Health and Nutrition Program in Belize.
Indicators used: NA (not implemented as designed; the program was a failure).
Goals measured: CPA 5, 8; MDG 6.

Inter-American Development Bank (n = 6)

Tsang et al. (2002)*
Focus: Access, equity, and performance in education in Barbados, Guyana, Jamaica, and Trinidad & Tobago; examining major issues in education, remedial policies, and the financing strategies best suited for implementing policies.
Indicators used: recurrent expenditure and sources of resources in the education sector; spending on education over time; unit costs of education; efficiency of the system; enrollment in K-12 and tertiary education; unemployment; perceptions about education quality; number of trained teachers; whether or not teachers had appropriate learning materials; conditions of physical facilities; teacher morale; teacher pedagogy; institutional capacity in management; CEE 11+ and CXC exams; education disparities (income, geography, ethnicity, sex); attendance; teacher absenteeism; cost of tertiary education; country-level strategic plans; curriculum; school-intervention plans.
Goals measured: CPA 1, 2, 4, 8; MDG 1, 2, 3, 5, 6.

Näslund-Hadley et al. (2013)
Focus: Articulate progress made and identify challenges and opportunities in the Belize education sector.
Indicators used: K-12 and tertiary age school attendance rates (by age, sex, wealth, location, ethnicity); geographic access to education; transition and completion rates; internal efficiency; primary and secondary exam scores by geographic area and sex; tertiary entry exams; expenditure per student by district; returns to education; expenditure on education; cost of basic education; proportion of qualified teachers at K-12 and tertiary levels; teacher pedagogical approach; teaching materials; management in the education sector; education policy analysis.
Goals measured: CPA 1, 2, 4, 8; MDG 1, 2, 3, 6.

Office of Evaluation and Oversight (2002)
Focus: Provide credible and useful information on Bank performance in Guyana to improve the Bank's effectiveness.
Indicators used: number of schools constructed; completion and delivery of school development projects; enrollment; attendance; expenditure on education; student–teacher ratio; condition of facilities; teaching materials; standardized test scores; number of teachers available; number of qualified teachers; literacy and numeracy skills/scores of primary school leavers.
Goals measured: CPA 2, 4, 8; MDG 2, 4, 6.
Challenges: Improvements in education quality were not measurable.

Office of Evaluation and Oversight (2014)
Focus: Provide credible and useful information on Bank performance in Barbados to improve the Bank's effectiveness.
Indicators used: enrollment rates; CXC passes; transition rates from school to the labor market; unemployment; country strategic plan to improve the quality of tertiary education; data management/monitoring systems inside schools; availability of information to match employer needs to technical/vocational training; technical/vocational faculty professional development opportunities.
Goals measured: CPA 2, 3, 4, 5, 6, 9; MDG 3, 6.

Office of Evaluation and Oversight (2010)
Focus: Provide credible and useful information on Bank performance in Jamaica to improve the Bank's effectiveness.
Indicators used: resource allocations in education.
Goals measured: CPA 4.

Cassidy (2006)
Focus: Accomplishments and challenges in implementing EMIS within education sections in this region.
Indicators used: data monitoring systems at the country level.
Goals measured: CPA 3, 9.

Local Entities & Local Scholars

Government of Trinidad & Tobago (n = 9)

Miske Witt and Associates (2008)
Focus: Seamless education system in Trinidad & Tobago that is inclusive of students with special needs.
Indicators used: use of diagnostic assessment for special education; disability rate of students; number of schools serving special needs populations; curriculum alignment; quality of teacher training programs; number of qualified staff for special education; number of schools prepared to meet the needs of special education students; examination of education policy for special needs; teacher pedagogical approach; perceptions of students and parents about inclusive education; national exam scores; suspensions/expulsions/truancy; teacher knowledge improvements; comparison of education indicators in school policies and plans to see if they aligned with goals; assessment of management administration; education of the public on issues of students with special needs; cost-effectiveness analysis of providing inclusive education.
Goals measured: CPA 2, 6, 8; MDG 2, 6.

Northey et al. (2007)
Focus: Examined the curriculum, instruction, testing, and evaluation and Spanish as first foreign language as part of modernizing education.
Indicators used: review of education policy documents; stakeholders' perceptions of curriculum/instruction/testing/evaluation and Spanish at the primary level; examination of the curriculum; teacher pedagogy; structural organizational factors; teacher resources; perceived role of the teacher; impact of teacher workshops; teacher content knowledge in Spanish; availability of teacher professional development.
Goals measured: CPA 2, 4, 5, 8; MDG 6.
Challenges: Time constraints on collecting data.

Moore (2010)
Focus: Needs assessment for improving the efficiency and effectiveness of early childhood care and education.
Indicators used: needs assessment through review of policy documents; organizational structure; management performance; availability of technology and the possibility of an EMIS variation for the early childhood program.
Goals measured: CPA 1, 3, 4, 9; MDG 1, 2, 6.

De Lisle (2010)
Focus: Determine the status of the continuous assessment program in the full treatment schools in Trinidad & Tobago.
Indicators used: leadership/organizational issues; whether or not a monitoring and evaluation system was in place; teacher training; current assessment designs; teacher implementation of assessment practices; teacher knowledge of appropriate assessment practices; principals' perceptions of assessments; principal role in site training and management; teachers' beliefs and practices about teaching; types of classroom assessments; students' perceptions of continuous assessment; parents' experience and knowledge with continuous assessment; teacher demographics; teacher qualification; teacher experience; teacher self-efficacy; classroom culture; differentiated instruction; professional learning culture in schools.
Goals measured: CPA 2, 3, 4, 5, 8; MDG 6.

George (2009)
Focus: Gender issues in primary/secondary schooling in Trinidad & Tobago.
Indicators used: analysis of curriculum and teaching/learning resources; analysis of assessment instruments; standardized test scores examined by region/sex/year; entry rates to take secondary exams; principal and teacher views on school interventions; teacher practices; school-based initiatives; perceptions about why boys underperform; stakeholder involvement with school processes; teacher training; classroom environment; education policy documents.
Goals measured: CPA 2, 4, 8; MDG 2, 3, 5, 6.

Araujo et al. (2013)
Focus: Examining early childhood and development services in Jamaica and Trinidad & Tobago.
Indicators used: geographic coverage of services; presence and components of services; expenditure; teacher qualification; student–teacher ratio.
Goals measured: CPA 1, 2; MDG 1, 6.

Franklyn (2010)
Focus: Transition from early childhood care to primary education.
Indicators used: stakeholders' perceptions about what the transition should look like and the necessary policy; curriculum; classroom learning environments; learning materials; structure of the physical environment; staff–student ratio; teacher practices; type of support services; adequacy of physical facilities; parent and community involvement; policy documents.
Goals measured: CPA 1, 2, 4, 5; MDG 1, 2, 6.

Wideen et al. (2007)
Focus: Modernizing teacher development in Trinidad & Tobago.
Indicators used: teacher education policy; teacher practices; organization of the teacher professional development unit; stakeholders' and participants' perceptions of teacher development needs; social climate among teachers.
Goals measured: CPA 2, 5.

Education Development Center (2008)
Focus: Early childhood care and education study.
Indicators used: review of policy documents; enrollment; influences of support mechanisms on the reform; pre- and in-service teacher education programs; creation of private/public partnerships; differences between reform and old programs; type and number of centers; management of the current system; curriculum; teacher practices; teacher professional development; communication between the MOE and early childhood centers; factors impacting quality of services; number of special needs students; buy-in from program administrations and end users; student–teacher ratio; staff qualification; monitoring and evaluation processes and databases; availability of human resources; community sponsorship; assessment practices; opportunities for parent involvement in the school; parents' beliefs about best practices; home practices that support learning.
Goals measured: CPA 1, 2, 3, 4, 5, 9; MDG 1, 2, 6.
Challenges: Time constraints on data collection; access to current written information; unexpected institutional changes.

Caribbean Development Bank (n = 19)

KAIRI Consultants Limited (2000, 2001, 2007a, 2007b, 2008, 2009a, 2009b, 2010, n.d.); SALISES (2012); Halcrow Group Limited (2002, 2003a, 2003b, 2010, 2012)
Focus: Poverty Assessment Reports for individual countries; these included data on universal primary and secondary education and equity in education.
Indicators used: indicators were examined by age cohort, sex, and poverty quintile (poorest to richest). They included literacy rates; enrollment rates; mode of travel to school; persons having required textbooks; highest grade level completed; highest certificate attained by heads of households; types of tertiary education available; adult and continuing education opportunities; occupations; distance learning; early childhood education enrollment; perceptions about education; education expenditure as a percent of GDP.
Goals measured: CPA 1, 6; MDG 1, 2, 3, 4, 5.
Challenges: Lack of follow-through by field research personnel (KAIRI Consultants Limited, 2009a). Field research personnel had difficulty obtaining information from the community because of a lack of relationship or because the community was skeptical about the benefits of providing information (KAIRI Consultants Limited, 2000, 2007a, 2009a, 2009b).

Dye et al. (2002); Lockheed et al. (2006)
Focus: New Horizon Project in Jamaica; progress report (Dye et al., 2002) and effects of the project on standardized test scores (Lockheed et al., 2006).
Indicators used: attendance; teacher professional development in language arts, math, and technology; classroom resources; teacher practices; implementation of the new curriculum; student achievement scores; degree of parent involvement; implementation of health and nutrition programs; linking national monitoring databases.
Goals measured: CPA 2, 3, 4, 5, 8, 9, 10; MDG 2, 4, 6.
Challenges: No previous achievement scores were available to make growth comparisons; this, however, was addressed through propensity score matching of comparative schools.

Barrow and Delisle (2010)
Focus: Secondary Education Modernization Program in Trinidad & Tobago.
Indicators used: teacher perceptions about the curriculum.
Goals measured: CPA 8; MDG 6.

Education Evaluation Center (2012)
Focus: Education Sector Enhancement Program in Barbados.
Indicators used: teacher and staff professional development; availability of classroom resources; human resources training; availability and quality of facilities; facilities rehabilitation; revised curricula; implementation of a national monitoring system.
Goals measured: CPA 2, 3, 4, 8, 9; MDG 2, 3, 6.

Note: * indicates multi-country studies where intra-country comparisons were made. CPA = Caribbean Plan for Action 2000–2015 goals. MDG = Millennium Developmental Goals for education.

The highest percentage in each category corresponds to the most popular indicator for that category. The indicators (by category) that local evaluators tended to utilize more often were: enrollment and disparities by demographics; employment/unemployment rates and public expenditure on education; curriculum and teacher practices; classroom resources; perceptions of stakeholders; and achievement scores. The indicators (by category) that special interest groups tended to utilize more often were: enrollment; public expenditure on education; curriculum; classroom resources; parent involvement; and achievement scores. Taken together, we see that in all but one category, local evaluators and special interest groups were similar in their selection of the most popular indicator. This might be due to the commonly perceived value of these indicators or, alternatively, simply to the availability of these types of data.

3.1.5. Research question 3: to what extent are indicators aligned with suggestions from the literature summarized in Table 1?

Table 4 is also organized by evaluator type (local evaluator versus special interest group) and category of indicator (Population & School Enrollment, Expenditure/Cost and Education Returns, Context of Education/Context of Learning, School Processes & Resources, Community Factors, and Academic Performance). This table allows comparisons of whether or not each indicator suggested by the literature was used, and by which type of evaluator it was more often used.

3.1.5. Research question 3: to what extent are indicators aligned with suggestions from the literature summarized in Table 1?

Table 4 is also organized by evaluator type (local evaluator versus special interest group) and category of indicator (Population & School Enrollment; Expenditure/Cost and Education Returns; Context of Learning/Context of Education; School Processes and Resources; Community Factors; and Academic Performance). The table allows comparisons of whether or not an indicator suggested by the literature was used, and by which type of evaluator it was used more often.

The findings in Table 4 demonstrate that this region made use of at least half of the indicators in each category of recommended indicators, and that these were more often utilized by local evaluators than by special interest groups. In general, however, the percentages are low to medium; thus an overall picture of education progress in this region based upon these suggested indicators remains fuzzy. A comparison of Table 3 with Table 4 shows that many indicators used in these 48 studies do not match the suggested indicators. On one hand, this raises questions regarding the validity of the indicators being used in this region. On the other hand, it is quite possible that the indicators being used are in fact more appropriate for the English-speaking Caribbean context. A third possibility is that both sets of indicators would yield similar findings. Future evaluation research can examine the utility of these recommended indicators as compared with those already delineated by this region and determine which are most apposite to its context.

3.1.6. Research question 4: challenges encountered

Challenges encountered by the evaluation studies are summarized in the last column of Table 1. Two common challenges in the evaluation studies of the World Bank and UNESCO were a lack of data collection and/or the unavailability of baseline data (di Gropello, 2003; World Bank, 2011). Notably, two of the three multi-country studies carried out by the World Bank and UNESCO struggled with data collection (di Gropello, 2003; Jules & Panneflek, 2000). These evaluation studies attributed this shortcoming to their reliance on untrained personnel in task force units housed at ministries of education to carry out data collection, which resulted in data not being collected.


Table 2b
Country and regional goals for the English-speaking Caribbean post-2000.

Millennium developmental goals (UN Dakar, Senegal 2000):
1. Early childhood, care and development
2. Equity to disadvantaged groups
3. Equitable access to education and life skills to youth and adults
4. Improving literacy
5. Gender disparities
6. Education quality (any level)

Caribbean plan for action 2000–2015:
1. Early childhood, care and development
2. Teacher quality
3. Technology use & monitoring system
4. Tracking stakeholder performance & accountability
5. Involving civil society in education process
6. Inclusive/life skills/relevant secondary & tertiary education
7. Promoting ideal Caribbean person
8. Quality of basic education (primary education)
9. Establishing regional monitoring system
10. Developing benchmarks for literacy
11. Developing indicators to measure education progress

Fig. 2. Focus of special interest groups vs. local evaluators (N = 48 studies). [Figure omitted: horizontal bar chart showing, for each goal above, the percentage (0–90%) of local evaluator studies (n = 32) and special interest group studies (n = 16) addressing it.]

Interestingly, UNICEF and IDB did not experience the same challenges as the World Bank and UNESCO. In fact, none of their evaluation studies reported difficulty with data collection or with the availability of data that prevented them from answering their evaluation questions. An inspection of the authorship of these studies reveals that UNICEF hired consultants to evaluate its projects and IDB contracted with international and local scholars or evaluation firms to carry out its studies. This is in contrast to the World Bank and UNESCO, which rely on country-appointed liaison officers who are not necessarily trained in research methodology to oversee the data collection processes. Other challenges experienced by special interest groups were poor linkages between indicators, objectives, and activities (World Bank, 2009b, 2011), and poorly defined quality outcome indicators that were not measurable (di Gropello, 2003; Jules & Panneflek, 2000). Therefore, although Table 2a demonstrates that input, process, and output quality indicators have been outlined since the Miller (2000) report, this region may still struggle with linking these intermediate indicators to objectives, processes, outputs, and quality outcomes. For local evaluators, six of the 32 studies experienced challenges involving data collection (KAIRI Consultants Limited, 2000, 2007a,b, 2009a; Lockheed, Harris, Gammill, & Barrow, 2006; Northey, Bennett, & Canales, 2007; Wideen et al., 2007). KAIRI Consultants Limited (2009a) had field officers who neglected to follow through and collect the data. In a different vein, KAIRI

Consultants Limited (2000, 2007a) encountered community members who were reluctant to complete surveys and participate in activities, either because they were not familiar with the field officers or because they perceived their participation to be a waste of time. Similar to IDB, the Caribbean Development Bank contracts with local and international evaluation firms; in contrast to IDB, however, this organization has these evaluation firms train personnel in government ministries to carry out the research and data collection processes (see the methodology sections in these evaluation reports). Therefore, like the World Bank and UNESCO, the Caribbean Development Bank uses task force units to carry out the evaluation processes. However, it does not suffer from the lack of data collection to the extent that the World Bank and UNESCO do, because it develops personnel's capacity in research methodology and data management. The challenges experienced by the fourth and fifth studies conducted by local entities were due to unforeseen time constraints and unexpected institutional changes (Northey, Bennett, & Canales, 2007; Wideen et al., 2007). Fortunately, these challenges did not prevent them from answering their evaluation questions. The sixth study was temporarily challenged by the unavailability of baseline data from previous years (Lockheed et al., 2006). This, however, was remedied by the researchers using propensity score matching to identify a comparison group so that effects on learning could be determined. Thus, the lack of baseline data did not impede the evaluation


Table 3
Types of indicators used by local evaluators' and special interest groups' evaluations. Values are the number (and percentage) of studies using each indicator, for local evaluators (L, n = 32) and special interest groups (S, n = 16).

Population & school enrollment (13 indicators)
Attendance: L 1 (3%); S 4 (25%)
Completion rate: L 0 (0%); S 3 (19%)
Enrollment (early childhood, primary or secondary): L 20 (63%); S 10 (63%)
Repetition rate: L 0 (0%); S 5 (31%)
Survival rate: L 0 (0%); S 4 (25%)
Transition to secondary education: L 0 (0%); S 5 (31%)
Years to graduate: L 0 (0%); S 2 (13%)
Disparities by demographics: L 20 (63%); S 7 (44%)
Expulsion rate: L 1 (3%); S 0 (0%)
Suspension rate: L 1 (3%); S 0 (0%)
Truancy rate: L 1 (3%); S 0 (0%)
Mode of transport to school: L 19 (59%); S 0 (0%)
Proportion of students with special needs/disabilities: L 2 (6%); S 0 (0%)

Expenditure/Cost and education returns (8 indicators)
Education level of citizens by category: L 0 (0%); S 1 (6%)
Employment/unemployment rates/occupations: L 19 (59%); S 3 (19%)
Expenditure per district: L 0 (0%); S 1 (6%)
Expenditure per graduate: L 0 (0%); S 1 (6%)
Expenditure per student: L 0 (0%); S 3 (19%)
Public expenditure on education: L 19 (59%); S 6 (38%)
Family income: L 0 (0%); S 1 (6%)
Funding sources: L 1 (3%); S 2 (13%)

Context of learning/Context of education (13 indicators)
Curriculum (revised, developed, etc.): L 8 (25%); S 8 (50%)
Learning environment is student centered (observations): L 4 (13%); S 2 (13%)
Pupil-teacher ratio: L 3 (9%); S 4 (25%)
Student support groups (peer counseling, etc.): L 0 (0%); S 4 (25%)
Teacher absenteeism: L 0 (0%); S 1 (6%)
Teacher experience in teaching: L 1 (3%); S 0 (0%)
Teacher content knowledge: L 2 (6%); S 0 (0%)
Teacher beliefs about their practice: L 1 (3%); S 0 (0%)
Teacher morale: L 0 (0%); S 1 (6%)
Teacher practices (observations): L 8 (25%); S 4 (25%)
Teacher qualification/education level: L 3 (9%); S 7 (44%)
Teacher's perception of the new curriculum: L 2 (6%); S 4 (25%)
Social climate between teachers: L 1 (3%); S 0 (0%)

School processes and resources (14 indicators)
Administration/staff structure, capacity, or training in management: L 7 (22%); S 6 (38%)
Availability or geographic location of school buildings: L 3 (9%); S 6 (38%)
Classroom resources (technology, books, materials, science labs, etc.): L 25 (78%); S 9 (56%)
Establishment of monitoring system: L 5 (16%); S 8 (50%)
Human resources at school (teacher aides, special ed, nurses): L 6 (19%); S 1 (6%)
Type of education program available (K-12, tertiary, adult, continuing education, distance, etc.): L 20 (63%); S 0 (0%)
Quality of school buildings (lockers, cafeteria, resource centers, special needs adaptability, etc.): L 3 (9%); S 6 (38%)
School development plans/school improvement projects/new academic programs: L 3 (9%); S 6 (38%)
Assessment system appraisal: L 2 (6%); S 0 (0%)
Quality of teacher education programs: L 2 (6%); S 0 (0%)
Professional learning culture of school: L 1 (3%); S 0 (0%)
Communication between ministry and school: L 1 (3%); S 0 (0%)
Faculty/teacher/staff professional development: L 6 (19%); S 7 (44%)
Health and nutrition programs: L 2 (6%); S 1 (6%)

Community factors (9 indicators)
Community programs (media, essential skills, special needs info, etc.): L 1 (3%); S 2 (13%)
Employer needs: L 0 (0%); S 2 (13%)
Perception of stakeholders: L 28 (88%); S 3 (19%)
Stakeholder involvement (non-parent): L 1 (3%); S 0 (0%)
Community partnerships: L 1 (3%); S 0 (0%)
Parent knowledge of school processes: L 1 (3%); S 0 (0%)
Parent involvement: L 3 (9%); S 4 (25%)
Country-level action plans: L 0 (0%); S 2 (13%)
Legislative framework: L 7 (22%); S 2 (13%)


Academic performance (3 indicators)
Enrollment in tertiary education: L 19 (59%); S 3 (19%)
Achievement scores: L 24 (75%); S 11 (69%)
Highest certificate attained by head of household: L 19 (59%); S 0 (0%)

Note: The indicators were not always operationally defined in the same way. For example, teacher qualification in one instance meant the number of teachers certified to teach, whereas in another instance it meant the level of qualification (B.A. vs. M.Ed.).

efforts for local evaluators because their level of skill in research methodology allowed them to overcome these hurdles.
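As a rough illustration of the matching technique referred to above, the sketch below applies nearest-neighbor propensity score matching to hypothetical school-level data; it is not the procedure or data of Lockheed et al. (2006).

# Illustrative propensity score matching sketch (hypothetical data and
# covariates; not the actual study procedure).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical school-level covariates, e.g., enrollment, poverty rate,
# urban location.
X = rng.normal(size=(200, 3))
treated = rng.integers(0, 2, size=200).astype(bool)  # program schools

# 1. Model the probability of treatment (the propensity score).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. For each treated school, find the untreated school with the closest
#    propensity score (nearest-neighbor matching).
treated_idx = np.where(treated)[0]
control_idx = np.where(~treated)[0]
matches = {
    int(t): int(control_idx[np.argmin(np.abs(ps[control_idx] - ps[t]))])
    for t in treated_idx
}

# Outcomes (e.g., literacy scores) would then be compared across the
# matched pairs to estimate the program effect.
print(list(matches.items())[:5])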

4. Lessons learned: recommendations

The 48 evaluation studies demonstrate that there are four models by which evaluators in this region carry out their studies. The first model, used by the special interest groups World Bank and UNESCO, entrusts personnel in government ministries to carry out data collection. The second model, used by the special interest groups UNICEF and IDB, contracts consultants, scholars, and evaluation firms to carry out the evaluation process. The third model, used by the Caribbean Development Bank, hires local or international firms to train personnel in government ministries to carry out data collection and analyses. The fourth model uses local scholars to conduct the evaluations. A stark difference in the evaluation studies conducted by the World Bank and UNESCO compared to all others is that they suffered from the unavailability of data and inconclusive findings due to the use of untrained personnel. These challenges can be remedied by following the model of the Caribbean Development Bank and training local personnel in research methodology. The fact that local entities such as the Caribbean Development Bank utilize experts to train government personnel to carry out evaluation efforts suggests that this region is slowly developing its

Table 4
Suggested indicators vs. indicators actually used. Values are the number (and percentage) of studies using each indicator, for local evaluators (L, n = 32) and special interest groups (S, n = 16).

Context of education/Context for learning
Perceptions about education and its budget: L 28* (88%); S 3* (19%)
Proportion of 35 to 44 year olds who attended upper secondary school (high school): L 0 (0%); S 0 (0%)
Teacher reported professional development in math and science: L 6* (19%); S 7* (44%)
Class size: L 0 (0%); S 0 (0%)
Teacher-pupil ratio: L 3 (9%); S 4 (25%)
How principals use assessment reports: L 1 (3%); S 0 (0%)

Resources & school process
Amount of time spent on teaching lower secondary math (middle school math): L 0 (0%); S 0 (0%)
Condition of facilities: L 3 (9%); S 19 (59%)
Enrollment rate of 20 year olds in tertiary education: L 19 (59%); S 3 (19%)
Teacher education: L 3 (9%); S 7 (44%)

Academic performance
Achievement gains in reading between 9 and 14 years of age: L 0 (0%); S 0 (0%)
Reading level of 9 year olds: L 1 (3%); S 0 (0%)
Fourth grade math and science literacy: L 1* (3%); S 0* (0%)
Fifteen year old math literacy: L 19 (59%); S 7 (44%)

Population and school enrollment
Enrollment in K-12 education: L 19 (59%); S 10 (63%)
Foreign students in postsecondary education: L 0 (0%); S 0 (0%)

Expenditure/Cost and education returns
Distribution of population by education and income: L 19 (59%); S 1 (6%)
Employment rates: L 19 (59%); S 3 (19%)
Per pupil expenditure: L 0 (0%); S 3 (19%)
Percent of adults who completed higher education by age group and sex: L 19 (59%); S 0 (0%)
Time spent on math learning: L 0 (0%); S 0 (0%)
Education level of workers in chemical manufacturing: L 0 (0%); S 0 (0%)
Percentage of population with engineering and architecture degrees: L 0 (0%); S 0 (0%)
Initial central funds of each education level: L 1 (3%); S 2 (13%)
Public school teacher salaries: L 0 (0%); S 0 (0%)
Starting salary of primary school teachers: L 0 (0%); S 0 (0%)
Proportion of unemployed youth and young adults: L 19 (59%); S 4 (25%)

Note: * indicates a proxy for this measure. For the indicator perceptions about education and its budget, the actual measure was community and stakeholders' perceptions of the intervention, curriculum, or school system. For teacher professional development in math or science, the actual measure could also have included training in early childhood education, literacy, and/or technology. For fourth grade math and science literacy, the actual measure included only math. Finally, the indicators set in italics in the published table were ones that had contradictory findings in the literature.


evaluation capacity within its local governments. This growing capacity is also evidenced by the recent collaborative work of government ministries and the OECS Education Reform Unit in delineating regional indicators. Therefore, the English-speaking Caribbean might be less vulnerable to utilizing imposed or insufficient indicators in evaluating education reform than it was in the decade preceding the Miller (2000) report. Arguably, this vulnerability will steadily decrease as the region develops the evaluation expertise to measure what it values.

The findings demonstrate that this region has made large strides since the Miller (2000) report in outlining quality input, process, output, and, to a much lesser extent, outcome indicators. The 48 reports examined, however, lacked detailed explanations of how inputs, processes, and outputs were linked to education outcomes. This shortcoming ultimately demonstrates a lack of understanding of the program theory behind the initiatives. Understanding the program theory is a vital component of evaluation work for interventions (Chen, 1990; Weiss, 1997). Although the use of logic models for social change efforts is much debated in the evaluation literature for a variety of reasons (Isaacs, Perlman, & Pleydon, 2004; Mark & Henry, 2013; Miller, 2013), at minimum they allow personnel and other stakeholders with weak research and evaluation training to visualize how the inputs, activities, outputs, and outcomes of an initiative relate to one another (Knowlton & Phillips, 2009). A detailed logic model would also limit the number of instances in which poor linkages between inputs, outputs, and outcomes occur. One of the studies conducted by the World Bank did refer to a logic model of school effectiveness developed in Trinidad (di Gropello, 2003), but its application of this model did not articulate quality indicators in a useful manner. Thus, the second most pressing issue for this region is to develop a strong conceptual understanding of how country-level and regional initiatives work and of the components required for these initiatives to have their intended effect. The OECS, a sub-region of the English-speaking Caribbean, has already recognized and addressed this need, as evidenced by its recent document, the OECS Education Sector Strategy 2012–2021 (www.oecs.org). This document presents detailed models outlining links between objectives, strategies, outputs, and outcome indicators for each of its regional strategic objectives. The beginning of a program theory for Caribbean schools is also found in the recent work of Ramdass and Lewis (2012). Their work demonstrates major steps toward developing a model of school organizational health for elementary Trinidadian schools. School organizational health is a school-level quality indicator and would be especially useful to include in monitoring systems. Ramdass and Lewis's work supports the notion that this region should not hastily discard indicators that are not recommended by the literature but rather move toward verifying which indicators are important to the socio-political context of education in this region. Taken together, the OECS Education Sector Strategy and the work of Ramdass and Lewis (2012) demonstrate that this region is beginning to delineate theories of change at the regional and school levels. It is imperative that other initiatives follow suit so progress can be mobilized and measured.
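To make the linkage point concrete, a logic model can even be encoded as a simple data structure so that indicators with no stated outcome link are flagged automatically. The sketch below uses invented entries and is illustrative only; it is not drawn from any of the evaluated programs.

# Hypothetical logic model for an education initiative; all entries are
# invented for illustration.
logic_model = {
    "inputs": ["trained teachers", "classroom resources"],
    "activities": ["curriculum workshops"],
    "outputs": ["revised curriculum delivered"],
    "outcomes": ["improved literacy scores"],
}

# Hypothetical mapping of monitoring indicators to the outcome each is
# meant to track; None marks an indicator collected but linked to nothing.
indicator_links = {
    "teacher professional development": "improved literacy scores",
    "classroom resource inventory": None,
}

# Flag indicators lacking an outcome link -- the kind of poor
# input-output-outcome linkage the reports exhibited.
unlinked = [ind for ind, outcome in indicator_links.items()
            if outcome not in logic_model["outcomes"]]
print("Indicators lacking an outcome link:", unlinked)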
A third important step for this region is to collect and house data in common country and regional databases. Although ministries of education in this region are trying to implement education monitoring systems, these implementations are at best in their rudimentary stages (Cassidy, 2006). In order to establish common databases, individual countries should consider centralizing their data for efficient evaluation and monitoring.
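As a minimal sketch of what a common indicator database could look like (the schema and sample values are assumptions for illustration, not a specification from any ministry or agency):

# Minimal illustrative sketch of a centralized indicator store; the schema
# and sample row are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")  # a shared file or server in practice
conn.execute("""
    CREATE TABLE indicator_values (
        country    TEXT NOT NULL,   -- e.g., 'Dominica'
        year       INTEGER NOT NULL,
        indicator  TEXT NOT NULL,   -- e.g., 'primary enrollment rate'
        value      REAL,
        source     TEXT             -- evaluating entity or survey
    )
""")
conn.execute(
    "INSERT INTO indicator_values VALUES (?, ?, ?, ?, ?)",
    ("Dominica", 2009, "primary enrollment rate", 0.92,
     "country poverty assessment"),
)

# Regional monitoring queries then become straightforward, e.g., averaging
# an indicator across countries for a given year.
for row in conn.execute(
    "SELECT indicator, AVG(value) FROM indicator_values "
    "WHERE year = 2009 GROUP BY indicator"
):
    print(row)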

A possible hindrance to the centralization of data is that large funding agencies such as the World Bank support the decentralization of education governance (Lincove, 2006). Decentralization is attractive in many developing regions because it was the main mechanism by which the United States and other industrialized countries accomplished mass schooling (Chaudhary, Musacchio, Nafziger, & Yan, 2012). Notably, the larger populations of the English-speaking Caribbean such as Guyana, Jamaica, Trinidad and Tobago (Tsang, Fryer, & Arevalo, 2002), and Belize (Näslund-Hadley, Alonzo, & Martin, 2013) have decentralized education systems. This study contends, however, that since the remaining countries have minuscule populations, decentralization of education governance can be counterproductive to data collection and monitoring in this region. In support of this notion, the IDB also draws attention to the managerial challenges that decentralization brings to this region's efforts to establish education management information systems. The IDB states that as decentralization takes root in these education sectors, "assuring the development of systems that meet the needs of educators at all levels will require much more attention to the alignment and integration of subsystems across levels and the development of data and information standards than has been the practice until now" (Cassidy, 2006, p. 3).

Finally, special interest groups such as the World Bank, UNESCO, UNICEF, and IDB might consider making a concerted effort to support evaluation capacity building in this region. Training in database management and essential research skills has already begun. Assuredly, continued training will equip this region with the tools necessary to create and maintain monitoring databases and to collect information on the indicators it deems important.

5. Conclusion

The evaluation practices of special interest groups and local entities since the Miller (2000) report demonstrate that both are aware of the developmental status and accomplishments of this region. This is evidenced by both types of evaluators' use of quality indicators appropriate to the English-speaking Caribbean context. Further, special interest groups appear attuned to the values of this region, as evidenced by a higher percentage of their studies, compared to local evaluation studies, being aligned with the Caribbean Plan for Action 2000–2015 goals. Since the Miller (2000) report, evaluation studies in this region have also shifted their focus from predominantly access to basic education toward measuring equity in education opportunities for disadvantaged groups, the establishment of compulsory early childhood education, the improvement of secondary and adult education, and teacher development, among others. This is not to say that indicators measuring access, efficiency, and coverage have been discarded; rather, they continue to be measured, which is consistent with suggestions from the current literature. In short, evaluation practices in this region have met the challenge set forth by Miller (2000) more than a decade ago to delineate indicators that measure education quality. Further, we can judge the findings of these reports as more or less conclusive because together they adequately account for the socio-political context of this region. An important next step is for CARICOM to learn from the example set forth by the OECS Education Sector Strategy 2012–2021 and articulate the theories of change behind reform efforts, making clear how inputs, processes, and outputs are linked to desired educational outcomes for this region.

References
Araujo, M. C., López Boo, F., & Puyana, J. M. (2013). Overview of early childhood development services in Latin America and the Caribbean. Washington, DC: Inter-American Development Bank.

Barrow, D., & Delisle, J. (2010). Evaluation of some teachers' concerns, and levels of use of the lower secondary SEMP science curriculum in Trinidad and Tobago. Caribbean Educational Research Journal, 2(1), 3–16.
Blom, A., & Hobbs, C. (2008). School and work in the Eastern Caribbean: Does the education system adequately prepare youth for the global economy? Washington: World Bank.
Caribbean Community Secretariat (n.d.). EFA Caribbean Plan of Action 2000–2015. UNESCO. http://portal.unesco.org/en/files/25227/11081130321EFACaribbean.pdf/EFACaribbean.pdf
Cassidy, T. (2006). Education management and information systems in Latin America and the Caribbean: Lessons and challenges. Study prepared for the VIII Regional Policy Dialogue Meeting, Education Network. Inter-American Bank.
Chaudhary, L., Musacchio, A., Nafziger, S., & Yan, S. (2012). Big BRICs, weak foundations: The beginning of public elementary education in Brazil, Russia, India, and China. Explorations in Economic History, 49(2), 221–240.
Chen, H. (1990). Theory-driven evaluations. Newbury, CA: Sage.
Clarke, P. (2011). The status of girls' education in Education for All Fast Track Initiative partner countries. Prospects, 41(4), 479–490.
De Lisle, J. (2010). Final report for the consultancy to determine the status of the Continuous Assessment Programme (CAP) in the sixty (60) full treatment schools under the SES Project. Port of Spain.
De Lisle, J. (2012). Explaining whole system reform in small states: The case of the Trinidad and Tobago secondary education modernization program. Current Issues in Comparative Education, 15(1), 64–82.
di Gropello, E. (2003). Monitoring educational performance in the Caribbean. Washington: World Bank.
Dongen, V. (2002). Results of the Escuela Nueva baseline survey for 5 schools in region 1 and 7 schools in region 9. A report written by UNICEF. http://origin-ww.unicef.org/french/evaldatabase/files/GUY_2001_801_part_1_report.pdf
Dye, R., Jennings, J., Lambert, C., Hunt, B., & Wein, G. (2002). Evaluation and recommendations for strengthening and extending the New Horizons for primary schools project in Jamaica. Washington, DC: Aguirre International.
Education Development Center Inc. (2008). Trinidad and Tobago Seamless Education System Project early childhood care and education study. Port of Spain.
Education Evaluation Center (2012). Project completion report: Project number BA0009. Ministry of Education, Barbados: Inter-American Development Bank. http://www.iadb.org/en/projects/project-description-title,1303.html?id=BA0009
Education Planning Division (n.d.). Education for all action plan: Target 2015. Prepared for the Government of the Republic of Trinidad and Tobago, Ministry of Education. http://www.moe.gov.tt/media_pdfs/publications/Action%20Plan%20Booklet.pdf
Fitzpatrick, J., Sanders, R., & Worthen, B. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston: Allyn and Bacon.
Franklyn, G. (2010). Support for transition from Early Childhood Care and Education (ECCE) to primary education. Port of Spain.
Fuller, B. (1987). What school factors raise achievement in the third world? Review of Educational Research, 57(3), 255–292.
George, J. E. (2009). Gender issues in education and intervention strategies to increase participation of boys. Trinidad: St Clair.
Halcrow Group Limited (2002). Country poverty assessment – Anguilla 2002 (Vol. 1). Prepared for Caribbean Development Bank.
Halcrow Group Limited (2003a). Main report: Country poverty assessment – Dominica 2002. Prepared for Caribbean Development Bank.
Halcrow Group Limited (2003b). Main report: Country poverty assessment – British Virgin Islands 2002. Prepared for Caribbean Development Bank.
Halcrow Group Limited (2010). Main report: Country poverty assessment – Belize 2009. Prepared for Caribbean Development Bank.
Halcrow Group Limited (2012). Main report: Montserrat survey of living conditions – Montserrat 2009. Prepared for Caribbean Development Bank.
Hanushek, E. A. (1995). Interpreting recent research on schooling in developing countries. The World Bank Research Observer, 10(2), 227–246. http://dx.doi.org/10.1093/wbro/10.2.227
Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288.
Isaacs, B., Perlman, N., & Pleydon, A. (2004). Limitations in the scope of logic models: Social climate and context. Journal of Intellectual Disability Research, 48, 515.
Iyo, A. (2001). 2001 BZE: School Health and Physical Education Services (SHAPES) program: An impact assessment. A report written for UNICEF. http://www.unicef.org/evaldatabase/index_14135.html
Jules, V., & Panneflek, A. (2000). EFA in the Caribbean: Assessment 2000. Sub-regional report, Vol. II. State of education in the Caribbean in the 1990s: Sub-regional synthesis and annexes. Jamaica: UNESCO.
KAIRI Consultants Limited (2000). Poverty assessment report – Turks & Caicos Islands 2000 (Vol. 1). Prepared for Caribbean Development Bank.
KAIRI Consultants Limited (2001). Poverty assessment report – St Kitts & Nevis 2001 (Vol. 1). Prepared for Caribbean Development Bank.
KAIRI Consultants Limited (2007a). Main report: Country poverty assessment – Antigua and Barbuda 2007. Prepared for Caribbean Development Bank.
KAIRI Consultants Limited (2007b). Trade adjustment and poverty in St Lucia – St Lucia 2005/2006 (Vol. 1). Prepared for Caribbean Development Bank.
KAIRI Consultants Limited (2008). Main report: National assessment of living conditions – Cayman Islands 2006/2007. Prepared for Caribbean Development Bank.
KAIRI Consultants Limited (2009a). Main report: Country poverty assessment – Anguilla 2007/09. Prepared for Caribbean Development Bank.
KAIRI Consultants Limited (2009b). Country poverty assessment – St Kitts & Nevis 2007/2008 (Vol. 1). Prepared for Caribbean Development Bank.

45

KAIRI Consultants Limited (2010). Main report: Country poverty assessment – Dominica 2009. Prepared for Caribbean Development Bank.
KAIRI Consultants Limited (n.d.). Main report: Country poverty assessment – Grenada, Carriacou and Petit Martinique 2007/2008. Prepared for Caribbean Development Bank.
Knowlton, L., & Phillips, C. (2009). The logic model guidebook: Better strategies for great results. Los Angeles: Sage.
Kouame, A., & Reyes, M. I. (2011). The Caribbean region beyond the 2008–09 global financial crisis. Options for the Caribbean after the global financial crisis. Conference on Economic Growth, Development and Macroeconomic Policy. International Monetary Fund.
Leacock, D. (2009). Quality education for all in the Eastern Caribbean: Rethinking the curriculum in the face of universal secondary education. Journal of Eastern Caribbean Studies, 34(3), 19–38.
Lincove, J. A. (2006). Efficiency, equity and girls' education. Public Administration and Development, 26(4), 339–357.
Lockheed, M., Harris, A., Gammill, P., & Barrow, K. (2006). Impact of New Horizons for primary schools on literacy and numeracy in Jamaica 1999–2004. Journal of Education in International Development, 2(1). www.equip123.net/JEID/articles/2/NewHorizons.pdf
Mark, M. M., & Henry, G. T. (2013). Logic models and content analyses for the explication of evaluation theories: The case of emergent realist evaluation. Evaluation and Program Planning, 38, 74–76. http://dx.doi.org/10.1016/j.evalprogplan.2012.03.018
Marshall, J. H., Chinna, U., Hok, U. N., Tinon, S., Veasna, M., & Nissay, P. (2012). Student achievement and education system performance in a developing country. Educational Assessment, Evaluation and Accountability, 24(2), 113–134. http://dx.doi.org/10.1007/s11092-012-9142-x
Miller, D. C., Sen, A., & Malley, L. B. (2007). Comparative indicators of education in the United States and other G-8 countries: 2006 (NCES 2007-006). National Center for Education Statistics.
Miller, E. (2000). Education for all in the Caribbean in the 1990s: Retrospect and prospect. Assessment 2000 Monograph Series No. 19. Kingston, Jamaica: UNESCO.
Miller, E. (2009). Universal secondary education and society in the Commonwealth Caribbean. Journal of Eastern Caribbean Studies, 34(2), 3–18.
Miller, R. L. (2013). Logic models: A useful way to study theories of evaluation practice? Evaluation and Program Planning, 38, 77–80. http://dx.doi.org/10.1016/j.evalprogplan.2012.03.019
Miske Witt & Associates (2008). Achieving inclusion: Transforming the education system of Trinidad and Tobago. Final report prepared for the Ministry of Education. Trinidad: St Clair.
Moore, R. (2010). Final report on strategy for ECCE. Port of Spain.
Näslund-Hadley, E., Alonzo, H., & Martin, D. (2013). Challenges and opportunities in the Belize education sector (No. 80738). Inter-American Development Bank.
Neirotti, N. (2012). Evaluation in Latin America: Paradigms and practices. Evaluation voices from Latin America. New Directions for Evaluation, 134, 7–16.
Northey, D., Bennett, L., & Canales, J. (2007). Final report on curriculum and instruction, testing and evaluation and Spanish as the first foreign language. Port of Spain.
Office of Evaluation and Oversight (2002). Country program evaluation (CPE): Guyana (1989–2001). Inter-American Bank.
Office of Evaluation and Oversight (2010). Country program evaluation (CPE): Jamaica (2003–2008). Washington, DC: International Development Bank.
Office of Evaluation and Oversight (2014). Country program evaluation (CPE): Barbados (2010–2013). Inter-American Bank.
Organization of Eastern Caribbean Education Reform Unit (2002). Performance management handbook for schools. Prepared as part of the Report of the OECS EMIS Project 1998–2002: Establishing a sub-regional Education Management Information System (EMIS). The Eastern Caribbean Education Reform Project. Castries, St. Lucia: OECS Education Reform Unit.
Ramdass, M., & Lewis, T. (2012). Towards a model for research on the effects of school organizational health factors on primary school performance in Trinidad & Tobago. International Journal of Educational Development, 32(3), 482–492.
Rodrigues, A. (2000). Effecting a smooth transition from nursery to primary: Final report. A report written by UNICEF. http://www.unicef.org/evaldatabase/index_14309.html
Schrouder, S. (2008). Educational efficiency in the Caribbean: A comparative analysis. Development in Practice, 18(2), 273–279.
Sir Arthur Lewis Institute of Social and Economic Studies (SALISES) (2012). Assessment of living conditions 2010 (Vol. 1). Prepared for Caribbean Development Bank.
Swaroop, V. (1996). The public sector in the Caribbean: Issues and reform options (Policy Research Working Paper No. 1609). Policy Research Department/Public Economics Division. World Bank.
Tsang, M. C., Fryer, M., & Arevalo, G. (2002). Access, equity and performance: Education in Barbados, Guyana, Jamaica, and Trinidad and Tobago. Washington, DC: International Development Bank.
Vedder, P. (1994). Global measurement of the quality of education: A help to developing countries? International Review of Education, 40(1), 5–17. http://dx.doi.org/10.1007/BF01103001
Walberg, H. J., & Zhang, G. (1998). Analyzing the OECD indicators model. Comparative Education, 34(1), 55–70.
Warrican, J. (2009). Quality secondary education for all: An introduction. Journal of Eastern Caribbean Studies, 34(2), 1–3.
Weiss, C. H. (1997). Theory-based evaluation: Past, present, and future. New Directions for Evaluation, 76, 41–55.
Wideen, M., Kanevsky, L., & Northey, D. (2007). Modernizing the approach to teacher's development. Port of Spain.


World Bank (2002). Project appraisal document: OECS Education Development Program (Report No. 24159-LAC).
World Bank (2008). Girls' education in the 21st century: Equality, empowerment, and growth (Vol. 23). Portland: Book News Inc.
World Bank (2009a). Implementation completion and results report for St Kitts & Nevis (IBRD-71250) (Report No. ICR00001256).
World Bank (2009b). Implementation completion and results report for St Lucia (IBRD-71240, IDA-36610) (Report No. ICR00001070).

World Bank (2011). Implementation completion and results report for Grenada (IBRD-71870, IDA-38090, IDA-45420) (Report No. ICR00001901).

Anica G. Bowe is a Visiting Assistant Professor at Oakland University in Rochester, Michigan. She earned a Ph.D. in Educational Psychology with a focus in quantitative methods in education. Her interests are in education outcomes and evaluation practices within the Caribbean, quantitative research methods and design, and instrument development.
