Evaluation and Program Planning 45 (2014) 50–60


Evaluation of partner collaboration to improve community-based mental health services for low-income minority children and their families§

Jane Hamilton a,*, Charles Begley b, Ralph Culler c

a University of Texas Medical School at Houston, Department of Psychiatry, United States
b University of Texas Health Science Center at Houston, School of Public Health, United States
c Research and Evaluation Services of Texas, United States

ARTICLE INFO

Article history: Received 29 May 2013; Received in revised form 13 March 2014; Accepted 13 March 2014; Available online 23 March 2014

Keywords: Mental health services; Health disparities; Collaboration; Program evaluation; Systems of care

ABSTRACT

This paper describes a mixed methods evaluation of partner agency collaboration within a system of care implemented from 2010 to 2012 in a historically underserved minority community in Houston, Texas. The first section describes the project and the framework for evaluating partner agency collaboration. The second section describes the evaluation methods and presents the baseline and follow-up results of the evaluation. The third section includes a discussion of the evaluation findings, the conclusion, and the lessons learned.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

The President's New Freedom Commission on Mental Health (2003) recommends that children and youth at risk of mental health problems receive mental health screening, assessment, and referral to treatment in order to improve outcomes and decrease the chances of long-term disability (Shonkoff & Phillips, 2000). For children with multiple and complex mental health needs, or those whose mental health and social needs overlap, multiagency collaborative interventions are recommended (Bullock & Little, 1999; Miller & Ahmad, 2000; Quinn & Cumblad, 1994). Achieving this goal requires a system of care approach, defined as "a spectrum of effective, community-based services and supports for children and youth with mental health challenges and their families, that is organized into a coordinated network, builds meaningful partnerships with families and youth, and addresses their cultural and linguistic needs, in order to help them to function better at home, in school, in the community, and throughout life" (Stroul & Friedman, 2011, p. 3). The system of care approach has been shown to improve children's mental health service delivery in the United States by increasing accessibility, responsiveness, and coordination of services (Ayers & Lyman, 2006). With core values of providing community-based, family-driven, youth-guided, and culturally and linguistically competent services (Stroul & Friedman, 2011; Stroul, Blau, & Friedman, 2010), the system of care approach may be especially beneficial in historically underserved communities, where an increase in the availability and effectiveness of community-based mental health services is recommended to reduce mental health disparities (U.S. Department of Health and Human Services, 2011). However, to ensure successful implementation of systems of care in underserved communities, collaboration between partner organizations with continually improving synergy is necessary (Lasker, Miller, & Weiss, 2001; Lasker, Weiss, & Miller, 2001; Lightburn, 2008).

In Houston, Texas, the Hogg Foundation for Mental Health issued a request for proposals with the goal of funding a children's mental health service delivery project in a low-income racial and ethnic minority community using a system of care approach. A group of five community-based agencies responded to this request and agreed to collaborate and integrate their services.

§ Funding for this project was provided by the Hogg Foundation for Mental Health.
* Corresponding author at: 2800 S. MacGregor Way, HCP 3-E50, Houston, TX 77021, United States. Tel.: +1 713 695 7347. E-mail address: [email protected] (J. Hamilton).

http://dx.doi.org/10.1016/j.evalprogplan.2014.03.010
0149-7189/© 2014 Elsevier Ltd. All rights reserved.


The project, the South Region Children's Mental Health Collaborative (SRCMHC), was awarded the Hogg grant in August of 2009, and program implementation began in January 2010.

1.1. South Region Children's Mental Health Collaborative

The primary aim of the SRCMHC was to prevent mental health problems in children and families before they interfered with success in school. The SRCMHC target area had a long history of severe economic and social challenges and limited resources for mental health and social services. Prior to the implementation of the SRCMHC, several community-based organizations were working in the area to improve conditions; however, children's mental health services were not being addressed in a coordinated way. According to Woodland and Hutton (2012), "the sine qua non of collaboration is a shared purpose—two or more entities (organizations or people) come together or stay together for a reason—to achieve a vision, or to do something that could not otherwise be accomplished in isolation" (p. 370). The SRCMHC partner agencies came together in order to provide a coordinated continuum of mental health services that included prevention, treatment, and social services. They adopted a system of care model with the intent of improving mental health service delivery among children and families served by the safety net system. The partner organizations participating in the project included a:

- child development agency providing parent education and project administration;
- city health department providing navigation and case management;
- federally funded system of care program providing intensive wrap-around support;
- large non-profit counseling agency;
- university-based department of psychiatry;
- large urban school district; and


- school of public health conducting the program evaluation.

During project planning, SRCMHC partner agencies expressed the desire to develop collaborative relationships that would enable them to accomplish more than would be possible if they were working independently. The agencies defined collaboration as a mutually beneficial endeavor aiming to achieve common goals that could not be accomplished alone. The SRCMHC partners agreed to incorporate a public health approach that included mental health promotion and early identification activities, as recommended by Stroul et al. (2010) for system of care programs. The SRCMHC leadership team developed an initial logic model (Hernandez, 2000; Mayeske & Lambur, 2001) depicting linkages between program resources, planned activities, expected outputs, and initial outcomes (Fig. 1). The logic model development process created the opportunity for the partner agencies to articulate the program's theory of collaboration, specify realistic objectives and outcomes that could be achieved in the first three years of the project, define program activities and strategies, and identify methods and procedures for monitoring and evaluation.

1.2. Partner agency collaboration

According to Butterfoss (2007), community health collaboration "signifies a durable relationship where separate organizations enter into a structural arrangement with formal roles and a full commitment to a common mission" (p. 28). Through collaboration, Woodland and Hutton (2012) propose, organizations can address social issues, accomplish tasks, and reach goals that fall outside the grasp of any individual entity working independently. Woodland and Hutton explain that effective collaboration requires an ongoing cycle of inquiry including dialog, decision making, action, and evaluation around a shared purpose (Gajda & Koliba, 2007, 2008; Goodlad, Mantle-Bromley, & Goodlad, 2004).
A primary objective among SRCMHC partners was to identify specific ways in which they could work together to form a multi-agency collaborative and track overall progress. Within the SRCMHC project, collaboration activities included monthly partner agency meetings, a community-based mental health awareness campaign, and the use of a shared management information system for making interagency referrals and documenting client intake information. An initial program evaluation objective was to measure the extent to which agency partners were collaborating with each other and to identify specific areas where the collaborative process could be improved. When formulating the evaluation design, the evaluation team identified the following three questions to assess collaboration: (1) How well is the collaborative process working? (2) What are the perceived costs and benefits of collaboration? (3) Do the benefits of collaboration outweigh the costs? This paper presents the methods and results of this evaluation.

Fig. 1. South Region Children's Mental Health Collaborative Logic Model.

2. Methods

2.1. Overview

The evaluation of collaboration involved the administration of a validated survey instrument to partner agency staff, consumer family members, and advocates at the end of the first (baseline) and second (follow-up) years of the project, followed by key informant interviews. The baseline survey was administered at the end of program year one so that participants would have enough experience working together to respond to the questions. The evaluation team assessed improvement in collaboration by comparing baseline and follow-up survey responses. Key informant interviews were conducted to obtain more detailed information on PSAT domains where responses varied. Feedback from the evaluation was provided by the evaluation team through presentations and written evaluation reports. A timeline of the evaluation is presented in Fig. 2.

- Fall 2010: Baseline PSAT (n = 42)
- Spring 2011: Baseline key informant interviews (n = 11)
- Summer 2011: Year One Evaluation Report
- Fall 2011: Follow-up PSAT (n = 36)
- Spring 2012: Follow-up key informant interviews (n = 9)
- Summer 2012: Year Two Evaluation Report

Fig. 2. Mixed methods evaluation timeline.

2.2. Selection of the PSAT

The evaluators conducted a search for a validated instrument to assess collaboration. The Partnership Self-Assessment Tool (PSAT) was chosen because it was developed for public health partnerships with five or more partners that had begun program implementation (National Collaborating Center for Methods and Tools, 2008). Developed by the Center for the Advancement of Collaborative Strategies in Health (CACSH), the PSAT is designed to measure interagency partner collaboration through 67 questions with 5-point Likert scale response options (CACSH, 2007). There are seven PSAT subscales: synergy, leadership, administration and management, decision-making, financial resources, non-financial resources, and efficiency. Nine PSAT items related to synergy measure how well collaborative partners are able to identify new and creative ways to solve problems (Cramm, Strating, & Nieboer, 2011). Leadership (11 items), administration and management (9 items), and decision-making (3 items) measure how collaborative partners perceive they are supported in achieving high levels of synergy in these process areas (Weiss, Anderson, & Lasker, 2002). Financial resources (3 items), non-financial resources (6 items), and efficiency (3 items) measure perceptions about the availability and use of financial and non-financial resources within the partnership. Two additional domains, satisfaction (5 items) and cost/benefit (18 items), were treated as collaboration outcome subscales for this evaluation; they include questions regarding participant involvement in and satisfaction with the collaborative process.

The PSAT was pilot tested by Weiss et al. (2002) in the National Study of Partnership Functioning, which included 66 public health partnerships in 28 U.S. states. Through two rounds of psychometric testing, Weiss et al. found the PSAT to be a methodologically rigorous, reliable, and valid instrument. Additionally, research by Lasker, Miller, et al. (2001) found that the PSAT's proximal outcome measure, synergy, was an effective way to predict the success of a partnership's collaborative process. Weiss et al. (2002) posit that partner collaboration should be viewed as a process in which project partners, who often have divergent perspectives and experiences, come together to solve problems collectively. Within the SRCMHC, the PSAT was used to assess how well the partnership was functioning as a whole and to guide program planning and implementation. According to Hodges, Nesman, and Hernandez (1999), true collaboration is difficult to achieve "because it is both the process and the product of building systems of care" (p. 18). Anderson, McIntyre, Rotto, and Robertson (2002) recommend that when developing system of care collaborations, evaluation activities start with open dialog among leadership, staff, and families about what is working and what is not. The use of the PSAT within the SRCMHC project enabled the program evaluators to obtain information from partner agency administrators, service providers, and consumer family members and advocates about collaborative areas needing improvement in order to initiate the dialog necessary to develop a more effective system of care.

2.3. PSAT administration

Although the PSAT is normally administered in paper format, permission was obtained from the PSAT developer, Roz Lasker, M.D., to administer the tool online to partner agency staff and consumer family members and advocates via Zoomerang.com in the fall of 2010 and 2011. Approval was also obtained from the University of Texas Health Science Center at Houston Institutional Review Board (IRB). Prior to beginning the PSAT online survey, participants were prompted in Zoomerang.com to review and complete an informed consent form. Participants were informed that they could exit the survey at any time and that their participation would not be revealed to anyone outside of the evaluation team.

2.4. Key informant interviews

In the spring of 2011 and 2012, the evaluation team conducted key informant interviews to obtain more detailed information about selected domains with significant response differences in the baseline and follow-up PSAT surveys. The interviews were conducted at community-based agency locations with subgroups of PSAT respondents, using a sampling strategy of including one program administrator and one service provider from each SRCMHC agency. A family member affiliated with the system of care program partner, who regularly attended SRCMHC project meetings, was also invited to be interviewed. To guide the in-person interviews, a semi-structured interview tool with nine open-ended questions related to the PSAT results was developed by the evaluation team and approved by the IRB (Table 1).

Table 1
Semi-structured interview questions.

1. What is your opinion of the adequacy of financial and related resources?
2. Why do you think the service-providing individuals rated the adequacy of the resources lower than administrators?
3. What actions, if any, would you suggest that the SR Collaborative, its leadership and participants take to improve this area?
4. What is your opinion of the SR Leadership Team and the decision-making process in place at the present?
5. Why do you think the participants working at primarily service-providing agencies scored these areas much lower?
6. What actions, if any, would you suggest that the SR Collaborative, its leadership and participants take to improve the efficiency, leadership and/or decision-making?
7. What is your opinion of the collaboration's costs versus benefits of participation for you, and your personal satisfaction with the South Region Collaborative?
8. Why do you think the participants working at primarily service-providing agencies scored these areas much lower?
9. What actions, if any, would you suggest that the SR Collaborative, its leadership and participants take to reduce costs, increase benefits, and increase participant satisfaction?

All interviews were conducted by one member of the evaluation team who was a mental health clinician. The evaluator used interviewing techniques such as reflective listening and reframing to increase the chances that information provided during interviews was documented accurately. The interviews were not recorded, both to ensure participants would be willing to share their thoughts and feelings and because the semi-structured interview instrument made note taking feasible. The interview protocol was piloted during the first four interviews at baseline and follow-up to assess the ability of the

semi-structured interview questions to elicit needed information about the PSAT responses. During the pilot interviews, two evaluators attended each interview, with one evaluator conducting the interview and the other observing and taking notes. After each of the initial interviews, the observing evaluator provided feedback to the interviewing evaluator to improve the interview process. After the fourth interview, it was determined that the interviewer was consistently adhering to the interview protocol, and the remaining interviews were conducted by one evaluator.

Informed consent was obtained prior to conducting each interview. Interviewees were told they could decline to participate in the interview and that their decision would not be disclosed to persons outside the evaluation team. Additionally, the evaluator informed interviewees that all interview responses would be combined with other responses in order to maintain confidentiality. The questions on the semi-structured interview form were designed to encourage interviewees to provide detailed information as to why differences in PSAT responses may have occurred. By asking interviewees to speculate on aggregated PSAT results, the interviewer encouraged interviewees to discuss issues believed to be important to multiple SRCMHC partners. Additionally, since interviewees were not asked to indicate what their own response on a particular item might have been, interviewees were not "put on the spot," and confidentiality could be maintained during the interview process.

2.5. Analysis

The evaluators calculated mean responses for the PSAT partnership synergy subscale and for each dimension of partnership functioning. After reviewing the PSAT results for all SRCMHC participants, the evaluators identified subscale areas where subgroups of respondents scored significantly lower than others.
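A subgroup comparison of this kind can be computed directly from the summary statistics (group size, mean, standard deviation) that appear in the paper's tables. The sketch below is illustrative only: it assumes a pooled-variance Student's t-test (the paper does not specify the exact variant) and uses the baseline adequacy-of-financial-resources figures reported in Table 3.

```python
import math

def two_sample_t(n1, m1, s1, n2, m2, s2):
    """Pooled-variance (Student's) two-sample t statistic from summary stats.
    Returns the t statistic and the degrees of freedom."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, df

# Baseline "adequacy of financial resources" means (Table 3):
# service provider role: n=25, mean=2.97, sd=0.57
# administrative role:   n=11, mean=3.98, sd=0.98
t, df = two_sample_t(25, 2.97, 0.57, 11, 3.98, 0.98)
# |t| is about 3.90 with df = 34, well beyond the two-tailed .05 critical
# value (about 2.03), consistent with the significance reported in the paper.
print(round(t, 2), df)
```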
PSAT results were compared between: (1) participants from service-providing agencies (regardless of their role within their agency) and those from primarily administrative agencies; and (2) respondents whose role was administrative and those who indicated they had a service provider role (regardless of their agency's designation). t-Tests were used to identify significant differences between subgroups. Service-providing agencies included the university-based department of psychiatry, the non-profit counseling agency, the system of care program, and the city health department. Administrative agencies included the child development agency providing program administration, the public school district receiving SRCMHC services, and the school of public health conducting the program evaluation. To better understand the differences, the evaluators developed key informant interview questions based on the statistically significant differences between subgroups of PSAT respondents. Baseline and follow-up PSAT mean responses were also compared across all groups, the percent change from baseline to follow-up was calculated, and t-tests were used to identify significant differences.

The evaluation team used qualitative methods to analyze key informant interview responses. After completing the baseline and follow-up key informant interviews, evaluation team members transcribed and indexed all interview responses according to the corresponding interview questions. Each transcribed interview was then ordered according to the date and time of the interview, the agency in which the interview took place, and the participant's employment role (administrator or service provider). Next, the evaluators coded the transcribed interviews using systematically applied indexing categories, as recommended by Mason (2002). The evaluators then analyzed the relationships between categories and corresponding interview questions and



identified themes based on qualitative analysis methods recommended by Green and Thorogood (2004). For quality assurance, at least two of the three evaluators participated in all transcriptions, indexing, coding, and analyses. During the indexing process, the evaluation team members documented how many times a particular response to an interview question was made by an interviewee to identify themes. If a response was only made once, then the comment was not included in the evaluation report to protect confidentiality. If a response was made by 2 to 5 interviewees, then it was reported as an issue raised by several interviewees. If an issue was raised by 6 or more interviewees, then it was reported as an issue raised by the majority of interviewees. The major themes that emerged from the analysis of the key informant responses are described in the results section.
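The reporting rule described above maps an issue's frequency across interviewees to the language used in the evaluation report. A minimal sketch of that rule (the function name and sample data are hypothetical, not from the evaluation):

```python
from collections import Counter

def frequency_label(count):
    """Map how many interviewees raised an issue to reporting language.
    Thresholds follow the rule described in the text: a single mention is
    suppressed to protect confidentiality; 2-5 mentions are reported as
    "several"; 6 or more as "the majority"."""
    if count <= 1:
        return None  # not included in the evaluation report
    if count <= 5:
        return "raised by several interviewees"
    return "raised by the majority of interviewees"

# Hypothetical coded responses, one entry per interviewee mention
coded = (["funding inflexibility"] * 7
         + ["referral appropriateness"] * 3
         + ["parking"] * 1)
labels = {issue: frequency_label(n) for issue, n in Counter(coded).items()}
print(labels)
```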

Table 2
PSAT subscale means and scoring categories.

Subscale                          Mean    PSAT category
Non-financial resources           3.73    Good
Financial and other resources     3.71    Good
Efficiency                        3.51    Good
Decision making                   3.83    Good
Leadership                        3.28    Good
Administration and management     3.44    Good
Synergy                           3.48    Good
Cost/benefit                      4.12    Very good
Satisfaction                      3.80    Good
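The category labels in Table 2 follow a simple binning of subscale means (the thresholds used in this evaluation: 3.0–3.9 "good," 4.0–4.5 "very good," 4.5–5.0 "excellent"). An illustrative sketch, assuming a score of exactly 4.5 falls in "excellent" (the function name and the label for sub-3.0 scores are ours, not the evaluators'):

```python
def psat_category(mean_score):
    """Label a PSAT subscale mean using the evaluation's descriptors.
    Scores range from 1.0 to 5.0; all SRCMHC subscale means were above 3.0."""
    if mean_score >= 4.5:
        return "excellent"
    if mean_score >= 4.0:
        return "very good"
    if mean_score >= 3.0:
        return "good"
    return "below good"  # below the midpoint; no label is defined in the text

subscales = {"Synergy": 3.48, "Leadership": 3.28, "Cost/benefit": 4.12}
print({name: psat_category(m) for name, m in subscales.items()})
```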

3. Baseline results

3.1. PSAT scores

In the fall of 2010, 42 SRCMHC participants from all partner agencies (91%) completed the PSAT online survey. The majority of participants were administrators (60%), but 26% were direct service providers. Smaller proportions were SRCMHC program evaluators (10%) and consumer family members and advocates (5%). Overall, the baseline PSAT results were positive, and the evaluation team developed positive descriptors of PSAT mean scores reflecting the fact that all subscale averages were above the midpoint level of 3.00.[1] For each subscale, 5.00 was the highest possible score. PSAT mean scores from 3.0 to 3.9 were labeled "good," scores from 4.0 to 4.5 were labeled "very good," and scores from 4.5 to 5.0 were labeled "excellent." A single item comparing the benefits and drawbacks of participation in the SRCMHC project was above 4.00 (mean score 4.12) at baseline. Table 2 presents the nine baseline subscale mean scores.

When the evaluators stratified the PSAT results by respondent role (service-providing versus administrative staff), a statistically significant lower rating of the adequacy of financial resources emerged for service-providing staff compared to administrative staff across all agencies. When the PSAT results were categorized by agency type (administrative versus service-providing), statistically significantly lower scores also emerged on a number of subscales, including administration and management, leadership, decision making, synergy, and satisfaction, for respondents at service-providing agencies compared to individuals at administrative agencies. Table 3 presents pairs of subscale averages where statistically significant differences were found between subgroups of PSAT respondents.

3.2. Key informant interviews

Eleven individuals, comprising SRCMHC partner agency administrators and service providers, participated in the baseline interviews.
The system of care program partner family member invited to participate could not be interviewed due to scheduling conflicts. The following are summaries of interview results related to PSAT subscale domains that received significantly lower scores from partner agency front-line staff and additional respondents at service-providing agencies.

[1] The mean scoring level descriptors ("Zones") recommended by the PSAT developers were perceived as confusing by the SRCMHC partners.

3.2.1. Adequacy of financial resources

During the baseline key informant interviews, an issue raised by multiple interviewees was a perceived lack of flexibility in the use of the grant funds. The interviewees reported that within the SRCMHC project there were a number of areas where funding was needed but was not available because of grant restrictions. These areas included funding to support client basic needs, service provider incidental expenses, and incentives for families participating in SRCMHC services. According to several direct service providers, funding in these areas was necessary to properly deliver services in the community. Due to the extreme poverty and intensive needs of the families being served, two interviewees reported that they were using flexible grant funds from other sources to help meet client basic needs. When asked why service providers in particular may have thought SRCMHC financial and related resources were inadequate, there was agreement among interviewees, both administrators and service providers, that being on the front lines in the community placed service providers in the position of being able to see the needs. One interviewee described how service providers were working in client homes without electricity.

Table 3
Statistically significant subscale mean pairs at baseline.

Subscale / group                    Respondents   Mean*   Standard deviation
Adequacy of financial resources
  Service provider role             25            2.97    0.57
  Administrative role               11            3.98    0.98
Administration and management
  Service provider agencies         23            3.15    0.73
  Administrative agencies           15            3.83    0.50
Leadership
  Service provider agencies         23            2.95    0.82
  Administrative agencies           15            3.71    0.63
Decision making
  Service provider agencies         23            3.49    0.62
  Administrative agencies           15            4.27    0.61
Synergy
  Service provider agencies         23            3.27    0.56
  Administrative agencies           15            3.67    0.40
Satisfaction
  Service provider agencies         23            3.51    0.62
  Administrative agencies           15            4.20    0.52

* All of these mean differences are statistically significant (t-test), p < .05.

3.2.2. Collaboration process domains

While a majority of interviewees reported that the SRCMHC project leadership was good, several interviewees expressed concerns that the constraints placed on them by the grant funder made service delivery difficult. Problems related to leadership and administration of the SRCMHC project were, in general, reported by respondents as being part of the process of creating a collaborative. The majority of respondents indicated during the baseline interviews that both areas had improved since the baseline PSAT was administered. A prominent issue that emerged


for interviewees was the decision-making process. Two specific examples of dissatisfaction with the decision-making process given by interviewees were a perceived inconsistency in implementing system of care values and principles across SRCMHC programs and the lack of inclusion of SRCMHC partners in the development of the SRCMHC community mental health awareness campaign. In these areas, SRCMHC administrators and service providers indicated more work was needed to incorporate SRCMHC families and community members into service delivery planning. Most interviewees indicated that two aspects of the decision-making process needed improvement. First, many interviewees expressed the feeling that decisions within the SRCMHC project had been made without their input; interviewees reported that this aspect had improved since the PSAT was administered. The second issue raised was the perception that after some decisions were made, they were never implemented.

3.2.3. Collaboration outcome domains

During the interviews, key informants were asked to give their opinion of the collaboration's synergy, the costs versus benefits of participation, and their personal satisfaction with the SRCMHC project. Multiple respondents reported that a greater degree of synergy was needed within the project and that the program components needed to be better integrated. When asked about the costs versus benefits of participation in the SRCMHC project, several respondents reported that although implementation of the project had been slow, once implemented, the benefits of participation clearly outweighed the costs. One of the major benefits of participating in the SRCMHC collaboration identified by most interviewees was the formation of new relationships with partner agencies. Additionally, several service providers reported they had begun working with other SRCMHC service providers to address multiple client needs at once.
Synergy, an indicator of collaboration, is a proximal outcome measure in the PSAT. Although some respondents did not rate synergy highly at baseline, most interviewees reported a good level of synergy within the project. A couple of areas mentioned in the synergy discussion were intake and referral processes within the SRCMHC project. One of the major themes that emerged during the baseline interviews was a concern among direct service providers about the appropriateness of referrals being made between SRCMHC partner agencies. In particular, service providers reported they were receiving referrals for adult clients with severe mental health problems that providers were not trained to handle. Additionally, service providers reported that the management information system was not easy to use to make referrals. 3.3. Follow-up PSAT results In the fall of 2011, thirty-six SRCMHC participants (89%) completed a follow-up PSAT that was administered to obtain information on how partner agency collaboration had improved within the SRCMHC project since the baseline PSAT was administered. The largest two groups of participants were administrators (47%) and service providers (25%). Smaller proportions were SRCMHC program evaluators (13%) and consumer family members (3%). Comparisons of baseline and follow-up PSAT scores along with percent changes are presented in Table 4. All nine PSAT subscale averages increased from baseline to follow-up and the average change was +8% (range: +2% to +17%). Five of the subscale increases were statistically significant. To create an overall comparison, all nine subscales were averaged (shown at the bottom of Table 4). The overall change from year two to year three was also statistically significant and was close to the ‘‘very good’’

55

Table 4 PSAT subscale averages by year (N). Subscale

Baseline (42)

Follow up (36)

Percent change

Synergy Leadership Administration and management Efficiency Non-finance resources Financial and other resources Decision-making Cost-benefit comparison Satisfaction

3.48* 3.28* 3.44* 3.51* 3.73* 3.71* 3.83* 4.12*

3.80* 3.84* 3.81* 3.88* 3.86* 3.78* 3.94* 4.38*

+9% +17% +11% +11% +3% +2% +3% +6%

3.80*

4.21*

+11%

*

*

+8%

Average subscale

3.66

3.94

*

Mean difference between year two and year three is statistically significant (Ttest) p < .05.

level of 4.00. Ninety-four percent of the responding participants averaged 3.00 (good) or higher on this overall calculation. At baseline, the evaluators had stratified the PSAT results by respondent role and found a statistically significant difference in the adequacy of financial resources. Although there was a small difference in this area in the follow-up PSAT results when stratified by respondent role, it was not significant. While statistically significant improvements from baseline to follow-up were found within three collaborative process areas (leadership, administration, and decision making), major differences in responses again emerged between participants depending on agency type. At follow-up, participants at primarily serviceproviding agencies rated all three areas as being ‘‘good,’’ but ‘‘more effort was needed in these areas to maximize the partnership’s collaborative potential.’’ In contrast, participants at primarily administrative agencies rated all three collaborative process areas as being ‘‘very good,’’ and ‘‘the partnership was doing well in these areas but had the potential to progress even further.’’ As shown in Table 4, the PSAT leadership subscale score rose 17% from baseline to follow-up (from 3.28 to 3.84). Leadership showed the greatest gain between baseline and follow-up for all of the PSAT subscales. Seventy-five percent of all responding participants rated the SRCMHC leadership ‘‘good’’ or higher and almost 60% rated it ‘‘very good’’ or ‘‘excellent.’’ When stratified by agency type, the leadership subscale at baseline showed a statistically significant difference between respondents from service-providing agencies and administrative agencies. The difference between these two groups (.82) at followup was also statistically significant (Table 5). Both averages increased from baseline to follow-up. Improvement was also seen in the PSAT administration and management subscale score, which rose 11% from baseline to

Table 5
Collaboration process areas (differences based on agency type).

                                                       Baseline*   Follow-up*
Leadership
  Participants working at a service provider agency      2.95        3.43
  Participants working at an administrative agency       3.67        4.25
Administration and management
  Participants working at a service provider agency      3.15        3.52
  Participants working at an administrative agency       3.79        4.10
Decision-making
  Participants working at a service provider agency      3.49        3.60
  Participants working at an administrative agency       4.23        4.27

* Mean difference between year two and year three is statistically significant (t-test), p < .05.


follow-up, increasing from 3.44 to 3.81, one of the larger subscale gains (Table 4). Seventy-eight percent of all responding participants rated administration and management "good" or higher, and over half (56%) rated it "very good" or "excellent." At baseline, the administration and management subscale showed a statistically significant difference between respondents from service-providing agencies and administrative agencies. The difference between these two groups (.58) at follow-up was also statistically significant (Table 5); however, both averages increased from baseline to follow-up.

As can be seen in Table 4, the PSAT decision making subscale score rose only slightly (3%) from baseline to follow-up, increasing from 3.83 to 3.94. Decision making was the highest subscale score at baseline and therefore had less room for improvement at follow-up. Ninety-seven percent of all responding participants rated decision making "good" or higher, and half rated it from 4.0 to 5.0 ("very good" or "excellent"). At baseline, decision making showed a statistically significant difference between respondents from service-providing agencies and administrative agencies. The difference between these two groups (.60) at follow-up was not statistically significant (Table 5). Although both averages increased from baseline to follow-up, the administrative group increased only slightly.

The evaluation team identified statistically significant improvement from baseline to follow-up in two collaborative outcome areas (satisfaction and synergy); however, the third collaborative outcome area, cost-benefit comparison, did not improve significantly. The satisfaction subscale score rose 11%, increasing from 3.80 ("good") to 4.21 ("very good"), and was the second highest subscale score at follow-up.
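The subscale gains reported here are simple relative changes in mean scores. As an illustrative check of the arithmetic (the means are taken from the text; the helper function name is ours, not part of the PSAT):

```python
def pct_change(baseline, follow_up):
    """Relative change between two subscale means, rounded to a whole percent."""
    return round(100 * (follow_up - baseline) / baseline)

# Baseline and follow-up PSAT subscale means reported in the text
subscales = {
    "leadership": (3.28, 3.84),                     # reported as 17%
    "administration and management": (3.44, 3.81),  # reported as 11%
    "decision making": (3.83, 3.94),                # reported as 3%
    "satisfaction": (3.80, 4.21),                   # reported as 11%
}
for name, (pre, post) in subscales.items():
    print(f"{name}: {pct_change(pre, post)}%")
```

Each computed value matches the percentage gain stated in the text.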
Ninety-seven percent of all responding participants rated their level of satisfaction with the SRCMHC project as "good" or higher, and three quarters rated it "very good" or "excellent." A major difference in satisfaction responses was found between participants depending on agency type. At follow-up, participants at primarily service-providing agencies rated satisfaction as "good," suggesting that service-providing agency participants perceived more effort was needed to maximize the partnership's collaborative potential. In contrast, the PSAT responses from participants at primarily administrative agencies on the satisfaction subscale items were in the "excellent" range, pointing to a high level of satisfaction with the way SRCMHC project partners were working together.

Synergy was a key outcome in the collaboration evaluation, representing the degree to which the SRCMHC project was perceived to be achieving substantially more than the participating agencies could achieve independently given the same resources (Weiss et al., 2002). The PSAT synergy subscale score rose 9% from baseline to follow-up (from 3.48 to 3.80). Synergy was one of several areas where, at baseline, there was a statistically significant difference between respondents from service-providing agencies and administrative agencies. The difference between these two groups (.42) at follow-up remained statistically significant; however, both averages increased substantially during follow-up (Table 6).

Table 6
Collaboration outcome areas (differences based on agency type).*

                                                       Baseline    Follow-up
Satisfaction
  Participants working at a service provider agency      3.51        3.86
  Participants working at an administrative agency       4.15        4.56
Synergy
  Participants working at a service provider agency      3.27        3.59
  Participants working at an administrative agency       3.74        4.01

* Mean difference between the two groups is statistically significant (t-test), p < .05.

Participants at primarily administrative agencies rated synergy as "very good," indicating that the partnership was synergistic although potential for progress remained. In contrast, participants at primarily service-providing agencies rated synergy slightly lower as "good," indicating that service-providing agency respondents perceived that more effort was needed to maximize the partnership's collaborative potential.

3.4. Follow-up key informant interviews

Nine individuals, comprising SRCMHC partner agency administrators and service providers, participated in the follow-up key informant interviews. The follow-up key informant interview results addressed statistically significant findings on the follow-up PSAT, including adequacy of financial resources, collaboration process areas (leadership, administration and management, and decision-making), and collaboration outcome areas (satisfaction and synergy).

3.4.1. Adequacy of financial resources

During the follow-up key informant interviews, several interviewees reported that since flexible funds had been made available, service providers were better able to address the needs of families in crisis. A few service providers reported that they were now receiving reimbursements for incidental expenses, including cell phone and mileage costs, and that the reimbursements had made their jobs easier.

A major problem area reported by several interviewees at baseline was the lack of available incentives for families participating in SRCMHC services. In this area, multiple service providers reported that a lack of funding for incentives continued to be a problem within the SRCMHC project. Several service providers expressed concerns that some SRCMHC programs seemed to provide more incentives than others, which they believed was contributing to uneven service recruitment and participation within the SRCMHC project.
Another concern raised by service providers was that incentives were frequently provided by social service programs in the community not affiliated with the SRCMHC project; as a result, community residents expected incentives for participation in SRCMHC services. In contrast to service provider concerns, SRCMHC partner agency administrators reported that incentives were available for all SRCMHC programs and suggested that some service providers might not know how to obtain the funds for incentives. Multiple administrators also reported that a greater effort would be made to ensure that incentives were available to service-providing staff.

Another issue raised by several administrators was the perception that financial resources devoted to the SRCMHC mental health awareness campaign should have been used to fund more client services. Additionally, several administrators commented that the awareness campaign, which was primarily a school-based art project, did not reach enough community residents. In spite of the perceived difficulties in implementing the community mental health awareness campaign, the administrators agreed that the SRCMHC leadership team had acknowledged the difficulties at monthly partner meetings and planned to allocate the community awareness funds differently in the future.

3.4.2. Collaboration process domains

Key informant interview responses were consistent with the higher PSAT leadership scores. In particular, several interviewees reported that many of the leadership concerns they had at the beginning of project implementation had improved significantly and were no longer an issue. Several interviewees reported that the decision to make flexible funds available within the SRCMHC project was a particularly good leadership decision, which enabled service providers to better meet the needs of SRCMHC clients. Interviewees gave examples during interviews of how the use of


flexible funding had made it possible to help clients living in extreme poverty through the provision of rent and utility subsidies.

With regard to administration and management within the SRCMHC project, several interviewees reported that referrals had increased since the single call line was implemented. Several administrators reported that information sharing by SRCMHC leadership continued to be a problem and that increased information sharing was necessary to improve program efficiency. An issue raised by multiple administrators and service providers was the perception that important information was not being disseminated at monthly SRCMHC partner meetings. In contrast, several service providers reported that information sharing with other service providers had improved and that regularly working at another SRCMHC partner agency had led to the formation of new relationships. Several administrators discussed how the formation of the program integration and sustainability subcommittee provided a venue for securing future funding. Additionally, SRCMHC administrators acknowledged that participation at the awareness campaign subcommittee meetings was not consistent during the first year of the SRCMHC project, which may have contributed to frustrations with the implementation of the campaign.

Several interviewees expressed concerns about the lack of family-driven, youth-guided services across programs consistent with system of care values. While several individuals reported that system of care trainings had taken place during the previous year, they expressed concerns that not enough had been done to train front-line staff on the system of care philosophy.

The majority of the service providers commented on the decision-making process. A major theme that emerged was a need for more service provider input into the decision-making process.
Two primary issues regarding decision making were raised during the key informant interviews: (1) there was not enough participation by service providers in decision making, and (2) service providers were not being asked for input before decisions were made. A common theme that emerged among service providers was a perception that they were working on their own without enough direction from managers at their respective agencies. Several service providers reported finding the monthly service provider meetings helpful but still felt "out of the loop."

3.4.3. Collaboration outcome domains

When asked directly about their level of satisfaction with their participation in the SRCMHC project, the majority of interviewees reported that they were satisfied with the current level of interagency collaboration. Several interviewees commented on partnership areas with which they were not as happy. Funding continued to be a common concern at follow-up, and nine comments were made by different interviewees expressing frustration over the uncertainty about the future funding of the SRCMHC project.

Several service providers discussed difficulties they experienced working with families with so many needs. A few service providers also discussed the stress associated with providing services in client homes without furniture, where clients often were not at home when a provider arrived. Multiple service providers reported that their clients were living in a state of crisis and that engaging clients was difficult. Additionally, multiple service providers expressed concerns that much more needed to be done to incorporate community needs into SRCMHC program planning. A theme emerged among service providers that SRCMHC leadership did not understand how great the community needs really were, with the consensus being that more resources were necessary to increase client participation and improve outcomes.
During the follow-up key informant interviews, several SRCMHC administrators discussed the difficulty that some service providers might be having because the scope and funding objectives of the SRCMHC project limited how much could be accomplished in the targeted community. Several administrators reported that they were very satisfied with the accomplishments of the SRCMHC project, while other administrators reported that their particular program needed more referrals from SRCMHC partner agencies. Multiple SRCMHC partner agency administrators also reported that the client dropout rate continued to be a problem.

Several service providers reported that the psychiatric referral process had improved considerably with the new intake line and that the wait time for a new child or youth intake assessment with the SRCMHC psychiatrist had decreased to less than two weeks. To improve the service engagement rate, multiple service providers reported that they were meeting with SRCMHC clients in person to make referrals to additional SRCMHC services. While both SRCMHC administrators and service providers reported being satisfied with the newly implemented intake call line, they also reported that the management information system continued to be difficult to work with. During the follow-up interviews, several service providers reported that they found it easier to give a referral verbally than to use the management information system; as a result, not all referrals were being documented in the system. Overall, a theme emerged among administrators and service providers that additional program integration was necessary to increase referrals to some SRCMHC services and to decrease the client dropout rate across all services.

While interviewees made no comments specifically related to synergy, continued reports of collaborative interactions between partner agency service providers led the evaluators to conclude that a high level of synergy was occurring among service providers within the SRCMHC project.
Multiple comments from service providers regarding the need for service provider and family participation in all aspects of the SRCMHC project suggest that more synergy was needed at the SRCMHC leadership level.

4. Discussion

Regardless of agency type, each SRCMHC partner agency had its own institutional culture, history, and organizational structure. Within the SRCMHC project, successful collaboration meant that each agency would have to develop new ways of delivering a continuum of mental health services in an underserved community. The SRCMHC leadership team communicated to the evaluators that they saw the value of measuring the strength of the collaboration. The diversity of organizations and services provided within the SRCMHC project, and the challenges encountered, provided SRCMHC leadership with the opportunity to build bridges.

The baseline PSAT results revealed that, as a whole, the SRCMHC was doing well; however, most participants perceived that more effort was needed in certain areas to maximize the partnership's collaborative potential. The PSAT's proximal outcome measure, partnership synergy, represented the path through which SRCMHC partnership functioning influenced SRCMHC partnership effectiveness (Lasker, Miller, & Weiss, 2001). While the synergy subscale rose 9% from baseline to follow-up, statistically significant differences were found between service-providing agencies and administrative agencies at follow-up. Lasker, Miller, and Weiss propose that resources, both financial and in-kind, are the building blocks of synergy. By combining resources, they explain, collaborative partners are able to create something new and valuable that could not be achieved when the partners work alone. During the baseline key informant interviews, service providers in particular raised concerns about the availability of resources.
While improvements were made in this area, a difference in the perceived availability of resources was a theme that emerged during the follow-up key informant interviews, specifically in the areas of incentives and sustainability.

The improvements in the PSAT leadership subscale results, along with the key informant interview responses, suggest that the SRCMHC leadership team became more effective over time in their ability to bring together different organizations, understand different perspectives, and put changes in place to improve partner synergy. Through participation in the PSAT and key informant interviews, SRCMHC administrators, service providers, and consumer family members and advocates each provided valuable information needed to improve service delivery quality. As a result of SRCMHC partner agency feedback during the evaluation process, the SRCMHC leadership team and agency partners identified and implemented service delivery quality improvements.

In response to the evaluation findings, the SRCMHC leadership team attempted to improve participation and information sharing both at monthly partner agency meetings and at direct service provider meetings. At monthly partner agency meetings, the participation of service providers was encouraged, and a second system of care program partner family member was invited to attend. Community agencies not directly involved in the collaborative were also invited to attend monthly partner agency meetings during year two in an effort to improve community buy-in and sustainability. To improve program integration and to develop a plan for sustainability, a SRCMHC subcommittee was formed and met on a regular basis. To improve the referral process, SRCMHC partner agencies implemented a single-access phone number to connect referral sources, including parents, nurses, and school counselors, with a dedicated intake coordinator located at the SRCMHC system of care program partner offices.
To increase interagency interactions and referrals, SRCMHC partner agencies adopted a co-location model in which service providers employed by one agency were housed at another agency within the community. To improve service engagement and completion, SRCMHC partner agencies made flexible funds available for families in crisis who were at risk of discontinuing SRCMHC services. The evaluation team worked collaboratively with SRCMHC partner agencies to improve the use of the shared management information system by providing agency-based trainings that included the refinement of documentation categories and procedures.

The evaluators developed two new process indicators to track service delivery improvement: yearly increases in new service episodes and yearly increases in families receiving multiple services. The service data reported in the SRCMHC shared management information system documented steady growth in the volume of individual services provided through the SRCMHC project as well as an increase in the number of families receiving multiple types of SRCMHC services. A total of 2017 SRCMHC clients were served in year two of the SRCMHC project (from August 2010 through June 2011), and 827 individuals engaged in a SRCMHC service for the first time. In addition, 72 families received two or more SRCMHC services. In year three (from August 2011 through June 2012), 2597 clients were served. A total of 973 individuals engaged in a SRCMHC service for the first time, representing an 18% increase in new service episodes from year two to year three. Additionally, 218 families were documented as having multiple service episodes in the shared management information system between August 2011 and July 2012.
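The 18% figure is the year-over-year relative increase in first-time service episodes. A quick check of the arithmetic (counts taken from the text; the helper name is ours):

```python
def yearly_increase(prev_count, curr_count):
    """Percent increase in counts from one program year to the next, rounded."""
    return round(100 * (curr_count - prev_count) / prev_count)

# New service episodes: 827 in year two, 973 in year three
print(yearly_increase(827, 973))  # the reported 18% increase
```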
To systematically screen clients at intake, the evaluation team identified a validated screening instrument, the Strengths and Difficulties Questionnaire (SDQ), to measure the appropriateness of referrals and to monitor the mental health needs of the children being referred for services. The newly hired intake coordinator began administering the SDQ to parents and caregivers as part of the intake process. After providing three months of mental health services, SRCMHC counselors and psychiatrists began administering the validated instrument to parents and caregivers as an outcome measure.

A total of 248 children and youth were screened through the SRCMHC intake call line from January 2012 through April 2013. The mean age was 9 years (range 4–18). Sixty-eight percent of the children and youth were male, and 32% were female. Seventy-three percent were African American/black, 24% were Hispanic/Latino, and 3% were of other race/ethnicity. Sixty-six percent of the children screened were enrolled in the Medicaid program, and 13% were commercially insured. Among children and youth screened, 79% had not received prior behavioral health treatment. Seventy-seven percent of children and youth screened were found to have serious mental health difficulties, based on an SDQ total difficulties score greater than or equal to 17.

Program evaluation limitations include changes in SRCMHC program staff between baseline and follow-up, which led to a smaller number of people completing the PSAT at follow-up compared to baseline (36 versus 42, respectively). The larger number of administrators compared to service providers participating in both the baseline and follow-up PSAT is also a limitation. All SRCMHC project participants were invited to complete the PSAT; however, because survey participation was voluntary, more administrators chose to complete the PSAT, leading to a larger number of administrators in the sample. Another major limitation is the limited involvement of SRCMHC youth and family members in program planning and evaluation.
Although three consumer family members and advocates affiliated with the SRCMHC system of care program partner regularly attended SRCMHC monthly partner agency meetings and completed the PSAT, none of these individuals were from the community in which SRCMHC services were being provided. While the results of the key informant interviews showed that SRCMHC service providers and administrators wanted to increase youth and family involvement, the interviews also revealed perceptions that SRCMHC families were not always available and were living in crisis.

5. Conclusion

5.1. How well is the collaborative process working?

Overall, the pre/post PSAT and key informant interview results showed that collaboration improved within the SRCMHC project over time. The key informant interviews revealed that service providers were working with each other to address multiple client needs and saw collaboration as important. Although satisfaction in general appeared to be high within the SRCMHC project, the differences found in multiple PSAT subscales between service providers and administrators, as well as between service-providing agencies and administrative agencies, suggest that additional work is needed to address the concerns of the individuals working on the front lines within this system of care. During the baseline and follow-up key informant interviews, multiple service providers expressed a less positive view of the collaborative and reported needing additional program resources to improve client engagement and retention in SRCMHC services. While the SRCMHC leadership team acknowledged their concerns, a service delivery intervention to address program-specific underutilization and to target resources where necessary could not be put into place due to limitations of the grant funding.
Additionally, if the SRCMHC leadership team had been able to implement the system of care approach with greater model fidelity, service providers could have more easily connected with each other under a common philosophy. In spite of these shortcomings, the key informant interviews also revealed that the service providers were working in extremely difficult situations and still managed to form collaborative relationships with each other within the SRCMHC project. According to SRCMHC service providers: "We work with some hard families with intense needs." "There is satisfaction in seeing a case closed."

5.2. What are the perceived costs and benefits of collaboration?

During the baseline key informant interviews, several respondents identified a perceived slowness in implementation of the SRCMHC project as a cost of participation; however, respondents reported that once the program was implemented, the benefits clearly outweighed the costs. A major benefit of participation in the SRCMHC project identified by multiple respondents was the formation of new relationships with other SRCMHC participants. During the follow-up key informant interviews, multiple participants reported that the benefits of participation in the SRCMHC were high, with little to no perceived costs.

5.3. Do the benefits of collaboration outweigh the costs?

While not technically a subscale, the single cost/benefit comparison item included in the PSAT rose 6% from year two to year three, from 4.12 to 4.38, the highest PSAT score (4.38/5.00) at follow-up. Although some interviewees expressed concern regarding the continuation of funding and the use of the shared management information system, most of the interview comments associated with the costs versus benefits of participation in the SRCMHC project were consistent with the high PSAT results.

6. Lessons learned

The program evaluation elucidated distinct areas of frustration among service providers, which enabled the SRCMHC leadership team to address specific areas of concern. The identification of service provider concerns about participation may be a particularly important accomplishment of this evaluation.
The program evaluation revealed that the monthly service provider meetings put into place by SRCMHC leadership had a positive effect on the formation of horizontal relationships (between service providers); however, the evaluation also revealed that service providers continued to feel distanced from the decision-making process and "out of the loop." Future system of care efforts in impoverished communities should be proactive in planning for the difficulties associated with providing services in resource-poor communities and should make greater efforts to provide practical support and to involve frontline staff in policy decisions that affect them.

While differences between service providers and administrators persisted, the program evaluation showed that the SRCMHC collaborative as a whole had become better integrated by the end of year three. Using the levels of integration rubric developed by Woodland and Hutton (2012), at follow-up the SRCMHC had achieved a "unifying" level of integration. According to the framework, the SRCMHC project had maximized its level of integration in several areas, including the development of a shared organizational mission, the formation of subcommittees, and the designation of roles and responsibilities.

While the evaluation demonstrated how the collaborative process can be improved through engaging partner agency organizations, additional stakeholders, including SRCMHC youth and families, were not engaged to the extent needed to adhere to system of care principles. The project leadership and partner organizations included SRCMHC families and youth in several program activities. A parent participating in family therapy with a SRCMHC service provider spoke during a panel presentation at a SRCMHC-sponsored holiday party during program year two. Additionally, several groups of at-risk youth in the SRCMHC targeted community were included in the planning and delivery of the community mental health awareness campaign. In spite of these achievements, the planning, implementation, and evaluation process fell short of the system of care guiding principle that "families and surrogate families of children with serious emotional disturbances should be full participants in all aspects of the planning and delivery of services" (Stroul & Friedman, 1986, rev. ed., p. 17). The inability to fully adhere to this principle may have resulted from a lack of exposure among program leaders and multiple partner agencies to system of care values and principles prior to obtaining the Hogg grant, and from a lack of knowledge of engagement strategies for families in severely disadvantaged communities.

Acknowledgements

The authors would like to thank Josiah Q. Hamilton for his review and editing of the article. The authors would also like to thank Ilana Reisz, Ph.D. for her ongoing support of the program evaluation.

References

Anderson, J., McIntyre, J., Rotto, K., & Robertson, D. (2002). Developing and maintaining collaboration in systems of care for children and youth with emotional and behavioral disabilities and their families. The American Journal of Orthopsychiatry, (4), 514–525.

Ayers, S., & Lyman, R. (2006). The development of a community-based system of care. In A. Lightburn & P. Sessions (Eds.), The handbook of community-based clinical practice (pp. 221–243). New York: Oxford University Press.
Bullock, R., & Little, M. (1999). The interface between social and health services for children and adolescent persons. Current Opinion in Psychiatry, 12, 421–424.

Butterfoss, F. D. (2007). Coalitions and partnerships in community health. San Francisco, CA: Jossey-Bass.

Center for the Advancement of Collaborative Strategies in Health. (2007). Partnership self-assessment tool: Tool report. Retrieved from http://partnershiptool.net/.

Cramm, J. M., Strating, M. H., & Nieboer, A. P. (2011). Development and validation of a short version of the Partnership Self-Assessment Tool (PSAT) among professionals in Dutch disease-management partnerships. BMC Research Notes, 4, 224. Retrieved from http://www.biomedcentral.com/1756-0500/4/224.

Gajda, R., & Koliba, C. (2007). Evaluating the imperative of inter-personal collaboration: A school improvement perspective. American Journal of Evaluation, 28, 26–44.

Gajda, R., & Koliba, C. (2008). Evaluating and improving the quality of teacher collaboration: A field-tested framework for school leaders. NASSP Bulletin, 92, 133–154.

Goodlad, J., Mantle-Bromley, C., & Goodlad, S. J. (2004). Education for everyone: Agenda for education in a democracy. San Francisco, CA: Jossey-Bass.

Green, J., & Thorogood, N. (2004). Qualitative methods for health research. London: Sage.

Hernandez, M. (2000). Using logic models and program theory to build outcome accountability. Education and Treatment of Children, 23(1), 24–40.

Hodges, S., Nesman, T., & Hernandez, M. (1999). Promising practices: Building collaboration in systems of care. Systems of care: Promising practices in children's mental health, 1998 series (Vol. 6). Washington, DC: Center for Effective Collaboration and Practice, American Institutes for Research.

Lasker, R., Miller, R., & Weiss, E. (2001). Partnership synergy: A practical framework for studying and strengthening the collaborative advantage. The Milbank Quarterly, 79, 179–206.

Lasker, R., Weiss, E., & Miller, R. (2001). Promoting collaborations that improve health. Education for Health, 14(2), 163–172.

Lightburn, A. (2008). Actualizing synergy in a community mental health system of care through assessing leadership collaboration. An International Database and eJournal for Outcome-Evaluation and Research, (3). Retrieved from http://www.outcome-network.org.

Mason, J. (2002). Qualitative researching (2nd ed.). London: Sage.

Mayeske, G. W., & Lambur, M. T. (2001). How to design better programs: A staff centered stakeholder approach to program logic modeling. Crofton, MD: The Program Design Institute.

Miller, C., & Ahmad, Y. (2000). Collaboration and partnership: An effective response to complexity and fragmentation or solution building on sand? International Journal of Sociology and Social Policy, 20, 1–39.
National Collaborating Centre for Methods and Tools. (2008). Partnership self-assessment tool. Hamilton, ON: McMaster University. Retrieved from http://www.nccmt.ca/registry/view/eng/10.html.
New Freedom Commission on Mental Health. (2003). Achieving the promise: Transforming mental health care in America. Final report (DHHS Pub. No. SMA-03-3832). Rockville, MD.
Quinn, K., & Cumblad, C. (1994). Service providers' perceptions of interagency collaboration in their communities. Journal of Emotional and Behavioral Disorders, 2, 109–116.
Shonkoff, J. P., & Phillips, D. A. (2000). From neurons to neighborhoods: The science of early childhood development. Washington, DC: National Academies Press.
Stroul, B. A., Blau, G., & Friedman, R. (2010). Updating the system of care concept and philosophy. Washington, DC: Georgetown University Center for Child and Human Development, National Technical Assistance Center for Children's Mental Health.
Stroul, B. A., & Friedman, R. M. (1986). A system of care for children and adolescents with severe emotional disturbances (Rev. ed.). Washington, DC: Georgetown University Center for Child Development, National Technical Assistance Center for Children's Mental Health.
Stroul, B. A., & Friedman, R. M. (2011). Issue brief: Strategies for expanding the system of care approach. Washington, DC: Technical Assistance Partnership for Child and Family Mental Health.
U.S. Department of Health and Human Services. (2011). HHS action plan to reduce racial and ethnic health disparities: A nation free of disparities in health and health care. http://minorityhealth.hhs.gov/npa/files/Plans/HHS/HHS_Plan_complete.pdf.
Weiss, E., Anderson, R., & Lasker, R. (2002). Making the most of collaboration: Exploring the relationship between partnership synergy and partnership functioning. Health Education and Behavior, 29(6), 683–698.
Woodland, R. H., & Hutton, M. S. (2012). Evaluating organizational collaborations: Suggested entry points and strategies. American Journal of Evaluation, 33(3), 366–383.

Jane Hamilton, Ph.D., M.P.H., M.S.W., is a Postdoctoral Research Fellow in the Department of Psychiatry at the University of Texas Medical School at Houston and a recent graduate of the University of Texas School of Public Health. Before returning to school to pursue doctoral studies, Dr. Hamilton was a clinical social worker providing treatment services for children and youth with complex mental health needs. During her doctoral studies, she gained experience evaluating community-based mental health services. Dr. Hamilton received doctoral funding as a training fellow through both the National Cancer Institute and the Health Resources and Services Administration Maternal and Child Health Bureau. She is a member of the American Evaluation Association.

Charles Begley, Ph.D., is a Professor of Management, Policy, and Community Health at the University of Texas School of Public Health, University of Texas Health Science Center at Houston, where he has taught since 1984. He is Co-Director of the Center for Health Services Research. Dr. Begley has authored or co-authored numerous scientific papers, book chapters, books, government reports, and other professional documents over the past 25 years. He teaches master's and doctoral level courses in health policy, health services research, and health economics, and directs health services and policy research projects, serving as principal investigator and co-investigator on grants from national, state, and local government agencies and private entities. His current research interests include disparities in health care, performance of healthcare safety net systems, and health care reform.

Ralph Culler holds a Ph.D. in Psychology from The University of Texas at Austin. He was formerly Associate Director of the Hogg Foundation for Mental Health and has more than 35 years of experience in grant making, evaluation, and working with non-profits. He heads an evaluation consulting practice, Research and Evaluation Services of Texas. Dr. Culler co-directed the Hogg Foundation's evaluation fellowship program and taught a graduate-level evaluation seminar. He has made presentations and conducted workshops on evaluation for groups and organizations both nationally and in Texas. He is a member of the American Evaluation Association.
