Differing Perspectives on Mental Health Evaluation

BY GEORGE SPIVACK, PH.D., CATHERINE HARAKAL ST. CLAIR, PH.D., JEROME SIEGEL, PH.D., AND JEROME J. PLATT, PH.D.

Small-group workshops of nonevaluators with differing interests in mental health discussed evaluation priorities for community mental health centers. Participants included center professionals, concerned citizens, and funders. A majority of the participants placed high value on accountability from a consumer standpoint and relatively low value on center management issues and cost or equity of service delivery. Center staff were more interested in efficiency and effectiveness, while outsiders with vested interests were more concerned with community involvement. The authors summarize seven role perspectives and discuss their implications for the establishment of evaluation priorities within a center.

PSYCHIATRISTS IN community mental health centers, whether they are administrators, clinical directors, or therapists, share with professionals in other disciplines an increasing pressure for accountability and evaluation. This demand stems from a variety of sources: government officials with responsibility for allocation of funds, administrators who have restricted budgets that must be efficiently deployed, consumers who expect effective services, and mental health professionals themselves (1). Despite the widespread interest in mental health program evaluation, there has been little formal training or exchange of information available to interested individuals who are not professional evaluators. Yet these individuals (clinicians, administrators, referring agencies, government officials, and concerned citizens) are those who provide, fund, and demand the services that must be evaluated to satisfy the current pressures for accountability. We are not aware of any efforts to teach these special interest groups to appreciate evaluation from the vantage point of any other concerned group. Although they are all involved and have a vested interest in mental health services, they have not been helped to see the issues and perspectives that are involved in their evaluation.

For these reasons, we submitted a proposal for a training grant in mental health evaluation education to the Psychiatry Training Branch of the National Institute of Mental Health and received funding beginning in June 1974. Our specified major objectives were as follows: 1) to educate a variety of mental health professionals, agencies, and involved lay groups about the meaning and implications of program evaluation and accountability in the delivery of mental health services, 2) to sensitize each group to the vantage point of the others and especially to the legitimacy of the various possible perspectives, and 3) to enumerate possible solutions to evaluation and accountability issues in a multiperspective social context. Our purpose was less to educate about what evaluation should be than to increase awareness that there are different points of view about evaluation, all of which have at least situational validity.

In the first phase of the program small-group workshops were planned to provide an opportunity for individuals personally involved in the delivery of mental health services and with different vested interests to come together to share, debate, and expand their perspectives about evaluation. An anticipated second phase of the training would include a means of bringing these vantage point differences to a wider audience for discussion in a regional conference. This report describes the areas of agreement and differences in viewpoints on evaluation that emerged during the small-group workshops of people with different role perspectives.

METHOD

The planning group consisted of four senior research evaluators from a large urban community mental health and mental retardation (CMH/MR) center. The director of evaluation for Philadelphia County and the evaluation specialist from the Alcohol, Drug Abuse, and Mental Health Administration region III participated actively as consultants with the core planning group.

The authors are with the Research and Evaluation Service, Community Mental Health and Mental Retardation Center, the Hahnemann Medical College and Hospital of Philadelphia, Hotel Philadelphia, 314 N. Broad St., Philadelphia, Pa. 19102, where Dr. Spivack is Director, and Drs. St. Clair, Siegel, and Platt are Senior Research Evaluators. This work was supported in part by grant MH-13844 from the Psychiatry Training Branch of the National Institute of Mental Health. The authors gratefully acknowledge the help of Ms. Miriam Scheiber, research assistant, and the advice of Dr. Martin McGurnin, Philadelphia Community Mental Health/Mental Retardation Program, and Dr. Walter Lauterbach, Alcohol, Drug Abuse, and Mental Health Administration region III.

Participants

A major concern was which vantage points on mental health services should be represented. The five sessions eventually involved the following 39 participants (plus 5 group leaders and 5 evaluation specialists): 3 administrators of CMH/MR centers (e.g., center directors), 5 directors of clinical services such as outpatient or partial units, 4 therapists, 5 community board members from CMH/MR centers, 3 county CMH/MR center analysts (program monitors and funders), 2 representatives from third-party funding sources (health maintenance organizations [HMOs]), 5 representatives from agencies that use CMH/MR center services for their clients, 4 public school counselors, 3 representatives from the Mental Health Association of Southeastern Pennsylvania (MHA), and 5 representatives from the Philadelphia Association for Retarded Children (PARC). Attempts to include legislators or former clients did not prove feasible. Finally, although it seemed unrealistic to exclude experienced research and evaluation professionals from any serious discussion of evaluation, the planning group did not want the exchange of viewpoints to revolve around technical issues of “how to” or “how much does it cost.” A format evolved in which professional evaluators were represented in a separate discussion following the original group’s exploration of evaluation issues.

The planning staff contacted potential Philadelphia area participants within each category by phone, describing the evaluation workshops and emphasizing that the person was being asked to represent a given role (e.g., clinical director, therapist, board member, etc.). He was asked to express concerns about community mental health evaluation from the viewpoint of his present role. Almost all of the responses were favorable. An honorarium for participation was offered, but interest in evaluation is currently so high that several participants indicated the honorarium was unnecessary.

The Questionnaire

An explicit list of possible evaluation questions was considered the best method to facilitate 1) an exchange of ideas among participants who might not otherwise have had the same referents for the term “evaluation,” 2) objective measurement of the existence of different vantage points toward evaluation (heretofore only an assumption), and 3) assessment of whatever consensus might exist about what constitutes good evaluation. The questionnaire listed 36 center evaluation topics from the literature. Topics included needs assessment, effectiveness of treatment, utilization of state hospitals, staff time utilization and morale, and cost of therapy (see table 1 for further examples). Evaluation questions in simple language (e.g., “How aware are citizens of the center and its services?”) were mailed to the participants before the meeting. Each person was asked to circle the 10 topics of most value to him and to underline the 10 of least value to him in his specific role. Participants were also told during the phone contact that there would be open group discussion of reasons for and against each type of evaluation. Differences of opinion were expected, and the goal was to appreciate different viewpoints and to determine whether there is consensus about which types of evaluation are best and why.
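As a concrete illustration of this forced-choice format, the sketch below shows one way the returned questionnaires could be tallied into per-topic counts of high, low, and not-chosen ratings. It is a minimal Python sketch written for this summary, not part of the original study; the function name, data structures, and any example ballots are hypothetical.

```python
# Minimal sketch (hypothetical, not from the original study): tally the forced-choice
# questionnaire described above. Each participant circles 10 of the 36 topics as being
# of high value and underlines 10 as being of low value; the rest count as not chosen.
from collections import Counter

N_TOPICS = 36  # number of center evaluation topics listed on the questionnaire

def tally_votes(ballots):
    """ballots: one (high_topics, low_topics) pair of sets per participant.

    Returns {topic_number: (high_count, low_count, not_chosen_count)}.
    """
    high, low = Counter(), Counter()
    for high_topics, low_topics in ballots:
        # The forced-choice rules: exactly 10 high, exactly 10 low, no overlap.
        assert len(high_topics) == 10 and len(low_topics) == 10
        assert not high_topics & low_topics
        high.update(high_topics)
        low.update(low_topics)
    n_raters = len(ballots)
    return {
        topic: (high[topic], low[topic], n_raters - high[topic] - low[topic])
        for topic in range(1, N_TOPICS + 1)
    }
```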


The Workshops

Discussion focused on similarities and differences among the respondents’ choices on the questionnaires (visually displayed on a large wall chart), beginning with evaluation topics accorded high value. In the discussions, participants were identified by their roles in order to keep discussion focused on role-based perspectives. The focus was always on the reason behind the choice. Discussion was most lively when it focused on topics about which there was a clear initial difference of opinion. Participants often commented on the difficulty they had rating any topic as low in interest and value and occasionally wanted to change their vote.

Following a break, the group reassembled and the focus of the workshop changed. The discussion leader introduced the group to a research and evaluation specialist, who served as a technical specialist and expert on evaluation. His role was to provide feedback to the group on the practical implications of the evaluation topics under discussion (such as differential costs in time and effort of different kinds of studies, feasibility and likelihood of getting useful results, and findings from previous studies) and to present information useful to nonevaluators that could only come from an experienced evaluator. The data in this report cover only the votes and analyses of the taped discussions before the evaluator’s contribution (i.e., the participants’ relatively naive viewpoints of evaluation).

RESULTS

Ratings of Evaluation Topics

The significant chi-square analyses in table 1 are based on the initial high and low votes of the 39 participants on the 36 evaluation topics. Because they represent a forced choice among alternatives, the votes reflect what a mixed and vested interest group might decide if they had to determine priorities among a number of worthwhile evaluation topics. It is noteworthy that three high value topics reflect a “consumer orientation” (i.e., citizen awareness, patient satisfaction with services rendered, and accessibility of services to residents). Two topics generally accorded high value indicate an overall orientation in favor of nontraditional consultation and education (item 18) and preventive (item 16) services, suggesting a “supportive-preventive” orientation. The selection of the type of topic represented by item 11 further suggests an “accountability” orientation.

The data on the low value topics suggest that the group was least interested in issues relating to internal management (i.e., information dissemination, staff time utilization, and distribution of administrative personnel). Such information was not of interest to most participants. These data also suggest that the group as a whole perceived as relatively unimportant two questions that have been given high priority by government sources interested in evaluation: cost of delivery of therapy (item 17) and equity of service delivery as measured by the match between patient population and the community in racial and ethnic background (item 27). Despite the high value placed by others on evaluation of treatment effectiveness, our participants did not consider this issue important. This could be considered a significant finding, in view of the fact that 5 items touched on this aspect of evaluation: reduction of state hospital utilization, effectiveness of different types of treatment with different types of patients, changes in symptoms or adjustment on the basis of treatment, attainment of therapists’ goals in treatment, and duration of treatment effects.

TABLE 1
Evaluation Topics Selected as Being of High and Low Value

[Table columns: Item; Observed Distribution of Ratings* (High, Low, Not Chosen); Significance of Difference from Expected Distribution. Cell values not legible in this copy.]

High value
5. How aware are citizens of the center and its services?
11. Does the center evaluate how well it is doing its job?
14. Are patients satisfied with the services they get?
16. What prevention programs have been developed by the center?
18. How good is the center’s consultation and education effort to agencies and groups in the community?
24. Are services accessible?

Low value
17. What does it cost to deliver an hour of therapy?
20. How well does information get disseminated within the center?
22. Is the center top-heavy with administrative personnel?
25. Do patients from different racial or ethnic backgrounds differ in their presenting problems?
27. Does the racial and ethnic composition of the patient group match that of the catchment area?
30. How do different types of staff persons (e.g., psychiatrist, psychologist, social worker, mental health worker) spend their working hours?

*With each of the 39 participants choosing 10 questions as being of high value and 10 as being of low value, the expected distribution for each item would be 11 in the high group and 11 in the low group.
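The article does not describe the computation behind the significance column of table 1, but the footnote’s expected distribution suggests a chi-square goodness-of-fit test of each topic’s observed (high, low, not chosen) counts against an expectation of roughly 11, 11, and 17 out of 39 raters. The following Python sketch is a reconstruction under that assumption; the scipy call is standard, but the observed counts in the usage example are invented for illustration.

```python
# Hedged reconstruction of the kind of chi-square test table 1 appears to report.
# Expected counts follow from the forced-choice format: 39 raters each mark 10 of
# the 36 topics high and 10 low, so each topic expects 39*10/36 (about 11) high
# votes, the same number of low votes, and the remainder not chosen.
from scipy.stats import chisquare

N_RATERS, N_TOPICS, N_PICKED = 39, 36, 10
EXPECTED_HIGH = N_RATERS * N_PICKED / N_TOPICS      # about 10.8, rounded to 11 in the footnote
EXPECTED_LOW = EXPECTED_HIGH
EXPECTED_NOT_CHOSEN = N_RATERS - EXPECTED_HIGH - EXPECTED_LOW

def test_topic(observed_high, observed_low):
    """Goodness-of-fit test for one topic's observed rating distribution."""
    observed = [observed_high, observed_low, N_RATERS - observed_high - observed_low]
    expected = [EXPECTED_HIGH, EXPECTED_LOW, EXPECTED_NOT_CHOSEN]
    return chisquare(f_obs=observed, f_exp=expected)  # chi-square statistic and p value

# Invented example: a topic chosen as high value by 25 raters and low value by 2.
stat, p = test_topic(observed_high=25, observed_low=2)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
```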

Ratings as a Function of Role
