
Letters are welcomed and will be published, if found suitable, as space permits. The editors reserve the right to edit and abridge letters, to publish replies, and to solicit responses from authors and others. Letters should be submitted in duplicate, double-spaced (including references), and generally should not exceed 400 words.

Planning vs Evaluation in the Health Services

Comments from Deniston

In the first paragraph of his paper, Program Evaluation Techniques in the Health Services (Am. J. Public Health 66:1069-1073, 1976), Meredith points out the lack of agreement about the meaning of the word evaluation. He does not note similar problems with the word planning. He offers the definition for evaluation, ". . . finding out what worked, what did not work, and why." I would suggest a similar definition for planning, ". . . estimating what might work, what might not work, to what extent, and at what cost." The difference is in time perspective: evaluation is retrospective, planning is prospective. He notes that the purpose of retrospective evaluation is to facilitate planning. Our concern, then, is the extent to which the two models he discusses would facilitate planning future programs. (He tries to draw a nondistinction between ongoing and retrospective evaluation; ongoing evaluation is retrospective with respect to how the program has gone so far, for the purpose of planning the rest of the program.)

Our first concern with the first model is the "index of changes" . . . "a rough measure of the effect of the program on the individual." Here he glosses over a major problem in evaluation, the issue of causality. The example is the difference in function between admission and discharge of clients of a mental health program. We are reminded of Herzog's discussion of this issue, "Two out of Three Improve, With or Without Treatment."1

Our next concern arises when the discussion turns to the comparative effectiveness of four drug treatment programs. (We would prefer "adequacy" when comparing alleged program effect to total possible effect.)2 The discussion assumes that clients of the four programs were similar, so that differences in effectiveness (and later efficiency) are due to differences in the programs. But it seems highly likely in this example, an agency with four different drug treatment programs, that the clients are not similar: different types of clients were assigned to different types of programs based on past evaluation. The types of planning estimates recommended are not called for if the clients were not similar. Finally, the model does not seem to meet one of the criteria suggested, that evaluation findings help us understand why a program worked, to the extent that it did.

We have similar concerns with the second "evaluation" model. First, we again seem to avoid the issue of causality in relation to programs. Second, we have the same issue of similarity of patients as we start comparative evaluation of different treatment programs. Finally, the discussion seems to propose this model almost exclusively as a predictive planning model, not a retrospective evaluation model. This is an important distinction to make; the paper might have been better titled "Planning Techniques in Health Services."

O. L. Deniston, Associate Professor
Dept. of Health Planning and Administration
University of Michigan, Ann Arbor

REFERENCES
1. Herzog, E. Some Guidelines for Evaluation Research. Children's Bureau Pub. No. 378, USDHEW, 1959.
2. Deniston, O. L., Rosenstock, I. M., and Getting, V. A. Evaluation of program effectiveness. Public Health Reports 83:4, April 1968.

Author's Response

Professor Deniston raises some issues of interest to evaluation researchers. The first, that of causality, is an issue that plagues everyone in the helping professions. Determining whether the recipients of our "help" were better off with it than without it is no easy task, primarily because of the difficulty, cost, and lack of time for controlled experimentation on similar human subjects. On this same topic, one of the conclusions of the evaluation study in reference 8 of the subject paper was that the experimental program being investigated did not improve the subjects' functioning at all (nor did any other program). But its immense contribution was in preventing the regression of these subjects' painfully learned abilities.

Professor Deniston's second point, that the clients of the four programs may not have been similar, illustrates the need for qualitative input to decisions regarding programs, in addition to the use of model results. Models can be used for many purposes. In this case, if the patients were all similar, the model could be used to help select the best program or to eliminate unnecessary programs. If the patients were all dissimilar, the model results must be reviewed with that fact borne in mind. In that case, a decision might still be to eliminate a program; or, if a program looked particularly weak, the decision might be to keep the program, in spite of the model results, because
