Viewpoint


Materials Likely to Be of Some Use to Those Designing Prevention Programs, Particularly Primary Prevention Programs." Not every item--indeed almost no item--describes a complete primary prevention program together with a demonstration of program effects. Most, however, do contain material relevant to designing, implementing, or evaluating prevention programs. It is when a write-up has one or more of these components that I find it falls into a shadowy area insofar as Cowen's definition is concerned.

However, clarifying the point that the Clearinghouse does not aim to provide exemplars of primary prevention programs makes it even more obvious that the need for such a service exists. As Emory Cowen made clear in a major theme of his Editorial:

Although primary prevention's current generative base cannot be described as robust, it is sufficient to justify development of diverse primary prevention programs. The field could profit enormously from a small cluster of heuristic program demonstrations, each based on: a) a structurally-demanding, pure definition of primary prevention, and b) supporting research documentation--in other words, the "modeling" of excellence in primary prevention.

In brief, I am saying that Cowen is quite right about the need for a rigorous definition of primary prevention and for spotlighting outstanding programs, but that using such a definition as a criterion for inclusion in the Clearinghouse, and limiting the Clearinghouse to such material, might be not only difficult but inappropriate. By all means, if you are looking for an effective model of a primary prevention program, ask Emory--but permit the Clearinghouse to let a thousand flowers bloom.

Justin M. Joffe
Psychology Department
University of Vermont
Burlington, Vermont

Creative Controversy

The Viewpoint article in the Fall 1981 issue, "Program development vs. research: Which first?" will no doubt generate the "creative controversy" that Dr. Hollister hoped it would. My contribution to the controversy is as follows.

If only programs of proven worth are funded, nothing new can ever be attempted. The question posed in the article's title therefore begs the real question. The meaningful question is: Should new prevention programs, when they are funded, be evaluated? Prevention programs pose no special problems for evaluation. I cannot understand why such new programs should not be subject to evaluation.

Journal of Primary Prevention


If "it is most difficult to prove that you have had an impact on something that didn't happen," then it would be difficult to prove that dams prevent floods or inoculations prevent diseases.

The use of funds for high-risk ventures in prevention requires that there be an evaluation of results. If we cannot specify outcomes in measurable terms, we should not use the funds. The requirement that we evaluate our efforts does not imply that we avoid high-risk efforts. Nothing in the orientation that we specify and measure outcomes requires that we succeed dramatically or quickly--only that we provide feedback so we know where we are and where we are going. Sufficient work has been done in research design to deal with the technical problems inherent in evaluation studies.

Practitioners in prevention must understand that goodwill and best efforts are not enough to justify funding their programs. Legislators, and those in government responsible for distributing funds, must learn to demand an evaluation of the results of programs, and must learn that rapid, dramatically effective solutions to complex problems are not possible.

Sheldon Blackman
St. Vincent's Medical Center of Richmond

Hollister Replies

I am extremely pleased you felt motivated to respond to our Viewpoint article. As you detected, there is some controversy, and I sense you feel very strongly about "required" and "demanded" evaluation. Let me try, without taking sides, to portray for you some of the positions taken by those on the other side.

I attended a conference on prevention in Atlanta at which some 28 prevention programs, being conducted by small-town and rural mental health centers, were presented and discussed. Some of these were not attempting to evaluate because:

1. The cost of doing the evaluation would exceed the money we are spending to do the intervention. Funds are scarce.

2. These prevention programs were requested in need surveys or by local citizens' groups; they are well liked and well attended, though their scientific efficacy is not proven. We're not going to stop a good program because we don't have the staff, time, and money to evaluate it.

3. We don't do evaluation because there are too many intervening factors we can't control; evaluation results might be specious, too distantly related, or too atomistic to really assess the impact of the program on its consumers.
