Research in Developmental Disabilities, Vol. 13, pp. 429-441, 1992. 0891-4222/92 $5.00 + .00. Printed in the USA. All rights reserved. Copyright © 1992 Pergamon Press Ltd.

A Content Analysis of Written Behavior Management Programs

Timothy R. Vollmer, Brian A. Iwata, Jennifer R. Zarcone, and Teresa A. Rodgers
University of Florida

A method is described for assessing the elements of individual behavior management programs. The content analysis, consisting of 24 items covering the general categories of behavior specification, objectives, program procedures, data collection, and quality assurance, was applied to 141 written behavior programs from two large institutions in different regions of the United States. These data can readily be used to establish a database for program evaluation at both the individual and institutional levels. In addition, to provide a measure of validity, the items included in the content analysis were rated by experts in the treatment of severe behavior disorders. General strengths and weaknesses of the programs, and of the content analysis itself, are discussed in light of their implications for program implementation and evaluation.

This research was supported in part by a grant from the Developmental Disabilities Planning Council. The authors thank Peter Andree and Martins Honk for their assistance in conducting various aspects of this study, and the senior psychologists and psychology staff at both of the facilities involved in the study. Requests for reprints should be sent to Brian A. Iwata, Department of Psychology, University of Florida, Gainesville, FL 32611.

The treatment of problem behavior is a major challenge faced by professionals working in the fields of developmental disabilities and applied behavior analysis; it is also an area full of controversy. Programs aimed at reducing potentially dangerous behavior (such as aggression, disruption, and self-injury) have received particularly close attention in the literature (e.g., Repp & Deitz, 1978; Sajwaj, 1977) and in state regulatory systems (Johnston & Shook, 1987; Spreat & Lipinski, 1986). The reason for such concern is that behavior reduction programs often include techniques that are considered more "intrusive" to the client, such as timeout, response cost, or other punishment procedures (Lennox, Miltenberger, Spengler, & Erfanian, 1988).



The United States Department of Health and Human Services, Health Care Financing Administration (1988) requires intermediate care facilities for the mentally retarded (ICF-MR) to have a policy describing which treatment procedures may be used under what conditions. Furthermore, the Joint Commission on Accreditation of Healthcare Organizations (1989) stated that "objective criteria reflecting current knowledge, clinical experience, and standards are used in the evaluation process for a behavior management program." Both documents called for a committee to review, approve, and monitor programs for inappropriate behavior, but neither specified how this was to be done. In addition, although policies are to be developed regarding the use of specific procedures, no class of procedures is ruled out per se. Thus, a framework for identifying the contents of behavior management programs is provided, but the actual evaluative components are currently unavailable.

Although the limitations and/or acceptability of specific behavioral techniques are issues not yet resolved, it should be possible to specify general criteria that apply to all techniques; however, no such criteria are currently available. A set of evaluative criteria would have several potential advantages: (1) the criteria could guide clinicians during the program development phase; (2) they could be used as one of several methods for evaluating programs at the individual level (e.g., by a review committee); and (3) when applied across a number of individuals, the resulting data could provide an index of program quality at the organizational or institutional level.

Evaluative criteria could be applied to several aspects of treatment programs, such as the soundness of the proposed procedures, the consistency of implementation, and outcome measures. All of these are important to the eventual success or failure of treatment. If, however, a program is unsound from the outset, it is not likely to be effective even if implemented consistently. Therefore, a reasonable starting point for evaluation is the written program itself. The purpose of this study was to apply a set of objective criteria, derived from current practice, to a large-N sample in order to determine the existing contents of behavior reduction programs.

DEVELOPMENT OF THE CONTENT ANALYSIS

Construction of the program evaluation checklist was based on three sources: published material on the elements of effective treatment programs (e.g., Cooper, Heron, & Heward, 1987; Martin & Pear, 1983), various state regulatory standards for behavioral programming (e.g., Florida HRS Manual 160-4, 1989), and published research on the behavioral treatment of self-injurious, aggressive, or other harmful behavior.



The content analysis was designed to include elements that should be positively correlated with programmatic success, all other factors being equal. Thus, if therapist skill level, environmental factors, facilities, materials, and clients were held relatively constant, the success of a treatment program should increase as its percentage of correspondence with the content analysis items increases. Whether the elements are, in fact, correlated with successful programming is a question beyond the scope of this study, because the written programs were not tracked over time to examine actual changes in behavior. Rather, the contents are taken to represent important elements of programs to the extent that they match current trends in standards, in the literature, and in clinical practice.

The analysis, shown in Table 1, consists of five general categories: (A) Behavior, (B) Objective, (C) Program, (D) Data, and (E) Quality assurance. Each category is further subdivided to provide a more detailed analysis.

Some of the items are included in the content analysis to determine whether the program meets various long-held standards for the practice of applied behavior analysis. For example, several items (i.e., A1, C1, C2b-e, and C3a-c) are designed to ensure that the program is sufficiently technological (in the sense of Baer, Wolf, & Risley, 1968); that is, its procedures are specified and defined. Compliance with such items should indicate whether the procedure is likely to be run consistently from day to day or session to session on the basis of the written instructions provided in the program itself, given that a reasonably trained individual is responsible for implementing the treatment. In other words, if the behavioral definition is not based on objectively measurable topographies, it is conceivable that two different observers or therapists might score, or provide consequences for, two separate response classes. Conversely, if the behavioral and stimulus definitions, session times and locations, schedules, and data collection systems are all reasonably well described, the procedure is more likely to be implemented in a consistent fashion.

Other items were included in the content analysis to evaluate whether the program is current with respect to particular trends within the field. For instance, item A2 (context of behavior) reflects the growing emphasis on identifying the function of behavioral disorders (e.g., Iwata, Dorsey, Slifer, Bauman, & Richman, 1982; Repp, Felce, & Barton, 1988). In addition, items such as C2e (schedule appropriate) are based on currently accepted standards provided in the literature for "appropriateness"; that is, based on observed rates of responding, it is possible to establish appropriate reinforcement intervals by calculating average interresponse times (IRTs) and matching reinforcement intervals to those measures (Deitz & Repp, 1983).
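
As an illustration of the arithmetic implied by item C2e, the sketch below converts a baseline response count into a candidate reinforcement interval. It is a minimal example only: the function name and the even-spacing approximation of IRTs are our own assumptions, not a procedure taken from Deitz and Repp (1983).

```python
def suggested_reinforcement_interval(baseline_count, observation_minutes):
    """Return a candidate reinforcement interval (in minutes).

    The mean interresponse time (IRT) is approximated as the observation
    period divided by the number of responses observed at baseline; the
    reinforcement interval is then matched to that mean IRT so that the
    client can plausibly contact the schedule.
    """
    if baseline_count == 0:
        return float(observation_minutes)  # no responding observed at baseline
    return observation_minutes / baseline_count

# Example: 12 responses in a 60-minute baseline gives a mean IRT of
# 5 minutes, suggesting a reinforcement (e.g., DRO) interval of about
# 5 minutes or slightly shorter.
print(suggested_reinforcement_interval(12, 60))  # 5.0
```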



TABLE 1
Behavior Reduction Program Content Analysis

A. Behavior
  1. Definition: Behavior problem is described as an observable event, with response topography specified.
  2. Context: Activities, times, persons, etc., associated with the behavior problem are identified.
  3. Previous Rx: Previous treatments and their results are summarized.
B. Objective
  1. Measurable: The objective of the program is stated in measurable terms (e.g., level of responding).
  2. Time limit: A date or interval of time by which the objective should be reached is specified.
C. Program
  1. Sessions: Times and locations for the program, if not continuous, are specified.
  2. a. Sr+ component: The program contains at least one procedure designed to increase appropriate behavior. If not, score 2b-e as N/A.
     b. Target defined: The appropriate behavior to be established or increased is described in observable terms.
     c. Sr+ specified: The positive reinforcer is specified.
     d. Sr+ schedule: The schedule of reinforcement, if not continuous, is specified.
     e. Schedule appropriate: The reinforcement schedule appears to be appropriate, given the baseline rate of inappropriate behavior.
  3. a. Aversive specified: If included in the program, consequences for occurrence of the behavior problem are specified. If not, score 3b-c as N/A.
     b. Target defined: The inappropriate behavior for which aversive procedures will be used is described in observable terms.
     c. Aversive schedule: The schedule of aversive consequences is specified.
     d. Aversive fading: Methods for eventually eliminating the use of the aversive procedure are described.
D. Data
  1. Baseline: A baseline for the problem behavior is reported and includes a quantitative measure over a specified period of time.
  2. a. Data for Sr+ target: Data will be collected on the occurrence of the appropriate behavior to be increased. If not, score 2b-c as N/A.
     b. Method described: Procedures for collecting data on the behavior to be increased are described.
     c. Method appropriate: Data collection procedures described in 2b are appropriate (e.g., number of responses per day, half-hour intervals, % occurrence across trials, etc.).
  3. a. Data for aversive target: Data will be collected on the occurrence of the behavior to be decreased. If not, score 3b-c as N/A.
     b. Method described: Procedures for collecting data on the behavior to be decreased are described.
     c. Method appropriate: Data collection procedures described in 3b are appropriate (see D2c).
E. Quality Assurance
  1. Review schedule: Monthly review indicated for aversive procedures; quarterly review otherwise.
  2. Consent: Consent indicated for timeout room, restraint, or primary aversive stimulation.

Still other items, although also included because of their potential role in promoting program effectiveness, are primarily intended to safeguard clients' rights in light of recent concern over such issues (e.g., Spreat & Lanzi, 1989).
Both quality assurance items (E1, E2) are included for this purpose, as is item C3d, which takes note of plans to fade an aversive procedure if one is included in the program. Finally, some items were included because various state regulatory agencies have recognized their importance (see Florida HRS Manual 160-4, 1989). Often, these sorts of items serve as checkpoints for the behavior of individuals responsible for managing a program's implementation. Items B1 and B2 are examples of such elements because they suggest that there should be ongoing review of obtained data. Similarly, each of the items on data collection, if included in the written program, might function as a prompt for effective management and, therefore, should improve the program's efficacy, because such items increase the likelihood of documentation and data analysis.

METHOD

Social Validation

In addition to our attempts to draw content items from the current literature on behavior management programs, the content checklist was mailed to 39 experts for further validation. Two criteria were established for selecting experts: (1) the individual had current (as of early 1991) membership on the editorial board of any of three journals (Behavior Modification, Journal of Applied Behavior Analysis, Research in Developmental Disabilities) that frequently publish research on the treatment of behavior disorders; and (2) the individual had published research on the development of behavior management procedures. Each expert was asked to rate the individual contents on a scale of 1 to 5, where 1 = essential, 2 = important, 3 = neutral, 4 = unimportant, and 5 = irrelevant. Some items were combined on the survey sheet, such as D2a (data will be collected on the occurrence of the appropriate behavior) and D3a (data will be collected on the occurrence of the behavior to be decreased); items such as these were collapsed because they represented the same general aspect of a written behavior management program. Finally, the respondents were asked to list any further elements of a behavior management program that they considered "essential" (i.e., items that would have received a "1" rating had they been on the list). Surveys returned within 4 weeks of the post date were included in the validation analysis.
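
For readers who wish to tabulate such ratings themselves, the following sketch (hypothetical item labels and ratings, not the authors' analysis code) shows how a mean rating and the percentage of raters scoring an item "essential" could be computed.

```python
from statistics import mean

# Hypothetical ratings (1 = essential ... 5 = irrelevant) from five raters;
# the actual survey had up to 29 respondents per item.
ratings = {
    "A1 Definition": [1, 1, 1, 2, 1],
    "A3 Previous treatment": [2, 3, 1, 3, 2],
}

for item, scores in ratings.items():
    pct_essential = 100 * sum(1 for s in scores if s == 1) / len(scores)
    print(f"{item}: mean rating = {mean(scores):.2f}, "
          f"rated essential by {pct_essential:.0f}% of raters")
```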

Facilities and Program Selection

Two large public residential facilities located in different regions of the United States participated. The client population of facility A was approximately 1,050, whereas facility B had approximately 900 residents.
Both institutions required written programs for behavior reduction, which were supervised by psychologists who were each responsible for between one and five residential units (each unit housed anywhere from 8 to approximately 25 residents at any given time). In addition, at both facilities a senior psychologist supervised behavioral programming for the entire facility, and a requirement existed that "high-risk" behavior reduction programs be routinely reviewed for approval by a committee made up of a subset of the psychology staff.

There were also a number of differences between the facilities. For example, facility B's state required a certification examination demonstrating minimal competency in order to write and implement behavior reduction programs involving potentially high-risk procedures. In addition, facility B had at least a 15-year history of collaborative work with a nearby university and was a frequent site for clinical research on both behavior acquisition and reduction programs. Thus, it was likely that facility B's treatment staff had a higher degree of competence in behavior analysis.

For the purposes of this study, 90 behavior reduction programs were selected for analysis from facility A. These programs comprised the existing set of approved programs for what were considered the most severe and "high-risk" behavior disorders at that facility in 1988 (the programs were not necessarily originally written during that year). Fifty-one programs were reviewed from facility B. These included the entire existing set of approved programs for high-risk behavior disorders written during 1989 (through September); programs from 1988 and 1987 were then selected randomly to meet a predetermined sample size of 50. One extra program was reviewed because the final client folder (selected randomly) contained two programs, and both were reviewed, bringing the total sample size to 51. At both facilities, "high-risk" behavior disorders were operationally defined as severe aggression, self-injury, property destruction, and, in rare cases, aberrant sexual behavior; "high-risk" procedures generally involved some form of punishment contingency and included variations of overcorrection and timeout procedures.

PROCEDURE

Each program was scored either by one of the authors or by a graduate student in behavior analysis assisting with the study. First, the program was read in its entirety; then the reviewer compared the program with the checklist, item by item, verifying the presence of each element. Each item included in the written program was scored on a data sheet as a plus (+); if the specified item did not exist in the program as defined, it was scored as a minus (-). In some cases, an item on the checklist was scored as not applicable (NA).
For example, if no method was described for collecting data on the positive reinforcement target (item D2b), D2b was scored as a minus, but item D2c (method appropriate) was scored as NA because, in the absence of D2b, its appropriateness could not be evaluated. The percentage of correspondence between the items on the checklist and the items in the programs was then calculated on an item-by-item basis for each facility's programs.
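
The scoring summary just described can be computed mechanically. The sketch below uses invented scores and an assumed treatment of NA entries (excluded from each item's denominator, which the article does not specify) to show the item-by-item calculation.

```python
# Invented +/-/NA scores for three hypothetical programs on three checklist
# items; the real analysis covered 141 programs and 24 items.
programs = [
    {"A1_definition": "+", "C2a_sr_component": "+", "C2c_sr_specified": "+"},
    {"A1_definition": "-", "C2a_sr_component": "-", "C2c_sr_specified": "NA"},
    {"A1_definition": "+", "C2a_sr_component": "+", "C2c_sr_specified": "-"},
]

for item in ["A1_definition", "C2a_sr_component", "C2c_sr_specified"]:
    applicable = [p[item] for p in programs if p[item] != "NA"]
    pct_plus = 100 * applicable.count("+") / len(applicable) if applicable else 0.0
    print(f"{item}: {pct_plus:.1f}% of applicable programs scored (+)")
```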

Reliability

Interobserver agreement was assessed for over 20% of the scored programs (29 programs). Programs selected for the reliability analysis were chosen randomly (from both facilities) before either observer scored the program, and agreement scores were calculated by dividing the number of agreed-upon components by the total number of scored components and multiplying by 100. Overall agreement was 90.3%. Interobserver agreement was also assessed on an item-by-item basis for those same 29 programs; those percentages are presented in Table 2.
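
Expressed as a formula (our notation; the article describes the calculation only in words), the total-count agreement score for a program is

\[
\%\ \text{agreement} = \frac{\text{number of components scored identically by both observers}}{\text{total number of components scored}} \times 100 .
\]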

RESULTS

Validation

Twenty-nine of the 39 experts returned the validation survey within 4 weeks. Table 2 shows the mean rating for each item from those experts. For all but two items, the mean rating was between "1" and "2" (essential and important, respectively). It is also noteworthy that for 76% of the items a majority of raters rated the item "essential" (a rating of "1"), and for every item a majority of raters rated it either "essential" or "important" (a rating of "1" or "2"). Some items were clearly viewed as more neutral than others; for example, item A3 (previous treatment) was rated "neutral" in nine cases, "unimportant" in one case, and "irrelevant" in one case.

Data were also compiled on omitted items that the experts felt were "essential" and should have been included (recall that the survey provided for such additions). Forty-seven percent of those surveyed did not add any items, and several others recommended rather idiosyncratic additions (in that either one or no other raters suggested the same item). Certain recommended additions, however, occurred at a relatively higher frequency. For example, approximately 14% of the raters suggested that the checklist should include the name of the person responsible for supervision and monitoring. About 14% stated that the checklist should more explicitly call for a functional analysis of the behavior disorder (suggesting that item A2 was not explicit enough). Approximately 10% of the raters suggested adding an item specifying whether the alternative response being reinforced was functionally equivalent to the aberrant behavior (i.e., would function to obtain the same class of reinforcer). Ten percent also called for the name of the person responsible for carrying out the program, and 10% suggested an item signifying a generalization or maintenance component.

TABLE 2
Item-by-Item Percent Agreement Scores and Mean Validation Ratings

Program Component                      % Agreement    Mean Rating
Behavior:
  Definition                               89.7           1.09
  Context                                  86.2           1.47
  Previous treatment                       82.8           2.30
Objective:
  Measurable                               93.1           1.25
  Time limit                               89.7           2.29
Program:
  Sessions                                 86.2           1.78
  Positive reinforcement specified         86.2           1.26
  Target defined                           84.6           1.19
  Reinforcer specified                     89.7           1.19
  Reinforcement schedule                   84.6           1.58
  Schedule appropriate                     96.2           1.71
  Aversive specified                       86.2           1.09
  Target defined                          100.0           1.19
  Schedule                                100.0           1.33
  Aversive fading                         100.0           1.88
Data:
  Baseline reported                        86.2           1.61
  Data for reinforcement target            89.7           1.19a
  Method described                         92.6           1.37b
  Method appropriate                      100.0           1.27c
  Data for aversive target                 89.7           1.19a
  Method described                         92.6           1.37b
  Method appropriate                       96.3           1.27c
Quality assurance:
  Review schedule specified                89.7           1.44
  Consent for aversives                   100.0           1.36

a Data items rated by experts as one item.
b Description items rated by experts as one item.
c Appropriateness items rated by experts as one item.


Content Analysis

Figure 1 shows the percentage of correspondence between each component of each facility's programs and the items on the checklist. Overall, correspondence was greater in facility B's programs on 11 of the 24 items, greater in facility A's programs on 7 of the 24 items, and roughly equal (within 5%) for the two facilities on 6 items.



FIGURE 1. Percentage of programs for facilities A and B scored positive (+) for each item in the content analysis.

Several program items showed very high correspondence with checklist items in both facilities.
Nearly all of the items related to the use of aversive procedures (see items C3a-c and D3a-c) were scored high, with the exception of the category for "aversive fading," which was never included in any of the programs. Both facilities also had high levels of correspondence for data collection, except for data on the positive reinforcement target response. Facility B showed high correspondence (above 90%) on several items for which facility A showed what might be considered a low level of correspondence (below 80%). Using these criteria (greater than 90% vs. less than 80%), the items on which facility B was singularly high in correspondence included behavioral definition (94.1%), measurable objective (96.1%), scheduling reinforcers (95.5%), reporting baselines (98%), and collecting data for aversive target responses (98%). Conversely, facility A did not correspond highly in any area in which facility B's correspondence was low; all of the items on which facility A scored well were also high-correspondence items for facility B. In addition to the low correspondence on the aversive fading item in both facilities, consent for aversive procedures was extremely low; in fact, facility B did not include such an item in any of the reviewed programs. Also, a description of previous treatments was absent from more than 50% of the programs reviewed in both facilities.

DISCUSSION

Results of this study indicate that an objective system for evaluating the written aspects of behavior reduction programs can be used to determine the technical contents of those programs. One specific use of the content analysis is as a tool for surveying the potential effectiveness and quality of one facet of treatment for institutional residents. The high degree of reliability across observers achieved in this study suggests that the content analysis reduces the likelihood that judgments about a program's potential effectiveness will rest on subjective opinion. Thus, the success or failure of a particular facility in a programmatic survey would not depend on the whims of one observer across time or on the divergent "opinions" of two separate evaluators. However, this study did not assess the degree of training required to score the contents of a program with an acceptable degree of reliability.

The application of the content analysis in this study revealed some potential strengths and weaknesses in behavior reduction programs. In some cases the strengths and weaknesses were specific to one facility; in other cases they were present in both facilities. In general, facility B was more likely than facility A to include items, and this was probably a function of the requirements for certification and the ongoing interaction with a nearby university.
Thus, it is possible that facility B's level of correspondence reflects very high performance, and that the level for facility A, although lower, reflects the current status in most states.

Independent of the results as they relate to the two participating facilities, the present data address some general issues within the field of developmental disabilities. For example, the generally high rate of correspondence on items involving aversive procedures suggests that such procedures are used with general concern and caution. On the other hand, the absence of compliance on the "aversive fading" component might provide the field with a baseline from which to work and improve.

Some other aspects of the results seem to be more artifactual. For example, correspondence on the "consent for aversives" category was probably low simply because most facilities require such information to be placed in other locations, such as client files, rather than in the program itself. A similar argument might be made for facility B with respect to "review schedule specified," insofar as it is likely that the chief psychologist tracks programs and review schedules centrally (whereas this item is apparently tracked on the written program itself at facility A).

The results of the validation procedure are also noteworthy. They suggest that most items on the checklist were seen as important (or even essential) to a behavior management program. However, it appears that some professionals do not regard mention of previous treatment as a necessary inclusion, probably because access to previous treatment procedures and their results is available through other means (which might also account for the low level of correspondence on this item in both facilities). In addition, the item related to specification of a "time limit" for the objective was generally seen as less important than other items; this finding probably reflects the general tendency in the field to base decisions about program completion on the data rather than on any preset interval of time. Overall, results of the survey validated the specific items included in the present analysis. Future research could examine the correspondence between written programs and the other items suggested by raters, such as an analysis of the functional properties of behavior.

It is important to emphasize that a high degree of correspondence with a content analysis such as the one described here does not ensure a program's success. Indeed, even if a program's contents correspond with 100% of the items on the checklist, there is no guarantee that the program will be implemented correctly. Furthermore, even if one could ensure perfect agreement between a program plan and its implementation, the relation between the number of components included in a program and its eventual effectiveness is unknown. Thus, a review of written program instructions can be seen as
an important first step, but by no means an end point, for program analysis. Future studies could compare the effectiveness of programs that include 100% of the specified criteria with that of programs containing only a certain percentage of the criteria (which could be varied parametrically). Another important extension of this study would be the development of methods for evaluating program implementation. For example, one could observe the program while it is in progress and score the presence or absence of items such as "reinforcer delivered at appropriate time" or "session run at correct time of day." Similar extensions are possible for other important facets of behavioral programs.

There is a great deal of current concern over the quality of services provided to developmentally disabled persons, including the quality of behavior reduction programs. Perhaps, using the current data as a starting point, professionals in the service-providing professions can begin to establish uniform evaluative criteria.

REFERENCES

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97.
Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied behavior analysis. Columbus, OH: Merrill.
Deitz, D. E. D., & Repp, A. C. (1983). Reducing behavior through reinforcement. Exceptional Education Quarterly, 3, 34-46.
Florida HRS Manual 160-4. (1989, April). Tallahassee, FL: Florida Department of Health and Rehabilitative Services, Developmental Services Program Office.
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1982). Toward a functional analysis of self-injury. Analysis and Intervention in Developmental Disabilities, 2, 3-20.
Johnston, J. M., & Shook, G. L. (1987). Developing behavior analysis at the state level. The Behavior Analyst, 10, 199-233.
Joint Commission on Accreditation of Healthcare Organizations. (1989). Consolidated standards manual. Chicago: Author.
Lennox, D. B., Miltenberger, R. G., Spengler, P., & Erfanian, N. (1988). Decelerative treatment practices with persons who have mental retardation: A review of five years of the literature. American Journal on Mental Retardation, 92, 492-501.
Martin, G., & Pear, J. (1983). Behavior modification: What it is and how to do it (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Repp, A. C., & Deitz, D. E. D. (1978). On the selective use of punishment: Suggested guidelines for administrators. Mental Retardation, 16, 250-254.
Repp, A. C., Felce, D., & Barton, L. E. (1988). Basing the treatment of stereotypic and self-injurious behaviors on hypotheses of their causes. Journal of Applied Behavior Analysis, 21, 281-289.
Sajwaj, T. (1977). Issues and implications of establishing guidelines for the use of behavioral techniques. Journal of Applied Behavior Analysis, 10, 531-540.
Spreat, S., & Lanzi, F. (1989). Role of human rights committees in the review of restrictive/aversive behavior modification procedures: A national survey. Mental Retardation, 27, 375-382.
Spreat, S., & Lipinski, D. (1986). A survey of state policies regarding the use of restrictive/aversive behavior modification procedures. Behavioral Residential Treatment, 1, 137-152.
U.S. Department of Health and Human Services, Health Care Financing Administration. (1988, June 3). Medicaid program: Conditions for intermediate care facilities for the mentally retarded: Final rule. Federal Register, 53, 20447-20505.
