Research in Developmental Disabilities 35 (2014) 1757–1765


Training residential staff and supervisors to conduct traditional functional analyses

Joseph M. Lambert a, Sarah E. Bloom b,*, Casey J. Clay c, S. Shanun Kunnavatana c, Shawnee D. Collins d

a Department of Special Education, Vanderbilt University, United States
b Department of Child and Family Studies, University of South Florida, United States
c Department of Special Education and Rehabilitation, Utah State University, United States
d Chrysalis, United States

ARTICLE INFO

Article history:
Received 28 August 2012
Received in revised form 14 February 2014
Accepted 19 February 2014
Available online 20 March 2014

Keywords:
Functional analysis
Training
Group home

ABSTRACT

In this study we extended a training outlined by Iwata et al. (2000) to behavioral technicians working for a residential service provider for adults with developmental disabilities. Specifically, we trained ten supervisors and four assistants to organize, conduct, collect data for, and interpret the results of traditional functional analyses (FA; Iwata et al., 1994). Performance was initially low and improved across all measures following training. Results extend previous FA training research by including a tangible condition and by demonstrating that individuals with little to no prior experience conducting FAs can be taught all of the skills required to autonomously conduct them in a relatively short period of time.

© 2014 Elsevier Ltd. All rights reserved.

* Corresponding author at: Department of Child and Family Studies, 13301 Bruce B. Downs Blvd. MHC2113A, Tampa, FL 33612, United States. Tel.: +1 8139747226. E-mail address: [email protected] (S.E. Bloom).
http://dx.doi.org/10.1016/j.ridd.2014.02.014
0891-4222/© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Individuals with intellectual and developmental disabilities served in staffed residential settings typically engage in problem behavior (Csorba, Radvanyi, Regenyi and Dinya, 2011; Holden and Gitlesen, 2006; Lowe et al., 2007). The occurrence of problem behavior in these settings can create a precarious environment for caretaker and client alike because it can result in property destruction and/or injury to the client, the caretaker, or others for whom the caretaker is responsible. Problem behavior may also be undesirable for clients because its occurrence can lead care providers to place clients into more restrictive environments, which may impede their access to highly preferred activities and/or settings. Because clients have a right to treatments that facilitate inclusion in less restrictive environments (Van Houten et al., 1988), it is common for care providers to attempt to eliminate problem behavior. Typically, the first step in eliminating problem behavior is identifying its function.

Although a variety of assessments are commonly used to identify the function of problem behavior (i.e., indirect, descriptive, and experimental), not all are equally valid. The experimental functional analysis (FA; Iwata et al., 1994) is by far the most valid of the functional assessments because it identifies the antecedent events that evoke, and the consequent events that maintain, problem behavior via experimental manipulation. Furthermore, FAs have been shown to identify a clear function of problem behavior in 95.6% of cases (Hanley et al., 2003). Indirect and descriptive functional assessments of problem behavior are far less accurate and frequently misidentify the true function(s) of problem behavior (cf. Pence, Roscoe, Bourret and Ahearn, 2009; Zarcone, Rodgers, Iwata, Rourke and Dorsey, 1991). This misidentification can lead to the application of inappropriate interventions that can be counter-therapeutic and may actually intensify problem behavior (e.g., Iwata, Pace, Cowdery and Miltenberger, 1994). Thus, the FA has become the standard of assessment in the field of applied behavior analysis (Mace, 1994).

Given their validity and accuracy, individuals charged with addressing problem behavior in group-home settings should strive to use FAs to identify the function of problem behavior. However, in order for FAs to be regularly conducted in group-home settings, a professional in the organization needs the skill set required to independently conduct and interpret the results of FAs. Previous research has shown that it is possible to train teachers (e.g., Moore et al., 2002; Wallace, Doney, Mintz-Resudek and Tarbox, 2004), caregivers/parents (e.g., Najdowski et al., 2008), undergraduate students (e.g., Iwata et al., 2000), and group-home staff (Phillips and Mudford, 2008) to serve as therapists who, under the supervision of more trained personnel, can conduct the sessions of an FA with a high degree of procedural fidelity. For example, Iwata et al. (2000) demonstrated that, in about 2 h, 11 upper-level undergraduate students could be trained to conduct FA conditions with adequate fidelity. However, in order for this training to be useful in group-home settings, the trainee (a behavioral supervisor) would need the repertoire necessary to independently conduct FA sessions, organize other aspects of FA sessions (e.g., ensure that necessary materials are present and that appropriate pre-session events occur), and collect, graph, and interpret FA data without additional support. One limitation of Iwata et al. (2000) is that their participants were only trained to conduct FA sessions.
They were not trained to organize the environment in which FA sessions occurred (e.g., remove inappropriate materials from, and place appropriate materials in, the assessment area), to engage in important pre-session behavior (e.g., provide clients with pre-session exposure to attention during attention conditions), or to collect, graph, or interpret FA data. Additionally, the training protocol was not comprehensive (i.e., it did not include ignore or tangible conditions). Thus, it is unknown whether a brief (less than 2 h) FA training, like the one outlined by Iwata et al. (2000), would be sufficient to teach someone to independently organize and conduct FAs with a high degree of procedural fidelity. It is also unclear whether participants can be taught to conduct tangible and ignore conditions with the same type of training used to teach the other conditions. In partial response to this limitation, Phillips and Mudford (2008) added an alone condition to a training protocol adapted from Iwata et al. (2000) and trained four residential staff members to implement FA conditions with a high degree of procedural fidelity. Three of their participants had high school degrees and one had some college experience. All had little to no formal training in applied behavior analysis. As was the case for Iwata et al. (2000), one limitation of Phillips and Mudford (2008) was that they did not train their participants to organize each FA session (e.g., ensure that correct stimuli were present and other stimuli absent) or to collect, graph, and interpret FA data. Additionally, their training protocol did not include a tangible condition. Finally, their participant pool was limited to entry-level group-home staff. This specific demographic, along with the demographic targeted by Iwata et al. (2000) (i.e., undergraduate students not employed by a residential service provider), would not likely be asked to organize, conduct, and interpret the results of an FA without supervision.
Therefore, it remains to be examined whether a single training protocol would be sufficient to train individuals who might be expected to independently organize and conduct FAs, and to collect, graph, and interpret FA data, in a group-home organization. Thus, the purpose of the current study was to replicate and extend Iwata et al. (2000) and Phillips and Mudford (2008) by training the behavioral supervisors and assistants of a residential service provider, whose job description could require them to independently organize, conduct, and interpret data from an FA in which tangible and ignore conditions were included.

2. Method

2.1. Participants

2.1.1. Supervisors
Ten supervisors participated in this study. Each supervisor had a Master's degree in social work or counseling. Three supervisors were board certified behavior analysts (BCBAs), one had completed three on-line courses in behavior analysis, two had completed two on-line courses, two had completed one on-line course, and two had received no academic training in behavior analysis. The professional experience that each supervisor had in designing behavior supports varied; however, none had any previous experience conducting FAs. Their time as supervisors with the agency ranged from 1 month to 7 years (mean = 24.5 months). In all subsequent tables and graphs, the performance of board-certified supervisors is distinguished from that of non-board-certified supervisors.

2.1.2. Assistants
Four assistants participated in this study. Assistants provided support to supervisors in whatever capacity was required of them. Each assistant was pursuing a Bachelor's degree in one of various fields (e.g., social work, theater, occupational therapy) and had no academic training in behavior analysis or any prior exposure to FA methodology.

2.2. Setting

We conducted baseline sessions in supervisors' offices in various locations across the participating organization.
Offices contained a desk and at least two chairs. We conducted the FA training in a conference room at the regional headquarters of the organization. The conference room was equipped with multiple tables, chairs, and a large flat-screen television on which PowerPoint slides were projected. We conducted post-training sessions in the conference room and in local supervisors' offices.

2.3. Response measurement

2.3.1. Session-related behavior
We assessed participant fidelity to FA procedures across six categories: pre-session environmental control, pre-session access to putative reinforcers, in-session behavior, data collection, abolishing operations in the play condition, and demand presentation. Four categories (pre-session environmental control, pre-session access to putative reinforcers, abolishing operations in the play condition, and demand presentation) were dichotomous in nature. That is, participants either emitted the required behavior during each session or they did not. The other two categories (in-session therapist behavior and data collection) were tracked continuously throughout each session. Definitions of all participant behavior can be seen in Table 1.

2.3.2. Graphing
During each evaluation of graphing proficiency we gave participants a hypothetical data set containing rates of problem behavior across various FA conditions (16 total data points across four conditions: attention, play, tangible, and escape). We also gave participants a worksheet containing a blank XY graph and a key specifying which symbols should be used for each condition. We then asked participants to graph and interpret the data. We scored whether each data point was plotted correctly. We calculated graphing proficiency by dividing the number of correctly plotted data points by the total number of data points and multiplying by 100.

2.3.3. Data interpretation
During our evaluation of each participant's data-interpretation skills, we gave participants six graphs depicting the results of different hypothetical FAs and asked them to identify the function of behavior depicted in each of the graphs.
"Correct" functions were established prior to the study via a consensus vote by a panel of five board-certified behavior analysts. If a participant-identified function matched the panel-identified function, we scored an agreement; otherwise, we scored a disagreement. We calculated interpretation scores by dividing the number of agreements by the number of agreements plus disagreements and multiplying by 100.

2.4. Procedural fidelity

We staged all training sessions using actors (i.e., doctoral-level graduate students) who pretended to be clients and who followed behavioral scripts specific to each condition. We collected data on fidelity to these scripts during all staged FA sessions. We evaluated fidelity by scoring the occurrence or non-occurrence of scripted responses at prescribed times. We scored an occurrence if the actor emitted the scripted response within 20 s of when it was scheduled to occur. For example, we scored an occurrence if a script stated that property destruction should occur 1 min 28 s into a session and the actor emitted an instance of property destruction 1 min 34 s into the session. We scored a non-occurrence if the scripted response did not occur within 20 s of when it was scheduled to occur. We calculated procedural fidelity by dividing the number of occurrences by the total number of occurrences plus non-occurrences and multiplying by 100. Fidelity scores can be seen in the top section of Table 2.

2.5. Reliability

We calculated session reliability on in-session participant behavior and on participant data-collection accuracy by dividing the number of agreements between primary and reliability data collectors by the total number of agreements plus disagreements and multiplying by 100. Additionally, we calculated Pearson's r to assess the correlation between primary and reliability scores for participant fidelity and data collection. Reliability scores can be seen in the bottom sections of Table 2.
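The fidelity calculation described above (a scripted response counts as an occurrence only if emitted within 20 s of its scheduled time) can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and all event times are hypothetical.

```python
# Sketch of the procedural-fidelity scoring described in Section 2.4.
# Times are in seconds from session onset; all values are hypothetical.

def score_fidelity(scheduled, observed, window=20):
    """Return the percentage of scheduled responses emitted within `window` s."""
    occurrences = 0
    remaining = sorted(observed)
    for t in sorted(scheduled):
        # Look for an observed response within the +/- 20-s window.
        match = next((o for o in remaining if abs(o - t) <= window), None)
        if match is not None:
            occurrences += 1
            remaining.remove(match)  # each observed response fills one slot
    return 100 * occurrences / len(scheduled)

# Per the example in the text, a response scheduled at 1 min 28 s (88 s)
# and emitted at 1 min 34 s (94 s) is scored as an occurrence.
scheduled = [5, 17, 29, 88]
observed = [6, 16, 55, 94]   # the 29-s response occurred 26 s late
print(score_fidelity(scheduled, observed))  # -> 75.0
```

The graphing-proficiency and interpretation scores in Sections 2.3.2 and 2.3.3 follow the same correct/(correct + incorrect) x 100 arithmetic.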

Table 1
Participant behavior tracked during baseline and post-training sessions.

Dependent variable: Definition

In-session behavior:
  Instruction: the presentation of a novel demand. Repeated demands (during Steps 2 and 3 of the prompt hierarchy) were not counted as instructions.
  Attention: any social interaction between the participant and actor (including reprimands) that was not considered an instruction.
  Stimulus removal: the removal of an object from a two-foot radius around the actor.
  Stimulus delivery: the presentation of an object within a two-foot radius around the actor.
Data collection: scoring an instance of problem behavior within 5 s of when it occurred.
Pre-session environmental control: all appropriate materials placed within the designated area and all other materials removed prior to the onset of each session.
Pre-session access to reinforcers: 30 s of access to the prescribed putative reinforcer prior to the onset of relevant sessions.
Abolishing operations in the play condition: attention presented at least once every 30 s throughout each play session.
Demand presentation: a demand presented at least once every 30 s during each escape session.


Table 2
Actor procedural fidelity and inter-observer reliability scores.

Actor procedural fidelity
  Ignore:    99.7% (96–100%), assessed during 71.4% (25/35) of ignore sessions
  Attention: 97.2% (76–100%), assessed during 64.9% (24/37) of attention sessions
  Play:      95.4% (75–100%), assessed during 59.5% (22/37) of play sessions
  Tangible:  98.6% (92–100%), assessed during 41.7% (15/37) of tangible sessions
  Escape:    97.3% (92–100%), assessed during 51.4% (19/37) of escape sessions

Reliability: in-session participant behavior
  Ignore:    96.6% (72–100%), r(25) = 0.65, p < 0.0004, during 71.4% (25/35) of ignore sessions
  Attention: 94.8% (76–100%), r(24) = 0.927, p < 0.0001, during 64.9% (24/37) of attention sessions
  Play:      96.9% (80–100%), r(22) = 0.868, p < 0.0001, during 59.5% (22/37) of play sessions
  Tangible:  94.9% (76–100%), r(25) = 0.963, p < 0.0001, during 67.6% (25/37) of tangible sessions
  Escape:    92.8% (76–100%), r(19) = 0.979, p < 0.0001, during 51.4% (19/37) of escape sessions

Reliability: participant data collection
  Ignore:    95.8% (50–100%), r(24) = 0.974, p < 0.0001, during 68.6% (24/35) of ignore sessions
  Attention: 97.1% (60–100%), r(24) = 0.916, p < 0.0001, during 64.9% (24/37) of attention sessions
  Play:      96.4% (80–100%), r(22) = 0.724, p < 0.0001, during 59.5% (22/37) of play sessions
  Tangible:  94.4% (60–100%), r(25) = 0.868, p < 0.0001, during 67.6% (25/37) of tangible sessions
  Escape:    97.4% (90–100%), r(19) = 0.96, p < 0.0001, during 51.4% (19/37) of escape sessions
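The reliability statistics in Table 2 combine point-by-point percent agreement with a Pearson correlation between primary and reliability observers. A minimal sketch of both computations (with made-up observer data, not the study's records) might look like this:

```python
import math

def percent_agreement(primary, reliability):
    """Agreements / (agreements + disagreements) * 100, point by point."""
    agreements = sum(p == r for p, r in zip(primary, reliability))
    return 100 * agreements / len(primary)

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical interval records from two observers (1 = behavior scored).
primary     = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
reliability = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
print(percent_agreement(primary, reliability))  # -> 90.0

# Hypothetical per-session fidelity scores from the same two observers.
print(round(pearson_r([90, 85, 100, 75], [92, 80, 100, 78]), 3))  # -> 0.941
```

High percent agreement with a high r, as in Table 2, indicates that the two observers agreed both point by point and in overall session-level scoring.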

2.6. Procedural conditions

We used a multiple-baseline-across-participants design to evaluate the effects of our training on participant behavior. Additionally, we used an AB design to evaluate the effects of our training on participant FA graphing and FA data interpretation.

2.7. Baseline

At least 3 days prior to conducting baseline sessions, we asked participants to read the method section of Iwata et al. (1994). During baseline sessions, we gave participants a clipboard containing a list of high- and moderate-preference tangible items and asked them to serve as therapists for ignore, attention, play, tangible, and escape conditions. Actors pretended to be clients and used scripts specifying when specific behavior should occur during each session of the staged FA (see Appendix A for an example). There were five categories of scripted behavior: (1) self-injurious behavior (the target response), (2) property destruction, (3) play, (4) compliance, and (5) appropriate communication. Ten instances of self-injurious behavior and 25 total instances of behavior were scheduled at various times across each 5-min session. We told participants which condition they were to conduct. We provided participants with all of the materials for all conditions and asked them to select and incorporate the appropriate materials for each session. We did not provide feedback following baseline sessions and did not prompt participants to use any specific materials prior to, or during, sessions. After participants conducted FA sessions, we asked them to graph and interpret hypothetical FA data.

In order to maximize the training time of participants who needed the most support, we implemented a partial-exclusion criterion following baseline sessions. If a participant demonstrated 80% or better procedural fidelity for any FA condition during baseline, that participant did not complete a 5-min post-training session for that condition during the post-training assessment. Instead, they completed a condensed, 2-min post-training session (with actor behavior distributed proportionately to the full 5-min post-training sessions). Participants who met the partial-exclusion criterion were still required to demonstrate 90% or better procedural fidelity during post-training assessment sessions.

2.8. FA training

2.8.1. Training Step 1
We conducted a 45-min presentation outlining basic behavioral processes, FAs, FA data-collection procedures, and the logic requisite to collecting and interpreting FA data.

2.8.2. Training Step 2
We gave participants written descriptions (adapted from Iwata et al., 2000) of each assessment condition and allowed them 10–15 min to read the materials. Then, we divided participants into small groups (approximately 2–4 participants per group) and a research assistant reviewed the most important features of each condition during a group discussion. Next, the research assistant showed participants a brief, 1-min videotaped simulation of each FA condition and asked participants to collect frequency data on self-injurious behavior (i.e., head-hitting). The research assistant then answered questions about the simulations and checked the accuracy of the data collected by each participant. If a participant's data differed from the answer key, the entire group watched and scored the video again. Following video observation and data collection, participants were asked to take an open-note, 24-item, written quiz (adapted from Iwata et al., 2000) about the assessment process. After each participant completed the quiz, the research assistant immediately scored it. If a participant answered a question incorrectly, the research assistant reviewed the question with the participant and asked the participant to write the correct answer. Participants needed a minimum score of 90%, prior to receiving feedback, before they could move on to the post-training assessment. If participants scored below 90% on the quiz, they were asked to review their notes, watch the video again, and retake the quiz.

2.9. Post-training assessment

We asked participants to serve as the therapist for each FA condition (we videotaped all sessions). The FA sessions were conducted in the same fashion as baseline with two exceptions. First, we allowed participants to bring the FA condition descriptions (provided during Training Step 2) into each experimental session. Second, research assistants gave participants feedback following each session. If participants demonstrated less than 90% procedural fidelity for any given condition, the research assistant showed them the videotaped session in which poor fidelity was observed and identified correct and incorrect participant responses. Following review of the videotape, the research assistant asked the participant to conduct the FA session again. This sequence continued until each participant completed one session of each condition with 90% or better procedural fidelity. After participants completed each condition with 90% or better fidelity, we provided them with hypothetical FA data and asked them to graph it. Additionally, we gave them completed FA graphs and asked them to identify the function of problem behavior. The data sets and graphs used in this condition were different from those provided in baseline.

[Fig. 1 appears here: multiple-baseline graphs of Percentage Correct (y-axis) across Sessions (x-axis), with BL and Post-Training phases for each participant.]

Fig. 1. Baseline and post-training results of participants for FA conditions in which they did not meet exclusion criteria (i.e., 80% or better fidelity during BL). Data represent the percentage of correct in-session responses that each participant emitted during each FA session. Data from supervisors without BCBAs are displayed on the left, data from supervisors with BCBAs are in the top right, and data from supervisors' assistants are in the bottom right. Open squares represent performance in the attention condition, closed squares represent performance in the tangible condition, closed triangles represent performance in the escape condition, open circles represent performance in the play condition, and closed circles represent performance in the ignore condition. The dotted line represents a 90% mastery criterion.


2.10. Social validity questionnaire

Following training, we gave supervisors a questionnaire asking them to rate various aspects of each training event. We also asked supervisors their opinions about the practicality of the FA. Each question was rated 1–5 according to a Likert-type scale on which 1 meant "not at all," 3 meant "somewhat," and 5 meant "very much so." In order to keep questionnaires anonymous, we did not require any personal information. Supervisors mailed questionnaires to researchers in pre-paid, self-addressed envelopes.

3. Results

A participant's performance could have met the partial-exclusion criterion for some conditions and not for others. Thus, Fig. 1 depicts only the data for conditions in which participants did not meet the partial-exclusion criterion. In those conditions, baseline fidelity scores were below 80%. Following training, these scores increased to at or above 90%. We also tracked progress within and across three groups: supervisors (BCBA), supervisors (non-BCBA), and assistants. Our intention in doing so was not to compare group differences. Instead, we presented each group's data separately so that the performance of one group would not overshadow the performance of another if performance differences across groups existed. Table 3 shows the averages of the data collected on participant fidelity across conditions (including sessions that met the partial-exclusion criterion). In general, mean participant performance increased from baseline to post-training sessions. Table 4 shows the baseline and post-training results of participant graphing and data interpretation. After our training there was an increase in the average graphing proficiency and graph-interpretation accuracy of our participants.
Although tentative because of the AB design used to evaluate this particular aspect of the training, these results suggest that the training may have been responsible for the observed improvement in participant graphing and interpretation of FA data. Table 5 shows the results of the social validity questionnaire. In general, supervisors reported that the FA training was helpful and that, following the training, they felt confident that they could conduct an FA without additional supervision. Additionally, supervisors reported that they were likely to use the assessment procedures in their actual practice. Although most supervisors did not feel that important concepts were omitted from training, a few noted that they wished more time had been spent discussing how to graph and interpret data and that we had taught them how to use FA data to inform interventions.

Table 3
Summary of mean participant procedural fidelity measures before and after FA training.

                            In-session behavior   Pre-session env. control   Data collection       Pre-session Sr+/AO in play/demand presentation
Participant group           BL      Post-train    BL      Post-train         BL      Post-train    BL      Post-train
Escape
  Supervisors (BCBA)        59%     100%          67%     100%               100%    93%           33%     100%
  Supervisors (non-BCBA)    51%     96%           29%     100%               81%     93%           43%     86%
  Assistants                15%     99%           25%     100%               95%     93%           25%     50%
  All                       38%     98%           36%     100%               89%     93%           36%     79%
Tangible
  Supervisors (BCBA)        54%     99%           0%      100%               93%     100%          33%     100%
  Supervisors (non-BCBA)    59%     98%           0%      100%               77%     83%           14%     100%
  Assistants                54%     100%          0%      100%               90%     93%           0%      100%
  All                       56%     99%           0%      100%               84%     89%           14%     100%
Play
  Supervisors (BCBA)        83%     100%          33%     100%               90%     100%          0%      100%
  Supervisors (non-BCBA)    91%     94%           14%     100%               81%     86%           67%     100%
  Assistants                88%     98%           0%      100%               98%     75%           33%     100%
  All                       89%     96%           17%     100%               88%     86%           42%     100%
Ignore
  Supervisors (BCBA)        100%    100%          0%      100%               93%     83%           –       –
  Supervisors (non-BCBA)    88%     100%          67%     100%               73%     100%          –       –
  Assistants                89%     100%          33%     100%               95%     100%          –       –
  All                       91%     100%          42%     100%               84%     96%           –       –
Attention
  Supervisors (BCBA)        60%     99%           0%      83%                80%     100%          0%      100%
  Supervisors (non-BCBA)    55%     97%           14%     100%               79%     71%           14.3%   71.4%
  Assistants                45%     99%           0%      100%               88%     75%           0%      75%
  All                       53%     98%           7%      96%                81%     79%           7%      78.6%


Table 4
Graphing and interpreting FA data.

                                 Function identification            Graphing
Participant group                Baseline        Post-training      Baseline        Post-training
Supervisors (BCBA) (n = 2)       29% (29–42%)    100%               53% (6–100%)    100%
Supervisors (non-BCBA) (n = 6)   36% (0–57%)     62% (29–86%)       5% (0–6%)       61% (6–100%)
Assistants (n = 3)               10% (0–29%)     76% (71–86%)       6%              100%
All (N = 11)                     27% (0–57%)     72% (29–100%)      14% (0–100%)    81% (6–100%)

Note: Mean baseline and post-training accuracy scores (ranges in parentheses) for each group when graphing and identifying the function of hypothetical FA data.

Table 5
Supervisor report of satisfaction with FA training.

Survey question                                                              Mean (range)   Median   Mode
FA training
1. Was the FA training helpful?                                              4.93 (4–5)     5        5
2. After completing the training, how confident do you feel that you
   could conduct an FA in the absence of additional supervision?             4.07 (2–5)     4        4
3. Were there important concepts or skills that you felt were omitted
   from, or not discussed in enough detail during, the training?             1.79 (1–4)     1        1
   If yes, what were they? Common responses: graphing and interpreting
   data; examples of what to do with information after collecting it
   (i.e., interventions); and a follow-up discussion to answer
   procedural questions.
General
1. How likely are you to continue using these experimental FA
   procedures in your practice?                                              4.29 (2–5)     5        5

Note: All responses were given on a Likert-type scale with a range of 1–5, with 1 being "not at all," 3 being "somewhat," and 5 being "very much so."

4. Discussion

In this study we extended the results of Iwata et al. (2000) and Phillips and Mudford (2008) to a new population (i.e., residential supervisors and assistants with varying levels of professional and academic training). Additionally, we taught, and evaluated participant fidelity to, environmental control procedures prior to and during FA sessions. We also included training and evaluation of data collection, graphing, and graph-interpretation procedures. Finally, we extended previous research by including instruction about, and an evaluation of, participant procedural fidelity to both ignore and tangible conditions. When considered together, these extensions provide evidence that it is possible to train select personnel, in a relatively short period of time, to independently conduct FAs. However, our data indicate that this training was insufficient to substantially improve participant data-collection accuracy. Thus, additional training on data collection may be required for anyone expected to collect data while simultaneously conducting an FA. It is interesting to note that the proportion of participants who met the exclusion criterion differed considerably across FA conditions.
Overall, 78.6% of all participants (66.7% of supervisors [BCBA], 71.4% of supervisors [non-BCBA], and 100% of assistants) did not meet the exclusion criterion for the escape condition; 78.6% (66.7% of supervisors [BCBA], 85.7% of supervisors [non-BCBA], and 75% of assistants) did not meet it for the tangible condition; 71.4% (66.7% of supervisors [BCBA], 71.4% of supervisors [non-BCBA], and 75% of assistants) did not meet it for the attention condition; 14.3% (0% of supervisors [BCBA], 14.3% of supervisors [non-BCBA], and 25% of assistants) did not meet it for the ignore condition; and 0% of participants did not meet it for the play condition. Thus, the most difficult conditions for new therapists to conduct without formal training may be the escape, tangible, and attention conditions. A potential limitation of this study is that we used actors (i.e., graduate students) instead of actual clients to train our participants to conduct FAs. It is possible that our results would be different if we had asked the participants to conduct an FA of actual problem behavior. Although the role-play scripts developed for this study were designed to incorporate many events that could be encountered when conducting an FA of actual problem behavior (e.g., a client engaging in inappropriate behavior other than the target behavior or requesting attention appropriately), it is possible that the skills we taught would not generalize to applied settings. Another limitation of our study is that we provided participants with a copy of the original FA study (Iwata et al., 1994) prior to collecting baseline data. Some may argue that subsequent articles on FA methodology are more accessible to readers and that, had we provided participants with a more reader-friendly article, they may have performed better during baseline sessions.
Additionally, our research design would have been stronger had we collected more baseline data points for each participant. This limitation is accentuated by the fact that baseline performance was on an increasing trend for some participants (e.g., Supervisors 3, 7, and 9 during the attention condition). It is possible that additional opportunities to practice would have been sufficient for these participants to achieve the mastery criterion for each FA condition on their own. Although our training events were brief overall, it is possible that not all components were necessary. Future researchers may wish to conduct a component analysis of this training in order to optimize training efficiency. Furthermore, we did not
evaluate long-term maintenance of the skills taught in these trainings. It is possible that fidelity would deteriorate over time and that these participants would require additional support. Finally, we did not evaluate how staff would use the results of the FA to inform treatment. Future researchers may want to investigate whether trainings such as the one employed in this study are sufficient for prescribing appropriate treatments. Notwithstanding these limitations, our training demonstrates that individuals with a broad range of education and expertise can be trained to perform all of the tasks required to independently conduct FAs.

Acknowledgments

We thank Hayley Halverson, Megan Boyle, and Jessica Akers for their assistance in conducting this study.

Appendix A

Tangible Condition (1)

Collected data?                                                                                  Y  N
Participant provided client with approximately 30-s of access to tangible item
prior to starting the session?                                                                   Y  N
Participant arranged for correct stimuli to be present? (highly preferred items)                 Y  N
Participant removed all other items from FA area?                                                Y  N

Second   Behavior
5        Property destruction ignored?                      Y  N
17       Property destruction ignored?                      Y  N
29       Property destruction ignored?                      Y  N
41       Appropriate communication delivered attn?          Y  N
53       SIB delivered tangible?                            Y  N
01.05    Property destruction ignored?                      Y  N
01.17    Appropriate communication delivered attn?          Y  N
01.29    SIB delivered tangible?                            Y  N
01.41    SIB delivered tangible?                            Y  N
01.53    SIB delivered tangible?                            Y  N
02.05    Property destruction ignored?                      Y  N
02.17    Appropriate communication delivered attn?          Y  N
02.29    Appropriate communication delivered attn?          Y  N
02.41    SIB delivered tangible?                            Y  N
02.53    Appropriate communication delivered attn?          Y  N
03.05    Appropriate communication delivered attn?          Y  N
03.17    Appropriate communication delivered attn?          Y  N
03.29    SIB delivered tangible?                            Y  N
03.41    Appropriate communication delivered attn?          Y  N
03.53    SIB delivered tangible?                            Y  N
04.05    Property destruction ignored?                      Y  N
04.17    SIB delivered tangible?                            Y  N
04.29    SIB delivered tangible?                            Y  N
04.41    SIB delivered tangible?                            Y  N
04.53    Property destruction ignored?                      Y  N
References

Csorba, J., Radvanyi, K., Regenyi, E., & Dinya, E. (2011). A study of behaviour profiles among intellectually disabled people in residential care in Hungary. Research in Developmental Disabilities, 32, 1757–1763.
Hanley, G. P., Iwata, B. A., & McCord, B. E. (2003). Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis, 36, 147–185.
Holden, B., & Gitlesen, J. (2006). A total population study of challenging behaviour in the county of Hedmark, Norway: Prevalence and risk markers. Research in Developmental Disabilities, 27, 456–465.
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197–209.
Iwata, B. A., Pace, G. M., Cowdery, G. E., & Miltenberger, R. G. (1994). What makes extinction work: An analysis of procedural form and function. Journal of Applied Behavior Analysis, 27, 131–144.
Iwata, B. A., Wallace, M. D., Kahng, S., Lindberg, J. S., Roscoe, E. M., Conners, J., et al. (2000). Skill acquisition in the implementation of functional analysis methodology. Journal of Applied Behavior Analysis, 33, 181–194.
Lowe, K., Allen, D., Jones, E., Brophy, S., Moore, K., & James, W. (2007). Challenging behaviours: Prevalence and topographies. Journal of Intellectual Disability Research, 51, 625–636.
Mace, F. C. (1994). The significance and future of functional analysis methodologies. Journal of Applied Behavior Analysis, 27, 385–392. http://dx.doi.org/10.1901/jaba.1994.27-385
Moore, J. W., Edwards, R. P., Sterling-Turner, H. E., Riley, J., DuBard, M., & McGeorge, A. (2002). Teacher acquisition of functional analysis methodology. Journal of Applied Behavior Analysis, 35, 73–77.
Najdowski, A. C., Wallace, M. D., Penrod, B., Tarbox, J., Reagon, K., & Higbee, T. S. (2008). Caregiver-conducted experimental functional analyses of inappropriate mealtime behavior. Journal of Applied Behavior Analysis, 41, 465–469.
Pence, S. T., Roscoe, E. M., Bourret, J. C., & Ahearn, W. H. (2009). Relative contributions of three descriptive methods: Implications for behavioral assessment. Journal of Applied Behavior Analysis, 42, 425–446.
Phillips, K. J., & Mudford, O. C. (2008). Functional analysis skills training for residential caregivers. Behavioral Interventions, 23, 1–12.
Van Houten, R., Axelrod, S., Bailey, J. S., Favell, J. E., Foxx, R., Iwata, B., et al. (1988). The right to effective behavioral treatment. Journal of Applied Behavior Analysis, 21, 381–384.
Wallace, M. D., Doney, J. K., Mintz-Resudek, C. M., & Tarbox, R. S. F. (2004). Training educators to implement functional analyses. Journal of Applied Behavior Analysis, 37, 89–92.
Zarcone, J. R., Rodgers, T. A., Iwata, B. A., Rourke, D. A., & Dorsey, M. F. (1991). Reliability analysis of the Motivation Assessment Scale: A failure to replicate. Research in Developmental Disabilities, 12, 349–360.
