Research in Developmental Disabilities, Vol. 12, pp. 349-360, 1991. 0891-4222/91 $3.00 + .00. Printed in the USA. All rights reserved. Copyright © 1991 Pergamon Press plc.

Reliability Analysis of the Motivation Assessment Scale: A Failure to Replicate

Jennifer R. Zarcone, Teresa A. Rodgers, and Brian A. Iwata
The University of Florida

David A. Rourke
Amego, Inc., Quincy, MA

Michael F. Dorsey
South Bay Mental Health Center, Plymouth, MA

The Motivation Assessment Scale (MAS) has been proposed as an efficient questionnaire for identifying the source of reinforcement for an individual's self-injurious behavior (SIB). A previous reliability analysis of the MAS (Durand & Crimmins, 1988) reported interrater correlation coefficients ranging from .66 to .92, based on a comparison of responses provided by classroom teachers. In this study, the reliability of the MAS was reexamined with two independent groups of developmentally disabled individuals who exhibited SIB (N = 55). For the institutional sample (n = 39), the MAS was given to two staff members (a supervisor and a therapy aide) who worked with the individual daily. For the school sample (n = 16), the MAS was given to the teacher and the teacher's aide who taught the student. The correlational analyses completed by Durand and Crimmins (1988) were repeated; in addition, a more precise analysis of interrater reliability was calculated based on the actual number of scoring agreements between the two raters. Results showed that only 16 of the 55 rater pairs agreed on the category of reinforcement maintaining their client's or student's SIB, that only 15% of the correlation coefficients obtained were above .80, and that none of the reliability scores based on percent agreement between raters was above 80%.

This research was supported by a grant from the Developmental Disabilities Planning Council. The two subject samples were drawn from independently initiated research projects, which were combined into a single study after the authors became aware of each other's work. The authors appreciate the assistance of Mark Geren, Suneeta Jagtiani, Martina Jonak, Jonathan Kimball, Jodi Mazaleski, and Timothy Vollmer. Requests for reprints should be sent to Brian Iwata, Department of Psychology, The University of Florida, Gainesville, FL 32611.


A considerable amount of research indicates that behavior disorders exhibited by developmentally disabled individuals are learned responses, and in recent years much of this research has focused on the development of methods for identifying the motivational bases for a behavior disorder prior to treatment (see Iwata, Vollmer, & Zarcone, 1990, for a review). The most thorough methods of analysis involve exposing the individual to a series of antecedent and consequent events to determine whether the behavior problem is maintained by positive, negative, or automatic reinforcement (Carr & Durand, 1985; Durand & Carr, 1987; Iwata, Dorsey, Slifer, Bauman, & Richman, 1982; Iwata, Pace, Cowdery, Kalsher, & Cataldo, 1990; Mace & Knight, 1986; Mace, Page, Ivancic, & O'Brien, 1986; Steege, Wacker, Berg, Cigrand, & Cooper, 1989; Sturmey, Carlsen, Crisp, & Newton, 1988). Several requirements must be met when conducting these types of analyses: therapists who conduct assessment sessions and observers who collect data must be well trained; a relatively high degree of environmental control must be maintained; and several days or weeks should be allowed for the completion of the assessment.

In an attempt to reduce the effort and/or time requirements for conducting functional analyses of behavior disorders, alternative methods have been proposed that make use of verbal report data, usually in the form of interview, rating scale, or questionnaire responses. For example, a questionnaire recently developed by Durand and Crimmins (1988) focuses on the maintaining conditions for an individual's self-injurious behavior (SIB). Called the Motivation Assessment Scale (MAS), the questionnaire examines four possible categories of reinforcement: positive reinforcement in the form of attention; positive reinforcement in the form of access to activities, toys, or food; negative reinforcement in the form of escape from demands; and sensory (automatic) reinforcement. Four items on the questionnaire are allocated to each category of reinforcement, for a total of 16 items.

In the Durand and Crimmins study, the MAS was administered to a teacher and a teacher's aide who worked regularly with each of 50 self-injurious students. The raters indicated on a Likert-type scale, ranging from zero (never) to six (always), the likelihood of SIB occurring or not occurring under a variety of circumstances. The authors compared the responses of each teacher-aide pair through a correlational analysis of: (1) all questionnaire items, (2) the mean scores for each category of reinforcement, and (3) the ranked ordering of the four reinforcement categories for each subject (from highest to lowest). Results showed that all three correlational analyses were significant at the .001 level. Correlations for the items, mean scores, and ranks ranged from .66 to .92, from .80 to .95, and from .66 to .81, respectively. Based on these data, the authors concluded that the MAS was a reliable instrument.
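
To make this scoring scheme concrete, the sketch below computes category means and a rank ordering from one rater's 16 responses. It is purely illustrative: the item-to-category mapping and the sample responses are assumptions made for the example, not the published MAS key or data from either study.

```python
# Illustrative MAS-style scoring: category means and ranks from 16
# Likert-type (0-6) responses. The item-to-category mapping below is an
# assumption for illustration, not the published MAS key.

CATEGORIES = {
    "sensory":   [1, 5, 9, 13],
    "attention": [2, 6, 10, 14],
    "tangible":  [3, 7, 11, 15],
    "demand":    [4, 8, 12, 16],
}

def score_mas(responses):
    """responses: dict mapping item number (1-16) to a 0-6 rating."""
    means = {cat: sum(responses[i] for i in items) / len(items)
             for cat, items in CATEGORIES.items()}
    # Rank categories from highest mean (rank 1) to lowest (rank 4).
    ordered = sorted(means, key=means.get, reverse=True)
    ranks = {cat: r + 1 for r, cat in enumerate(ordered)}
    return means, ranks

# Hypothetical rater whose strongest scores fall on the "sensory" items.
responses = {1: 6, 5: 5, 9: 6, 13: 5,    # sensory
             2: 2, 6: 1, 10: 2, 14: 0,   # attention
             3: 3, 7: 2, 11: 1, 15: 2,   # tangible
             4: 1, 8: 0, 12: 1, 16: 1}   # demand
means, ranks = score_mas(responses)
print(means)  # e.g., {'sensory': 5.5, ...}
print(ranks)  # e.g., {'sensory': 1, ...}
```
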


A limiting feature of the Durand and Crimmins (1988) study was their use of correlational procedures, which do not provide information on the extent to which two raters ever chose the same response for any of the items on the questionnaire. Point-by-point reliability between raters can be determined only by examining the actual scoring agreements, which are usually calculated as a percent agreement statistic. The purpose of this study was to systematically replicate the reliability analysis of the MAS reported by Durand and Crimmins. Interrater reliability was calculated using both correlational and percent agreement analyses.

METHOD

Subjects and Settings

Two subject samples were included in the study. The institutional sample consisted of staff and clients of a public residential facility for the developmentally disabled. The MAS was administered in the home cottage of the client who was the target of the questionnaire. The school sample consisted of teachers, teacher aides, and students of a private residential school for autistic and mentally retarded individuals who had severe behavior disorders. The MAS was administered in the classroom of each individual who was the target of the questionnaire.

Clients-students. The institutional sample comprised 39 clients (adolescents and adults) who were referred to an intensive program for assessment and treatment of SIB. They participated in this study as part of a preintervention screening. All of the clients had a history of SIB, as identified by a facility psychologist, which ranged from very frequent and severe (more than one response per minute, resulting in extensive tissue damage) to infrequent and mild (once a week or less, with little injury). All of the clients functioned in the severe-to-profound range of mental retardation. The school sample consisted of 16 students (adolescents and adults), all of whom had specific goals and objectives on their individual educational plans for the elimination of SIB. Average frequencies of occurrence for SIB ranged from several times per hour to less than once per week. All students functioned in the moderate-to-severe range of mental retardation; 14 of the 16 were diagnosed with autism using DSM-III-R criteria.

Raters. Each institutional client's SIB was rated on the MAS by the two staff members who had worked with the client the most extensively and/or for the longest time, which ranged from several months to several years. One supervisor (a behavior program specialist or rehabilitation therapist) and one therapy aide were chosen to complete the MAS for each client. SIB for the school sample was rated by each student's primary classroom teacher and the classroom aide having the most experience working with that student, which ranged from several months to about a year. Educational backgrounds for both samples were quite mixed, ranging from high school through completion of the master's degree.


Administration Procedure

For the institutional sample, the MAS was administered to the two raters individually. Graduate research assistants trained in administration of the scale served as experimenters. Prior to responding to the items on the questionnaire, the raters were told the purpose of the MAS and that the results would be used in developing treatment programs for the client. Raters also were asked to give a topographical description of the client's most frequent form of SIB. The raters were then given a 3" × 8.5" card on which the response scale was printed (including the numbers and a description of what each number meant). Each question was read to the rater by the experimenter so that there would be no discrepancies in answers on the MAS due to confusion, illiteracy, etc. Raters were given as much time as they desired on each item. Experimenters did not give the raters any assistance on the specific items; the experimenter simply encouraged the rater to answer as best as he or she could.

For the school sample, the MAS was administered to the two raters individually in their respective classrooms. The raters marked answers directly on the questionnaire, as in the Durand and Crimmins (1988) study. Administration of the MAS was preceded by an inservice training program on the functional analysis of behavior disorders, during which the purpose of the MAS was described.

Reliability Analysis

The interrater reliability of the MAS was calculated in two ways. First, in order to compare the results obtained in this study directly with those reported by Durand and Crimmins (1988), the three correlational analyses used by Durand and Crimmins were applied to the present data. Pearson product-moment correlations were calculated for each individual's MAS by comparing the two raters' raw scores across all items on the questionnaire. Pearson product-moment correlations also were obtained for each individual by comparing the mean scores for the four categories of reinforcement. Finally, the four categories of reinforcement were ranked from highest to lowest for each individual (e.g., if a rater gave the highest scores in the attention category, that category received the rank of "1"), and Spearman rank-order correlations were calculated for each pair of raters.
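
As a concrete illustration of these three analyses, the sketch below applies them to one hypothetical rater pair. The scores and the four-item category grouping are invented for the example; pearsonr and spearmanr are the standard SciPy functions.

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical 16-item scores (0-6) from two raters for one individual.
rater_a = [2, 5, 1, 0, 4, 6, 3, 2, 1, 5, 4, 0, 2, 3, 6, 1]
rater_b = [3, 4, 1, 1, 5, 6, 2, 2, 0, 5, 3, 1, 2, 4, 6, 0]

# Assumed grouping of items into four categories (0-based indices).
category_items = [[0, 4, 8, 12], [1, 5, 9, 13], [2, 6, 10, 14], [3, 7, 11, 15]]

# (1) Overall: Pearson correlation across all 16 raw item scores.
r_overall, _ = pearsonr(rater_a, rater_b)

# (2) Category: Pearson correlation of the four category mean scores.
means_a = [sum(rater_a[i] for i in items) / 4 for items in category_items]
means_b = [sum(rater_b[i] for i in items) / 4 for items in category_items]
r_category, _ = pearsonr(means_a, means_b)

# (3) Rank: Spearman rank-order correlation of the category rankings;
# spearmanr ranks the means internally, so they can be passed directly.
rho_rank, _ = spearmanr(means_a, means_b)

print(r_overall, r_category, rho_rank)
```
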


The second method of reliability analysis was based on item-by-item scoring of agreement between the two raters for each MAS. Two types of percent agreement scores were calculated by dividing the number of agreements by the number of agreements plus disagreements and multiplying by 100. With the "identical" method, an agreement was defined as both raters selecting the same response for a given item on the MAS (e.g., both raters gave the score of "2"). A second and more lenient method was based on agreements between "adjacent" scores. With this method, an agreement was defined as one rater's score for a given item falling within plus or minus one of the other rater's score (e.g., if one rater gave a score of "2," an agreement was defined as the second rater giving a score of "1," "2," or "3").

In addition to the analysis of reliability across rater pairs, the reliability of each of the items on the MAS was assessed. First, Pearson product-moment correlations were calculated for each of the 16 items across all pairs of raters. Second, percent agreement scores also were calculated by the "identical" and "adjacent" methods for each of the items across all raters. The results of this analysis would indicate which items on the questionnaire were highly correlated and/or had high percent agreement scores across all raters.
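
A minimal sketch of the two percent agreement calculations follows, again with hypothetical scores; the only difference between the "identical" and "adjacent" methods is the tolerance allowed between the two ratings.

```python
def percent_agreement(scores_a, scores_b, tolerance=0):
    """Percent agreement: 100 * agreements / (agreements + disagreements).

    tolerance=0 gives the "identical" method (ratings must match exactly);
    tolerance=1 gives the "adjacent" method (ratings may differ by one point).
    """
    agreements = sum(1 for a, b in zip(scores_a, scores_b)
                     if abs(a - b) <= tolerance)
    return 100 * agreements / len(scores_a)

# Hypothetical 16-item scores from one rater pair.
rater_a = [2, 5, 1, 0, 4, 6, 3, 2, 1, 5, 4, 0, 2, 3, 6, 1]
rater_b = [3, 4, 1, 1, 5, 6, 2, 2, 0, 5, 3, 1, 2, 4, 6, 0]

print(percent_agreement(rater_a, rater_b, tolerance=0))  # identical
print(percent_agreement(rater_a, rater_b, tolerance=1))  # adjacent
```
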
RESULTS

Figure 1 shows a scatter plot based on the categories or sources of reinforcement scored highest by each pair of raters. Only 16 of the 55 pairs of raters, or 29.1%, agreed on the source of reinforcement for their client's or student's SIB; these were distributed about evenly between the institutional (11/39) and school (5/16) samples. The data also indicate which categories were most frequently cited as the variable(s) maintaining SIB. Of the total 110 raters to whom the MAS was administered, the sensory category was scored highest by 38% of the raters, the tangible category by 31%, the demand category by 16%, and the attention category by 8%. Six percent of the raters scored two categories as tied for highest; these data could not be plotted in Figure 1.

An unexpected finding was that the most frequently observed "category disagreement" occurred when one rater scored a client's SIB as maintained by "sensory" consequences, while the other rater scored SIB as maintained by "tangible" consequences. Disagreement of this type (17 occurrences, exceeding the total number of category agreements) would seem unusual because of the clear differences between the two maintaining variables: tangible consequences are physical events whose delivery can be directly observed; sensory events cannot be observed, and their effects are inferred in the absence of tangible consequences. One might predict greater confusion between the "tangible" and "attention" categories because both types of consequences belong to the same general class (environmental positive reinforcement), but this accounted for only one category disagreement.
FIGURE 1. Scatter plot based on the MAS categories of reinforcement (maintaining variables) scored highest by each pair of raters. Only 48 category × category values are shown for the 55 subjects; 7 values could not be plotted because a rater's responses yielded the same highest score for two different categories. Data points falling within the shaded cells (n = 16) reflect category agreement between the two raters.

Results of the correlational analyses are presented as frequency distributions in Figure 2 under the headings "Overall," "Category," and "Rank." The Pearson (overall) correlations for each individual ranged from -.30 to .81 (M = .27). The Pearson correlations of mean scores for the separate categories ranged from -.80 to .99 (M = .41). Finally, the Spearman rank-order correlations ranged from -.80 to 1.0 (M = .41). The effect of collapsing data (from Overall to Category to Rank) can be seen as an upward distributional shift in the correlations; however, extremely low (i.e., negative) values are apparent for all methods of data reduction.



FIGURE 2. Frequency distribution of correlation coefficients between MAS rater pairs across overall raw scores (left panel), category means (middle panel), and category ranks (right panel). See text for calculations.

Figure 3 shows frequency distributions of interrater reliability scores based on the percentage agreement between pairs of raters. Agreement scores calculated by the usual "identical" method ranged from 0% to 63% (M = 20%). Reliability calculations based on the more lenient "adjacent" method resulted in higher values, ranging from 0% to 88% (M = 48%).

Table 1 presents both Pearson correlations and percent agreement scores for the individual MAS questions.


FIGURE 3. Frequency distribution of percent agreement scores between MAS rater pairs using the typical "identical" method (left panel) and a more lenient "adjacent" method (right panel) of calculation. See text for details.

Pearson correlations for the individual questions in the institutional and school samples, respectively, ranged from -.24 to .44 (M = .14) and from -.51 to .55 (M = .11). The percent agreement scores for the institutional and school samples, respectively, ranged from 8% to 31% (M = 19%) and from 0% to 38% (M = 18.9%) based on the "identical" method of calculation, and from 31% to 59% (M = 45%) and from 31% to 75% (M = 55.2%) based on the "adjacent" method.

TABLE 1. Correlation Coefficients and Percent Agreement Scores for Individual Questions

              Correlation          Identical Agreement     Adjacent Agreement
Question   Institution  School    Institution   School    Institution   School
    1          .08        .14          8%         25%         31%         50%
    2          .20       -.51         23%          0%         46%         31%
    3          .25        .00         31%         19%         54%         50%
    4          .42        .34         28%         19%         46%         69%
    5         -.24        .55         18%         25%         49%         56%
    6          .23       -.22         10%         13%         49%         63%
    7          .44       -.25         23%         19%         49%         69%
    8          .19        .45         10%         25%         46%         75%
    9          .04       -.17         10%          6%         38%         44%
   10          .15        .12         23%         13%         46%         44%
   11          .23        .05         15%         19%         54%         69%
   12         -.10        .34         21%         25%         33%         50%
   13          .08        .54         21%         25%         41%         50%
   14         -.04        .28         26%         25%         36%         56%
   15          .28       -.10         28%         38%         59%         69%
   16          .06        .21         13%          6%         38%         38%

Note: "Identical" and "Adjacent" refer to the two percent agreement methods described in the text.


DISCUSSION

Findings reported by Durand and Crimmins (1988) on the reliability of the MAS were not replicated in the present study.

Several variations of both correlational and percent agreement reliability analyses were applied to independently collected data samples, and results showed little correspondence between raters when they were asked to identify variables maintaining SIB. If one were to choose a minimum standard of r = .80 or agreement = 80% as a cutoff for acceptable reliability, only 15% of the correlation coefficients and none of the percent agreement scores met this standard. These results differ markedly from those obtained by Durand and Crimmins and raise serious questions about the utility of the MAS as a diagnostic or prescriptive tool for clinical as well as research purposes.

In light of the low correlation and percent agreement scores obtained between raters for the MAS as a whole, we conducted additional reliability calculations to determine whether rater responses to any specific questions contributed to either high or low overall reliability. The uniformly low correlation and agreement scores obtained for the individual questions indicate that there was no subset of items, or even a single item, likely to produce high agreement between raters.

Because percent agreement scores are more conservative (i.e., stringent) measures of observer reliability than are correlation coefficients, some discrepancy between the two was expected. In the present study, however, even the correlation coefficients were quite low. The fact that we did not obtain correlational results consistent with those reported by Durand and Crimmins ordinarily might be attributed to a number of differences related to settings, clients, raters, and administration procedure (i.e., institution vs. school, adolescents and adults vs. children, institutional staff vs. teachers and aides, responding to each item when read by an interviewer vs. viewing the entire scale at once).


Yet these variations in sampling characteristics were included in the present study and appeared to have no effect on the results. Even if our low reliability figures could be attributed to such sampling variables, the generality of the MAS would still be questionable because its use has not been restricted in any way. The MAS has been proposed as a general substitute for functional analyses based on direct observation; therefore, it should be applicable across type of individual, rater, setting, and behavior problem.

Although the primary data for the MAS consist of verbal report, the ratings should reflect observable events to some extent because the rater's response is based on having observed something about the client's behavior. Thus, another variable perhaps responsible for low reliability is definitional (item) ambiguity. For example, phrases on the MAS such as "Does it appear that he or she enjoys performing this behavior..." (Item 9), "Does your child seem to do this behavior to upset or annoy you..." (Items 10 & 11), and "...does your child seem unaware..." (Item 13) appear subjective because they do not refer to any specific events (i.e., the described processes could never have been directly observed). Item ambiguity of this type could account for low reliability scores generally, but not for differences between the present data and those reported by Durand and Crimmins, because the same items (i.e., "behavioral definitions") were used.

The fact that the MAS requires a rater either to recollect past observations of the client or to make conclusions based on those recollections suggests two additional factors for consideration. The first is the possibility that responses on the MAS are affected by a rater's familiarity with a client. This would be reflected in items such as "Does this behavior occur repeatedly, over and over, in the same way?" (Item 5), which seem to require extensive observation of the client over time. Raters in the present study had worked with the clients for varying lengths of time (ranging from about half an academic year to several years), but all should have had the opportunity to observe their clients repeatedly across a number of contexts; we assume that a similar situation existed in the Durand and Crimmins study because it was conducted in a classroom environment. Thus, rater familiarity should have had little effect on the reliability data; if anything, the institutional raters in our study were more familiar with their clients than were raters in the Durand and Crimmins study.

A second possibility is that staff in the Durand and Crimmins study were trained extensively in either behavioral observation or interpretation, whereas our staff received minimal training.


For example, the items "Does this behavior occur when any (sic) request is made..." (Item 6), "Does this behavior occur when you take away a favorite toy or food" (Item 8), and "Does this behavior stop occurring shortly after..." (Items 12 and 14) require the "setting up" of certain situations or the recollection of highly specific situational contexts in which past behavior occurred, both of which reflect a considerable amount of training in behavioral assessment. Extensive staff training for the purpose of completing the MAS was both unnecessary and inappropriate in our view. No specific expertise on the part of raters was mentioned in the original report; therefore, the MAS should be applicable across a wide range of raters. Also, a requirement of extensive staff training in order to obtain reliable MAS data would seem to negate directly one of the scale's advantages: relative simplicity compared to more direct forms of assessment.

Our attempted replication examined two classes of variables: reliability calculation method and rater/subject sample. Neither variable seemed to have a differential effect because the reliability results were uniformly low. This finding was unexpected and cannot be attributed to any identified difference between our methods and those used by Durand and Crimmins, although rater training remains an unexplored possibility. The present research might therefore be considered a failure not only to replicate previous findings, but also to identify the conditions that produce high or low interrater reliability on the MAS. Additional reliability analyses will be required to identify those conditions. Nevertheless, our inability to obtain adequate interrater reliability under conditions both similar to and different from those reported by Durand and Crimmins suggests that extreme caution should be taken when administering the MAS and when interpreting the results. Having a parent or caretaker complete the MAS may provide information every bit as useful as that gained during an interview (e.g., asking the parent, "Why do you think Johnny bangs his head?"), but the present data indicate that information gained from the MAS may not be any more useful. Thus, it does not appear that verbal report measures about the functions of (maintaining variables for) behavior disorders can substitute for information gained through direct observation of behavior.

REFERENCES

Carr, E. G., & Durand, V. M. (1985). Reducing behavior problems through functional communication training. Journal of Applied Behavior Analysis, 18, 111-126.

Durand, V. M., & Carr, E. G. (1987). Social influences on "self-stimulatory" behavior: Analysis and treatment application. Journal of Applied Behavior Analysis, 20, 119-132.

Durand, V. M., & Crimmins, D. B. (1988). Identifying the variables maintaining self-injurious behavior. Journal of Autism and Developmental Disorders, 18, 99-117.

Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1982). Toward a functional analysis of self-injury. Analysis and Intervention in Developmental Disabilities, 2, 1-20.


Iwata, B. A., Pace, G. M., Cowdery, G. E., Kalsher, M. J., & Cataldo, M. F. (1990). Experimental analysis and extinction of self-injurious escape behavior. Journal of Applied Behavior Analysis, 23, 11-27.

Iwata, B. A., Vollmer, T. R., & Zarcone, J. R. (1990). The experimental (functional) analysis of behavior disorders: Methodology, applications, and limitations. In A. C. Repp & N. N. Singh (Eds.), Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabilities (pp. 301-330). Sycamore, IL: Sycamore Publishing Co.

Mace, F. C., & Knight, D. (1986). Functional analysis and the treatment of severe pica. Journal of Applied Behavior Analysis, 19, 411-416.

Mace, F. C., Page, T. J., Ivancic, M. T., & O'Brien, S. (1986). Analysis of environmental determinants of aggression and disruption in mentally retarded children. Applied Research in Mental Retardation, 7, 203-221.

Steege, M. W., Wacker, D. P., Berg, W. K., Cigrand, K. K., & Cooper, L. J. (1989). The use of behavioral assessment to prescribe and evaluate treatments for severely handicapped children. Journal of Applied Behavior Analysis, 22, 23-33.

Sturmey, P., Carlsen, A., Crisp, A. G., & Newton, J. T. (1988). A functional analysis of multiple aberrant responses: A refinement and extension of Iwata et al.'s methodology. Journal of Mental Deficiency Research, 32, 31-46.
