
Journal of Intellectual & Developmental Disability, 2013 Vol. 38, No. 4, 318–324, http://dx.doi.org/10.3109/13668250.2013.832734

ORIGINAL ARTICLE

Reliability of a method for establishing the capacity of individuals with an intellectual disability to respond to Likert scales

MONICA CUSKELLY, KAREN MONI, JAN LLOYD & ANNE JOBLING


The University of Queensland, School of Education, Brisbane, Australia

Abstract

Background: The study reported here examined the reliability of a method for determining whether adults with an intellectual disability respond acquiescently and whether they are able to respond to items that use a Likert scale response format.

Method: Reliability of the outcomes of these procedures was investigated using a test–retest design. Associations with receptive vocabulary were examined.

Results: The majority of the participants did not demonstrate acquiescent responding. Individuals' responses to the Likert-type discrimination tasks were consistent, although this varied somewhat depending upon the abstractness of the task. There was some association between receptive language age equivalence scores and respondent performance.

Conclusion: It is recommended that the pretest protocol (a) be modified to improve its reliability, and (b) be used, in its modified form, with study participants who have an intellectual disability to ascertain the appropriate level of choice for items that use a Likert response format.

Keywords: intellectual disability, Likert scale, questionnaire, self-report, pretest

Introduction

Self-reports are the primary method of obtaining information on individuals' psychological functioning; however, it has been common practice to rely on proxy reports when collecting information about individuals with an intellectual disability. Although there is recognition that proxy reports are not the best method for accessing information about self-understanding and self-perception (e.g., Zimmermann & Endermann, 2008), researchers in the field of intellectual disability often choose to use proxy reports as there are substantial concerns about the quality of data collected from individuals with an intellectual disability. These concerns relate to acquiescent responding and the capacity of individuals with an intellectual disability to respond reliably to questions about self. However, there is a growing recognition of the necessity to understand individuals' own views of self, and thus increasing numbers of researchers are collecting data on self directly from participants with an intellectual disability, often using Likert-type scales (Hartley & MacLean, 2006). This study investigated the reliability of a method for determining if an individual with intellectual disability is able to respond to items where the response choices are presented using a Likert scale.

When researchers have made the decision to collect information directly from individuals with an intellectual disability, they have generally made some modifications to the instruments, with the intention of increasing the reliability of the data. Items and/or response choices may be simplified, and sometimes visual supports are added, often in the form of cartoon drawings of faces depicting emotions such as happy and sad. These modifications do not always result in levels of reliability, generally measured through internal consistency, that allow confidence in the resulting data (e.g., see Cuskelly & Gordon, 2011; Dagnan & Sandhu, 1999). In some instances, problems with internal consistency may arise from the quality of the scale; in others, the lack of reliability may be due to difficulties experienced by the respondents.

Correspondence: Monica Cuskelly, School of Education, The University of Queensland, St Lucia, QLD 4072, Australia. E-mail: [email protected] © 2013 Australasian Society for Intellectual Disability, Inc.


Individuals with an intellectual disability may have difficulty in understanding items, even after modification, and they may not understand the response choices that are available. In addition, individuals with an intellectual disability may be prone to responding acquiescently. Acquiescent responding occurs when an individual has a tendency to answer "yes" to any question. There have been a number of studies that suggest that this tendency occurs in individuals with an intellectual disability (e.g., Carlin et al., 2008), with IQ being negatively correlated with acquiescence (Gudjonsson & Young, 2011). This tendency may undermine attempts to use instruments with Likert scale response formats with this group.

Prior to using an instrument with Likert-type response choices, it would be helpful to have a process that determined if an individual with an intellectual disability was likely to be able to respond reliably. To use a Likert scale effectively, individuals need to be able to discriminate between at least two poles: positive and negative. Many Likert scales include a neutral position, and the majority also ask respondents to discriminate between levels of positive (or negative) response (e.g., strongly agree versus agree). Likert scales typically have between four and 10 response options (Brown, 2001), although studies that include participants with an intellectual disability generally use a relatively low number of possible choices (e.g., Nota, Ginevra, & Carrieri, 2010; Rose & Gerson, 2009).

Cummins (1992) developed a test of quality of life (Comprehensive Quality of Life Scale – Intellectual Disability [ComQoL-ID] Third Edition) that included a pretesting protocol (described in detail in the section on Instruments) that was intended to identify if the respondent with an intellectual disability was able to respond to Likert-type items, and, if so, at which level (binary choice, or a 3- or 5-point scale). In a report of the psychometric properties of the ComQoL-ID, Cummins, McCabe, Romeo, Reid, and Waters (1997) did not report on the pretest protocol except to note that of the 34 individuals who completed the scale on two occasions, two were not able to respond on the second occasion at the level reached on the first occasion, 22 reached the same response level, and 10 were able to complete the items at a higher level. The authors interpreted this improvement to mean that the experience of completing the scale on the first occasion led to learning by the respondents (rather than that they were responding unreliably). Although these data suggest that the scale is reliable, the lack of information about the individual elements of the protocol means that it is difficult to be confident that the scale is useful in determining if an individual can be expected to be able to successfully participate in a study in which Likert scale response types are to be used.

The purpose of the study reported here was to determine if the pretest protocol developed by Cummins (1997) was reliable and so could be used to ascertain if an individual would be able to effectively respond to a questionnaire that used a Likert scale response format. A test–retest procedure was adopted for this purpose. The association with receptive vocabulary was also investigated to determine if this made a significant contribution to the ability to use Likert scales.

Method

Participants

Thirty-three adults with an intellectual disability of varied cause (e.g., Down syndrome, lack of oxygen at birth, prematurity) participated in the study. These participants were a convenience sample, chosen because they were a group already established to have an intellectual disability. In addition, the participants were easily accessible to the researchers as they took part in a group program and were thus able to be tested at one of the program sites. No participant had sensory or physical impairments that were uncorrected or severe enough to make participation difficult. Three participants also had an autism spectrum disorder.

Participants ranged in age from 18 to 32 years, with a mean of 24.06 years (SD = 3.54 years). The mean standard score on the Peabody Picture Vocabulary Test – Third Edition (PPVT-III; Dunn & Dunn, 1997) was 48.18 (SD = 12.79). Approximately 64% of the group had a standard score of 40 on the PPVT-III, which is the lowest possible standard score on this instrument. Seventy-three percent had a score that was more than three standard deviations below the mean. While the score provided by the PPVT-III is not an IQ score, it is very strongly correlated with scores from more comprehensive measures of intellectual ability and can be used as a proxy measure when more complete assessments are not possible (Castellino, Tooze, Flowers, & Parsons, 2011).

Instruments

Peabody Picture Vocabulary Test – Third Edition (PPVT-III; Dunn & Dunn, 1997). The PPVT-III is an individually administered standardised test of listening comprehension for the Standard English spoken word. It is designed for individuals aged 2½ to 90+ years and can be used as a screening test of verbal ability. It has strong psychometric properties. Age equivalents (AE) were used to examine the association between receptive ability and reliability of responding to Likert-type items.


Pretest protocol (Cummins, 1997). There are three elements to this pretest: a Test of Acquiescent Responding, Subjective Testing, and a Test of Domain Satisfaction. Cummins suggested the elements be used as a precursor to the administration of his quality of life scale; however, the protocol can be used prior to any questionnaire.

Test of Acquiescent Responding. This test comprises four questions, two of which should be answered negatively ("Do you make all your clothes and shoes?" and "Did you choose who lives next door?"). If either is answered positively, the individual is understood to be answering acquiescently. Testing is then halted.

Subjective Testing. There are three aspects to this test: the Order of Magnitude test, the Scale with a Concrete Reference, and the Scale with an Abstract Reference. Each has three stages: in the first stage, the individual is asked to discriminate between two choices; in the second, between three choices; and in the third, to discriminate between five choices. The Order of Magnitude test assesses whether the individual can identify orders of magnitude: biggest/smallest; biggest/middle/smallest; biggest/second biggest/middle/second smallest/smallest. The Scale with a Concrete Reference requires the respondent to match sizes using a concrete reference (two, three, and then five blocks of different sizes). The Scale with an Abstract Reference asks the respondent to complete a task using abstract references, for example, "How important to you are the things you have, like money and the things you own?" (Here the available options are very important or not important [two-level choice]; very important, quite/somewhat important, or not important [three-level choice]; not at all important, a little bit important, somewhat important, a lot important, and very, very important [five-level choice]). See Cummins (1997) for complete details.

Domain Satisfaction Test. This task examines respondents' abilities to use a Likert-type scale to express happiness and sadness. Pictures of faces are used instead of numbers or descriptors. The binary choice is between very happy and very sad; the three-element choice options are very happy, neutral, or very sad; and the five-element choice options are very happy, happy, neutral, sad, or very sad. Before beginning this aspect of the test, the respondent is asked to identify something that makes him or her happy (e.g., dancing) and what makes him or her sad (e.g., being lonely). These are then used as the topic of the items in this scale (e.g., "If I say 'how happy are you about dancing?' which face would you point to?"). Before each level of the test, the intended meaning of the faces is explained to the respondent. It is anticipated that these questions will result in a positive or negative choice (not a neutral choice). Two questions that were intended to elicit a neutral response were added in this study. These two questions were: "How happy or sad do you feel about a fish in a teapot?" and "How happy or sad do you feel about the Prime Minister?"
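To make the decision logic of the protocol concrete, a minimal sketch in Python is given below. It is an illustration based on the description above rather than Cummins' (1997) published materials: the scoring of individual items is simplified, and `passes_level` is a hypothetical stand-in for whatever criterion a study adopts for deciding that a respondent has managed a given number of response options.

```python
# Illustrative sketch of the pretest decision flow (not Cummins' published
# scoring procedure). Assumes two yes/no acquiescence items that should be
# answered "no", and a caller-supplied check for each discrimination level.

def is_acquiescent(answers_to_no_items):
    """True if the respondent answers 'yes' to any item that should be 'no'."""
    return any(answer == "yes" for answer in answers_to_no_items)


def highest_reliable_level(passes_level):
    """Return the largest number of options (2, 3, or 5) the respondent can
    discriminate, testing the levels in ascending order as in the protocol."""
    highest = None
    for level in (2, 3, 5):
        if passes_level(level):
            highest = level
        else:
            break
    return highest


def pretest(answers_to_no_items, passes_level):
    """Apply the acquiescence gate first; only non-acquiescent respondents
    proceed to the Likert-level discrimination tasks."""
    if is_acquiescent(answers_to_no_items):
        return {"acquiescent": True, "likert_level": None}  # testing halts
    return {"acquiescent": False,
            "likert_level": highest_reliable_level(passes_level)}


# Example: a non-acquiescent respondent who manages a 3-point but not a
# 5-point format.
print(pretest(["no", "no"], passes_level=lambda k: k <= 3))
# {'acquiescent': False, 'likert_level': 3}
```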

Procedure

After ethical clearance had been obtained from the Ethical Review Committee of The University of Queensland, Brisbane, Australia, a large service provider was approached to seek permission to recruit participants through one of their services that had three sites across Queensland. This service provided literacy education to young adults with an intellectual disability two days a week. Permission was granted and letters outlining the study were provided to the service recipients (N = 42) and to their families. The study was explained to each individual and consent was obtained for all participants.

On the first testing occasion, individuals completed the PPVT-III and all the assessments from Cummins' (1997) pretest protocol. These latter assessments were repeated on a second occasion. There was an average of 36.48 days (SD = 14.19) between testing occasions, with a range of 14–57 days. All respondents were assessed in quiet, familiar surroundings by the same individual, who was known to them.

Results

All participants were able to complete the PPVT-III. The group had a mean AE of 6.68 years (SD = 2.11 years, range: 2.07–14.01 years). The two questions added to the Domain Satisfaction scale that we expected would elicit a neutral response failed to do so. Many respondents had quite strong views about the advisability of keeping fish in a teapot and about the Prime Minister. Responses to these items were omitted from further consideration.

Two of the 33 participants were acquiescent on both testing occasions and another two were acquiescent on the second occasion. These four participants were not included in the analysis regarding the reliability of the remaining elements of the pretest protocol. Two other individuals were unable to identify a favourite item (used in the Abstract Reference test); they did not continue with the test and their data were discarded. Thus data from 27 participants were available for analysis of the reliability of the elements of the protocol, with the exception of the Acquiescent Responding subtest, for which there were 33 participants.

As responses for analysis were categories (i.e., acquiescent or not, and the highest level of the two-, three-, or five-level discrimination tasks to which each individual could respond), unweighted kappa was used to determine reliability. The unweighted kappa statistic is preferred to percentage agreement as it removes the portion of agreement that is expected to occur by chance alone. Kappa takes a value of 1 for perfect agreement and values at or near 0 when agreement is no better than chance. The recommendations made by Landis and Koch (1977) were used to interpret the data: a value for kappa between 0 and .40 was considered to indicate poor reliability; values between .41 and .60 were taken to indicate moderate reliability; values between .61 and .80, to indicate substantial reliability; and kappas of .81 and above, to indicate excellent reliability.

Table 1 provides the test–retest percentage agreement and the unweighted kappa for each element of the pretest protocol. The initial percentage refers to that proportion of the group whose ability to respond to the questions was at the same level on both occasions. Performance on each occasion needed to be exactly the same to be scored as agreement. For example, if a respondent was able to discriminate at the level of two choices (yes/no), but not at the level of three choices (yes/neutral/no), on both occasions, this was scored as an agreement. The Test of Acquiescent Responding, the Scale with a Concrete Reference, and the Scale with an Abstract Reference provided reliable information, but the Order of Magnitude and the Domain Satisfaction tests were unreliable.

Cummins et al. (1997) suggested that changes over time in participants' responses to the pretest items could reflect learning, so the analyses were also conducted assuming that an improvement in performance was also an indicator of reliable responding. For this analysis, if a respondent was able to discriminate at the level of two choices on the first occasion but at the level of three choices on the second occasion, this was scored as agreement, as was a change from three choices to five choices.
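For readers who wish to reproduce this style of analysis, the sketch below computes unweighted kappa for test–retest categories and applies the interpretive bands described above. It is an illustration only: the data are invented, `cohen_kappa_score` from scikit-learn is used because it computes unweighted kappa by default, and the "learning" recoding simply treats any improvement from the first to the second occasion as agreement, as in the second set of analyses.

```python
# Sketch of the test–retest reliability analysis described above: unweighted
# kappa on categorical pretest outcomes, interpreted with the bands used in
# this study. The data below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

def interpret_kappa(kappa):
    """Bands as applied in this study (collapsing Landis & Koch's lower bands)."""
    if kappa <= 0.40:
        return "poor"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "excellent"

# Highest discrimination level (2, 3, or 5 options) reached on each occasion.
time1 = [5, 3, 5, 2, 3, 5, 5, 3, 2, 5]
time2 = [5, 3, 5, 3, 3, 5, 5, 5, 2, 5]

# Strict agreement: performance must be identical on both occasions.
kappa_strict = cohen_kappa_score(time1, time2)

# "Learning" recoding: an improvement at time 2 is also counted as agreement,
# implemented here by setting the time-2 category equal to time 1 in that case.
time2_learning = [t1 if t2 > t1 else t2 for t1, t2 in zip(time1, time2)]
kappa_learning = cohen_kappa_score(time1, time2_learning)

print(f"strict kappa = {kappa_strict:.2f} ({interpret_kappa(kappa_strict)})")
print(f"kappa allowing learning = {kappa_learning:.2f} ({interpret_kappa(kappa_learning)})")
```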


The Acquiescent Responding Scale was not included in these analyses as it does not have any possibility of improving performance in this way. The results of these analyses are also shown in Table 1. The level of reliability of the Scale with an Abstract Reference changed from moderate to excellent when improvement was considered, and the Domain Satisfaction pretest moved to the moderate level. The Order of Magnitude test did not show any appreciable change. It should be noted that the numbers involved were small – only one participant improved his or her performance on the Order of Magnitude test, three did so on the Scale with a Concrete Reference, and four improved on the Scale with an Abstract Reference, with the same number improving on the Domain Satisfaction test. Improved performance across the scales was not attributable to the same individuals improving across the aspects of the pretest.

All individuals were able to discriminate at the level of two choices for the Scale with a Concrete Reference, 26% could respond at the level of three options but not at the five-option level, and 74% were able to respond reliably when presented with five graded options. For the Scale with an Abstract Reference, one individual (4%) was able to respond reliably at the level of two choices but not to the more refined options, 22% were reliable at the level of three choices but not when five response options were presented, and 74% were reliable with five options. Therefore, either a binary or a three-element choice option captured data from all participants, with the exception of those who were acquiescent. According to the protocol, this latter group did not proceed to the stage of responding to Likert-type items.

As all respondents whose data were retained were able to respond to the Scale with a Concrete Reference at either the three- (n = 7) or five- (n = 20) level of discrimination, the Mann–Whitney U test was used to investigate whether the two groups differed with respect to their level of receptive vocabulary. There was no significant difference between groups. This same analysis was conducted for the Scale with an Abstract Reference, after the data from the one individual who could not respond to either the three- (n = 6) or five- (n = 20) level discrimination were removed. There was a significant difference between groups, Z = −2.38, p = .017, with those who could respond to the five-level discrimination having higher skills. The median AE for the lower group was 5.11 years, and for the group who could use the five-level discrimination the median AE was 6.56 years.
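The group comparison just reported can be run in a few lines with SciPy. The sketch below is illustrative only: the age-equivalent values are invented, not the study's data, and the group sizes simply mirror those reported for the Scale with an Abstract Reference.

```python
# Sketch of the receptive-vocabulary comparison described above: a
# Mann–Whitney U test comparing PPVT-III age equivalents (in years) for
# respondents who topped out at the three-level versus five-level
# discrimination. Scores below are invented for illustration.
from scipy.stats import mannwhitneyu
import statistics

ae_three_level = [4.8, 5.1, 5.3, 5.9, 6.0, 5.0]          # n = 6 (illustrative)
ae_five_level = [5.5, 6.2, 6.6, 7.1, 8.0, 6.4, 9.3, 6.9,
                 7.5, 6.1, 5.9, 8.8, 6.5, 7.0, 6.8, 7.7,
                 6.3, 9.0, 6.7, 7.2]                       # n = 20 (illustrative)

u_stat, p_value = mannwhitneyu(ae_three_level, ae_five_level,
                               alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
print(f"median AE (3-level group) = {statistics.median(ae_three_level):.2f} years")
print(f"median AE (5-level group) = {statistics.median(ae_five_level):.2f} years")
```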


Table 1. Test–retest results

Scale | Percentage who remained in the same category (accounting for learning^a) | Initial unweighted kappa^b | Unweighted kappa accounting for learning^c
Acquiescent Responding^d | 88% (NA) | .64 (substantial) | NA
Order of Magnitude^e | 74% (78%) | .28 (poor) | .36 (poor)
Scale with a Concrete Reference^e | 88% (100%) | .66 (substantial) | 1 (perfect)
Scale with an Abstract Reference^e | 82% (96%) | .46 (moderate) | .84 (excellent)
Domain Satisfaction^e | 56% (67%) | .24 (poor) | .50 (moderate)

Note. NA = not applicable. ^a Percentage in brackets indicates agreement between sessions when improvement in performance is assumed to indicate learning and is thus included in the calculation as similar performance. ^b Calculation conducted using only data that were matched exactly. ^c Calculation conducted using data that were matched exactly and data where learning was assumed to have occurred. ^d n = 33. ^e n = 27.

Discussion

This study was essentially a pilot study with a small, convenience sample. Therefore, our findings should be treated as preliminary. The data suggest that some of the subscales of the pretest protocol developed by Cummins (1997) are useful in identifying those individuals with intellectual disability who will be able to respond reliably to items that are presented using a Likert scale. These include the Test of Acquiescent Responding, the Scale with a Concrete Reference, and the Scale with an Abstract Reference. The Domain Satisfaction scale reached a moderate level of reliability when improvement was taken into consideration, but the Order of Magnitude test did not reach an acceptable level of test–retest reliability.

This last result is somewhat puzzling, as it might be expected that an individual would need to be able to understand levels of magnitude in order to complete the Scale with a Concrete Reference and the Scale with an Abstract Reference. It seems likely that the language associated with this task was too difficult for some participants, and so interfered with their capacity to complete it reliably. On the last item of this scale, respondents are asked to point to the "second biggest" and the "second smallest" blocks in an array of five blocks presented pictorially. A modification of this task in which respondents are asked to arrange five blocks in the correct sequence, or to choose the correct sequence from a range of pictures, may better represent understanding of the concept of magnitude.

Several respondents did not interpret the visual stimuli of the Domain Satisfaction scale in the way they were intended. The face used as the intermediate response between neutral and unhappy (introduced in the five-choice test) was understood by some participants to represent an angry face. This undermines the integrity of the array, as the stimuli no longer represent a continuum. The testing protocol contains a specific introduction for each pictogram before it is used; however, this was insufficient to ensure understanding by the participants in this study.

A more comprehensive teaching procedure may need to be implemented, or a different approach adopted, to ensure the pictograms used in the pretest protocol (and then in subsequent testing) are understood by respondents. This finding has relevance to assessments that use these representations as response options, or to support the interpretability of response options, when respondents have an intellectual disability. The confusion apparent in the participants in this study suggests that the inclusion of these prompts as supports for individuals when they are completing a questionnaire could reduce reliability rather than enhance it, unless each individual's understanding of the meaning of the pictograms was first established.

The majority of participants in this study were able to make judgements using three graded options, and all could use a binary choice effectively. Researchers who wish to collect information directly from individuals with an intellectual disability need to weigh including more participants (i.e., by restricting response choices to a binary choice) against obtaining more detailed data from fewer, and probably more capable, informants, thus restricting generalisability. However, as Hartley and MacLean (2006) noted, fewer response alternatives reduce reliability. The finding that a substantial number of participants had difficulty using the five-level response option reflects that of several other investigations into questionnaire use with individuals with intellectual disability (e.g., Fang et al., 2011) and begins to address the knowledge gap, identified by Hartley and MacLean, about the most appropriate number of response options for individuals with an intellectual disability. Cummins et al. (1997) described the conversion of scores collected using different response formats (binary, 3-, or 5-point Likert scale) to proportions of a standard scale. This may be useful for maximising both response rates and quality of data and deserves consideration.
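Cummins et al. (1997) do not spell out the conversion in the passage cited here. One widely used transformation for putting binary, 3-point, and 5-point responses on a common footing is the "percentage of scale maximum", sketched below as an assumption about what such a conversion could look like rather than as a description of their exact procedure.

```python
# A common way to place binary, 3-point, and 5-point responses on a common
# 0–100 metric is the "percentage of scale maximum" transformation. This is
# offered as an illustration; it may not match the exact conversion used by
# Cummins et al. (1997).

def percent_of_scale_maximum(score, n_options):
    """Convert a response coded 1..n_options to a 0–100 scale."""
    if not 1 <= score <= n_options:
        raise ValueError("score must lie between 1 and n_options")
    return 100 * (score - 1) / (n_options - 1)

# The same "top of the scale" answer maps to 100 regardless of format,
# and the midpoints of the 3- and 5-point scales both map to 50.
print(percent_of_scale_maximum(2, 2))   # 100.0
print(percent_of_scale_maximum(3, 5))   # 50.0
print(percent_of_scale_maximum(2, 3))   # 50.0
```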

The use of concrete referencing as a support for participants may be a useful strategy for improving the quality of data collected from those with an intellectual disability. This reduces the reliance on language that was evident in the Scale with an Abstract Reference.


Limitations

The small sample size and the lack of specific information about the level of cognitive functioning of the individuals who participated in the study are limitations of this investigation. In addition to assessment of cognitive abilities and adaptive behaviour, a more comprehensive assessment of language ability, particularly of comprehension, would have been helpful, as the PPVT-III measures only receptive vocabulary (Dunn & Dunn, 1997). An understanding of the level of skill required to comprehend the instructions associated with the tasks that make up this assessment would help to establish those for whom the pretesting protocol evaluated in this study would be useful. The reliance on the PPVT-III as our measure of language ability may have reduced our capacity to identify the contribution of language to performance on the pretest protocol.

The time between testing sessions varied quite considerably. As there is a clear possibility that learning relevant to the task might occur, this may have confounded the results.

Future research

Reliability is the foundation of a sound assessment; however, it is insufficient to ensure usefulness. Validity also needs to be established. Further research is required to establish that using the pretest protocol results in better quality data. It could be expected, for example, that the internal consistencies of scales would improve when data from only those whose responses indicate adequate capacity to respond to Likert scales are included, in contrast to internal consistencies when responses from all respondents are used in the analysis.
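One way such a check could be carried out is to compare the internal consistency of a scale computed from all respondents with that computed only from respondents who passed the pretest. The sketch below uses invented data and computes Cronbach's alpha directly from its standard definition; it is illustrative only and assumes nothing about any particular scale.

```python
# Sketch of the validity check suggested above: compare Cronbach's alpha for
# a scale when all respondents are included versus only those who passed the
# pretest. The data below are invented.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items array of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
# Hypothetical 5-item scale: "reliable" responders answer consistently around
# an underlying trait level, "unreliable" responders answer essentially at random.
trait = rng.integers(1, 6, size=(20, 1))
reliable = np.clip(trait + rng.integers(-1, 2, size=(20, 5)), 1, 5)
unreliable = rng.integers(1, 6, size=(8, 5))

alpha_all = cronbach_alpha(np.vstack([reliable, unreliable]))
alpha_screened = cronbach_alpha(reliable)
print(f"alpha, all respondents: {alpha_all:.2f}")
print(f"alpha, pretest-passers only: {alpha_screened:.2f}")
```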


A further challenge is to determine how to ensure that respondents understand the neutral choice. As demonstrated in this investigation, it is difficult (perhaps impossible) to create items that elicit a neutral response in everyone. This is complicated by the fact that the midpoint of Likert scales is often understood to have multiple meanings (Kulas & Stachowski, 2009), and is certainly intended to have different meanings across some scales. For example, in measures of attitude it is often intended to reflect the view that "I don't care either way" or to indicate uncertainty, whereas in measures of psychological functioning it may mean "I feel no better or worse than I did."

The use of pictorial supports is often recommended (e.g., Hartley & MacLean, 2006) or used (e.g., MacMahon & Jahoda, 2008) when data are to be collected from individuals with an intellectual disability. Unless there is some examination of the individuals' understanding of the meanings of these supports, they may decrease reliable responding rather than ensure better quality data. There are a number of ways available to researchers to assist individuals with intellectual disability to understand the intended meaning of the various pictograms, and some investigation of the most effective approach would be useful.

Conclusions

The use of the acquiescence questions and of the Scales with a Concrete and Abstract Reference from Cummins' (1997) pretest protocol seems likely to contribute to better quality data when participants with an intellectual disability are asked to respond to items using a Likert scale. These elements of the pretest protocol have been shown to be reliable using a test–retest procedure. Although the precise level of ability of the participants in this study was not established, their performance on the PPVT-III suggests that the majority of the group were in the moderate range of intellectual disability, thus indicating that the pretest protocol is likely to be useful across a large proportion of the population with an intellectual disability.

Additional work needs to be done to develop items that reduce the language load associated with the current Order of Magnitude aspect of the pretest protocol. In its current form, it is not possible to ascertain whether the lack of reliability is related to difficulty understanding the construct measured or to difficulty understanding the requirements of the task. The utility of the pretest protocol would be increased if users could be confident that it was measuring respondents' understanding. Finally, the pretest protocol, and those instruments that are used to measure satisfaction or other internal processes, would be strengthened by the creation of more effective visual prompts or, alternatively, a more complete teaching procedure. Despite these limitations, researchers are urged to consider using the pretest protocol before employing Likert-type scales with individuals with an intellectual disability.

Acknowledgements

The authors wish to thank the individuals who contributed to this study for their assistance.


The research reported in the manuscript was funded by the Michael Cameron Fund. No restrictions on publication of, or access to, the data were imposed by the funding body. None of the authors had a conflict of interest in the conduct of the study, and no financial or nonfinancial benefits accrue to any of the authors with respect to the outcomes of the study.


References

Brown, J. D. (2001). Using surveys in language programs. Cambridge, UK: Cambridge University Press.

Carlin, M. T., Toglia, M. P., Wakeford, Y., Jakway, A., Sullivan, K., & Hasel, L. (2008). Veridical and false pictorial memory in individuals with and without mental retardation. American Journal on Mental Retardation, 113, 201–213.

Castellino, S. M., Tooze, J. A., Flowers, L., & Parsons, S. K. (2011). The Peabody Picture Vocabulary Test as a pre-screening tool for global cognitive functioning in childhood brain tumor survivors. Journal of Neuro-Oncology, 104, 559–563. doi:10.1007/s11060-010-0521-1

Cummins, R. A. (1992). Comprehensive Quality of Life Scale – Intellectual Disability (3rd ed.). Melbourne, Australia: School of Psychology, Deakin University.

Cummins, R. A. (1997). Comprehensive Quality of Life Scale – Intellectual/Cognitive Disability (ComQol-I5) (5th ed.). Melbourne, Australia: School of Psychology, Deakin University. Retrieved from http://www.deakin.edu.au/research/acqol/instruments/comqol-scale/comqol-i5.pdf

Cummins, R. A., McCabe, M. P., Romeo, Y., Reid, S., & Waters, L. (1997). An initial evaluation of the Comprehensive Quality of Life Scale – Intellectual Disability. International Journal of Disability, Development and Education, 44, 7–19. doi:10.1080/0156655970440102

Cuskelly, M., & Gordon, K. (2011). Social comparisons: Associations with psychosocial functioning in individuals with Down syndrome. Down Syndrome Quarterly, 13, 20–28.

Dagnan, D., & Sandhu, S. (1999). Social comparison, self-esteem and depression in people with intellectual disability. Journal of Intellectual Disability Research, 43, 372–379. doi:10.1046/j.1365-2788.1999.043005372.x

Dunn, L. M., & Dunn, L. M. (1997). Peabody Picture Vocabulary Test – Third Edition. Circle Pines, MN: American Guidance Service, Inc.

Fang, J., Fleck, M. P., Green, A., McVilly, K., Hao, Y., Tan, W., Fu, R., & Power, M. (2011). The response scale for the intellectual disability module of the WHOQOL: 5-point or 3-point? Journal of Intellectual Disability Research, 55, 537–549. doi:10.1111/j.1365-2788.2011.01401.x

Gudjonsson, G. H., & Young, S. (2011). Personality and deception. Are suggestibility, compliance and acquiescence related to socially desirable responding? Personality and Individual Differences, 50, 192–195. doi:10.1016/j.paid.2010.09.024

Hartley, S. L., & MacLean, W. E., Jr. (2006). A review of the reliability and validity of Likert-type scales for people with intellectual disability. Journal of Intellectual Disability Research, 50, 813–827. doi:10.1111/j.1365-2788.2006.00844.x

Kulas, J. T., & Stachowski, A. A. (2009). Middle category endorsement in odd-numbered Likert response scales: Associated item characteristics, cognitive demands, and preferred meanings. Journal of Research in Personality, 43, 489–493. doi:10.1016/j.jrp.2008.12.005

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.

MacMahon, P., & Jahoda, A. (2008). Social comparison and depression: People with mild and moderate intellectual disabilities. American Journal on Mental Retardation, 113, 307–318.

Nota, L., Ginevra, M. C., & Carrieri, L. (2010). Career interests and self-efficacy beliefs among young adults with an intellectual disability. Journal of Policy and Practice in Intellectual Disabilities, 7, 250–260. doi:10.1111/j.1741-1130.2010.00274.x

Rose, J. L., & Gerson, D. F. (2009). Assessing anger in people with intellectual disability. Journal of Intellectual & Developmental Disability, 34, 116–122. doi:10.1080/13668250902845194

Zimmermann, F., & Endermann, M. (2008). Self-proxy agreement and correlates of health-related quality of life in young adults with epilepsy and mild intellectual disabilities. Epilepsy & Behavior, 13, 202–211. doi:10.1016/j.yebeh.2008.02.005
