ORIGINAL ARTICLE

Instrument validation process: a case study using the Paediatric Pain Knowledge and Attitudes Questionnaire

Deborah Peirce, Janie Brown, Victoria Corkish, Marguerite Lane and Sally Wilson

Aims and objectives. To compare two methods of calculating interrater agreement while determining content validity of the Paediatric Pain Knowledge and Attitudes Questionnaire for use with Australian nurses.

Background. Paediatric pain assessment and management documentation was found to be suboptimal, revealing a need to assess paediatric nurses' knowledge of and attitudes to pain. The Paediatric Pain Knowledge and Attitudes Questionnaire was selected as it had been reported as valid and reliable in the United Kingdom with student nurses. The questionnaire required content validity determination prior to use in the Australian context.

Design. A two phase process of expert review.

Methods. Ten paediatric nurses completed a relevancy rating of all 68 questionnaire items. In phase two, five pain experts reviewed the items of the questionnaire that scored an unacceptable item level content validity. Item and scale level content validity indices and intraclass correlation coefficients were calculated.

Results. In phase one, 31 items received an item level content validity index ≤0.80 and the questionnaire did not achieve content validity. Following amendment and deletion of items in phase two, the questionnaire achieved a scale level content validity index of 0.94 and an intraclass correlation coefficient of 0.94, demonstrating excellent agreement between raters and therefore acceptable content validity.

Conclusion. Equivalent outcomes were achieved using the content validity index and the intraclass correlation coefficient.

Relevance to clinical practice. To assess content validity, the content validity index has the advantage of providing an item level score and is a simple calculation. The intraclass correlation coefficient requires statistical knowledge, or support, and has the advantage of accounting for the possibility of chance agreement.

Authors: Deborah Peirce, MSc, PG Dip Paed, BN, Registered Nurse, Princess Margaret Hospital for Children, Child and Adolescent Health Service, Subiaco, and School of Nursing, Midwifery and Paramedicine, Curtin University, Bentley, WA; Janie Brown, BN, MEd (Adult), PhD, Senior Lecturer, School of Nursing, Midwifery and Paramedicine, Curtin University, Bentley, WA; Victoria Corkish, MN, BScN, RGN/RSCN, Clinical Nurse Consultant, Princess Margaret Hospital for Children, Child and Adolescent Health Service, Subiaco, WA; Marguerite Lane, MN, Cert Paeds, RN, Paediatric Nurse Educator, Princess Margaret Hospital for Children, Child and Adolescent Health Service, WA; Sally Wilson, PhD, Paed Cert, RN, Princess Margaret Hospital for Children, Child and Adolescent Health Service, Subiaco, and School of Nursing, Midwifery and Paramedicine, Curtin University, Bentley, WA, Australia

Correspondence: Deborah Peirce, Registered Nurse, Princess Margaret Hospital for Children, Child and Adolescent Health Service, Roberts Road, Subiaco, WA 6006, Australia. Telephone: +61 8 9340 8924. E-mail: [email protected]

What does this paper contribute to the wider global clinical community?

• Comparison of two methods of calculating interrater agreement to determine content validity.
• Advice to clinicians that the content validity index has the advantage of item level scores and is a simple calculation.
• The intraclass correlation coefficient accounts for the possibility of chance agreement.

© 2016 John Wiley & Sons Ltd Journal of Clinical Nursing, 25, 1566–1575, doi: 10.1111/jocn.13130


Key words: content expert, content validity, content validity index, interrater agreement, intraclass correlation coefficient

Accepted for publication: 16 October 2015

Introduction

Determining the validity and reliability of an instrument is important as it affects the inferences that can be made from the data collected (Haynes et al. 1995). Validation was initially detailed by the American Psychological Association Committee on Test Standards (1954) and describes a unified approach to gathering data. Currently, validation includes the subsets of content, predictive, concurrent and construct validity (Beckstead 2009). The production of high quality data in quantitative inquiry requires rigorous testing of the instrument to build sufficient evidence to infer validity. Validity determines that an instrument is measuring what it proposes to measure, whereas reliability testing establishes that the instrument consistently measures the target attribute (Polit & Beck 2008).

Content validity is one of the initial processes in determining the validity of an instrument (Tenopyr 1977, Beck & Gable 2001). It is described as the degree to which the items in a tool adequately represent the universe of content for the construct it is attempting to measure (Polit & Hungler 1991). Content validity testing is concerned with the inferences that can be made about an instrument's construction, in contrast to construct validity, which involves the inferences that can be made about the scores generated by the instrument (Tenopyr 1977). Typically, content validity determination follows a process of establishing the face validity of the proposed tool, that is, whether a lay person agrees that the instrument appears sound and reasonable (Lynn 1986). Content validity is reported as critical in both of its two phases: a priori, referring to the processes involved in developing a tool, and a posteriori, referring to researchers' efforts to judge the relevance of items in a developed tool (Beck & Gable 2001, Polit & Beck 2006).

Content validity has been widely accepted in nursing and health research as the practice of a panel of expert judges rating the relevance of items on a scale (Berk 1990, Davis 1992) and the subsequent calculation of an index of interrater agreement (Polit et al. 2007). Alternative measures to interrater agreement indexes have been debated in the validation literature, and a discussion of this follows with regard to a posteriori validation. Further, the application of two methods of calculating interrater agreement will be compared when seeking to establish content validity prior to using the Paediatric Pain Knowledge and Attitudes Questionnaire (PPKAQ).

The PPKAQ (Twycross & Williams 2013) was selected as a suitable questionnaire to assess nurses' knowledge and attitudes to children's pain in a tertiary paediatric Australian hospital following an audit which showed suboptimal documentation of children's pain management (Child and Adolescent Health Service 2013). Twycross and Williams report achieving content validity and reliability in the development of the PPKAQ (Twycross & Williams 2013). They described an extensive literature search, followed by a two phase process of content validity testing with experts and reliability testing with 30 student nurses. A limitation is that they did not report the method or statistics by which content validity was achieved. In phase one, the questionnaire was reviewed by one expert and required 19 minor amendments; five questions were removed and six added. In phase two, five experts analysed the questionnaire, resulting in minor wording changes to 26 items, substantial alterations to seven items and deletion of one question, giving a total of 71 items (Twycross & Williams 2013). The questionnaire was then tested with 30 student nurses in one UK hospital and found to be reliable (Cronbach's α coefficients for the total score and each subscale were ≥0.70, with a range of 0.70–0.82; Wilcoxon signed-rank tests for test–retest reliability of each item resulted in three items with p < 0.05). The three items were subsequently removed from the questionnaire, leaving a total of 68 items within five domains to measure the construct. The authors acknowledged that further studies were needed to test results with registered nurses and noted the small sample size as a limitation.

In this content validity study, a 4-point rating scale was provided for each item, with 1 = not relevant and 4 = very relevant. A free text box was provided at the end of each domain allowing participants to explain any of their choices or write any comments they had about the relevance of the statements in the preceding domain. Drug names and minor wording changes were made to suit the Australian context.
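The reliability checks described above (Cronbach's α against a 0.70 threshold and item level Wilcoxon signed-rank tests of test–retest stability) can be illustrated with a short sketch. This is not the original study's analysis; the data below are hypothetical, and the pingouin and SciPy libraries are simply one open-source route to these statistics.

```python
# Illustrative sketch only: hypothetical data, not the original study's analysis.
import numpy as np
import pandas as pd
import pingouin as pg            # pip install pingouin
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical 5-item subscale answered by 30 respondents on a 1-4 scale
items = pd.DataFrame(rng.integers(1, 5, size=(30, 5)),
                     columns=[f"q{i}" for i in range(1, 6)])
alpha, ci = pg.cronbach_alpha(data=items)
print(f"Cronbach's alpha = {alpha:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
# By the criterion reported above, a subscale would pass if alpha >= 0.70
# (random data such as this will not)

# Hypothetical test-retest responses for a single item;
# p < 0.05 on the Wilcoxon signed-rank test would flag the item as unstable
test = rng.integers(1, 5, size=30)
retest = np.clip(test + rng.integers(-1, 2, size=30), 1, 4)
stat, p = wilcoxon(test, retest, zero_method="wilcox")
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.3f}")
```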

Background

The recommendations for expert panel review of an instrument are consistent (Davis 1992, Grant & Davis 1997, Polit et al. 2007). Recommendations for the selection of expert judges for content validity assessment include expert knowledge of the subject, clinical experience, publications or presentations in the area, or expertise in instrument development (Grant & Davis 1997). Polit et al. (2007) emphasise the importance of adequate planning in the validation process to ensure that a strong panel of experts is convened. Additional recommendations include convening a multidisciplinary panel, inviting members of the target population and providing detailed information about the construct, purpose and intended use of the instrument to all panellists (Davis 1992, Grant & Davis 1997). Clear instructions to potential experts, in the form of a covering letter, are reported as very important (Froman 2002, Wynd & Schaefer 2002). Researchers agree in advocating an iterative process in which judges score independently and then rescore following discussion and amendment of items (Lynn 1986, Haynes et al. 1995, Polit et al. 2007).

While the suggestions for panel selection are consistent, there is debate around the most accurate method for calculating the degree of expert agreement regarding the relevance of items on a tool (Polit et al. 2007, Beckstead 2009, Hallgren 2012). Stemler (2004) described three methods of quantifying the degree of expert agreement to estimate interrater reliability: consensus, consistency and measurement approaches. The focus in nursing research methods has been on consensus approaches (Polit et al. 2007). Consensus estimates aim to measure whether reasonable expert judges share a common understanding of the construct the instrument is measuring (Stemler 2004). Typically, experts' judgements are averaged against an existing predetermined criterion, and a coefficient alpha or a multirater kappa coefficient is calculated to determine the reliability of experts' responses (Polit & Beck 2006).

Calculations of consensus estimates have generated debate around whether to calculate a content validity index (CVI) or a kappa statistic for estimation of interrater reliability (Wynd et al. 2003, Polit et al. 2007, Beckstead 2009). The CVI represents the proportion of agreement between all expert judges and is calculated by using a 4-point ordinal scale of ascending relevance for each item, collapsing the four response categories into dichotomous categories and then calculating the CVI as the proportion of all ratings of 3 (quite relevant) or 4 (very relevant) (Lynn 1986). In a seminal paper on content validity determination, Lynn (1986) advocated 100% agreement where there are five or fewer experts, agreement of 83% with six experts (five of six raters, 5/6 ≈ 0.83) and so forth, to account for chance agreement. The kappa statistic is a coefficient of agreement between two raters that adjusts for chance agreement (Cohen 1960); a modified kappa statistic has been described for multirater situations, and a weighted kappa statistic for ordinal categorical data (Polit et al. 2007, Hallgren 2012). The weighted kappa and the intraclass correlation coefficient (ICC) have been accepted as equivalent methods for determining interrater agreement (Fleiss & Cohen 1973). The ICC is a common method for calculating the proportion of variance and is described as an estimation of interrater reliability (Tinsley et al. 1975, Shrout 1998). Higher ICC values indicate greater interrater reliability (0 = random agreement, 1 = perfect agreement).

Data analysis

Phase one

Items were accepted as content valid if the item level CVI (I-CVI) was >0.80, and the overall tool was accepted as content valid if the scale level CVI, calculated as the average of the I-CVIs (S-CVI/Ave), was >0.90 (Polit & Beck 2006). To calculate the ICC, SPSS version 22 (IBM, Armonk, NY, USA) was used. Analysis was conducted using a two-way mixed effects model looking at absolute agreement. An ICC of ≥0.75 was accepted as excellent agreement (Hallgren 2012).
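As a concrete illustration of the consensus calculations described above, the sketch below computes the I-CVI, the S-CVI/Ave and the chance-adjusted modified kappa attributed to Polit et al. (2007). It is not the authors' code, and the ratings are hypothetical.

```python
# Illustrative sketch only; not the authors' code. All ratings are hypothetical.
from math import comb

def item_cvi(ratings):
    """I-CVI: proportion of experts rating an item 3 (quite relevant) or 4 (very relevant)."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi_ave(all_ratings):
    """S-CVI/Ave: the mean of the I-CVIs across all items on the scale."""
    return sum(item_cvi(r) for r in all_ratings) / len(all_ratings)

def modified_kappa(ratings):
    """Chance-adjusted I-CVI (modified kappa, as described by Polit et al. 2007).

    pc is the binomial probability that this many of the experts would agree
    on relevance purely by chance (p = 0.5 per expert).
    """
    n = len(ratings)
    a = sum(1 for r in ratings if r >= 3)   # experts judging the item relevant
    pc = comb(n, a) * 0.5 ** n              # probability of chance agreement
    icvi = a / n
    return (icvi - pc) / (1 - pc)

# Three hypothetical items, each rated by ten experts on the 4-point scale
items = [
    [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],   # unanimous agreement on relevance
    [4, 3, 2, 4, 3, 1, 4, 3, 2, 4],   # 7/10 agreement, below the 0.80 cut-off
    [2, 1, 2, 3, 2, 1, 2, 2, 1, 3],   # poor agreement
]
for i, r in enumerate(items, 1):
    print(f"Item {i}: I-CVI = {item_cvi(r):.2f}, k* = {modified_kappa(r):.2f}")
print(f"S-CVI/Ave = {scale_cvi_ave(items):.2f}")
```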


Phase two

Item level CVI was recalculated, as previously described, for the 31 items on the questionnaire that did not receive an I-CVI >0.80 in phase one of the project. The tool was accepted as valid if I-CVIs were >0.80 and the S-CVI/Ave was >0.90 following deletion and wording amendments. To calculate the ICC, SPSS version 22 was used as previously described in phase one.
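A minimal sketch of the same type of ICC analysis, using the open-source pingouin library and hypothetical ratings in long format, is shown below; it is an illustration, not the authors' SPSS analysis. pingouin's ICC2k row (two-way model, absolute agreement, average measures) should match the point estimate SPSS reports for a two-way model with absolute agreement.

```python
# Illustrative sketch only; hypothetical data, not the authors' SPSS analysis.
import pandas as pd
import pingouin as pg  # pip install pingouin

# Long-format ratings: one row per (item, rater) pair on the 4-point scale
df = pd.DataFrame({
    "item":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5],
    "rater":  ["A", "B", "C"] * 5,
    "rating": [4, 4, 3, 2, 3, 2, 4, 3, 4, 1, 2, 2, 3, 4, 4],
})

icc = pg.intraclass_corr(data=df, targets="item", raters="rater", ratings="rating")

# ICC2k: two-way model, absolute agreement, average of k raters
print(icc.set_index("Type").loc["ICC2k", ["ICC", "CI95%"]])
```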

Ethical considerations

As this was part of a larger study, Human Research Ethics approval was gained from the hospital and university committees. Experts' consent to participate was implied by their returned relevancy rating questionnaire or an email agreeing to participate. Both phases of the project included an information cover sheet that outlined that participation in the content validity study was voluntary and detailed the aims and proposed benefits of the study. All information provided by participants was anonymous and will be kept by the researchers in accordance with legislation (National Health and Medical Research Council 2007).

Results

Phase one

Ten of the 19 invited experts participated in the content validity determination of the PPKAQ by completing the 68 item questionnaire in its entirety (53% response rate). The I-CVIs ranged from 0.50 to 1.00 with a mean I-CVI of 0.80. Thirty-seven items (54%) had an I-CVI of ≥0.80 and thirty-one items (46%) had an I-CVI ≤0.80 (Table 1). The S-CVI/Ave was 0.80, as can be seen in Table 3. Domain level CVI/Ave is also illustrated in Table 3, demonstrating that Domain A, measuring attitudes, received the lowest consensus, with 11 of 17 items (65%) receiving an I-CVI <0.79. A fair level of agreement (Hallgren 2012) was found, with an ICC of 0.473 (95% confidence interval (CI) 0.276–0.635). Based on this, and an S-CVI/Ave <0.90, the questionnaire was found not to have content validity for use with Australian nurses.

Table 1 Items that did not meet CVI requirements in phase one of the project

| Item | Agreement n/10 (%) |
|---|---|
| A1 Children tolerate pain better than adults | 7/10 (70%) |
| A2 It is OK to carry out minor procedures, such as taking blood, without the use of analgesic drugs | 5/10 (50%) |
| A3 Children under two years of age feel less pain than older children in similar situations | 7/10 (70%) |
| A5 Infants who are less than a month old may be intubated without pain medicine | 7/10 (70%) |
| A9 When managing chronic pain in children the main goal of treatment is to control the pain | 7/10 (70%) |
| A10 Parents exaggerate their child's pain | 7/10 (70%) |
| A11 Children do not need analgesic drugs before having burns dressings changed | 7/10 (70%) |
| A12 Infants who are less than a month old can be intubated without sedation | 6/10 (60%) |
| A14 Procedural pain should be eliminated | 7/10 (70%) |
| A15 The level of pain suffered by a child can be established by giving him placebo medications | 7/10 (70%) |
| A16 Pain is to be expected if a child is in hospital | 6/10 (60%) |
| B5 The most common reason for the need to increase the dose of analgesic drugs in cancer treatment is the progression of the illness | 6/10 (60%) |
| B6 The neurological development of infants under a month of age is still incomplete and this means they cannot feel pain | 7/10 (70%) |
| B7 The gate control theory of pain suggests that there is a gating mechanism that can inhibit pain impulses from passing through | 7/10 (70%) |
| B9 The gate control theory suggests that the degree to which the gate is opened or closed determines whether impulses are inhibited or allowed to proceed | 7/10 (70%) |
| C1 Using a child's imagination is not an effective way of managing procedural pain | 7/10 (70%) |
| C2 Massage is an effective method of relieving pain in children | 7/10 (70%) |
| C6 Relaxation strategies are not effective in reducing a child's pain | 7/10 (70%) |
| D1 In treating pain in children only one class of analgesic drug should be used at a time | 7/10 (70%) |
| D3 The side effects of non-steroidal anti-inflammatory drugs (NSAIDs), e.g. diclofenac or ibuprofen, only occur when the drug is given orally | 6/10 (60%) |
| D10 Opioid medication given to manage chronic pain in children should be given on a regular basis | 7/10 (70%) |
| D11 The use of sedative drugs is an effective way of eliminating pain in children | 7/10 (70%) |
| D12 Postoperatively children should not be given analgesic drugs until they ask for them | 6/10 (60%) |
| D13 Nonsteroidal anti-inflammatory drugs (NSAIDs) and opioids cannot be given at the same time | 7/10 (70%) |
| D15 Nonsteroidal anti-inflammatory drugs (NSAIDs) increase the adverse respiratory effect of opioids | 6/10 (60%) |
| D16 Paracetamol (acetaminophen) is unsuitable for use with children who have asthma | 7/10 (70%) |
| D19 Pethidine is used infrequently to manage pain in children because of the effects of toxicity | 7/10 (70%) |
| D22 Paracetamol and an opioid cannot be given at the same time | 7/10 (70%) |
| E2 School-aged children cannot learn how to use a patient controlled analgesia pump | 7/10 (70%) |
| E7 Chronic pain in children does not usually cause mood changes | 6/10 (60%) |
| E12 Children between the ages of six to 12 months will have no lasting memories of painful procedures | 6/10 (60%) |

CVI, content validity index.

Free text comments from the experts included: "Found the wording of relevancy difficult to interpret. I have answered 'not relevant' to be 'untrue'"; "I think all of these statements are very relevant to nurses' attitudes although some are not true"; "Q5 and Q12 are the same"; "Very awkwardly worded and clunky questions"; "All great questions"; "All very relevant although not all are totally accurate"; "All relevant to consider but not all true and depends on whether these practices are considered the norm in a unit. Nonevidence based practice should not be the norm"; and "The gate control theory may still be taught by some nursing schools but certainly is no longer the standard".


Table 2 Phase two amendments and deletion of items on the PPKAQ

| Item | Decision | Rationale cited |
|---|---|---|
| Domain A: Views on the care of children in pain | | |
| 2 It is OK to carry out minor procedures, such as taking blood, without the use of analgesic drugs | Wording amendment to: Minor procedures, such as taking bloods, require analgesic drugs | Clarity |
| 5 Infants who are less than one month old may be intubated without pain medication | Delete | Ambiguous question |
| 9 When managing chronic pain in children the main goal of treatment is to control the pain | Wording amendment to: Controlling the pain is the main goal of treatment when managing chronic pain in children | Clarity |
| 10 Parents exaggerate their child's pain | Wording amendment to: Parents may exaggerate their child's pain | Clarity |
| 12 Infants who are less than a month old can be intubated without sedation | Wording amendment to: Intubated babies less than one month old require sedation. Collect demographic information on whether participants have critical care experience | Expectation that critical care nurses are familiar with intubated patients. Ward nurses may be unfamiliar with caring for intubated patients |
| 14 Procedural pain should be eliminated | Wording amendment to: Procedural pain should be minimised | Clarity |
| 15 The level of pain suffered by a child can be established by giving him placebo medication | Delete | Not worthwhile asking as ethically problematic |
| Domain B: Physiology of pain | | |
| 5 The most common reason for the need to increase the dose of analgesic drugs in cancer treatment is the progression of the illness | Wording amendment to: A common reason for the need to increase the dose of analgesic drugs in cancer treatment is the progression of the illness | Context required |
| 7 The gate control theory of pain suggests that there is a gating mechanism that can inhibit pain impulses from passing through | Delete | Can guess the answer based on question, therefore not worthwhile asking |
| 9 The gate control theory suggests that the degree to which the gate is opened or closed determines whether impulses are inhibited or allowed to proceed | Delete | Can guess the answer based on question, therefore not worthwhile asking |
| Domain C: Nondrug methods of pain relief | | |
| 2 Massage is an effective method of relieving pain in children | Wording amendment to: Massage is a method of relieving pain in children | Clarity |
| Domain E: Sociology and psychology of pain | | |
| 12 Children between the ages of six to 12 months will have no lasting memories of painful procedures | Wording amendment | Clarity; broad age range is ambiguous |



Phase two

Five of the seven invited experts (71% response rate) participated in phase two. Item level CVIs ranged from 0.0 to 1.0. The four items that received an I-CVI <0.80 were deleted and wording amendments were made to 10 items (Table 2). Following these amendments and deletions, 64 of the 68 items (94%) achieved an I-CVI >0.80 and the S-CVI/Ave was 0.94 (Table 3). An ICC of 0.94 demonstrated excellent agreement between raters, and the questionnaire was therefore accepted as having content validity.

Table 3 Domain and scale level content validity indices in phase one and following phase two amendments

| Domain | Phase one I-CVI > 0.80 | Phase one S-CVI/Ave | Post amendment I-CVI > 0.80 | Post amendment S-CVI/Ave* |
|---|---|---|---|---|
| A. Views on the care of children | 6/17 (35%) | 0.75 | 15/17 (88%) | 0.97 |
| B. Physiology of pain | 5/9 (56%) | 0.77 | 7/9 (78%) | 0.91 |
| C. Nondrug methods of pain relief | 5/8 (63%) | 0.81 | 8/8 (100%) | 0.92 |
| D. Using drugs to relieve pain | 12/22 (55%) | 0.83 | 22/22 (100%) | 0.97 |
| E. Sociology and psychology of pain | 9/12 (75%) | 0.86 | 12/12 (100%) | 0.94 |
| Total | 37/68 (54%) | 0.80 | 64/68 (94%) | 0.94 |

CVI, content validity index; I-CVI, item level CVI; S-CVI, scale CVI.
*S-CVI/Ave calculated after the amendment of 10 items and the deletion of four items.

Discussion

The iterative process used in phase two facilitated various attempts to restructure sentences to measure attitudes. Experts questioned the 'ambiguity' of, and need for 'context' around, the attitude statements. The two phases of this content validity determination process highlighted that domains related to knowledge were easier to gain agreement on and that assessing attitudes was challenging. Knowing which items were problematic, by calculating the I-CVI, made this process more straightforward. The process of achieving content validity of the PPKAQ also highlighted the importance of clear instructions to participants explaining how to rate the relevancy of questionnaire items.

A benefit of calculating the ICC is that it accounts for the possibility of chance agreement, which is a limitation of the CVI calculations (Wynd et al. 2003, Polit & Beck 2006, Beckstead 2009). However, Lynn (1986) proposes that with a minimum of five experts on the panel the opportunity for chance agreement is minimised, and that experts should be asked about any omissions from the construct. Therefore, with the appropriate number of experts, calculating the CVI is adequate in most clinical situations.

Content validity has been established for the PPKAQ in the Australian nursing context, which was important prior to using the questionnaire in a tertiary paediatric hospital. Further studies are required to determine the construct validity of the PPKAQ in the Australian context.

A limitation of this study is that several experts in phase one were unclear about the task required and commented (Table 1) on needing clarity between 'relevant' and 'true'. Despite being provided with the contact details of the researcher, clarification was not sought. Face-to-face discussion in phase two ensured that confusion about relevancy was not repeated. Additionally, experts were chosen based on their knowledge of pain. In the light of the number of items on the questionnaire assessing attitude, and the difficulty in attaining interrater agreement on items in that domain, inclusion of experts in tool development, as suggested by Grant and Davis (1997), could have been advantageous.


Conclusion

Ascertaining the content validity of an instrument prior to use is important in research, as it affects the inferences that can be made from the data collected. In this study, two methods of calculating interrater agreement were compared by applying both to the PPKAQ, and the results using the CVI and the ICC were similar. However, calculating the CVI as per Lynn (1986) had the advantage of being easier, especially for clinicians with limited access to statistical software and limited knowledge of statistical methods. The advantage of calculating the ICC is that it accounts for chance agreement. It remains to be seen, when determining construct validity, whether measuring content validity is redundant.

Relevance to clinical practice

Clinicians who want to establish content validity for a questionnaire, following adaptation for their context, should provide clear instructions to experts explaining how to rate relevancy. Further, the CVI is an appropriate method of determining interrater agreement, provided the panel has enough experts. It is simple to calculate and has the advantage of providing item level results.

Acknowledgements

We acknowledge the biostatistical support by Dr Lakkhina Troeung, the Princess Margaret Hospital Foundation for the seeding grant and the panels of experts involved in this study.

Contributions

Study design: DP, VC, ML, SW; Data collection and analysis: DP, JB, VC, ML, SW; Manuscript preparation: DP, JB, VC, ML, SW.


References

American Psychological Association (1954) Technical recommendations for psychological tests and diagnostic techniques. Psychological Bulletin 51, 201–238.
Beck C (1999) Content validity exercises for nursing students. The Journal of Nursing Education 38, 133–135.
Beck C & Gable R (2001) Ensuring content validity: an illustration of the process. Journal of Nursing Measurement 9, 201–215.
Beckstead J (2009) Content validity is naught. International Journal of Nursing Studies 46, 1274–1283.
Berk R (1990) Importance of expert judgment in content-related validity evidence. Western Journal of Nursing Research 12, 659–671.
Child and Adolescent Health Service (2013) Pain Assessment: Hospital Wide Audit Summary Report. Child and Adolescent Health Service, Perth, Australia.
Cohen J (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20, 37–46.
Davis L (1992) Instrument review: getting the most from a panel of experts. Applied Nursing Research 5, 194–197.
Fleiss J & Cohen J (1973) The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement 33, 613–619.
Froman R (2002) A classroom activity using a panel of expert judges for content validity determination. The Journal of Nursing Education 41, 234–236.
Grant J & Davis L (1997) Selection and use of content experts for instrument development. Research in Nursing and Health 20, 269–274.
Hallgren K (2012) Computing inter-rater reliability for observational data: an overview and tutorial. Tutorials in Quantitative Methods for Psychology 8, 23–34.
Haynes S, Richard D & Kubany E (1995) Content validity in psychological assessment: a functional approach to concepts and methods. Psychological Assessment 7, 238–247.
Heitzmann C, Kaplan R & Schneiderman N (1988) Assessment of methods for measuring social support. Health Psychology 7, 75–109.
Long D, Young J, Rickard C & Mitchell M (2013) Measuring paediatric intensive care nursing knowledge in Australia and New Zealand: how the Basic Knowledge Assessment Tool for pediatric critical care nurses (PEDS-BKAT4) performs. Australian Critical Care 26, 36–42.
Lynn M (1986) Determination and quantification of content validity. Nursing Research 35, 382–386.
National Health and Medical Research Council (2007) Australian Code for the Responsible Conduct of Research. National Health and Medical Research Council, Canberra, Australia.
Polit D & Beck C (2006) The content validity index: are you sure you know what's being reported? Critique and recommendations. Research in Nursing & Health 29, 489–497.
Polit D & Beck C (2008) Nursing Research: Generating and Assessing Evidence for Nursing Practice. Lippincott Williams & Wilkins, Philadelphia, PA.
Polit D & Hungler B (1991) Nursing Research: Principles and Methods, 4th edn. J.B. Lippincott Company, Philadelphia, PA.
Polit D, Beck C & Owens S (2007) Is the CVI an acceptable indicator of content validity? Appraisals and recommendations. Research in Nursing and Health 30, 459–467.
van Rijswijk L & Beitz J (2015) Pressure ulcer prevention algorithm content validation: a mixed-methods, quantitative study. Ostomy Wound Management 61, 48–57.
Shrout P (1998) Measurement reliability and agreement in psychiatry. Statistical Methods in Medical Research 7, 301–317.
Stemler S (2004) A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. Practical Assessment, Research and Evaluation 9. Available at: http://PAREonline.net/getvn.asp?v=9&n=4 (accessed 7 June 2015).
Tenopyr M (1977) Content–construct confusion. Personnel Psychology 30, 47–54.
Tinsley H, Weiss D & Berdie R (1975) Interrater reliability and agreement of subjective judgements. Journal of Counseling Psychology 22, 358–376.
Twycross A & Williams A (2013) Establishing the validity and reliability of a Pediatric Pain Knowledge and Attitudes Questionnaire. Pain Management Nursing 14, 47–53.
Wynd C & Schaefer M (2002) The osteoporosis risk assessment tool: establishing content validity through a panel of experts. Applied Nursing Research 15, 184–188.
Wynd C, Schmidt B & Schaefer M (2003) Two quantitative approaches for estimating content validity. Western Journal of Nursing Research 25, 508–518.

