REVIEW PAPER

Systematic review of instruments for measuring nurses' knowledge, skills and attitudes for evidence-based practice

Kat Leung, Lyndal Trevena & Donna Waters

Accepted for publication 3 May 2014

Correspondence to K. Leung: e-mail: [email protected]

Kat Leung MMed BN RN, Doctoral Candidate, Sydney Medical School, The University of Sydney, Camperdown, New South Wales, Australia
Lyndal Trevena PhD GP, Associate Professor, Sydney Medical School, The University of Sydney, Camperdown, New South Wales, Australia
Donna Waters PhD RN, Associate Professor/Associate Dean (Research), Sydney Nursing School, The University of Sydney, Camperdown, New South Wales, Australia

LEUNG K., TREVENA L. & WATERS D. (2014) Systematic review of instruments for measuring nurses' knowledge, skills and attitudes for evidence-based practice. Journal of Advanced Nursing 70(10), 2181–2195. doi: 10.1111/jan.12454

Abstract

Aim. To identify, appraise and describe the characteristics of instruments for measuring evidence-based knowledge, skills and/or attitudes in nursing practice.

Background. Evidence-based practice has been proposed for optimal patient care for more than three decades, yet competence in evidence-based practice knowledge and skills among nurse clinicians remains difficult to measure. There is a need to identify well-validated and reliable instruments for assessing competence for evidence-based practice in nursing.

Design. Psychometric systematic review.

Data sources. The MEDLINE, EMBASE, CINAHL, ERIC, CDSR, All EBM Reviews and PsycINFO databases were searched from 1960 to April 2013, with no language restrictions applied.

Review methods. Using pre-determined inclusion criteria, three reviewers independently identified studies for full-text review, extracting data and grading instrument validity using a Psychometric Grading Framework.

Results. Of 91 studies identified for full-text review, 59 met the inclusion criteria, representing 24 different instruments. The Psychometric Grading Framework determined that only two instruments had adequate validity: the Evidence Based Practice Questionnaire, measuring knowledge, skills and attitudes, and another unnamed instrument measuring only EBP knowledge and attitudes. Instruments used in another nine studies were graded as having 'weak' validity and instruments in the remaining 24 studies were graded as 'very weak'.

Conclusion. The Evidence Based Practice Questionnaire was assessed as having the highest validity and was the most practical instrument to use. However, the Evidence Based Practice Questionnaire relies entirely on self-report rather than direct measurement of competence, suggesting a need for a performance-based instrument for measuring evidence-based knowledge, skills and attitudes in nursing.

Keywords: evidence-based practice, instrument, knowledge, nursing, psychometric grading framework, psychometric systematic review, research utilization, skills and attitudes


Why is this research or review needed?
• The move towards evidence-based practice for optimal clinical care has been well recognized, but the competence of evidence-based practice knowledge and skills among nurse clinicians remains difficult to measure.
• Previous reviews have focussed on the validity of instruments for measuring nurses' attitudes to research use; using research in practice; and self-efficacy and outcome expectancy in evidence-based practice.
• There is a need to identify valid and reliable instruments for measuring competence for evidence-based practice in nursing.

What are the key findings?
• All 24 instruments included in the review are self-report and susceptible to recall and social desirability biases; many lack the construct validity and measurement of criterion correlation on which validity is based.
• The Evidence Based Practice Questionnaire was found to have the highest validity but is limited by the self-report nature of the instrument.

How should the findings be used to influence policy/practice/research/education?
• Development of a valid competency-based assessment tool for measuring evidence-based nursing practice is important for the future development of evidence-based practice education and research.
• Tools with demonstrated high validity and reliability used in other health disciplines (such as the Fresno test) could be adapted to nursing contexts.
• The Fresno test was developed to assess the performance of evidence-based practice through clinical scenarios. Changing the scenarios from a medical to a nursing focus would enable retesting of its validity with nurse participants.

Introduction

Evidence-Based Practice (EBP) aims to integrate the best available evidence with clinical experience, patient preferences and clinical circumstances (Straus et al. 2011). EBP relies on prerequisite knowledge and skills to gather and organize the evidence, communicate relevant information to clients and assist them to make the best decisions for their own health (Irwig et al. 2007). Although the move towards EBP for optimal clinical care has been well recognized for more than three decades, nurse clinicians may feel uncomfortable about engaging in this process. Researchers repeatedly find that nurses welcome the idea of EBP but do not feel they have adequate knowledge and skills to implement it (Sherriff et al. 2007, Gerrish et al. 2008, Munroe et al. 2008, Foo et al. 2011, Stichler et al. 2011, Yip Wai et al. 2013). A systematic review by Estabrooks et al. (2003a) found a positive association between individuals' attitudes towards research and research utilization (RU). Other factors such as time constraints, workload and lack of organizational support (Wright et al. 1996, Estabrooks 1998, Upton & Lewis 1998, Nagy et al. 2001, Ervin 2002, Gerrish et al. 2007, Brown et al. 2009, Waters et al. 2009, Chiu et al. 2010) have also been identified as affecting nurse clinicians' use of evidence in healthcare settings. Various instruments have been used to measure nurses' use of evidence in their practice, but the validity of these instruments is often variable or unknown (McSherry 1997, Upton & Lewis 1998, Melnyk et al. 2004, Pravikoff et al. 2005, FAHC 2007, Hart et al. 2008).

Validity refers to the meaning or interpretation of test scores and reflects how people respond to the context of assessment. Six distinguishable aspects of validity are described (content, substantive, structural, generalizability, external and consequential aspects of construct validity), which guide the interpretation of assessment scores for educational and psychological measurement (Messick 1995a,b). Alternatively, validity can be viewed as the extent to which a test assesses the specific concept that the researcher is attempting to measure; on this view, validity is traditionally divided into three attributes: content, construct and criterion validity (Streiner & Norman 2008). These different views of validity are similar in substance and the choice between them is arbitrary. Reliability reflects the stability of a test when it is administered repeatedly under different circumstances: a reliable test is expected to yield similar results each time. The most common tests of reliability are internal consistency, inter-rater reliability and test–retest reliability (Streiner & Norman 2008). Single or multiple validity and reliability tests are used to describe the psychometric properties of a test, scale or instrument.
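The two reliability concepts above are straightforward to compute. The following Python sketch is purely illustrative (simulated responses, not data from any study in this review): it computes Cronbach's alpha for internal consistency and a simple test–retest correlation.

```python
# Illustrative reliability statistics on simulated questionnaire data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency: alpha = k/(k-1) * (1 - sum of item variances /
    variance of the total score), for an (n_respondents, k_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=200)                                      # one underlying trait
items = trait[:, None] + rng.normal(scale=0.8, size=(200, 10))    # 10 correlated items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

# Test-retest reliability is commonly reported as the correlation between
# scores from two administrations of the same instrument.
time1 = items.sum(axis=1)
time2 = time1 + rng.normal(scale=2.0, size=200)   # simulated second administration
print(f"Test-retest r: {np.corrcoef(time1, time2)[0, 1]:.2f}")
```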


Shaneyfelt et al. (2006) reviewed 115 instruments for evaluating EBP education and found that the Fresno test, developed by Ramos et al. (2003), had the best psychometric properties for measuring knowledge and skills in evidence use; however, non-medical health professionals such as nurses or physiotherapists were rarely sampled (Shaneyfelt et al. 2006). Estabrooks et al. (2003a) completed a systematic review of the determinants of RU among nurses by evaluating the validity of instruments used in 20 studies. Instrument validity was ranked as low, medium or high using a 14-point scoring system. Estabrooks et al. (2003a) concluded that the Nursing Practice Questionnaire (NPQ) used in two of the studies (Brett 1989, Michel & Sneed 1995) had the highest validity. However, the NPQ uses the theoretical framework of Rogers' Theory of Diffusion to measure research utilization in practice and is therefore not directly suitable for measuring EBP competence (defined as knowledge, skills and attitudes) in nurse clinicians. Furthermore, in this article we take the view that while RU is an important part of implementing EBP, it does not encompass the entire process of EBP, which traditionally comprises five stages or steps. Research utilization can be simply defined as the use of research (Estabrooks 1998) or as the part of the EBP process in which evidence (research) is applied (Polit & Beck 2010).

In addition to identifying the determinants of RU, Estabrooks et al. (2003b) also reviewed the psychometric properties of RU instruments used in nursing. They found that three multi-scale instruments and their modifications had been replicated in 20 studies, while the remaining instruments had little or no validity reported. The Nursing Practice Questionnaire (NPQ) (Brett 1987), the Research Utilisation Questionnaire (Champion & Leach 1989) and the Edmonton Research Orientation Survey (Pain et al. 1996) all appear to have some valid psychometric properties for measuring RU in the nursing context (Estabrooks et al. 2003b), but none of them measures knowledge, skills and attitudes for EBP. A more recent systematic review by Frasure (2008) critically analysed the psychometric properties of 14 instruments used for measuring nurses' attitudes towards RU. The review concluded that the Research Utilisation Survey (Estabrooks 1999), as replicated by Kenny (2005), was a reliable instrument for measuring the attitude of clinicians towards research use.

Squires et al. (2011) also reviewed 60 self-reported RU instruments used with nurses and other healthcare professionals. Evidence about content, response processes, internal structure and relationships to other variables was used to form a grid for classifying the validity of instruments from Level one to Level three. Instruments were rated according to a non-peer-reviewed descriptive assessment of validity in which the level hierarchy accounted for the number of validity sources reported, but not their strength. Five studies were graded as Level one, but only two of these (Varcoe & Hilton 1995, Stiefel 1996) had reliability test scores reported (Squires et al. 2011). The review did not conclude which RU instruments had the best psychometric properties, and variables associated with the use of evidence, such as knowledge or skills, were not discussed.

Based on Bandura's Social Cognitive Theory, Chang and Crowe (2011) developed an instrument to measure the self-efficacy and knowledge of EBP in nurses. Participants were asked to rate their confidence in using EBP on an 11-point scale. This 26-item instrument was reported to have good psychometric properties, but the reliability of an additional six EBP knowledge questions was not tested. Again, this instrument is not entirely suitable for measuring EBP competence in nurse clinicians because perceived self-efficacy does not necessarily equate to competence. Another recent study measured the correlation between self-perception of EBP competence and actual competence in a group of medical students (Lai & Teng 2011). The authors concluded that objective measurement tools such as the Fresno test (Ramos et al. 2003) seem to be more reliable than self-evaluation methods for assessing EBP competence (Lai & Teng 2011). Therefore, while previous studies have examined self-reported research utilization and EBP self-efficacy measures in nursing, none has used valid instruments that objectively measure knowledge, skills and attitudes for evidence-based nursing practice.

In this article, knowledge refers to the theoretical and practical understanding of evidence-based nursing practice. Skill refers to a nurse's ability to apply his/her knowledge using the five steps of EBP, while attitude is a hypothetical construct that represents an individual's thoughts about the concepts of EBP (Dictionary 2000). EBP knowledge and skills therefore refer to one's ability to: (1) formulate a question in response to a clinical query or issue; (2) retrieve the best available evidence from various sources; (3) appraise the strength of the evidence; (4) apply the evidence in line with the client's best interests and values; and (5) assess the effectiveness of steps (1)–(4) in determining improvement in patients or practice (Straus et al. 2011).

The review

Aim

This systematic review summarizes the psychometric properties of instruments measuring the EBP knowledge, skills and attitudes of nurses. It addresses the following research question: what is the most valid and reliable instrument for measuring evidence-based knowledge, skills and attitudes in nursing practice?

Design

A systematic review of published research on the measurement properties of tools to assess knowledge, skills and attitudes for EBP in nursing was conducted (a psychometric systematic review). The Cochrane Collaboration systematic review method was used to specify the research question; the types of population, intervention and outcome of interest; and the inclusion and exclusion criteria, and to search for and retrieve relevant studies (Higgins & Green 2011). The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) framework is used to present the review process (de Vet et al. 2011) (Figure 1), with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guideline used to report study methods and results (Moher et al. 2009). Synthesis of the results focussed on evaluating the psychometric properties of instruments using a purpose-designed Psychometric Grading Framework (Leung et al. 2012) and appraising the characteristics and methodological quality of studies. In the targeted search for this review, the population of interest was defined as nurses who had completed a test or instrument (the intervention) to measure their EBP knowledge, skills or attitudes (the outcome or dependent variable).

[Figure 1. COSMIN flowchart of the search and systematic review process. References retrieved from multiple database searches (n = 5645): PubMed Pre & MEDLINE 3183; EMBASE 801; CINAHL 748; All EBM Reviews 653; PsycINFO 168; Maternity & Infant Care 71; ERIC 21. Duplicate and/or irrelevant references excluded based on abstracts (n = 5576); included for further investigation (n = 69); additional references from manual searches of reference lists and review articles (n = 22); full-text articles assessed for eligibility (n = 91); excluded/irrelevant based on full texts* (n = 32). Total number of studies = 59; total number of instruments = 24. *Reasons for exclusion were not mutually exclusive: incomplete report or full text unavailable, 2; testing of instrument not performed, 8; participants were not nurse clinicians, 12; instrument not aimed at measuring EBP/RU knowledge, skills and attitudes, 8; qualitative study/instrument, 4; instrument measured the application of EBP knowledge/intervention in specialty care, 2.]

Search methods

The MEDLINE, EMBASE, CINAHL, ERIC, Cochrane Database of Systematic Reviews, All EBM Reviews and PsycINFO databases were systematically searched for literature published from 1960 to April 2013, with no language restrictions applied. Search strategies were customized for each database due to the different range of search interfaces and controlled vocabulary search terms. Text-word searches were used to retrieve older records, such as articles published before 1976, for which abstracts are generally not available online. The following search terms were included: 'evidence-based practice' or 'evidence-based nursing' or 'evidence-based medicine' or 'evidence-based health care'; 'instrument' or 'assessment tool' or 'questionnaire' or 'scale'; 'knowledge'; 'skill(s)'; 'attitudes'; and methodological terms relevant to this review such as 'systematic review' and 'overview'. An example of the MEDLINE search strategy is given in Table S1. The reference lists of selected studies were hand-searched. Authors or their organizations were contacted to obtain copies of unpublished manuscripts/studies.

As the focus of this review was to identify EBP instruments that aimed to measure nurses' knowledge, skills and attitudes, 'research utilization' (RU) was not included as a search term. However, a range of studies were identified through the bibliographic database search in which instruments originally developed for measuring variables associated with RU were subsequently used to study EBP. For instance, the Research Awareness Questionnaire originally developed by McSherry (1997) to measure RU variables was later used to measure the EBP attitudes of nursing staff (McSherry et al. 2006) (Table S2). Therefore, any RU instruments subsequently used to measure EBP in single or multiple replications of studies were retained in the review.

Inclusion and exclusion criteria

Studies were included in the review if they: (1) recruited nurse clinicians as participants; (2) aimed to measure the EBP knowledge, skills and/or attitudes of participants; (3) contained a description of the instrument's developmental strategy; (4) presented results of validity or reliability test scores; (5) reported levels of EBP/RU knowledge, skills and/or attitudes; and (6) used quantitative methodology. Where a replication study referred to an index study for details of instrument criteria, the index study was also included, as it contained information on the developmental strategy of the instrument. We excluded studies that: (1) used only qualitative methods to explore EBP knowledge, skills and attitudes (because validity test scores would not be available for comparison); (2) did not mention the background of participants; (3) were abstracts, duplicates or incomplete reports; or (4) did not report results for each study variable. In the context of this review, nurse clinicians are defined as Enrolled Nurses, Registered Nurses and/or Midwives who have completed their basic training and are currently registered to practise in clinical settings.

Title and abstract review

The first investigator (KL) excluded studies that did not meet the inclusion criteria, based on title and abstract review. LT and DW subsequently screened all remaining titles and abstracts to determine the inclusion of full-text articles for final review. Disagreements about the inclusion or exclusion of studies were resolved by discussion between the three reviewers. Landis and Koch (1977) suggest that a Kappa statistic between 0.61–0.80 may be considered substantial evidence of agreement between reviewers. On this basis, there was good agreement between the three reviewers on the inclusion of articles for final full-text review (Kappa = 0.66; 95% CI 0.48–0.84; P < 0.001).
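For readers unfamiliar with the statistic, the sketch below shows how Cohen's Kappa is calculated and how the Landis and Koch (1977) bands are applied. The screening decisions are invented examples, not the review's actual data.

```python
# Cohen's kappa for two reviewers' include/exclude decisions, with the
# Landis and Koch (1977) interpretation bands. Example data only.
import numpy as np

def cohen_kappa(rater1, rater2) -> float:
    """kappa = (p_observed - p_expected) / (1 - p_expected)."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    p_o = np.mean(r1 == r2)                       # observed agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in np.union1d(r1, r2))
    return (p_o - p_e) / (1 - p_e)                # chance-corrected agreement

def landis_koch(kappa: float) -> str:
    bands = [(0.00, "poor"), (0.20, "slight"), (0.40, "fair"),
             (0.60, "moderate"), (0.80, "substantial"), (1.00, "almost perfect")]
    return next(label for cutoff, label in bands if kappa <= cutoff)

reviewer_a = ["include", "exclude", "include", "include", "exclude", "exclude"]
reviewer_b = ["include", "exclude", "exclude", "include", "exclude", "include"]
k = cohen_kappa(reviewer_a, reviewer_b)
print(f"kappa = {k:.2f} ({landis_koch(k)} agreement)")
```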

Search outcome

The original search retrieved 5645 articles about EBP/RU instruments. One article, by Gonzalez-Torrente et al. (2012), was translated from Spanish to English for full-text review. After the first review of titles and abstracts, 5576 studies were excluded as they did not meet the inclusion criteria. Sixty-nine studies were selected for further title and abstract review by the second and third reviewers, together with 22 studies retrieved from manual searches. Thus, 91 studies were retained for full-text review, with 32 of these subsequently excluded. Finally, 59 studies were eligible for examination of their psychometric properties, as shown in the COSMIN flowchart (Figure 1).

Quality appraisal – the Psychometric Grading Framework

Several appraisal checklists have been developed for evaluating the validity of measurement tools, but none has been rigorously evaluated. Most consist of yes/no checklists about the measures and methods used but do not assess the strength of the measurement properties (Greenhalgh et al. 1998, Estabrooks et al. 2003a, Jerosch-Herold 2005, Terwee et al. 2007, 2012). We therefore used a recently developed Psychometric Grading Framework (PGF) to rate the strength of the validity of the measurement tools included in this review. The PGF is based on the most commonly used statistical tests and on guidelines for threshold values recommended by leading psychometricians and biostatisticians (Leung et al. 2012). The PGF consists of two scales. Scale 1 is a matrix for assigning a level (A–D) to each of six psychometric properties (content validity, construct validity, criterion validity, internal consistency, test–retest reliability and inter-rater reliability) based on the strength of any measures reported. Scale 2 grades the overall psychometric strength of the instrument (Good – Very weak) by combining the number and level of psychometric measures arising from Scale 1 (Leung et al. 2012). All instruments included in the final review were graded against this framework.
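The two-scale logic can be pictured schematically in code. The six properties and the A–D levels below follow the published framework, but the combination rules are placeholders for illustration only; the actual Scale 2 criteria are specified in Leung et al. (2012).

```python
# Schematic of the PGF's two scales; the thresholds in grade_instrument()
# are hypothetical, not the framework's published rules.
from dataclasses import dataclass, field

PROPERTIES = ("content_validity", "construct_validity", "criterion_validity",
              "internal_consistency", "test_retest", "inter_rater")

@dataclass
class Scale1Ratings:
    # Scale 1: each reported property receives a level "A" (strongest) to
    # "D" (weakest); properties that were never tested are simply absent.
    levels: dict = field(default_factory=dict)

def grade_instrument(ratings: Scale1Ratings) -> str:
    """Scale 2 (illustrative rules): combine how many properties were rated
    and how strong those ratings were into one overall grade."""
    levels = [lv for prop, lv in ratings.levels.items() if prop in PROPERTIES]
    strong = sum(1 for lv in levels if lv in ("A", "B"))
    if strong >= 4:
        return "Good"
    if strong >= 2:
        return "Adequate"
    if len(levels) >= 2:
        return "Weak"
    return "Very weak"

example = Scale1Ratings({"content_validity": "B", "internal_consistency": "A",
                         "construct_validity": "C"})
print(grade_instrument(example))   # -> "Adequate" under these placeholder rules
```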

Data abstraction

Data extraction forms were developed and piloted on 15 randomly selected studies. These included items to define the study population and methodology, descriptions and developmental strategies of EBP instruments, psychometric measurement scores, EBP domains and other relevant information. Data were independently extracted by all three investigators and the final results were compared to achieve consensus.

Methods used for reviewing psychometric properties of instruments

The development methods and characteristics of the included measurement instruments and study populations were reviewed and recorded on the data extraction form. When the description of an instrument's development method was unclear, or the instrument was a replication of another, the original instrument (as the index study) was also included and reviewed for its psychometric properties. Any ambiguous or unclear descriptions of an instrument's development method were clarified with the corresponding author. The psychometric test scores of the instrument were then rated against the PGF, grading the validity of the instrument as Good, Adequate, Weak or Very weak. The instrument with the best validity was further appraised for its methodological quality.

Synthesis

Only papers reporting instruments with measurable validity scores were critically appraised for quality. Simple descriptive statistics (frequencies and percentages) for the characteristics of all reviewed studies, and the inter-rater reliability of the reviewers' decisions on inclusion for full-text review (Kappa statistic), were calculated using SPSS Version 21.0 (IBM, New York, NY, USA; http://www.ibm.com).
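For readers without SPSS, the same frequency-and-percentage tabulation can be reproduced in a few lines of Python. The rows below are made-up examples, not the review's extraction data.

```python
# Frequency/percentage tabulation in the style of Table 1 (example data).
import pandas as pd

studies = pd.DataFrame({
    "sampling_method": ["non-probability", "probability", "non-probability",
                        "not reported", "non-probability", "probability"],
})
counts = studies["sampling_method"].value_counts()
summary = pd.DataFrame({"N": counts, "%": (100 * counts / len(studies)).round(2)})
print(summary)   # one row per category, as tabulated for Table 1
```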

Results

Study characteristics

Fifty-nine studies were included in the systematic review, representing 24 different EBP/RU instruments. All instruments were self-report and required participants to rate their perceived level of knowledge, skills and/or attitudes towards EBP/RU (Table 1). Participants were mainly nurse clinicians. Most studies (80%) had a sample size of over 100, but response rates were generally poor (≤50%). Non-probability sampling was most commonly used (53%), and instruments were most often distributed at workplaces (51%) or by post/email (36%).

Table 1 Characteristics of all included studies (N = 59).

                                          N (%)
Participants' background*
  Nurse clinicians                        59 (100)
  Allied health professionals             6 (10.17)
  Medical practitioners                   3 (5.08)
Number of participants
  ≤100                                    11 (18.64)
  101–499                                 27 (45.76)
  ≥500                                    21 (35.59)
Response rate
  ≤50%                                    33 (55.93)
  ≥51%                                    21 (35.59)
  Not reported                            5 (8.47)
Sampling method
  Probability                             17 (28.81)
  Non-probability                         31 (52.54)
  Not reported                            11 (18.64)
Administration of instruments*
  Postal/email                            21 (35.59)
  Distributed at work/conference          30 (50.85)
  Interview                               1 (1.69)
  Not reported                            8 (13.56)
EBP instruments*,† (N = 36)
  Knowledge                               9 (25.00)
  Skills                                  11 (30.56)
  Attitudes                               9 (25.00)
  Knowledge, Skills, Attitudes‡           15 (41.67)
  Other variable(s)                       28 (77.78)
RU instruments*,† (N = 23)
  Knowledge                               5 (21.74)
  Skills                                  0 (0)
  Attitudes                               23 (100)
  Knowledge, Skills, Attitudes‡           0 (0)
  Other variable(s)                       20 (86.96)

*Categories are not mutually exclusive.
†EBP/RU instruments are instruments that had been used for measuring participants' perceived level of knowledge, skills and attitudes towards evidence-based practice (EBP) or research utilization (RU).
‡Instruments that measured three study variables: Knowledge, Skills and Attitudes.

Instruments identified and categorization as EBP/RU

Table S2 lists all identified instruments and their relationships to each other through substantive replications and adaptations. Only six studies used instruments that appeared only once. For example, the Research Utilisation Questionnaire of Baessler et al. (1994) was originally developed to measure research utilization but had been replicated seven times to measure evidence use in nursing practice. Others, such as Alcock et al. (1990) and McSherry (1997), had used EBP and RU instruments to variously measure participants' evidence-based knowledge, skills and/or attitudes. We further categorized instruments based on the stated aims of the study in which they were used. If instruments were used for measuring variables associated with EBP knowledge, skills and attitudes, they were classified as EBP instruments (Group I). Instruments aimed at measuring variables associated with RU were placed in Group II, regardless of whether the original version was designed to measure RU. Group I therefore includes 36 studies representing 16 self-reported instruments to measure EBP, while Group II includes 23 studies representing 11 instruments that specifically aimed to measure RU.

Measurement properties and validity of instruments

Only 15 studies in Group I measured all three variables associated with EBP (knowledge, skills and attitudes). Among these, only two were graded by the PGF as having 'adequate' instrument validity, two as 'weak' and 11 as 'very weak' (Table 2). Only one or two of the variables associated with EBP were measured in the remaining 21 studies in Group I. Of these, a study by Filippini et al. (2011) measured knowledge and attitudes and was graded as having 'adequate' validity. Seven were graded as 'weak' (Table S3), while the remaining 13 studies were graded as having 'very weak' validity using the PGF (Table S4).

As shown in Table S2, content validity, internal consistency and construct validity were the three most commonly reported attributes. Most instruments were reported to have content validity, except two, by Alcock et al. (1990) and Larrabee et al. (2007). More than half had 'fair' to 'good' internal consistency, and 15 studies reported construct validity of varying strength. Only two instruments had inter-rater reliabilities reported (McSherry et al. 2006, Munroe et al. 2008) and none had criterion validity reported. Other reported measures, such as acceptability and interpretability, were not used for validity assessment in this review as there is no apparent way of quantifying them.

The revised version of the Evidence Based Practice Questionnaire (EBPQ), originally developed by Upton and Lewis in 1997, was the only instrument found to have 'adequate' validity (Upton & Upton 2006, Koehn & Lehman 2008). These authors used a random sampling method to recruit participants and had a good sample size (N = 750) and a high response rate (75%), further supporting the generalizability of their study. The original EBPQ had five sections (subscales) made up of 164 items (Upton & Lewis 1997), but the revised EBPQ has three subscales and 24 items (Upton & Upton 2006). The EBPQ can be completed in 20 minutes and is practical for busy clinicians to use. Most importantly, this instrument has been specifically modified for measuring the knowledge, skills and attitudes of nurses towards EBP.

However, two factors may affect the psychometric properties of the instrument. First, the construct and content validity were based on a literature review and discussion with health and social care professionals. Key factors associated with the implementation of EBP were identified, but the conceptual theory of evidence-based nursing practice was not mentioned or discussed in the study. Second, the hypothesis used to establish construct and discriminant validity compared EBPQ scores against an independent measure of awareness of a local EBP initiative. This hypothesized variable could have been influenced by funding, strategy and support for implementation; even if nurses were aware of the local initiative, it does not follow that they had better knowledge, skills and attitudes for EBP. In addition, participants were asked to rate their perceived level on items called 'Research skills' and 'IT skills' in one of the subscales. 'IT skills' may imply general computer skills, which are distinct from the specialized skills needed to search for evidence, while 'Research skills' may alienate nurse clinicians, who are more commonly users of research than researchers. Nevertheless, the instrument demonstrates good internal consistency (Cronbach's α = 0.87) and content validity. There is limited to good construct validity (r = 0.54–0.95) across the three subscales, but the total percentage variance explained for the instrument was only 61.77%. The effect size for discriminant validity was statistically significant and adequate (Eta = 0.02–0.07).
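To make the two quoted statistics concrete, the sketch below shows how a 'percentage variance explained' figure and an eta-squared effect size of this kind are typically calculated. The data are simulated; the EBPQ's published values are not reproduced here.

```python
# Illustrative 'variance explained' and eta-squared computations on
# simulated 24-item responses (the revised EBPQ has three subscales).
import numpy as np

rng = np.random.default_rng(1)
items = rng.normal(size=(300, 24)) + rng.normal(size=(300, 1))  # common factor + noise

# Percentage of total variance explained by the first three components,
# from the eigenvalues of the inter-item correlation matrix.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print(f"Variance explained by 3 components: {100 * eigvals[:3].sum() / eigvals.sum():.2f}%")

# Eta-squared effect size: between-group sum of squares over total sum of
# squares, e.g. comparing total scores of 'aware' vs. 'unaware' respondents.
scores = items.sum(axis=1)
groups = rng.integers(0, 2, size=300)            # simulated awareness flag
grand_mean = scores.mean()
ss_between = sum(len(scores[groups == g]) * (scores[groups == g].mean() - grand_mean) ** 2
                 for g in (0, 1))
ss_total = ((scores - grand_mean) ** 2).sum()
print(f"Eta-squared: {ss_between / ss_total:.3f}")
```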

Discussion

Previous reviews have focussed on the validity of instruments for measuring nurses' attitudes to research use (Frasure 2008), using research in practice (Estabrooks et al. 2003b, Squires et al. 2011) and self-efficacy in EBP (Chang & Crowe 2011). The aim of this review was to examine the psychometric properties of instruments used to measure the knowledge, skills and attitudes of nurses for EBP and to recommend a valid and reliable instrument for measuring EBP knowledge, skills and attitudes to educators and researchers in nursing.

Table 2 Study characteristics and validity of instruments measuring all three variables (knowledge, skills and attitudes) of evidence-based practice (N = 15), in ranking order. All instruments measured the EBP study variables knowledge, skills and attitudes.

Koehn & Lehman (2008)
  Instrument*: EBPQ, 3 subscales, 24 items
  Participants/response rate (%)/sampling method: 422 nurses/41%/open to all
  Validity(ies)/reliability(ies): Content: expert panel; internal consistency: α = 0.94; construct: r = 0.54–0.95; discriminant: Eta = 0.02–0.07
  Strength: Adequate

Upton & Upton (2006)
  Instrument: EBPQ (revised), 3 subscales, 24 items
  Participants/response rate/sampling: 751 nurses/75%/random
  Validity/reliability: Content: expert panel; internal consistency: α = 0.87; construct: r = 0.54–0.95; discriminant: Eta = 0.02–0.07
  Strength: Adequate

Mollon et al. (2012)
  Instrument: EBPQ, 3 subscales, 24 items
  Participants/response rate/sampling: 609 nurses/pre 24%, post 20%/not stated
  Other variable(s): practice; barriers and facilitators
  Validity/reliability: Content: expert panel; internal consistency: α = 0.74–0.92 (EBPQ), α = 0.6–0.87 (RUS); construct: r = 0.30–0.40; discriminant: Eta = 0.02–0.07
  Strength: Weak

Mokhtar et al. (2012)
  Instrument: investigator developed, 3 sections, 70 items
  Participants/response rate/sampling: 40 nurses/31%/convenience
  Other variable(s): barriers; training resources; literacy skills; beliefs
  Validity/reliability: Content: expert panel; internal consistency: α = 0.67–0.94; construct: r = 0.54–0.95
  Strength: Weak

Gonzalez-Torrente et al. (2012)
  Instrument: EBPQ (Spanish), 3 subscales, 24 items
  Participants/response rate/sampling: 342 nurses/42.8%/selected
  Other variable(s): Nursing Work Index
  Validity/reliability: Content: expert panel; internal consistency: α = 0.69–0.95
  Strength: Very weak

Yip Wai et al. (2013)
  Instrument: investigator developed (modified and adopted multiple scales), 3 sections, 43 items
  Participants/response rate/sampling: 619 nurses/60.9%/not stated
  Other variable(s): information source for decision-making
  Validity/reliability: Content: expert panel; internal consistency: α = 0.61–0.93; construct: percentage variance explained 57.55%
  Strength: Very weak

Brown et al. (2009)
  Instrument: EBPQ (revised), number of subscales and items not stated
  Participants/response rate/sampling: 1144 hospital nurses/75.4%/not stated
  Other variable(s): barriers
  Validity/reliability: Content: not reported; internal consistency: α = 0.68–0.95
  Strength: Very weak

Stichler et al. (2011)
  Instrument: EBPQ, 3 subscales, 24 items
  Participants/response rate/sampling: 458 nurses/44.7%/convenience
  Other variable(s): barriers
  Validity/reliability: Content: expert panel; internal consistency: α = 0.78–0.96; construct: r = 0.09–0.18; discriminant: Eta = 0.02–0.07
  Strength: Very weak

Gomez et al. (2009)
  Instrument: EBPQ-19 (Spanish), 3 subscales, 19 items
  Participants/response rate/sampling: 289 nurses/88.9%/invitation by directorate
  Validity/reliability: Content: professional translator plus four research team members; internal consistency: α = 0.72–0.92; construct: percentage variance explained 62.29%
  Strength: Very weak

Upton & Upton (2005)
  Instrument: EBPQ of Upton & Lewis (old), 4 sections, 164 items
  Participants/response rate/sampling: 751 nurses/75%/random
  Other variable(s): barriers
  Validity/reliability: Content: feedback from participants; internal consistency: α = 0.74–0.83
  Strength: Very weak

Upton (1999)
  Instrument: EBPQ of Upton & Lewis (old), number of subscales and items not stated
  Participants/response rate/sampling: 370 nurses/74%/random
  Other variable(s): organizational support; time
  Validity/reliability: Content: feedback from participants; internal consistency: α = 0.73–0.81
  Strength: Very weak

Munroe et al. (2008)
  Instrument: EBN Skills Assessment Tool, 3 subscales, 14 items
  Participants/response rate/sampling: 40 nurses/20%/open to all
  Other variable(s): organizational readiness
  Validity/reliability: Content: clinicians' experience; internal consistency: α = 0.6–0.67; inter-rater reliability: 0.55–0.71
  Strength: Very weak

Hart et al. (2008)
  Instrument: EBNQ, 4 subscales, number of items unclear
  Participants/response rate/sampling: pre-intervention 744 and post-intervention 310 hospital nurses/pre 12%, post 28.6%/convenience
  Validity/reliability: Content: expert panel; internal consistency: α = 0.74–0.88
  Strength: Very weak

Caldwell et al. (2007)
  Instrument: EBNQ, 6 factors, 50 items
  Participants/response rate/sampling: 19 nurses and 66 allied health/43%/random
  Validity/reliability: Face and content: panel; internal consistency: α = 0.74–0.88
  Strength: Very weak

Nagy et al. (2001)
  Instrument: investigator developed, number of subscales and items not stated
  Participants/response rate/sampling: 816 nurses/65%/selected
  Other variable(s): barriers and solutions
  Validity/reliability: Content: literature review
  Strength: Very weak

*Abbreviations of instruments' names: EBN, Evidence Based Nursing; EBNQ, Evidence Based Nursing Questionnaire; EBPQ, Evidence Based Practice Questionnaire; EBPQ-19, Evidence Based Practice Questionnaire-19; RUS, Research Utilisation Scale.


This systematic review of psychometric properties has found that only the revised EBPQ (Upton & Upton 2006, Koehn & Lehman 2008) has adequate validity for measuring knowledge, skills and attitudes in EBP amongst nurses. A recent literature review by Upton et al. (2014) further affirms the psychometric properties of this instrument; however, the attitude subscale may require further refinement (Upton et al. 2014). Unlike the systematic review conducted by Shaneyfelt et al. (2006), we defined the strength of instruments based on the number and magnitude of their psychometric test scores. This differs from the contemporary method of interpreting validity, in which the support of multiple sources of evidence is evaluated (Downing 2003).

Most of the reviewed instruments were developed for measuring variables associated with using evidence in nursing practice, but there have been many replications and adaptations of the same instruments. Some replications retained the original name and used the instrument to measure evidence-based nursing practice in different contexts without re-validation, for example the studies by Tanner (2000) and Gerrish and Clayton (2004). Our review revealed several common problems, congruent with the findings of the related systematic reviews by Estabrooks et al. (2003b) and Squires et al. (2011). First, many instruments lack construct clarity, measurement of criterion correlation, and additional validity testing when the instrument is replicated in a different context. Second, all instruments are self-report and susceptible to recall and social desirability biases.

Strengths and limitations

As in any systematic review, it is possible that some instruments were not identified. However, we searched multiple databases, hand-searched retrieved studies, tracked down twenty unpublished studies and contacted numerous authors and experts. Our search strategy should therefore have captured most relevant studies and replications. A decision was made not to include qualitative studies in the review of EBP instruments; however, it is acknowledged that these may have uncovered important information about nurses' knowledge, skills and attitudes regarding EBP.

It is not uncommon for RU to be discussed in the same context as EBP in nursing (Breimaier et al. 2011, Florin et al. 2012, Wallin et al. 2012) and other healthcare studies (Lyons et al. 2011). The search term 'research utilisation' was not used for our review, as the aim was to focus on EBP knowledge, skills and attitudes. Instruments for measuring RU have been the subject of other reviews, and we are aware that many RU instruments were deliberately not captured by our search. Based on the five steps of implementing EBP (Ask, Access, Appraise, Apply and Assess) (Straus et al. 2011), we consider RU to be equivalent to the step of applying research (evidence) in practice. We propose that the unique difference between RU and EBP is that RU starts with the research itself, whereas EBP begins with an answerable question (Polit & Beck 2010). Melnyk and Fineout-Overholt (2011) also describe RU as a component of EBP, while Taylor (2007) defines RU as a process in which research findings are critiqued, implemented and evaluated.

A strength of our study is the use of the Psychometric Grading Framework, which allows both the number and the strength of validity measures to be assessed (Leung et al. 2012). Squires et al. (2011), on the other hand, used an untested standard guideline (the Standards for Educational and Psychological Testing) to evaluate the psychometric properties of research utilization measures in their systematic review (AERA 1999, Eignor 2001). This standard guideline was based only on the expert consensus of health professionals in the USA and Canada. In a review of a draft version in 2011, Phelps (2011) criticized this guideline as lacking generalizability and being unsuitable for consideration as an international standard. Estabrooks et al. (2003a) and Shaneyfelt et al. (2006) also used level-based hierarchies to classify EBP instruments according to the number and types of validity evidence, but these two reviews only provide a descriptive assessment of the instruments.

Recently, the COSMIN checklist (Mokkink et al. 2010) has been developed for evaluating the psychometric properties of measurement instruments. It was originally developed for rating the quality of instruments measuring health-related patient-reported outcomes, such as quality-of-life questionnaires (Terwee et al. 2007), and was then revised for evaluating the methodological quality of studies, with criteria for assessing the quality of instruments also included (Mokkink et al. 2010). Although the authors suggested that the same measurement properties are likely to be relevant for other kinds of instruments (Mokkink et al. 2010), the items designed for each measurement property (e.g. cross-cultural validity, item response theory) are not always suitable for appraising the quality of instruments that measure non-patient-reported outcomes, such as evidence-based practice instruments. The lack of these measurement properties would considerably affect the overall methodological quality score. Furthermore, the COSMIN checklist only requires users to rate the presence or absence of reported statistical tests based on a Delphi consensus of threshold values. The total score reflects the quality of each measurement property of a study, but not the overall strength of an instrument. Although the checklist has been used in several systematic reviews (manuscripts under review), the revised version has only recently been tested on 46 articles in a systematic review by the same authors who took part in developing the checklist (Terwee et al. 2011, 2012). The scoring outcome of their review was based on the opinion of only one rater (Terwee et al. 2012). Interestingly, the authors admitted that the decision for the COSMIN scoring system was 'based on arguments rather than evidence' and identified this as one of the shortcomings of their study (Terwee et al. 2012, p. 656). Thus, the validity and reliability of the checklist are yet to be confirmed.

The Psychometric Grading Framework, on the other hand, was built on best-practice standards for psychometric testing (Streiner & Norman 2008) and is based on the most commonly used statistical tests and on guidelines for threshold values recommended by leading psychometricians and biostatisticians. It was specifically developed for rating the quality of an instrument itself, rather than the methodological quality of a study. The framework offers a wider variety of statistical test measurements for evaluating the validity of an instrument, potentially for both patient and non-patient outcome measures. It is unique in that it can be used to rank the strength of validity evidence, allowing users to draw quantitative conclusions about the validity of instruments.

Implications

Valid and reliable instruments are required to measure the EBP competence of nurses, to facilitate the development of curricula and to evaluate the effectiveness of different educational initiatives for competence in EBP. The revised EBPQ (Upton & Upton 2006) was found to have adequate validity and was the most feasible instrument for use in practice. However, the establishment of the construct and discriminant validity of this instrument is potentially compromised on a number of criteria, including its self-report format. Thus, in the absence of a psychometrically sound and objective instrument to measure knowledge, skills and attitudes towards EBP in nursing, there remains a need to develop a valid assessment tool for this purpose. The newly developed instrument by Chang and Crowe (2011) focuses on measuring the self-efficacy of nurses in using evidence in practice. This self-report instrument is reported to have good psychometric properties, but its EBP knowledge questions have not yet been tested for reliability.

While the measurement of clinicians' EBP knowledge, skills and attitudes remains important for educators and researchers, an instrument to objectively measure these constructs needs to be developed. The Fresno test (Ramos et al. 2003) was developed to assess the performance of each component of EBP through clinical scenarios rather than relying on participants' self-evaluation. The Fresno test has high validity (Shaneyfelt et al. 2006) and has already been modified for use by allied health professionals (Tilson 2010). One option may be to modify the Fresno test for nurses by changing the medically focussed scenarios to nursing cases. On the other hand, qualitative methodologies may be more appropriate for measuring how nurses' attitudes contribute to evidence use in practice and for exploring how individual beliefs and perceptions influence behaviour towards using evidence in the clinical setting. Given the proven association between positive attitudes to RU and its implementation (Estabrooks et al. 2003a), it may also be important to include some assessment of EBP attitudes, which are not covered by the Fresno test.

Conclusion

This systematic review has summarized and appraised the psychometric properties and quality of instruments that have been used to measure the knowledge, skills and attitudes of nurse clinicians in EBP. Although the revised version of the EBPQ demonstrated the highest validity when graded against the Psychometric Grading Framework, its use in practice is limited by its self-report format. As the authors of the EBPQ have indicated in their recent publication that one subscale of this instrument still requires refinement, we conclude that there is no proven reliable instrument that can currently be recommended for measuring the knowledge, skills and attitudes of nurses towards evidence-based practice. Nevertheless, the findings of this systematic review provide a road map for assessing the quality of instruments to measure nurses' knowledge, skills and attitudes in evidence-based practice. There is a need for the development and evaluation of an objective competency-based assessment tool suitable for nursing practice. Adaptation of tools from other health disciplines may be possible and should be considered. Such a tool would have the potential to inform and influence the quality of educational and clinical nursing practice for the future.

Acknowledgements

This study is part of Kat Leung's PhD project and she would like to thank Associate Professor Lyndal Trevena and Associate Professor Donna Waters for their valuable advice and supervising role in editing the manuscript.


Funding

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Conflict of interest

No conflict of interest has been declared by the authors.

Author contributions

Kat Leung had full access to all the data of the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Associate Professor Lyndal Trevena and Associate Professor Donna Waters participated in the article screening process and full-text review of included articles. All authors have given approval of the version to be published. All authors have agreed on the final version and meet at least one of the following criteria [recommended by the ICMJE (http://www.icmje.org/ethical_1author.html)]:
• substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data;
• drafting the article or revising it critically for important intellectual content.

Supporting Information

Additional Supporting Information may be found in the online version of this article:
Table S1. Search strategy for systematic review of EBP instruments – Medline.
Table S2. Validity of instruments in 59 included studies, in alphabetical order.
Table S3. Study characteristics and validity of instruments that measured one or two study variable(s) of EBP and had 'weak' psychometric strength (N = 7), in reverse chronological order.
Table S4. Study characteristics and validity of instruments that measured one or two study variable(s) of EBP and had 'very weak' psychometric strength (N = 13), in reverse chronological order.

References

AERA (1999) Standards for Educational and Psychological Testing. American Educational Research Association, Washington, DC.
Alcock D., Carroll G. & Goodman M. (1990) Staff nurses' perceptions of factors influencing their role in research. The Canadian Journal of Nursing Research 22, 7–18.


Baessler C., Curran J., McGrath P., Blumberg M., Fennessey A., Perrong M., Cunningham J., Jacobs J. & Wolf Z. (1994) Medical-surgical nurses' utilization of research methods and products. MEDSURG Nursing 3, 113–141.
Breimaier H.E., Halfens R.J.G. & Lohrmann C. (2011) Nurses' wishes, knowledge, attitudes and perceived barriers on implementing research findings into practice among graduate nurses in Austria. Journal of Clinical Nursing 20, 1744–1756.
Brett J.L. (1987) Use of nursing practice research findings. Nursing Research 36, 344–349.
Brett J.L. (1989) Organizational integrative mechanisms and adoption of innovations by nurses. Nursing Research 38, 105–110.
Brown C.E., Wickline M.A., Ecoff L. & Glaser D. (2009) Nursing practice, knowledge, attitudes and perceived barriers to evidence-based practice at an academic medical center. Journal of Advanced Nursing 65, 371–381.
Caldwell K., Coleman K., Copp G., Bell L. & Ghazi F. (2007) Preparing for professional practice: how well does professional training equip health and social care practitioners to engage in evidence-based practice? Nurse Education Today 27, 518–528.
Champion V. & Leach A. (1989) Variables related to research utilization in nursing: an empirical investigation. Journal of Advanced Nursing 14, 705–710.
Chang A.M. & Crowe L. (2011) Validation of scales measuring self-efficacy and outcome expectancy in evidence-based practice. Worldviews on Evidence-Based Nursing 8(2), 106–115.
Chiu Y., Weng Y., Lo H., Hsu C., Shih Y. & Kuo K.N. (2010) Comparison of evidence-based practice between physicians and nurses: a national survey of regional hospitals in Taiwan. Journal of Continuing Education in the Health Professions 30, 132–138.
Dictionary, O.E. (2000) The Oxford English Dictionary Online. Oxford University Press, Oxford. Retrieved from http://www.oed.com on 12 September 2012.
Downing S.M. (2003) Validity: on the meaningful interpretation of assessment data. Medical Education 37, 830–837.
Eignor D.R. (2001) Standards for the development and use of tests: the standards for educational and psychological testing. European Journal of Psychological Assessment 17, 157–163.
Ervin N.E. (2002) Evidence-based nursing practice: are we there yet? Journal of the New York State Nurses Association 33, 11–16.
Estabrooks C. (1998) Will evidence-based nursing practice make practice perfect? Canadian Journal of Nursing Research 30, 15–36.
Estabrooks C.A. (1999) The conceptual structure of research utilization. Research in Nursing & Health 22, 203–216.
Estabrooks C.A., Floyd J.A., Scott-Findlay S., O'Leary K.A. & Gushta M. (2003a) Individual determinants of research utilization: a systematic review. Journal of Advanced Nursing 43, 506–520.
Estabrooks C.A., Wallin L. & Milner M. (2003b) Measuring knowledge utilization in health care. Journal of Policy Evaluation and Management 1, 3–36.
FAHC (2007) FAHC studying nursing knowledge of evidence-based practice. Vermont Nurse Connection 10, 14–15.
Filippini A., Sessa A., Giuseppe G.D. & Angelillo I.F. (2011) Evidence-based practice among nurses in Italy. Evaluation & the Health Professions 34, 371–382.
Florin J., Ehrenberg A., Wallin L. & Gustavsson P. (2012) Educational support for research utilization and capability beliefs regarding evidence-based practice skills: a national survey of senior nursing students. Journal of Advanced Nursing 68, 888–897.
Foo S., Majid S., Mokhtar I.A., Zhang X., Luyt B., Chang Y.-K. & Theng Y.-L. (2011) Nurses' perception of evidence-based practice at the National University Hospital of Singapore. Journal of Continuing Education in Nursing 42, 522–528.
Frasure J. (2008) Analysis of instruments measuring nurses' attitudes towards research utilization: a systematic review. Journal of Advanced Nursing 61, 5–18.
Gerrish K. & Clayton J. (2004) Promoting evidence-based practice: an organizational approach. Journal of Nursing Management 12, 114–123.
Gerrish K., Ashworth P., Lacey A., Bailey J., Cooke J., Kendall S. & McNeilly E. (2007) Factors influencing the development of evidence-based practice: a research tool. Journal of Advanced Nursing 57, 328–338.
Gerrish K., Ashworth P., Lacey A. & Bailey J. (2008) Developing evidence-based practice: experiences of senior and junior clinical nurses. Journal of Advanced Nursing 62, 62–73.
Gomez J.d.P., Morales-Asencio J.M., Abad A.S., Veny M.B., Roman M.J.R. & Ronda F.M. (2009) Validation of the Spanish version of the Evidence Based Practice Questionnaire in nurses. Revista Española de Salud Pública 83, 577–586.
Gonzalez-Torrente S., Pericas-Beltran J., Bennasar-Veny M., Adrover-Barcelo R., Morales-Asencio J. & De Pedro-Gomez J. (2012) Perception of evidence-based practice and the professional environment of Primary Health Care nurses in the Spanish context: a cross-sectional study. BMC Health Services Research 12, 227.
Greenhalgh J., Long A., Brettle A. & Grant M. (1998) Reviewing and selecting outcome measures for use in routine practice. Journal of Evaluation in Clinical Practice 4, 339–350.
Hart P., Eaton L., Buckner M., Morrow B., Barrett D., Fraser D., Hooks D. & Sharrer R. (2008) Effectiveness of a computer-based educational program on nurses' knowledge, attitude and skill level related to evidence-based practice. Worldviews on Evidence-Based Nursing 5, 75–84.
Higgins J. & Green S. (2011) Cochrane Handbook for Systematic Reviews of Interventions: Version 5.1.0 [updated March 2011]. The Cochrane Collaboration. Retrieved from http://www.cochrane-handbook.org on 10 September 2012.
Irwig L., Irwig J., Trevena L. & Sweet M. (2007) Smart Health Choices: Making Sense of Health Advice. Hammersmith Press, London.
Jerosch-Herold C. (2005) An evidence-based approach to choosing outcome measures: a checklist for the critical appraisal of validity, reliability and responsiveness studies. British Journal of Occupational Therapy 68, 347–353.
Kenny D.J. (2005) Nurses' use of research in practice at three US Army hospitals. Canadian Journal of Nursing Leadership 18, 45–67.
Koehn M.L. & Lehman K. (2008) Nurses' perceptions of evidence-based nursing practice. Journal of Advanced Nursing 62, 209–215.
Lai N. & Teng C. (2011) Self-perceived competence correlates poorly with objectively measured competence in evidence based medicine among medical students. BMC Medical Education 11, 1–8.


Landis J.R. & Koch G.G. (1977) The measurement of observer agreement for categorical data. Biometrics 33, 159–174.
Larrabee J., Sions J., Fanning M., Withrow M. & Ferretti A. (2007) Evaluation of a program to increase evidence-based practice change. The Journal of Nursing Administration 37, 302–310.
Leung K., Trevena L. & Waters D. (2012) Development of an appraisal tool to evaluate strength of an instrument or outcome measure. Nurse Researcher 20, 13–19.
Lyons C., Brown T., Tseng M.H., Casey J. & McDonald R. (2011) Evidence-based practice and research utilisation: perceived research knowledge, attitudes, practices and barriers among Australian paediatric occupational therapists. Australian Occupational Therapy Journal 58, 178–186.
McSherry R. (1997) What do registered nurses and midwives feel and know about research? Journal of Advanced Nursing 25, 985–998.
McSherry R., Artley A. & Holloran J. (2006) Research awareness: an important factor for evidence-based practice? Worldviews on Evidence-Based Nursing 3, 103–115.
Melnyk B.M. & Fineout-Overholt E. (2011) Evidence-Based Practice in Nursing & Healthcare: A Guide to Best Practice. Lippincott Williams & Wilkins, Philadelphia, PA.
Melnyk B.M., Fineout-Overholt E., Fischbeck Feinstein N., Li H., Small L., Wilcox L. & Kraus R. (2004) Nurses’ perceived knowledge, beliefs, skills and needs regarding evidence-based practice: implications for accelerating the paradigm shift. Worldviews on Evidence-Based Nursing 1, 185–193.
Messick S. (1995a) Standards of validity and the validity of standards in performance assessment. Educational Measurement: Issues and Practice 14, 5–8.
Messick S. (1995b) Validity of psychological assessment: validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist 50, 741–749.
Michel Y. & Sneed N.V. (1995) Dissemination and use of research findings in nursing practice. Journal of Professional Nursing 11, 306–311.
Moher D., Liberati A., Tetzlaff J., Altman D.G. & The PRISMA Group (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Medicine 6, e1000097.
Mokhtar I.A., Majid S., Foo S., Zhang X., Theng Y.-L., Chang Y.-K. & Luyt B. (2012) Evidence-based practice and related information literacy skills of nurses in Singapore: an exploratory case study. Health Informatics Journal 18, 12–25.
Mokkink L.B., Terwee C.B., Patrick D.L., Alonso J., Stratford P.W., Knol D.L., Bouter L.M. & de Vet H.C.W. (2010) The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study. Quality of Life Research 19, 539–549.
Mollon D., Fields W., Gallo A.-M., Wagener R., Soucy J., Gustafson B. & Kim S.C. (2012) Staff practice, attitudes and knowledge/skills regarding evidence-based practice before and after an educational intervention. The Journal of Continuing Education in Nursing 43, 411–419.
Munroe D., Duffy P. & Fisher C. (2008) Research for practice. Nurse knowledge, skills and attitudes related to evidence-based practice: before and after organizational supports. MEDSURG Nursing 17, 55–60.
Nagy S., Lumby J., McKinley S. & Macfarlane C. (2001) Nurses’ beliefs about the conditions that hinder or support evidence-based nursing. International Journal of Nursing Practice 7, 314–321.
Pain K., Hagler P. & Warren S. (1996) Development of an instrument to evaluate the research orientation of clinical professionals. Canadian Journal of Rehabilitation 9, 93–100.
Phelps R.P. (2011) Extended comments on the draft Standards for Educational and Psychological Testing (but, in particular, draft Chapters 9, 12 & 13). Nonpartisan Education Review/Essays 7, 1–10.
Polit D.F. & Beck C.T. (2010) Essentials of Nursing Research: Appraising Evidence for Nursing Practice. Lippincott Williams & Wilkins, Philadelphia, PA.
Pravikoff D.S., Tanner A.B. & Pierce S.T. (2005) Readiness of U.S. nurses for evidence-based practice. American Journal of Nursing 105, 40–51.
Ramos K.D., Schafer S. & Tracz S.M. (2003) Validation of the Fresno test of competence in evidence based medicine. British Medical Journal 326, 319–321.
Shaneyfelt T., Baum K.D., Bell D., Feldstein D., Houston T.K., Kaatz S., Whelan C. & Green M. (2006) Instruments for evaluating education in evidence-based practice: a systematic review. Journal of the American Medical Association 296, 1116–1127.
Sherriff K.L., Wallis M. & Chaboyer W. (2007) Nurses’ attitudes to and perceptions of knowledge and skills regarding evidence-based practice. International Journal of Nursing Practice 13, 363–369.
Squires J.E., Estabrooks C.A., O’Rourke H.M., Gustavsson P., Newburn-Cook C.V. & Wallin L. (2011) A systematic review of the psychometric properties of self-report research utilization measures used in healthcare. Implementation Science 6, 1–18.
Stichler J.F., Fields W., Kim S.C. & Brown C.E. (2011) Faculty knowledge, attitudes and perceived barriers to teaching evidence-based nursing. Journal of Professional Nursing 27, 92–100.
Stiefel K.A. (1996) Career commitment, nursing unit culture and nursing research utilization. Thesis, University of South Carolina, USA.
Straus S.E., Glasziou P., Richardson W.S. & Haynes R.B. (2011) Evidence-Based Medicine: How to Practice and Teach It. Churchill Livingstone Elsevier, Edinburgh.
Streiner D. & Norman G.R. (2008) Health Measurement Scales: A Practical Guide to their Development and Use. Oxford University Press, Oxford.
Tanner A. (2000) Readiness for Evidence-based Practice: Information Literacy Needs of Nurses in a Southern U.S. State. Unpublished dissertation, Northwestern State University of Louisiana, Natchitoches, LA.
Taylor M.C. (2007) Evidence-Based Practice for Occupational Therapists. Blackwell Publishing, Oxford, UK.
Terwee C.B., Bot S.D.M., de Boer M.R., van der Windt D.A.W.M., Knol D.L., Dekker J., Bouter L.M. & de Vet H.C.W. (2007) Quality criteria were proposed for measurement properties of health status questionnaires. Journal of Clinical Epidemiology 60, 34–42.
Terwee C.B., Schellingerhout J.M., Verhagen A.P., Koes B.W. & de Vet H.C.W. (2011) Methodological quality of studies on the measurement properties of neck pain and disability questionnaires: a systematic review. Journal of Manipulative and Physiological Therapeutics 34, 261–272.
Terwee C., Mokkink L., Knol D., Ostelo R.J.G., Bouter L. & de Vet H.C.W. (2012) Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist. Quality of Life Research 21, 651–657.
Tilson J.K. (2010) Validation of the modified Fresno Test: assessing physical therapists’ evidence based practice knowledge and skills. BMC Medical Education 10, 38.
Upton D. (1999) Clinical effectiveness and EBP 2: attitudes of health-care professionals. British Journal of Therapy & Rehabilitation 6, 26–30.
Upton D. & Lewis B. (1997) Survey of Attitudes towards Clinical Effectiveness and Evidence Based Practice: Baseline Report. Clinical Effectiveness Unit, Cardiff.
Upton D. & Lewis B. (1998) Clinical effectiveness and EBP: design of a questionnaire. British Journal of Therapy & Rehabilitation 5, 647–650.
Upton D. & Upton P. (2005) Nurses’ attitudes to evidence-based practice: impact of a national policy. British Journal of Nursing 14, 284–288.
Upton D. & Upton P. (2006) Development of an evidence-based practice questionnaire for nurses. Journal of Advanced Nursing 53, 454–458.
Upton D., Upton P. & Scurlock-Evans L. (2014) The reach, transferability and impact of the evidence-based practice questionnaire: a methodological and narrative literature review. Worldviews on Evidence-Based Nursing 00, 1–9.
Varcoe C. & Hilton A. (1995) Factors affecting acute-care nurses’ use of research findings. Canadian Journal of Nursing Research 27, 51–71.
de Vet H.C.W., Terwee C.B., Mokkink L.B. & Knol D.L. (2011) Systematic reviews of measurement properties. In Measurement in Medicine: A Practical Guide. Cambridge University Press, New York, pp. 275–314.
Wallin L., Boström A.-M. & Gustavsson J.P. (2012) Capability beliefs regarding evidence-based practice are associated with application of EBP and research use: validation of a new measure. Worldviews on Evidence-Based Nursing 9, 139–148.
Waters D., Crisp J., Rychetnik L. & Barratt A. (2009) The Australian experience of nurses’ preparedness for evidence-based practice. Journal of Nursing Management 17, 510–518.
Wright A., Brown P. & Sloman R. (1996) Nurses’ perceptions of the value of nursing research for practice. Australian Journal of Advanced Nursing 13, 15–18.
Yip Wai K., Mordiffi S.Z., Liang S., Kim Xue Z. & Majid S. (2013) Nurses’ perception towards evidence-based practice: a descriptive study. Singapore Nursing Journal 40, 34–41.