
Nursing and Health Sciences (2014), 16, 262–273. doi: 10.1111/nhs.12097

Review Article

Knowledge tests in patient education: A systematic review

Jukka Kesänen, MNSc, RN,1,5 Helena Leino-Kilpi, PhD, RN,1,2 Dinah Arifulla, MNSc, RN,3,4 Mervi Siekkinen, MNSc, RTT,1,2 and Kirsi Valkeapää, PhD, RN1,6

1Department of Nursing Science, University of Turku, 2Hospital District of South-West Finland, 3Turku Vocational Institute, 4National Institute for Health and Welfare, Turku, 5Hospital Orton, Helsinki, and 6Lahti University of Applied Sciences, FUAS Federation of Universities of Applied Sciences, Lahti, Finland

Abstract

This study describes knowledge tests in patient education through a systematic review of the Medline, Cinahl, PsycINFO, and ERIC databases, conducted with the guidance of the PRISMA Statement. Forty-nine knowledge tests were identified. The contents were health-problem related, focusing on biophysiological and functional knowledge. The mean number of items was 20, with true–false or multiple-choice scales. Most of the tests were purposely designed for the studies included in the review. The most frequently reported quality assessments of the knowledge tests were content validity and internal consistency. The outcome measurement of patient education needs comprehensive, validated knowledge tests that cover multidimensional aspects of knowledge. Besides measuring the outcomes of patient education, knowledge tests could be used for several purposes in patient education: to guide the content of education as checklists, to monitor the learning process, and as educational tools. There is a need for more efficient, content- and health problem-specific assessment of knowledge tests.

Key words

Finland, knowledge, knowledge test, patient education, systematic review.

INTRODUCTION

Health knowledge is an essential component of patient health. Patients should be able to make informed choices, enabling them to take control of, and responsibility for, their health-related issues (Funnell et al., 1991). It is therefore important to assess patients' knowledge levels in order to determine their needs and the knowledge they receive (Leino-Kilpi et al., 1998; Johansson et al., 2003; Ryhänen et al., 2012). Patients' health knowledge can be tested in different ways. In this review, the focus was on written tests aimed at measuring patients' knowledge levels. Knowledge consists of facts, skills, and personal understanding about a subject. Understanding involves the person's own knowledge level and the processing of knowledge. Knowledge can be measured with so-called "learning achievement tests", which are usually divided into objective and essay tests. Objective tests are highly structured, consisting of questions or statements. The answers are short and univocal, consisting of one word, a few words, or a sentence (presented as open-ended responses or as a set of answers). Such tests are effective for measuring knowledge of facts, as they are objective and easy to score.

Correspondence address: Jukka Kesänen, Hospital Orton, Tenholantie 10, FIN-00280 Helsinki, Finland. Email: [email protected]. Received 26 September 2012; revision received 3 September 2013; accepted 6 September 2013.


The essay-type question permits the respondent to select, organize, and present the answer in a more personal form (McDonald, 2007). In this review, we define a knowledge test as a measurement tool for assessing patients' knowledge, mainly addressing the facts of their health problem (McDonald, 2007). A high-quality knowledge test includes a number of important elements. The development of a knowledge test should start with delineating the content of the knowledge domain (Downing & Haladyna, 2006), followed by drawing up questions or items measuring that domain. These items should be specific to the domain, without unnecessary additions. The target population of patients should be considered in the process; furthermore, an adequate validation procedure is needed (Pink et al., 2009). In order to ensure adequate evaluation of patient knowledge, the instruments used to measure patient education should contain the aforementioned elements in order to be of high quality. However, earlier studies suggest that there is a lack of critical appraisal regarding the quality assessment of knowledge tests in patient-education research. There is only one systematic review, concerning the education of patients with asthma, that focuses on the quality assessment of knowledge tests (Pink et al., 2009). As a result of this shortage of available evidence concerning the use of knowledge tests in patient education, we adopted a broad and comprehensive approach covering knowledge tests in different areas of patient education, instead of concentrating on one specific field.


Purpose

The purpose of this systematic review was to describe the development, structure, content, and quality of the knowledge tests identified, as well as their role in evaluating the effectiveness of patient education. The focus was on knowledge tests used in experimental (randomized, controlled and quasiexperimental) patient-education designs. The ultimate goal was to report the current status of patient-knowledge tests and to provide suggestions aimed at improving their development for use in patient education. We focused on recent knowledge tests (published between 2000 and 2012) because of the rapid pace of change in health care. The speed of technical development of healthcare methods is constantly increasing, while hospitalization periods are becoming shorter, causing changes in the content and delivery of patient education. The specific research questions concerning the identified knowledge tests in patient education were: (i) How were the knowledge tests developed? (ii) What are the structure and content of the knowledge tests? (iii) What is the functional role of the knowledge tests? (iv) What kind of quality evaluation was conducted on the knowledge tests?

METHODS

Search methods

This systematic review of studies using knowledge tests was undertaken according to the guidelines of the PRISMA Statement (http://www.prisma-statement.org). Studies were collected by searching the Medline, Cinahl, PsycINFO, and ERIC electronic databases from 1 January 2000 to 20 February 2012. The language of the returns was limited to English. Following this, a manual search of the reference lists of the collected articles was conducted to retrieve further relevant articles pertaining to studies that used knowledge tests. The following search terms were used: "patient education", "patient counseling", "patient teaching", "patient learning", "patient information", "knowledge test", "knowledge questionnaire", "knowledge inquire", "knowledge scale", "knowledge instrument", "knowledge measurement", and "health problem-specific knowledge". These search terms were selected in order to capture studies that clearly included a knowledge test.
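As an illustration only (the review lists the search terms but not the exact query syntax, so the boolean structure below is our assumption), the following sketch shows one plausible way the reported terms could be combined into a single query string; field tags and operator syntax differ between Medline, Cinahl, PsycINFO, and ERIC.

```python
# Illustrative sketch only: combining the reported search terms into one
# boolean query string. The AND/OR structure is an assumption, not the
# authors' documented search strategy.
education_terms = [
    "patient education", "patient counseling", "patient teaching",
    "patient learning", "patient information",
]
knowledge_test_terms = [
    "knowledge test", "knowledge questionnaire", "knowledge inquire",
    "knowledge scale", "knowledge instrument", "knowledge measurement",
    "health problem-specific knowledge",
]

def or_block(terms):
    """Quote each phrase and join the list with OR."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = or_block(education_terms) + " AND " + or_block(knowledge_test_terms)
print(query)
```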

Study selection

Eligibility criteria were set as follows: (i) the study sample consisted of patients in connection with their specific health problem; (ii) the patient-education intervention included a knowledge test; (iii) there was a comparative intervention with other patient-education methods or standard care; (iv) patient knowledge was an outcome; and (v) the study design was either a randomized, controlled trial (RCT) or quasiexperimental (without randomization). Studies concerning a person's general knowledge of health problems or other types of health education were excluded. In the first phase of the selection process, the publications identified from the search were evaluated at the title and abstract level. Returns deemed irrelevant were removed by one reviewer (JK) if the participants were not patients and there was no educational intervention. The final eligibility assessment of full-text papers was performed independently by two reviewers (JK and MS).

Data collection

We used a data-extraction sheet developed for this study, based on the Cochrane Consumers and Communication Review Group data-extraction template and the CONSORT 2010 checklist (http://www.consort-statement.org). Data were extracted from each included study concerning the participants, design, type of intervention, comparison intervention, the knowledge outcome in the intervention and control groups, and other reported outcome measures. The data extracted on the knowledge tests included the functional role of the test, purpose, framework, development, description, content, and quality measurements. Data extraction was conducted independently by two authors (JK and DA). The initial search of the databases provided a total of 211 citations; the manual search of the reference lists in the publications retrieved for full review provided no new knowledge tests. However, the original development reports of the knowledge tests were included in this review if they were mentioned in the actual article. Of the 211 citations, 37 duplicates were removed, and 93 were discarded following abstract review, as these papers clearly did not meet the study criteria. The full texts of the remaining 81 articles were examined in more detail, and 28 did not meet the inclusion criteria. Thirteen of these studies were discarded because the participants were either not patients or the patients included were not identifiable among the other participants. In four studies, the outcome was not ascertained because the knowledge level could not be determined or the measurement used was not a knowledge test. In three studies, the design was not experimental, and seven studies included no educational intervention. In one study, the primary source of the knowledge test used was not published and was unobtainable. A total of 53 studies were thus included in the review (Table 1). A flow diagram of the study selection is presented in Figure 1.
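As a quick arithmetic check, the selection counts reported above are internally consistent (citations minus duplicates minus abstract-level exclusions = full texts screened; full texts minus exclusions = included studies; the full-text exclusions sum to 28):

\[
211 - 37 - 93 = 81, \qquad 81 - 28 = 53, \qquad 13 + 4 + 3 + 7 + 1 = 28.
\]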

Analysis

The data were analyzed by content analysis and descriptive statistics. The analysis focused on the knowledge tests and examined each test's development, structure, content, and quality aspects. The functional role of the knowledge test used in each study was also described. The development and structure of the knowledge tests were analyzed multidimensionally using deductive content analysis (Burns & Grove, 2005). The framework for analyzing the content of the knowledge tests was developed for this study according to biophysiological, functional, ethical, experiential, social, and financial dimensions (Leino-Kilpi et al., 1998; Johansson et al., 2003; Ryhänen et al., 2012). The functional role that a knowledge test plays was analyzed using the following classification: (i) placement evaluation, which determines the level of learner performance at the beginning of their education; (ii) formative evaluation, which monitors their learning progress during the education process; (iii) diagnostic evaluation for any learning difficulties during their education; and (iv) summative evaluation, which assesses the level of achievement at the end of education (Bloom et al., 1971; McDonald, 2007). The quality of the knowledge tests was determined using the criteria developed by Terwee et al. (2007). These criteria consist of content validity, construct validity, internal consistency, reproducibility, criterion validity, responsiveness, floor and ceiling effects, and interpretability (Table 2). We used only four of the criteria: content validity, construct validity, internal consistency, and reproducibility, as the data in the reports were insufficient to analyze all eight criteria.

Table 1. Characteristics and outcomes of the studies included in the review and the functional role of knowledge tests

Columns: Author, year, country; Study design; Intervention (change in knowledge); Control (change in knowledge); Participants (n); Functional role of KT†.

Included studies: 1. Atak et al., 2008, Turkey; 2. Beale et al., 2007, Australia, Canada, USA; 3. Beischer et al., 2007, Australia; 4. Boyde et al., 2013, Australia; 5. Byers et al., 2010, USA; 6. Chan et al., 2004, Hong Kong; 7. Chiou et al., 2004, Taiwan; 8. Cowan et al., 2007, USA; 9. Danino et al., 2005, France; 10. Dilles et al., 2011, Belgium; 11. Elkjaer et al., 2010, Denmark, Ireland; 12. Even et al., 2010, France; 13. Faller et al., 2009, Germany; 14. Ford et al., 2004, USA; 15. Freedman et al., 2011, Canada; 16. Garber et al., 2002, USA; 17. Garcia et al., 2001, USA; 18. George et al., 2008, UK; 19. Groves et al., 2010, UK; 20. Gyomber et al., 2010, Australia; 21. Heikkinen et al., 2008, Finland; 22. Herenda et al., 2007, Bosnia and Herzegovina; 23. Hill & Bird, 2003, UK; 24. Hill et al., 2010, Canada; 25. Kakinuma et al., 2011, Japan; 26. Keulers et al., 2007, Netherlands; 27. Klemetti et al., 2010, Finland; 28. Lo et al., 2009, Taiwan; 29. Louie et al., 2006, Hong Kong; 30. Lovisi Neto et al., 2009, Brazil; 31. Lowe et al., 2002, UK; 32. Madan and Tichansky, 2005, USA; 33. Mishra et al., 2010, UK; 34. Navarre et al., 2007, USA; 35. Neri et al., 2001, Italy; 36. Phelan et al., 2001, USA; 37. Piatt et al., 2006, USA; 38. Rendell, 2000, UK; 39. Rootmensen et al., 2008, Netherlands; 40. Sixta & Ostwald, 2008, USA; 41. Smith et al., 2004a, Australia; 42. Sorrell et al., 2009, USA; 43. Taylor et al., 2004, USA; 44. Urnes et al., 2008, Norway; 45. Victor et al., 2005, UK; 46. Walker et al., 2007, UK; 47. Wang et al., 2010, Taiwan; 48. Wang & Chiou, 2011, Taiwan; 49. Wells, 2011, USA; 50. Yang et al., 2003, Taiwan; 51. Yang et al., 2005, Taiwan; 52. Yehle et al., 2009, USA; 53. Yen et al., 2008, Taiwan.

†Functional roles of knowledge tests in patient education: P, placement evaluation: to determine learner performance at the beginning of education; F, formative evaluation: to monitor learning progress during education; D, diagnostic evaluation: to diagnose learning difficulties during education; and S, summative evaluation: to evaluate achievement at the end of education. ‡Significantly increased knowledge level; §no significant change in knowledge level; ¶knowledge increased more than in the comparison group. COPD, chronic obstructive pulmonary disease; KT, knowledge test; QE, quasiexperimental; RCT, randomized, controlled trial.

Figure 1. Flowchart of study selection.


RESULTS

Study characteristics

Of the 53 studies included in this review, 37 were RCTs, and 16 were quasiexperimental studies. The quality assessment of the studies was undertaken in accordance with the CONSORT 2010 Statement (http://www.consort-statement.org). In general, according to the criteria of the CONSORT Statement, the randomization of the RCTs was described incompletely, with only Elkjaer et al. (2010) meeting the criteria. Furthermore, the confidence interval was reported in the results in only 14 studies (26%), and an adequate sample size calculation in 16 (30%). The participants in 36 of the studies were patients with chronic health problems, while 13 studies focused on surgical patients (Table 1). It was observed that the use of knowledge tests had increased during the timeframe of this review (2000–2012). Of the 53 studies in this review, only seven were published between 2000 and 2003, 19 between 2004 and 2007, while as many as 26 were published between 2008 and 2011 (Table 3). All the educational interventions were health problem specific, and the educational methods used in the different studies showed a high level of variation. They comprised different forms of interactive electronic education, printed leaflets, group education, case management, psychoeducation, and structural educational models. In addition to the primary outcome of patient-knowledge levels, other reported outcome variables included self-management, self-efficacy, satisfaction, anxiety, stress, depression, compliance, quality of life, utilization of health care, coping, health beliefs, locus of control, preference of procedure, pain, physical ability, and economical and clinical variables (e.g. blood tests, blood pressure, weight, exacerbation of disease) (Table 1). In this review, 49 knowledge tests were identified (Table 3). A patient-knowledge questionnaire developed by Hill et al. (1991) was used in two languages: English (Walker et al., 2007) and Portuguese (Lovisi Neto et al., 2009).

Development of knowledge tests

Most of the knowledge tests (69.4%, 34/49) were developed by the researchers for the study concerned. Instruments previously developed by other researchers were used in only 26.5% (13/49) of the studies. The previously-developed tests concerned outcome measures of the education of patients with diabetes (Piatt et al., 2006; Herenda et al., 2007; George et al., 2008; Sixta & Ostwald, 2008), heart failure (Yehle et al., 2009), and rheumatoid arthritis (Walker et al., 2007; Lovisi Neto et al., 2009). Four researchers used their own previously-developed knowledge tests, for caregivers of small children with gastroenteritis (Freedman et al., 2011), and for patients after stroke (Lowe et al., 2002), cardiac surgery (Mishra et al., 2010), and sleep apnea (Smith et al., 2004a). The development of these knowledge tests was not recorded in detail. The theoretical framework of the studies was not disclosed. All tests were health problem or treatment related, with the exception of the knowledge test of Heikkinen et al. (2008), which used the concept of empowering knowledge as a framework (Table 3). The development of the tests was based on the knowledge of professionals (16 mentions), the content of the patient-education intervention (13 mentions), patient focus groups (2 mentions), guidelines for the treatment of the health problem or the performance of the procedure (8 mentions), previous knowledge tests (9 mentions), and literature other than guidelines or previous knowledge tests (18 mentions). In some tests, more than one basis for test development was mentioned. The origin of the development was not disclosed in one report (Victor et al., 2005).



Table 2. Criteria for rating the quality of knowledge tests

Content validity (the extent to which the domain of interest is comprehensively sampled by the items in the questionnaire): 2 (positive) = clear description, including measurement aim, target population, and methods of item selection, with end-users AND experts involved in item selection; 1 (intermediate) = no clear description OR only end-user involvement OR doubtful methods; 0 (negative) = no end-user involvement.

Construct validity (the extent to which scores on a measure relate to other measures in a manner consistent with theoretical hypotheses): 2 (positive) = specific hypothesis formed AND at least 75% of results in accordance with this hypothesis; 1 (intermediate) = no hypothesis stated, but description of a recognized method of scale validation (e.g. extreme-group validation); 0 (negative) = less than 75% of hypotheses confirmed.

Internal consistency (the extent to which items in a measure are intercorrelated): 2 (positive) = factor analysis performed on >100 participants and Cronbach's α 0.70–0.95 for each dimension; 1 (intermediate) = no factor analysis performed, Cronbach's α 0.70–0.95; 0 (negative) = Cronbach's α < 0.70 or > 0.95.

Reproducibility (the extent to which the measure yields the same results on the same participants on repeated applications): 2 (positive) = sample size of at least 50 participants and ICC or kappa > 0.70 (Pearson correlation coefficient considered inadequate); 1 (intermediate) = doubtful methods (e.g. time interval not mentioned); 0 (negative) = ICC or kappa < 0.70.

Adapted from Terwee et al. (2007), with permission. Data were also extracted regarding criterion validity (the extent to which scores on a measure relate to a gold standard), responsiveness (the ability of a questionnaire to detect clinically important changes over time), floor and ceiling effects (the number of respondents who achieved the lowest or highest scores), and interpretability (the degree to which one can assign qualitative meaning to quantitative scores), but these were excluded from the analysis owing to insufficient data. ICC, intraclass correlation coefficient.
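A minimal sketch (our own illustration, not code from the review) of how the numeric thresholds in Table 2 could be applied when rating internal consistency and reproducibility; the function and parameter names are hypothetical, and the reviewers' qualitative judgments (e.g. "doubtful methods", per-dimension factor analysis) are simplified.

```python
# Illustrative sketch of the Table 2 rating thresholds (Terwee et al., 2007).
# Hypothetical helpers; they simplify, not replace, the reviewers' judgment.
from typing import Optional

def rate_internal_consistency(alpha: Optional[float],
                              factor_analysis_n: Optional[int] = None) -> Optional[int]:
    """2 = positive, 1 = intermediate, 0 = negative, None = not reported."""
    if alpha is None:
        return None
    if not 0.70 <= alpha <= 0.95:
        return 0                      # Cronbach's alpha < 0.70 or > 0.95
    if factor_analysis_n is not None and factor_analysis_n > 100:
        return 2                      # factor analysis on > 100 participants
    return 1                          # adequate alpha, no factor analysis

def rate_reproducibility(icc_or_kappa: Optional[float],
                         n_participants: Optional[int] = None) -> Optional[int]:
    """Rate test-retest reproducibility against the Table 2 thresholds."""
    if icc_or_kappa is None:
        return None
    if icc_or_kappa < 0.70:
        return 0                      # ICC or kappa below 0.70
    if n_participants is not None and n_participants >= 50:
        return 2                      # adequate sample and ICC/kappa > 0.70
    return 1                          # otherwise treated as doubtful/intermediate

# Example: alpha of 0.82 reported without factor analysis -> intermediate (1)
print(rate_internal_consistency(0.82))
```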

Structure and content of knowledge tests

The names of the knowledge tests were descriptive in 63.3% (31/49) of cases, describing the content area of the knowledge test; however, 36.7% (18/49) of the knowledge tests had not been given names and were referred to as "a knowledge test" or by similar synonyms. In the names, the term "test" was used (n = 12), as were the synonyms "questionnaire" (n = 15), "scale" (n = 1), "measure" (n = 1), "score" (n = 1), "quiz" (n = 1), and "checklist" (n = 1) (Table 3). In many studies, the description of the structure of the knowledge test was unclear. In 25 studies, the knowledge tests were included in the reports, available in earlier test-development reports, or available on the Internet. The number of items and the type of scale were described in only a few reports. The number of items varied from six to 65 (mean: 20). The scales used in the knowledge tests were mostly true–false (n = 18), multiple-choice (n = 17), or mixed scales (n = 4). Nine of 12 true–false scales also had a "do not know" option. In addition, there were open-ended questions (Smith et al., 2004b), fill-in-the-blank items (Yehle et al., 2009), a Likert scale (Victor et al., 2005), and a face-to-face interview (Mishra et al., 2006). Information regarding the scale type was missing in six studies (Table 3). The content of the knowledge test was disclosed in 81.6% (40/49) of cases; the content was not reported, or the test was not available, in nine cases. The classification of content was made using previously-developed frameworks (Leino-Kilpi et al., 1998; Johansson et al., 2003; Ryhänen et al., 2012). Most of the knowledge tests consisted of biophysiological and functional knowledge (Table 3); in one knowledge test (Heikkinen et al., 2008), the content was multidimensional.
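For illustration only, the sketch below shows one way the most common item format among the reviewed tests, a true–false statement with a "do not know" option, could be represented and scored; the example statement and the convention of scoring "do not know" as incorrect are our assumptions, not taken from any of the reviewed instruments.

```python
# Illustrative sketch of a true-false-don't know knowledge-test item.
# The statement and the scoring convention are hypothetical examples.
from dataclasses import dataclass

@dataclass
class TrueFalseItem:
    statement: str
    keyed_answer: bool          # the correct answer to the statement

    def score(self, response: str) -> int:
        """Return 1 for a correct answer; 'don't know' and wrong answers score 0."""
        if response == "don't know":
            return 0
        return int((response == "true") == self.keyed_answer)

item = TrueFalseItem("Day-surgery patients may eat immediately before anaesthesia.", False)
print(item.score("false"))       # 1 (correct)
print(item.score("don't know"))  # 0
```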

Functional role of knowledge tests

In all of the studies, knowledge tests were used to measure the outcomes of patient-education interventions (Table 1). In 98.1% (52/53) of the studies, the functional role of the test was summative evaluation, that is, evaluation after an educational intervention. In one of the 53 studies, however, the functional role of the test was formative evaluation, monitoring the learning progress during education (Table 1): the patients were asked to retake the knowledge test until all of their answers were correct (Madan & Tichansky, 2005). No evidence was found that knowledge tests were used in focusing or planning educational interventions.

Assessment of the quality of knowledge tests

The assessment of quality was conducted according to the quality criteria of Terwee et al. (2007) (Table 2). The quality assessment of the tests, however, was rarely reported (Table 3). When previously-developed knowledge tests were used, the test-development and validation data were extracted from the original report. In 26.5% (13/49) of the knowledge tests, the assessment of quality was not mentioned.

Table 3. Characteristic, quality, and content of the knowledge tests included in the review

Columns: Researcher† (with the developer of the knowledge test in brackets); Name of knowledge test; Item type (no. items); Content validity‡; Construct validity‡; Internal consistency‡; Reproducibility‡; Framework/base of development; Content of the knowledge test§.

†The author in brackets is the developer of the knowledge test. ‡Rating of psychometric properties: 2 = positive, 1 = intermediate, 0 = negative (Table 2). §Content of the knowledge tests: identified dimensions of knowledge. Bf, biophysiological; Et, ethical; Ex, experiential; Fi, financial; Fu, functional; So, social; COPD, chronic obstructive pulmonary disease; MDRTC, Michigan Diabetes Research and Training Center; n/a, information not available in the report.

Content validity was most often reported, gaining a positive or intermediate rating in 49% (24/49) of the knowledge tests. Positive or intermediate internal consistency was reported in 17% (9/53) of the studies. Positive or intermediate construct validity was reported in 28.6% (14/49) of the knowledge tests, and reproducibility in 6.1% (3/49). Criterion validity, responsiveness, floor and ceiling effects, and interpretability were not assessed in any of the knowledge tests. Twenty of the 49 knowledge tests did not report these criteria or had a negative rating. The strongest validation ratings according to the criteria of Terwee et al. (2007), addressing a maximum of three out of the eight criteria, were seen in only seven knowledge tests (used in nine studies). Of these seven knowledge tests, two were developed within the study in question, to measure patients' knowledge of diabetes (Garcia et al., 2001) and gastroesophageal reflux disease (Urnes et al., 2008). Five knowledge tests had been developed earlier to measure knowledge of Crohn's disease and colitis (Eaden et al., 1999), diabetes (Fitzgerald et al., 1998), rheumatoid arthritis (Hill et al., 1991), stroke (Sullivan & Dunton, 2004), and chronic obstructive pulmonary disease (White et al., 2006).

DISCUSSION

The tests examined in this review were developed on the basis of a health problem or an educational intervention. However, as a result of insufficient test descriptions in many publications and the diversity of domains, we were unable to make a comprehensive assessment of whether each knowledge test covered the entire domain of the educational program it was intended to measure. In the development of knowledge tests, it is important to make sure that the test covers the entire domain of interest in order to increase the external validity of the measurement (McDonald, 2007). This assessment is an important aspect for future research, but it would perhaps be more productive to conduct it in relation to a specific health problem or procedure. Such a specific analysis would allow more profound concentration on the assessment of the content of the domain by comparing the relevant best-practice guidelines with the content of the knowledge tests. Most of the knowledge tests examined in this review were developed by the researchers for the purposes of the study in question. The reason for developing a new instrument was seldom disclosed; in the cases where the reason was explained, it was driven by the absence of an adequate existing instrument focusing on the educational intervention of interest. Before engaging in the development of a knowledge test for an educational intervention, researchers should find out whether relevant knowledge tests already exist. The description of the structure of the identified knowledge tests was limited, even though some tests were attached to the reports. The most frequently used knowledge-test scales were multiple choice and true–false. A well-constructed scaled knowledge test can be an effective method in patient education, regardless of the format of the test (Rodrigues, 2003). An appropriate knowledge-test structure can be selected according to the purpose of its use. In knowledge measurement, it is important to focus on issues that are important from the viewpoint of the patient.


A patient's knowledge needs are individual (Johansson et al., 2003). The structure of a knowledge test can be developed in such a way that patients can deepen their knowledge according to their own preferences. In order to make it possible for patients to increase their knowledge, the test must be multilevel. A multilevel structure makes it possible to assess the knowledge level according to the patient's capacity for knowledge and the baseline level of their knowledge. The amount and level of knowledge can be structured as "must know", "good to know", and "detailed explanations" (Heikkinen et al., 2008). To achieve this, patients' involvement in the development process of knowledge tests is essential. The knowledge tests in this review were used as outcome measures of educational interventions, while the other possible functional roles of such tests were not utilized. Knowledge tests could also be used to diagnose patients' educational needs and to monitor the progress of their education. Diagnosing a patient's knowledge needs in the planning phase of patient education helps educators tailor education from the patient's perspective. Monitoring the progress of patient education makes it possible to guide the education according to patients' individual needs: the issues patients are familiar with can be confirmed, and further educational interventions can be targeted according to their educational needs. When combined with an individual discourse, the knowledge test structures the educational session and acts as a checklist, while the discourse defines the amount of knowledge and the level of the patient's deeper understanding. However, patients' literacy levels have an impact on their knowledge (Mancuso, 2010) and must be taken into account, particularly among patients with lower literacy levels. Furthermore, knowledge tests can be used in other connections; for example, in nursing education to evaluate learning outcomes, or in nursing management to assess the quality of nursing. Many of these tests, however, did not report adequate validation and reliability procedures. The quality assessment of the knowledge tests reported in the studies was limited. The validity of the instrument is critical for the validity of the research, which is why evidence-based nursing needs standardized measuring approaches: a valid, reliable, and previously-used instrument promotes the reliability of the study concerned (Burns & Grove, 2005). Content validity is usually assessed as face validity by an expert or patient panel, but face validity provides the weakest validity assessment (Burns & Grove, 2005). Construct validity was rarely reported. To assess the reliability of tests, computing alpha coefficients is recommended for each administration of a test (Burns & Grove, 2005). In this review, Cronbach's alpha coefficient was calculated in 17% (9/53) of the studies, in line with DeVon et al.'s (2007) findings. In their systematic review, Pink et al. (2009) found that although content validity, construct validity, internal consistency, and reproducibility were reported in some studies, criterion validity, responsiveness, floor and ceiling effects, and interpretability were not. Measurements without sound validation are a threat to the validity and reliability of a study (Burns & Grove, 2005).
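For reference (a standard formula, not reproduced from the review), Cronbach's alpha for a test of $k$ items, with item variances $\sigma_i^2$ and total-score variance $\sigma_T^2$, is:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_T^{2}}\right).
\]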


Many knowledge tests were developed specifically for the study in question, but very few provided validation measures. Furthermore, the educational interventions were developed for the studies concerned, which might be the reason for the test development, that is, there were no existing tests. A health problem-specific patient-education plan that is well-structured, customizable according to patients' needs, and universal would be one way to standardize patient education and make well-established knowledge measurement possible. There were limitations in this review. The studies included were very diverse: the patient groups, the educational interventions, their control conditions, and the outcomes all differed considerably. Despite this, we wanted to include as many knowledge tests as possible in order to gain a current overview of knowledge tests in patient education. Furthermore, the tests included came from RCTs and quasiexperimental studies concerning patients; knowledge tests developed for other types of study design were not included. This might be a limitation in regard to the general quality and quantity of the knowledge tests. Reports other than the original development reports might not have been captured if they were not mentioned in the references, which might limit the description of the validation process of the knowledge tests. Additionally, the restriction of the search to English-language publications might have been a limitation. To minimize the risk of errors, study selection, data extraction, and quality assessment were conducted independently by two authors, according to the established guidelines of the PRISMA Statement (http://www.prisma-statement.org). When disagreements occurred, consensus was easily reached with reference to the established eligibility criteria, even though interpretation of the criteria did, of course, remain subjective.

Conclusions

Based on this review, a typical knowledge test in patient education is a 20-item, multiple-choice or true–false scaled tool for measuring the biophysiological and functional dimensions of knowledge related to a health problem, its management, or its treatment. The diversity of the educational interventions did not make it possible to assess the internal validity of the knowledge tests on the basis of the reports; for this, we would have needed more information on the content of the educational interventions. More specific information on the content of educational interventions is therefore needed for a more detailed assessment of how well knowledge-test coverage corresponds to the content of the educational intervention as a whole. Owing to the inadequate description of the content of the knowledge tests and the educational interventions, we were unable to make a comprehensive assessment of the content domain of the knowledge tests. Based on this review, a number of tools for assessing the success of an educational intervention have been reported in the literature, but few of them have been properly validated, or even described sufficiently, to be used with confidence in future research or in the evaluation of patient-education programs.


The most promising knowledge tests for reuse are associated with chronic health problems that require knowledge for self-care: Crohn's disease (Eaden et al., 1999), diabetes (Fitzgerald et al., 1998; Garcia et al., 2001), chronic obstructive pulmonary disease (Hill et al., 2010), stroke (Sullivan & Dunton, 2004), and gastroesophageal reflux disease (Urnes et al., 2008). The development of knowledge tests in patient education requires careful planning concerning the analysis of the content of the educational intervention and the target of the measurement. The development of a test calls for the utilization of theoretical knowledge of test theory, together with guidelines for test development and validation criteria (Huff et al., 2010). The measurement of patients' knowledge levels as a desired outcome of patient education is increasing, in accordance with the need to assess the outcomes of nursing interventions (Hallberg, 2009). In the future, more attention needs to be focused on the quality assessment of such tests to ensure that validated and established instruments for measuring the outcomes of patient education are available.

ACKNOWLEDGMENTS This study was partly supported by the Finnish Nurses Association and the Finnish Association of Nursing Research.

CONTRIBUTIONS Study Design: JK, KV. Data Collection and Analysis: JK, DA, MS. Manuscript Writing: JK, HLK, DA, MS, KV.

REFERENCES Allen RM, Abdulwadud OA, Jones MP, Abramson M, Walters H. A reliable and valid asthma general knowledge questionnaire useful in the training of asthma educators. Patient Educ. Couns. 2012; 39: 237–242. Allen RM, Jones MP. The validity and reliability of an asthma knowledge questionnaire used in the evaluation of a group asthma education self-management program for adults with asthma. J. Asthma 1998; 35: 537–545. Artinian NT, Magnan M, Sloan M, Lange MP. Self-care behaviors among patients with heart failure. Heart Lung 2002; 31: 161–172. Atak N, Gurkan T, Kose K. The effect of education on knowledge, self management behaviours and self efficacy of patients with type 2 diabetes. Aust. J. Adv. Nurs. 2008; 26: 66–74. Beale IL, Kato PM, Marin-Bowling VM, Guthrie N, Cole SW. Improvement in cancer-related knowledge following use of a psychoeducational video game for adolescents and young adults with cancer. J. Adolesc. Health 2007; 41: 263–270. Beischer AD, Clarke A, de Steiger RN, Donnan L, Ibuki A, Unglik R. The practical application of multimedia technology to facilitate the education and treatment of patients with plantar fasciitis: a pilot study. Foot Ankle 2007; 1: 30–38. Bloom BB, Hastings JT, Maudaus GF. Handbook on Formative and Summative Evaluation of Student Learning. New York: McGraw Hill, 1971. Boyde M, Song S, Peters R, Turner C, Thompson DR, Stewart S. Pilot testing of a self-care education intervention for patients with heart failure. Eur. J. Cardiovasc. Nurs. 2013; 12: 39–46. Burns N, Grove SK. The Practice of Nursing Research. Conduct, Critique, and Utilization. Saint Louis, MO: Elsevier Saunders, 2005.


Byers AM, Lamanna L, Rosenberg A. The effect of motivational interviewing after ischemic stroke on patient knowledge and patient satisfaction with care: a pilot study. J. Neurosci. Nurs. 2010; 42: 312–322. Chan YM, Lee PW, Ng TY, Ngan HY. Could precolposcopy information and counseling reduce women’s anxiety and improve knowledge and compliance to follow-up? Gynaecol. Oncol. 2004; 95: 341–346. Chiou PY, Kuo BI, Chen YM, Wu SI, Lin LC. A program of symptom management for improving self-care for patients with HIV/AIDS. AIDS Patient Care STDS 2004; 18: 539–547. Cowan EA, Calderon Y, Gennis P, Macklin R, Ortiz C, Wall SP. Spanish and English video-assisted informed consent for intravenous contrast administration in the emergency department: a randomized controlled trial. Ann. Emerg. Med. 2007; 49: 221–230. Curtin R, Sitter D, Schatell D, Chewing B. Self-management, knowledge, and functioning and well-being of patients on hemodialysis. Nephrol. Nurs. J. 2004; 37: 378–396. Danino AM, Chahraoui K, Frachebois L et al. Effects of an informational CD-ROM on anxiety and knowledge before aesthetic surgery: a randomised trial. Br. J. Plast. Surg. 2005; 58: 379– 383. DeVon HA, Block ME, Moyle-Wright P et al. A psychometric toolbox for testing validity and reliability. J. Nurs. Scholarsh. 2007; 39: 155–164. Dilles A, Haymans V, Martin S, Droogné W, Denhaerynck K, De Geest S. Comparison of computer assisted learning program to standard education tools in hospitalized heart failure. Eur. J. Cardiovasc. Nurs. 2011; 10: 187–193. Downing SM, Haladyna TM. Handbook of Test Development. New York: Routledge, 2006. Eaden JA, Abrams K, Mayberry JF. The Crohn’s and Colitis Knowledge Score: a test for measuring patient knowledge in inflammatory bowel disease. Am. J. Gastroenterol. 1999; 94: 3560–3566. Elkjaer M, Shuhaibar M, Burisch J et al. E-health empowers patients with ulcerative colitis: a randomised controlled trial of web-guided “constant-care” approach. Gut 2010; 59: 1652–1661. Erickson SR, Horton A, Kirking DM. Assessing metered-dose inhaler technique: comparison of observation vs. patient selfreport. J. Asthma 1998; 35: 575–583. Even C, Thuile J, Kalck-Stern M, Criquillion-Doublet S, Gorwood P, Rouillon F. Psychoeducation for patients with bipolar disorder receiving lithium: short and long term impact on locus of control and knowledge about lithium. J. Affect. Disord. 2010; 123: 299–302. Faller H, Koch GF, Reusch A, Pauli P, Allgayer H. Effectiveness of education for gastric cancer patients: a controlled prospective trial comparing interactive vs. lecture-based programs. Patient Educ. Couns. 2009; 76: 91–98. Fitzgerald JT, Funnell MM, Hess GE et al. The reliability and validity of a brief diabetes knowledge test. Diabet 1998; 21: 706–710. Ford JC, Pope JF, Hunt AE. Gerald B. The effect of diet education on the laboratory values and knowledge of hemodialysis patients with hyperphosphatemia. J. Ren. Nutr. 2004; 14: 36–44. Freedman SB, Couto M, Spooner L, Haladyn K. The implementation of a gastroenteritis education program. Am. J. Emerg. Med. 2011; 29: 271–277. Freedman SB, Deiratany S, Goldman RD, Benseler S. Development of a caregiver gastroenteritis knowledge questionnaire. Ambul. Pediatr. 2008; 8: 261–265. Funnell MM, Andersson RM, Arnold MS et al. Empowerment: an idea whose time has come in diabetes education. Diabetes Educ. 1991; 17: 37–41. Garber SL, Rintala DH, Holmes SA, Rodriguez GP, Friedman J. A structured educational model to improve pressure ulcer


prevention knowledge in veterans with spinal cord dysfunction. J. Rehabil. Res. Dev. 2002; 39: 575–588. Garcia AA, Villagomez ET, Brown SA, Kouzekanani K, Hanis CL. The Starr County diabetes education study: development of the Spanish-language diabetes knowledge questionnaire. Diabetes Care 2001; 24: 16–21. George JT, Valdovinos AP, Russell I et al. Clinical effectiveness of a brief educational intervention in type 1 diabetes: results from the BITES (Brief Intervention in Type 1 diabetes, Education for Selfefficacy) trial. Diabet. Med. 2008; 25: 1447–1453. Groves ND, Humphreys HW, Williams AJ, Jones A. Effect of informational Internet web pages on patients’ decision-making: randomised controlled trial regarding choice of spinal or general anaesthesia for orthopaedic surgery. Anaesthesia 2010; 65: 277– 282. Gyomber D, Lawrentschuk N, Wong P, Parker F, Bolton DM. Improving informed consent for patients undergoing radical prostatectomy using multimedia techniques: a prospective randomized crossover study. BJU Int. 2010; 106: 1152–1156. Hallberg IR. Moving nursing research forward towards a stronger impact on health care practice? Int. J. Nurs. Stud. 2009; 46: 407– 412. Heikkinen K, Leino-Kilpi H, Nummela T, Kaljonen A, Salanterä S. A comparison of two educational interventions for the cognitive empowerment of ambulatory orthopaedic surgery patients. Patient Educ. Couns. 2008; 73: 272–279. Herenda S, Tahirovic H, Poljakovic D. Impact of education on disease knowledge and glycaemic control among type 2 diabetic patients in family practice. Bosn. J. Basic. Med. Sci. 2007; 7: 261– 265. Hill J, Bird H. The development and evaluation of a drug information leaflet for patients with rheumatoid arthritis. J. Rheumatol. 2003; 42: 66–70. Hill J, Bird HA, Hopkins R, Lawton C, Wright V. The development and use of a patient knowledge questionnaire in rheumatoid arthritis. Br. J. Rheumatol. 1991; 30: 45–49. Hill K, Mangovski-Alzamora S, Blouin M, Guyatt G, Heels-Ansdell D, Bragaglia P, Tamari I, Jones K, Goldstein R. Disease-specific education in the primary care settings increases the knowledge of people with chronic obstructive pulmonary disease: A randomized controlled trial. Patient Educ Couns. 2010; 81: 14–18. Huff K, Steinberg L, Matts T. The promises and challenges of implementing evidence-centered design in large-scale assessment. Appl. Meas. Educ. 2010; 23: 310–324. Johansson K, Leino-Kilpi H, Salanterä S et al. Need for change in patient education: a Finnish survey from the patient’s perspective. Patient Educ. Couns. 2003; 51: 239–245. Kakinuma A, Nagatani H, Otake H, Mizuno J, Nakata Y. The effect of short interactive animation video information on preanesthetic anxiety, knowledge, and interview time: a randomized controlled trial. Anesth. Analg. 2011; 112: 1314–1318. Keulers B, Welters CF, Spauwen PH, Houpt P. Can face-to-face patient education be replaced by computer-based patient education? A randomised trial. Patient Educ. Couns. 2007; 67: 176–182. Klemetti S, Kinnunen I, Suominen T et al. The effect of preoperative nutritional face-to-face counseling about child’s fasting on parental knowledge, preoperative need-for-information, and anxiety, in pediatric ambulatory tonsillectomy. Patient Educ. Couns. 2010; 80: 64–70. Leino-Kilpi H, Luoto E, Katajisto J. Elements of empowerment and MS patients. Neurosci. Nurs. 1998; 20: 116–123. Lo SF, Wang YT, Wu LY, Hsu MY, Chang SC, Hayer M. A costeffectiveness analysis of a multimedia learning education program for stoma patients. J. Clin. Nurs. 
2009; 19: 1844–1854.


Louie SW, Liu PK, Man DW. The effectiveness of a stroke education group on persons with stroke and their caregivers. Int. J. Rehabil. Res. 2006; 29: 123–129. Lovisi Neto BE, Jennings F, Barros Ohashi C, Silva PG, Natour J. Evaluation of the efficacy of an educational program for rheumatoid arthritis patients. Clin. Exp. Rheumatol. 2009; 27: 28–34. Lowe DB, Leathley MJ, Sharma AK. Assessment of stroke knowledge. Age Ageing 2002; 31 (Suppl. 1): 42. Lowe DB, Sharma AK, Leathey MJ. The CareFile Project: a feasibility study to examine the effects of an individualised information booklet on patients after stroke. Age Ageing 2007; 36: 83–89. Madan A, Tichansky D. Patients postoperatively forget aspects of preoperative patient education. Obes. Surg. 2005; 15: 1066–1069. Mancuso JM. Impact of health literacy and patient trust on glycemic control in an urban USA population. Nurs. Health Sci. 2010; 12: 94–104. McDonald ME. The Nurse Educator’s Guide to Assessing Learning Outcomes. Brooklyn: Jones and Bartlett Publishers, 2007. Miller KM, Wysocki T, Cassady JF, Cancel D, Izenberg N. Validation of measures of patients’ preoperative anxiety and anesthesia knowledge. Anesth. Analg. 1999; 88: 251–257. Mishra PK, Mathias H, Millar K, Nagrajan K, Murday A. A randomized controlled trial to assess the effect of audiotaped consultations on the quality of informed consent in cardiac surgery. AMA Arch. Surg. 2010; 145: 383–388. Mishra PK, Ozalp F, Gardner RS, Arangannal A, Murday A. Informed consent in cardiac surgery: is it truly informed? J. Cardiovasc. Med. 2006; 7: 675–681. Navarre M, Patel H, Johnson CE, Durance A, McMorris M, Bria W. Influence of an interactive computer-based inhaler technique tutorial on patient knowledge and inhaler technique. Ann. Pharmacother. 2007; 41: 216–221. Neri M, Spanevello A, Ambrosetti M, Ferronato P, Cagna C, Zanon P. Short and long term evaluation of two structured selfmanagement programmes on asthma. Monaldi Arch. Chest Dis. 2001; 56: 208–210. Phelan EA, Deyo RA, Cherkin DC et al. Helping patients decide about back surgery: a randomized trial of an interactive video program. Spine 2001; 26: 206–211. Piatt GA, Orchard TJ, Emerson S et al. Translating the chronic care model into the community: results from a randomized controlled trial of a multifaceted diabetes care intervention. Diabetes Care 2006; 29: 811–817. Pink J, Pink K, Elwyn G. Measuring patient knowledge of asthma: a systematic review of outcome measures. J. Asthma 2009; 46: 980– 987. Rendell J. Effect of health education on patients’ beliefs about glaucoma and compliance. Insight 2000; 25: 112–118. Rodrigues MC. Construct equivalence of multiple-choice and construct-response items: a random effects synthesis of correlation. J. Educ. Meas. 2003; 40: 163–184. Rootmensen G, van Keimpema AR, Looysen E, van der Schaaf L, de Haan R, Jansen H. The effects of additional care by a pulmonary nurse for asthma and COPD patients at a respiratory outpatient clinic: results from a double blind, randomized clinical trial. Patient Educ. Couns. 2008; 70: 179–186. Ryhänen AM, Rankinen S, Tulus K, Korvenranta H, Leino-Kilpi H. Internet based patient pathway as an educational tool for breast cancer patients. Int. J. Med. Inform. 2012; 81: 270–278.


Sixta CS, Ostwald S. Texas–Mexico border intervention by promotores for patients with type 2 diabetes. Diabetes Educ. 2008; 34: 299–309. Smith SS, Lang CP, Sullivan KA, Warren J. A preliminary investigation of the effectiveness of a sleep apnea education program. J. Psychosom. Res. 2004a; 56: 245–249. Smith SS, Lang CP, Sullivan KA, Warren J. Two new tools for assessing patients’ knowledge and beliefs about obstructive sleep apnea and continuous positive airway pressure therapy. Sleep Med. 2004b; 5: 359–367. Sorrell JT, McNeil DW, Gochenour LL, Jackson CR. Evidence-based patient education: knowledge transfer to endodontic patients. J. Dent. Educ. 2009; 73: 1293–1305. Sullivan K, Dunton NJ. Development and validation of the stroke knowledge test. Top. Stroke Rehabil. 2004; 11: 19–29. Taylor LF, Kee CC, King SV, Ford TA. Evaluating the effects of an educational symposium on knowledge, impact, and selfmanagement of older African Americans living with osteoarthritis. J. Community Health Nurs. 2004; 21: 229–238. Terwee CB, Bot DM, Boer MR et al. Quality criteria were proposed for measurement properties of health status questionnaires. J. Clin. Epidemiol. 2007; 60: 34–42. Urnes J, Petersen H, Farup PG. Disease knowledge after an educational program in patients with GERD – a randomized controlled trial. BMC Health Serv. Res. 2008; 8: 236. van der Wal MH, Jaarsma T, Moser DK, van Velduisen DJ. Development and testing of the Dutch Heart Failure Knowledge Scale. Eur. J. Cardiovasc. Nurs. 2005; 4: 273–277. Victor CR, Triggs E, Ross F, Lord J, Axford JS. Lack of benefit of a primary care-based nurse-led education programme for people with osteoarthritis of the knee. Clin. Rheumatol. 2005; 24: 358–364. Walker D, Adebajo A, Heslop P, Hill J, Firth J, Bishop P. Patient education in rheumatoid arthritis: the effectiveness of the ARC booklet and the mind map. J. Rheumatol. 2007; 46: 1593–1596. Wang KY, Wu CP, Ku CH, Chang NW, Lee YH, Lai HR. The effect of asthma knowledge and health-related quality of life in Taiwanese asthma patients. J. Nurs. Res. 2010; 18: 126–134. Wang LM, Chiou CP. Effectiveness of interactive multimedia CD on self-care and powerlessness in hemodialysis patients. J. Nurs. Res. 2011; 19: 102–111. Wells JR. Hemodialysis knowledge and medical adherence in African Americans diagnosed with end stage renal disease: results of an educational intervention. Nephrol. Nurs. J. 2011; 38: 155–162. White R, Walker O, Roberts S, Kalisky S, White P. Bristol COPD Knowledge Questionnaire (BCKQ): testing what we teach patients about COPD. Chronic Respir. Dis. 2006; 3: 123–131. Yang BH, Chen YC, Chiang BL, Chang YC. Effects of nursing instruction on asthma knowledge and quality of life in schoolchildren with asthma. J. Nurs. Res. 2005; 13: 174–183. Yang ML, Chiang CH, Yao G, Wang KY. Effect of medical education on quality of life in adult asthma patients. J. Formos. Med. Assoc. 2003; 102: 768–774. Yehle KS, Sands LP, Rhynders PA, Newton GD. The effect of shared medical visits on knowledge and self-care in patients with heart failure: a pilot study. Heart Lung 2009; 38: 25–33. Yen M, Huang JJ, Teng HL. Education for patients with chronic kidney disease in Taiwan: a prospective repeated measures study. J. Clin. Nurs. 2008; 17: 2927–2934.

