Nurse Education Today 35 (2015) 347–359


Review

A systematic review of clinical assessment for undergraduate nursing students

Xi Vivien Wu a,b,⁎, Karin Enskär b,1, Cindy Ching Siang Lee a,2, Wenru Wang a,2

a Alice Centre for Nursing Studies, Level 2, Clinical Research Centre, Block MD 11, 10 Medical Drive, Singapore
b School of Health Sciences, Jönköping University, Box 1026, SE-551 11 Jönköping, Sweden

Article info

Article history: Accepted 20 November 2014
Keywords: Clinical competence; Clinical assessment practices; Clinical assessment tool; Assessment standards; Undergraduate nursing students; Preceptorship

Summary

Background: Consolidated clinical practicum prepares pre-registration nursing students to function as beginning practitioners. The clinical competencies of final-year nursing students provide a key indication of professional standards of practice and patient safety. Thus, clinical assessment of nursing students is a crucial issue for educators and administrators.
Objective: The aim of this systematic review was to explore clinical competency assessment for undergraduate nursing students.
Data sources: PubMed, CINAHL, ScienceDirect, Web of Science, and EBSCO were systematically searched from January 2000 to December 2013.
Methods: The systematic review was conducted in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Published quantitative and qualitative studies that examined clinical assessment practices and tools used in clinical nursing education were retrieved. Quality assessment, data extraction, and analysis were completed on all included studies.
Results: This review screened 2073 titles, abstracts and full-text records, resulting in 33 included studies. Two reviewers assessed the quality of the included studies. Fourteen quantitative and qualitative studies were identified for this evaluation. The evidence was ordered into emergent themes; the overarching themes were current practices in clinical assessment, issues of learning and assessment, development of assessment tools, and reliability and validity of assessment tools.
Conclusion: There is a need to develop a holistic clinical assessment tool with a reasonable level of validity and reliability. Clinical assessment is a robust activity and requires collaboration between clinical partners and academia to enhance the clinical experiences of students, the professional development of preceptors, and the clinical credibility of academics.
© 2014 Elsevier Ltd. All rights reserved.

Introduction

Consolidated clinical practicum prepares pre-registration nursing students to develop the required level of competency to function as beginning practitioners upon licensure registration. The clinical competence of final-year nursing students is a key element related to professional standards and patient safety (Kim, 2007); assessment of clinical competency is a crucial factor for educators and administrators.

⁎ Corresponding author at: Alice Centre for Nursing Studies, Yong Loo Lin School of Medicine, Level 2, Clinical Research Centre, Block MD 11,10 Medical Drive, Singapore 117597. Tel.: +65 66012756, +65 65165086. E-mail address: [email protected] (X.V. Wu). 1 Tel.: +46 36 10 10 00. 2 Tel.: +65 65165086.

http://dx.doi.org/10.1016/j.nedt.2014.11.016 0260-6917/© 2014 Elsevier Ltd. All rights reserved.

The purpose of clinical assessment is to prepare and induct students to work as safe, ethical, and accountable nurses (Bourbonnais et al., 2008). Assessment ought to consider the multidimensional nature of competence and the attributes required for the nursing profession (Levett-Jones et al., 2011). The reliability and validity of the instrument are fundamental to ensure fairness and consistency of assessment across settings and assessors. The complex clinical environment poses additional challenges for clinical assessment (Lewin, 2007). Despite the fact that the active involvement of students in their own work enhances learning, exposure to the real-life clinical environment has always created stressful situations for students. Nursing educators have historically served as advisors, providing resources and support for both students and preceptors in the assessment process (Chow and Suen, 2001). As such, clinical assessment is a collaborative exercise among students, preceptors and academics. This paper aims to discuss the current assessment process and practice, as well as explore the development of assessment tools and the validity and reliability of assessment instruments. A systematic review was performed using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) schema (Moher et al., 2009).


Table 1
PICOS criteria for the comprehensive review.

Population: Studies focus on pre-registration undergraduate nursing students undergoing clinical practice. Preceptors, nurse clinicians and academics who are guiding pre-registration undergraduate nursing students in their transition to practice are also included.

Intervention and phenomena of interest: Quantitative and qualitative study designs are both considered. For example, the review included studies with interventions such as clinical assessment, skill competency assessment and evaluation of the assessment tool.

Comparator: Studies without the intervention of an assessment tool, and qualitative studies exploring the experiences of clinical assessment practices for nursing students, preceptors and academics.

Outcome and context: Studies evaluate the effectiveness of the assessment tool or assessment strategy, or the effectiveness of clinical teaching strategies and clinical practice programs in relation to students' learning outcomes.
Primary outcome: • Clinical competency assessment tool for transition to practice
Secondary outcomes: • Clinical teaching pedagogy • Clinical support system for nursing students and preceptors • The role of academics in clinical practice • Preceptors' competency in clinical teaching and assessment

Specific exclusion criteria: • Editorials, opinion pieces, conference abstracts • Review papers • Papers written in a language other than English • Research focusing on competency assessment for nurses in clinical settings

Background

Competency is defined as behaviors that reveal mastery at work and can be applied to determine work standards and formulate strategies to describe individuals and teams. Furthermore, competency is reflected in the formation of power and responsibility, and the extension of decision-making (Hsieh and Chihuikao, 2003). Clinical competence is described as the theoretical and clinical knowledge used in the practice of nursing, incorporating psychomotor skills and problem-solving ability with the goal of safely providing care for patients (Hickey, 2010). Nevertheless, in her seminal work, Benner (1982) emphasized that clinical competence develops over time as nurses progress through various levels of proficiency.

Nursing has been recognized as a respectable profession worldwide. Professional regulatory bodies have been set up in many countries to establish guidelines for nursing licensure and the regulation of practice and education. The Nursing and Midwifery Board of Australia (2006) defines competency standards for registered nurses (RNs) as the combination of skills, knowledge, attitudes, values, and abilities that underpin effective and/or superior performance in a profession/occupational area. The Singapore Nursing Board (SNB, 2012a, 2012b) states that the core competencies set the foundation for RNs to maintain their competence and acquire additional competencies or advanced clinical skills to deliver safe client care in response to changing healthcare needs and advancements in technology.

According to the American Association of Colleges of Nursing (2008), clinical practicums provide opportunities for nursing students to learn in multiple care settings and receive appropriate guidance that fosters the development of clinical competence and professionalism. The preceptor model is used commonly in clinical education, and it allows the student to experience the realities of the nurse's role while practicing their skills (Bergjan and Hertel, 2013; Hickey, 2010). In the Standards for Clinical Nursing Education (SNB, 2012a, 2012b), a nurse is qualified as a preceptor if he or she has a minimum of three years of clinical experience, has recognized skills in the area of practice, and has completed a preceptorship course. The student works alongside their preceptor and provides direct care to the patients under the guidance of their preceptor. A comparison study evidenced that preceptorship was an important criterion for improving the competency of new graduate nurses (Bartlett et al., 2000).

Methods

The questions for this review were developed using the Population, Intervention, Comparison, Outcomes, Specific exclusion process (PICOS, Table 1): 'What are the current assessment practices for nursing undergraduates in transition to practice?', 'What are the issues and concerns with learning and assessment?', 'How are the assessment tools developed?', and 'How reliable and valid are the assessment tools?'.

Search strategy

This review is based on the relevant criteria from the PRISMA checklist (Moher et al., 2009) and the Cochrane Handbook for Systematic Reviews of Interventions. A total of five electronic databases were searched: PubMed, CINAHL, ScienceDirect, Web of Science, and EBSCO. A systematic search strategy was formed, including key search terms and related text words. 'Clinical assessment', 'clinical evaluation', 'clinical measurement', 'clinical competence', 'clinical standards', 'assessment tool', 'assessment standard', 'educational measurement', 'undergraduate nursing students', 'preceptorship', and 'competence-based education' were used in the search process. The search was conducted using combinations of exact keywords in the title and abstract, as these addressed the review questions when broken down.

Inclusion and exclusion criteria

The inclusion criteria were (1) articles published from January 2000 to December 2013; (2) peer-reviewed research; (3) experimental, cohort, survey, or qualitative studies; (4) English language publications; and (5) research on clinical competency assessment for undergraduate nursing students. The exclusion criteria were (1) editorials, opinion pieces, and conference abstracts; (2) review papers; (3) non-English language papers; and (4) research focused on competency assessment for nurses in clinical settings.

Search outcomes

The reference management software EndNote X6 (Thomson Reuters, New York) was used to sort the records. After the removal of duplicates, the remaining 1290 records were assessed for relevance by the researcher, based on titles and abstracts. Subsequently, 67 full-text records were retrieved. Thirty-three studies (16 quantitative studies and 17 qualitative studies) met the criteria of this systematic review. The process used to reduce and evaluate the records is illustrated in Fig. 1.

Analysis

It was anticipated that the selection of papers could be biased by factors such as sample size, sample composition and tool selection. Each paper was critically appraised using the Qualitative Assessment and Review Instrument (QARI) critical appraisal instrument, comprising 10 criteria (Pearson, 2004), and the Joanna Briggs Institute Meta Analysis of Statistics Assessment and Review Instrument (JBI-MAStARI) critical appraisal instrument. As most of the selected studies were descriptive, the nine critical appraisal criteria for descriptive/case series studies were used (Joanna Briggs Institute, 2011).


Fig. 1. PRISMA flow diagram.

Quality assessment

Two reviewers assessed the 33 studies, using the appraisal methods for both qualitative and quantitative research. Papers were included in the review if both reviewers agreed. If the two reviewers disagreed, a third reviewer would be invited to appraise the paper. Nineteen papers were excluded after critical appraisal. Fourteen papers were selected and reviewed again to ensure complete consensus. Both reviewers' critical appraisal ratings for each study were entered into SPSS Version 21 (IBM Corp., Armonk, NY, USA). The statistical analysis reported Cohen's kappa scores of κ = 0.894 for quantitative studies and κ = 0.754 for qualitative studies. Both scores indicated a moderate to high level of agreement between the reviewers (Gravetter and Wallnau, 2009).
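Cohen's kappa corrects raw percentage agreement for the agreement expected by chance. As a minimal illustrative sketch only (the include/exclude decisions below are hypothetical, not the review's actual appraisal data), agreement between two reviewers could be computed as follows:

```python
# Illustrative only: Cohen's kappa for two reviewers' include/exclude decisions
# on hypothetical appraisal outcomes (not the review's actual data).
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if both raters labelled independently at their own base rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude decisions for ten appraised papers.
reviewer_1 = ["include", "include", "exclude", "include", "exclude",
              "include", "exclude", "exclude", "include", "include"]
reviewer_2 = ["include", "include", "exclude", "include", "include",
              "include", "exclude", "exclude", "include", "include"]
print(f"kappa = {cohen_kappa(reviewer_1, reviewer_2):.3f}")
```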

Data extraction and synthesis

Both reviewers individually reviewed the 14 studies using a data extraction form (Joanna Briggs Institute, 2011). In the first stage of the process, study designs, sample, sample size, data collection method, analysis technique, and outcomes were summarized in Table 2. Data on the assessment tools were subsequently presented in Table 3 according to the domains, the reliability and validity of the assessment tool, and the assessment process. Data were organized into four overarching themes: current practices in clinical assessment, issues with learning and assessment, development of assessment tools, and reliability and validity of assessment tools.

Results

Characteristics of the included studies

Six quantitative studies and eight qualitative studies were included in the systematic review. They were all published between 2002 and 2013. Studies were conducted in Australia, Denmark, Germany, Ireland, Norway, Scotland, Sweden, Turkey, and Taiwan. Among the quantitative studies, four used a cross-sectional survey design and two used a longitudinal design. All qualitative studies used an exploratory descriptive design.

Table 2
General description of the included papers. For each study: author(s) and year; country; sample; sample size; design; data collection method; analysis technique; key findings or results.

Hsu and Hsieh (2013), Taiwan. Sample: bachelor nursing students (n = 599). Design: psychometric analysis, cross-sectional survey. Data collection: questionnaire. Analysis: descriptive; factor analysis; correlational; oblique rotation (Promax); Cronbach's α. Key findings: • The Competency Inventory of Nursing Students (CINS) has satisfactory psychometric properties and could be a useful instrument for measuring the learning outcomes of nursing students. • "Ethics and accountability" was found to be the most important factor contributing to nursing students' competencies.

Lee-Hsieh et al. (2003), Taiwan. Sample: bachelor nursing students (n = 121). Design: longitudinal quasi-experimental. Data collection: questionnaire. Analysis: descriptive; chi-square; t test; generalized linear model for repeated measurement; Bonferroni posterior test; factor analysis. Key findings: • A clinical nursing competence measurement tool, the Clinical Nursing Competence Questionnaire (CNCQ), was developed for this study. Using this tool, the authors evaluated the effectiveness of a nursing concept-based curriculum for RN-to-BSN students in Taiwan. • Full-time students' self-evaluations of competence showed significant improvement as time passed, except in the area of professional self-growth. • Self-evaluations of part-time students decreased, except in the area of professional self-growth. • Both groups of students consistently evaluated themselves more highly than their instructors or supervisors did.

Levett-Jones et al. (2011), Australia. Sample: bachelor nursing students (n = 654). Design: longitudinal educational evaluation survey. Data collection: questionnaire. Analysis: descriptive; factor analysis; correlational; Cronbach's α. Key findings: • The results of the Structured Observation and Assessment of Practice (SOAP) approach support the premise that quality clinical assessment requires nursing students to be exposed to complex challenges undertaken in authentic clinical contexts, observed by registered nurses who are trained as assessors and have a strong educational and clinical background.

Löfmark and Thorell-Ekstrand (2014), Sweden. Sample: first- (38), second- (91), and third-year (62) bachelor nursing students and preceptors (101) (n = 292). Design: cross-sectional descriptive. Data collection: questionnaire. Analysis: frequencies and summary statistics. Key findings: • It was possible to use the revised assessment form in clinical nursing education (AssCE form) during different years of the program, and to combine factors in the AssCE form with learning outcomes in the course syllabus. The scale added to each factor facilitated the assessment dialogue and offered possibilities to illustrate the students' development during clinical periods.

O'Connor et al. (2009), Ireland. Sample: bachelor nursing students (29) and preceptors (27) (n = 56). Design: cross-sectional survey. Data collection: questionnaire. Analysis: descriptive; correlational; thematic analysis. Key findings: • There were generally positive attitudes to the structure of the tool and positive experiences of its operation in practice. • However, there was dissatisfaction with the time spent completing the assessment tool and the preparation needed to carry out the assessment process. • Recommendations for practice include the need to consider placement length in the design process, and the need for a focus on user preparation.

Norman et al. (2002), Scotland. Sample: bachelor nursing students (257) and midwifery students (43) (n = 300). Design: descriptive correlational. Data collection: questionnaire. Analysis: descriptive; correlational; multivariate and univariate approaches. Key findings: • A correlational analysis of data collected in relation to students showed that there is little or no relationship between most of the clinical competence assessment methods currently used. • A multi-method UK-wide strategy for clinical competence assessment for nursing and midwifery is needed. • Continued collaboration between all stakeholders is recommended in order to develop a more consistent, holistic approach to competence assessment methods used in clinical practice.

Bradshaw et al. (2012), Ireland. Sample: first- and second-year bachelor nursing students (n = 13). Design: exploratory descriptive. Data collection: focus group discussions. Analysis: thematic content analysis. Key findings: • This study supports the development of a national competence assessment strategy that reflects the practice-based nature of nursing, with documentation that is straightforward to use in a busy clinical environment. • This may enhance consistency in preceptors' approaches to the process, and help ensure that staff members are familiar with the documentation.

Cassidy et al. (2012), Ireland. Sample: preceptors (n = 16). Design: exploratory descriptive. Data collection: focus group discussions. Analysis: thematic content analysis. Key findings: • Competing demands in the clinical environment affect preceptors' experiences of the competency assessment process. • Enhancing clinical assessment skills requires reviewing competency documentation and finding a common language for student assessment, to promote greater student skill development within competency frameworks.

Elcigil and Yıldırım Sarı (2007), Turkey. Sample: third-year bachelor nursing students (n = 24). Design: exploratory descriptive. Data collection: focus group discussions. Analysis: content analysis. Key findings: • Problems most encountered by student nurses were anxiety about how the clinical educator assessed them, being interrogated, receiving negative feedback and having communication problems. • If students do not get feedback, they are not aware of others' perspectives of their strengths and weaknesses. • Succeeding and receiving positive feedback gives students the occasion to reflect on their own development. This may contribute to increased self-confidence. Educators should increase their positive feedback to increase student motivation.

Lilja Andersson et al. (2013), Sweden. Sample: third-year bachelor nursing students (n = 577). Design: exploratory descriptive. Data collection: survey with open-ended questions. Analysis: content analysis. Key findings: • Students highlighted the fact that the Swedish National Clinical Final Examination (NCFE) gave them greater confidence and awareness of their own clinical competence. • However, conditions in the bedside test differed depending on the care situation. This makes it difficult to standardize results completely, as patients' conditions and the situation can vary from day to day. • The fairness of the clinical assessment is questionable. The Objective Structured Clinical Examination (OSCE) is an option, but does not cover the whole caring situation. It is difficult to achieve the flexibility and problem-solving skills that characterize good nursing care.

McSharry et al. (2010), Ireland. Sample: preceptors (8), clinical placement coordinators (7), clinical nurse managers (7), lecturers (7), and bachelor nursing students (7) (n = 36). Design: descriptive, interpretative, and reflective inquiry. Data collection: focus group discussions. Analysis: Diekelmann's (1992) framework. Key findings: • A flexible, eclectic model that optimizes the expertise of individual nurse lecturers is advocated. Key areas include faculty practice, clinical research and practice development. • The study identified support for preceptors' teaching and assessment role. The preceptor preparation workshop was reviewed and preceptor updates provided. Local policies have been implemented to support preceptors in the clinical assessment of students. • A clinical placement coordinator role has replaced the traditional "link lecturer" role, and consequently students are adequately supported in clinical practice. The study highlighted the importance of fostering good working relationships between nurse lecturers and clinical personnel.

Nielsen et al. (2013), Denmark. Sample: clinical supervisors (9) and bachelor nursing students (27) (n = 36). Design: descriptive, action research. Data collection: focus group discussions. Analysis: content analysis. Key findings: • A Model of Practical Skill Performance functioned as a generic and holistic instrument that could be used in different contexts and for different patients. • The model fulfills the requirement of being a multi-level tool that can be used by inexperienced nursing students and experienced clinical supervisors. • To capture the specific elements in a skill, clinical guidelines must be consulted. • The model helped the students become aware of the aspects needed to obtain quality in how they provided care.

Struksnes et al. (2012), Norway. Sample: nurses working at nursing homes (n = 49). Design: exploratory descriptive. Data collection: focus group discussions. Analysis: seven steps of phenomenographic analysis. Key findings: • Structured written information combined with group supervision seemed to promote professional and personal development among clinical nurses, and subsequently developed their competence in assessing nursing students. • Group supervision may be a way to involve the whole care unit in the process of developing a total learning environment.

Ulfvarson and Oxelmark (2012), Sweden. Sample: bachelor nursing students (5), preceptors (6), nurses (80), and nurse educators (5) (n = 96). Design: exploratory descriptive. Data collection: focus group discussions; written feedback. Analysis: content analysis. Key findings: • The Assessment of Clinical Education (ACIEd) assesses the nursing students' ability to perform a task, and the quality of the effort made. • The tool is best used as a basis for discussion between the teacher, preceptor and student, to help them form an estimation concerning the student's level of knowledge and skill.

Table 3
Comparison of the assessment tools. For each study: author(s), year, and country; assessment tool; domains and criteria; assessment practices and process; reliability and validity of the instrument.

Hsu and Hsieh (2013), Taiwan. Assessment tool: Competency Inventory of Nursing Students (CINS).
Domains and criteria: Based on eight core values: American Nurses Association (ANA); Löfmark; American Association of Colleges of Nursing (AACN); Black; Hsu and Hsieh; Taiwan Nursing Accreditation Council (TNAC). The CINS covers: 1. Ethics and accountability (15 items) 2. General clinical skills (7 items) 3. Lifelong learning (6 items) 4. Clinical biomedical science (5 items) 5. Caring (6 items) 6. Critical thinking and reasoning (4 items).
Assessment practices and process: • Measuring core competence requires the integration of cognitive, affective, and psychomotor skills (Mackenzie, 2009). • The nursing students undertook a self-assessment to evaluate their competence, including knowledge, skills, communication, attitudes, values and professional judgment.
Reliability and validity: Reliability high, Cronbach's α = 0.98. Validity: valid; scale-content validity index (S-CVI) of 0.99; item-content validity index (I-CVI) ranging from 0.83 to 1.00. The content validity index is taken as a barometer for item and instrument clarity, homogeneity and relevance. Exploratory factor analysis, total variance = 69.84%.

Lee-Hsieh et al. (2003), Taiwan. Assessment tool: Clinical Nursing Competence Questionnaire (CNCQ).
Domains and criteria: Based on the literature and the professional nursing competence questionnaires of Girot (1993), Grabbe (1988), Hsu and Lin (1993), Schwirian (1978), and Yu and Ku (1998). The CNCQ covers: 1. Caring competence (8 items) 2. Communication and coordination competence (7 items) 3. Management and teaching competence (3 items) 4. Professional self-growth competence (4 items).
Assessment practices and process: • The CNCQ contains 22 items. For the 18 items in the first three dimensions, on a 1 to 5 scale, 5 represents the ability to independently, safely, and accurately complete each nursing activity without advice from instructors and supervisors; handle problems in a minimum amount of time; apply nursing theories and knowledge accurately; focus on clients while performing activities; and appear confident. • Scores of 4, 3, 2, and 1 represent successive reductions in the ability to independently perform nursing tasks. • For the fourth dimension, the scale refers to frequency of performance, with 5 representing "all the time" and 1 representing "never." Therefore, the total score possible on all 22 items ranged from 22 to 110.
Reliability and validity: Reliability high, Cronbach's α = 0.93. Validity: valid; exploratory factor analysis, total variance = 60.57%.

Levett-Jones et al. (2011), Australia. Assessment tool: Structured Observation and Assessment of Practice (SOAP).
Domains and criteria: The SOAP model was developed with reference to the literature and in consultation with educational and clinical experts. It was piloted and initially evaluated with a group of 60 students in early 2004. The SOAP model is a six-hour holistic assessment of nursing students' clinical knowledge, skills, behaviors, attitudes, and values, undertaken in a clinical context. The SOAP is an integral component of third-year nursing students' final semester coursework. It is a "hurdle" requirement: students are required to achieve a "competent" rating in the SOAP in order to complete their program and to graduate. SOAP takes into account: 1. Provision of safe and effective nursing care consistent with client needs and the plan of care 2. Demonstration of effective oral communication skills 3. Demonstration of appropriate professional documentation skills 4. Practice within ethical and legal boundaries.
Assessment practices and process: The SOAP model is conducted in the following order: 1. Observation: during a two- to three-hour observation period in which students are engaged in their usual patient care activities, each of the student's discrete nursing behaviors is documented in sequence by their assessor using a situation, action, outcome (SAO) format. 2. Viva: there is no checklist; the process is contextually responsive and seeks to understand more than the student's observed behaviors. It also examines the knowledge, values, and attitudes that inform the student's practice. 3. Mapping: trends or patterns in the student's cumulative behaviors are identified by mapping their behaviors and responses against the ANMC Competency Standards for the Registered Nurse (2005). The SOAP approach removes ambivalence by providing evidence for assessors to discriminate areas of performance and to make a valid and reliable judgment of competence. A result is determined by comparing the evidence gathered during the observation and viva with the ANMC Competency Standards (2005). 4. Formative and summative feedback: during a two-hour debriefing session immediately following the assessment, the assessor provides formative and summative feedback to the student. The student's clinical strengths and areas requiring further development are clearly identified. This approach is a mechanism for giving students individualized, detailed, and non-threatening feedback, which promotes critical self-reflection and enables them to respond in a positive way when the need for improvement is identified.
Reliability and validity: Reliability high. Evaluation survey of the SOAP: 1. Perceived learning outcomes α = 0.96 2. Consistency with general clinical performance α = 0.92 3. Quality of assessors α = 0.98 4. Anxiety and stress impact α = 0.90. Validity: valid; exploratory factor analysis, total variance = 77.65%.

Löfmark and Thorell-Ekstrand (2014), Sweden. Assessment tool: Revised assessment form in clinical nursing education (AssCE form).
Domains and criteria: The first version was developed based on Swedish higher education qualification descriptors (SFS, 1992:1434; SFS, 1993:100) and international guidelines for nursing education (ICN, 1997; Salvage and Heijnen, 1997). The Bologna documents emphasize progression, learning outcomes, and criteria for grading, all with regard to quality assurance. The five areas in the assessment form are: 1. Communication and teaching 2. Nursing process 3. Examinations and treatments 4. Management and cooperation 5. Professional approach.
Assessment practices and process: • Progression is visible in an introduction page, which gives an overview of overall learning outcomes within the areas of knowledge and understanding, skills and abilities, and judgment and approach, for years one, two and three of clinical education. • Each factor in the AssCE form is described in two levels of achievement of goals: "Very good achievement of goals" and "Good achievement of goals." The third level, "Inadequate achievement of goals," is not described in words, but is a possible result. • A scale with nine steps covering the three levels is added to each factor.
Reliability and validity: Reliability not reported. Validity: the AssCE has a history of 14 years. The tool was revised in 2004, and the third revision was based on higher education reform across Europe.

O'Connor et al. (2009), Ireland. Assessment tool: Shared Specialist Placement Document (SSPD).
Domains and criteria: The SSPD is designed as a generic assessment document so it can be used to determine standards of practice in a range of clinical settings, and is not specific to any one clinical discipline. The areas covered are: 1. Professional and ethical practice (5 items) 2. Holistic approaches to care and integration of knowledge (5 items) 3. Interpersonal relationships (3 items) 4. Organization and management of care 5. Personal and professional development (3 items).
Assessment practices and process: • The process involved drafting standards for practice under each domain of competence, and subsequent discussion in small groups to verify and validate these standards. • The SSPD was developed on the assumption that assessments are based on the standards for practice under each domain of competence, and that the assessment process is a collaborative exercise between the student and the preceptor. • Completing the SSPD requires the student and the preceptor to follow a protocol, which comprises a series of three formal meetings, a record of which is maintained within the tool.
Reliability and validity: Reliability not reported. Validity: the drafting process was repeated until a consensus was reached as to what the standards of practice under each domain ought to be. The combination of clinicians and educators in this process served to enhance the content and general validity of the produced tool.

Norman et al. (2002), Scotland. Assessment tools: Nursing Competencies Questionnaire (NCQ) and Key Areas Assessment Instrument (KAAI).
Domains and criteria: The NCQ was developed by Bartlett et al. (1998) in Oxford based on two North American measures of competence: Schwirian's (1978) Six-D Scale, developed in Columbus, OH, USA, and a measure developed by Deback and Mentkowski (1986). The NCQ consists of 78 items organized into eight constructs: 1. Leadership 2. Professional development 3. Assessment 4. Planning 5. Intervention 6. Cognitive ability 7. Social participation 8. Ego strength. The KAAI contains two questions. Question 1 asks respondents to rate the students in the key areas of knowledge, skills, values, and attitudes. Question 2 asks a global question about students' performance throughout their placement and whether the clinical assessor would like to work with the student again.
Assessment practices and process: • Nursing subjects rate their own competence on each NCQ item using a four-point frequency of performance scale ("Never" to "Always"). • The internal consistency of each subscale, as tested by Bartlett's team, produced alpha coefficients ranging from 0.68 to 0.89, and the alpha for the scale overall was 0.95, indicating a homogeneous scale. • The NCQ discriminated in some respects between the competences of nurses who had graduated from a four-year degree and those who had graduated with a three-year diploma (Bartlett et al., 1998). • For the KAAI, clinical assessors, teachers and students answered the first question, and only the clinical assessor answered the second question. • Clinical assessors and teachers were asked to compare students with a competent nurse or midwife and to rate each on a scale of 1–5, each point consisting of a brief description of the competence expected at that level. • Students were asked to rate themselves against the same criteria.
Reliability and validity: NCQ: reliability high, Cronbach's α = 0.96; content validity was assessed by expert agreement from a panel of nurse teachers and practitioners, though the actual method used is not described. KAAI: reliability high, lecturer α = 0.93, practice assessor α = 0.86, student α = 0.75; validity not reported.

Bradshaw et al. (2012) and Cassidy et al. (2012), Ireland. Assessment tool: Competency Assessment Tool (CAT).
Domains and criteria: The CAT takes three main approaches to conceptualizing competence: a behavioral approach; a generic approach; and a holistic, integrated approach. The areas covered are: 1. Professional ethical practice 2. Holistic approaches to care and the integration of knowledge 3. Interpersonal relationships 4. Organization and management of care 5. Personal and professional development. Source: An Bord Altranais (2000).
Assessment practices and process: • The competency assessment includes observable performance behaviors referred to as critical elements in the competency documentation. • Preceptors were registered nurses working in the disciplines of general health, mental health and intellectual disability nursing, who had completed a preceptorship training program.
Reliability and validity: Reliability not reported. Validity not reported.

Elcigil and Yıldırım Sarı (2007), Turkey. Assessment tool: not reported.
Domains and criteria: This Turkish education program takes four academic years, or a total of 4600 hours of education, including theory and clinical practice. Students in the Bachelor of Science in Nursing program perform 2300 clinical and 2300 theoretical hours in the course of their studies. The curriculum aims to educate nurses as generalists rather than specialists.
Assessment practices and process: • Clinical nursing education in Turkey is delivered by faculty members in nursing schools. • Students at nursing schools receive practical training either after completing their theoretical courses or in conjunction with these studies. • In all of these applications, nursing school faculty members accompany the students one-on-one in their contact with patients. The faculty member of the nursing school who is present at the clinical application acts in cooperation with the clinical nurse, taking on the entire responsibility or most of the responsibility for the care of patients.
Reliability and validity: Reliability not reported. Validity not reported.

Lilja Andersson et al. (2013), Sweden. Assessment tool: Swedish National Clinical Final Examination (NCFE).
Domains and criteria: The NCFE is an innovative method of examination, divided into two parts: a written test and a bedside test. The students are tested on: 1. Knowledge 2. Skills 3. Capacity for critical thinking 4. Problem solving 5. Ethical reasoning 6. Independence and readiness to act. The NCFE is a structured assessment tool that reflects the areas of competence required of a registered nurse, namely: I. Assessment of the patient's needs and problems, and analysis and planning II. Implementation and evaluation of nursing activities III. Reflections and final judgment.
Assessment practices and process: • The bedside test lasts four hours and each student is examined separately. • During their annual clinical placement, the students take care of one patient in need of comprehensive medical and nursing care. This may take the form of inpatient care (hospital care) or outpatient care (community care). • During the bedside test, the student is observed by an "observing nurse", who is guided by a structured assessment tool. • In the third step, the student reflects on steps I and II, together with the observing nurse and a clinical lecturer. • The choice of patient is made by the observing nurse together with a clinical lecturer. The patient is required to give his or her informed consent. • The clinical lecturer responsible for the examination decides whether the student has passed or failed based on the scores in the assessment tool and the observing nurse's oral report. • Having passed this final examination, the students are considered sufficiently competent to enter the nursing profession.
Reliability and validity: Reliability not reported. Validity not reported.

McSharry et al. (2010), Ireland. Assessment approach: clinical practice competency-based assessment.
Domains and criteria: Approved by the Irish Nursing Board (An Bord Altranais (ABA), 2005), the pre-registration nursing education program complies with EU regulations (Government of Ireland, 2000). It includes both theoretical and clinical instruction; the latter continues to be a central element of the program, making up half of the program duration. The clinical practice component is assessed using a competency-based assessment strategy set out by the Irish Nursing Board.
Assessment practices and process: • Staff nurses (preceptors) providing direct patient care are responsible for facilitating learning, and for supervising and assessing students. • Clinical placement coordinators (CPCs) are employed by each training hospital and are assigned several clinical areas where students are allocated. The role involves supporting and facilitating students and preceptors in clinical learning. • Nurse lecturers engage in clinical practice and its advancement, and develop mechanisms for maintaining their own nursing expertise and credibility. Nurse lecturers in Ireland are expected to develop clinical roles and support clinical learning, and are responsible for ensuring the adequacy of the clinical learning environment and assessment process.
Reliability and validity: Reliability not reported. Validity not reported.

Nielsen et al. (2013), Denmark. Assessment tool: Model of Practical Skill Performance (MPSP).
Domains and criteria: The model is based on a view of practical skills as complex actions. These skills involve more than just technical and manual aspects and must be tailored to the specific patient in the specific context. The model consists of six categories capturing the essence of all practical nursing actions: 1. Substance 2. Sequence 3. Accuracy 4. Fluency 5. Integration 6. Caring comportment.
Assessment practices and process: • The model is generic, holistic, multi-professional, multi-level (used by both novices and experts) and multi-modal (used in different contexts). • Pilot studies have been conducted on the model's usefulness as a supervision tool (Larsen and Nielsen, 2006), but this study is the first comprehensive study of its pedagogical potential in clinical settings.
Reliability and validity: Reliability not reported. Validity not reported.

Struksnes et al. (2012), Norway. Assessment approach: alternative supervision model.
Domains and criteria: The model was developed in cooperation with clinical nurses in five nursing homes in five neighboring local communities in Norway.
Assessment practices and process: • The alternative model differs from a traditional supervision model because nurse teachers do not attend the three formal assessment meetings during the 10 weeks of practice. • To prepare the clinical nurses assessing the nursing students: 1. The nurse teachers provide evidence-based literature on the subjects of "knowledge", "learning" and "supervision" 2. A pamphlet with descriptions of the level of performance that should be expected from the student in clinical practice is handed out. These aims are sequenced throughout the 10 weeks of practice, and are presented as a "progression schedule" 3. A nurse teacher conducts regular group supervision sessions with the clinical nurses.
Reliability and validity: Reliability not reported. Validity not reported.

Ulfvarson and Oxelmark (2012), Sweden. Assessment tool: Assessment of Clinical Education (ACIEd; the tool).
Domains and criteria: The tool is based on the objectives of the course's current intended learning outcomes. Additional criteria are presented for how and to what extent the student is expected to fulfill the intended learning outcomes, thereby reaching a certain grade (pass with distinction, pass or fail), and for how the criteria are connected to the outcomes. These criteria are: 1. Nursing 2. Documentation 3. Caring 4. Skills and manual handling.
Assessment practices and process: • The tool is used on two occasions during a six-week clinical course: initially in a half-way formative assessment discussion, and then in the final assessment. • The student, a supervising nurse, and a lecturer from the university are present during the assessment. • The preceptor conveys information and gives a recommendation. • After the half-way formative discussion, a plan of action is developed in case the assessment displays any uncertainties as to whether the student will reach the intended learning outcomes.
Reliability and validity: Reliability not reported. Validity: various discussions among the experts have confirmed the face validity of the tool.


In terms of the contents, eight studies focused on clinical assessment tool development and psychometric testing, and six studies examined clinical learning and assessment practices. With regard to the settings, seven studies were conducted in universities for undergraduate nursing students; five studies were conducted in both universities and clinical settings for nursing students, preceptors, lecturers, graduates, and clinical supervisors; and two studies were conducted in clinical settings for preceptors and nurses. The sample size for quantitative studies ranged from 56 to 654, and the sample size for qualitative studies ranged from 13 to 577.

Current assessment practices and processes

All 14 papers discussed the current assessment practices and processes. Cassidy et al. (2012) elaborated that the competency assessment process in Ireland consists of a preliminary, an intermediate, and a final interview between the student and preceptor. Similarly, in Sweden, Löfmark and Thorell-Ekstrand (2014) recommended that students and preceptors hold discussions at the midpoint and end of the clinical period, using the Assessment Form for Clinical Nursing Education (AssCE) to provide continuous feedback. The authors suggested that students lead the discussions, with clinical lecturers contributing new perspectives and critical questioning to provide substance for the final grading. In Australia, the Structured Observation and Assessment of Practice (SOAP) (Levett-Jones et al., 2011) involves observing the student's engagement with patient care activities for three hours, followed by a viva. The clinical educators provide formative and summative feedback to the student. The Swedish National Clinical Final Examination requires the student, while engaged in patient care, to be observed by an experienced nurse. The student then reflects on the process of care with the nurse and a clinical lecturer. The clinical lecturer decides whether the student has passed or failed based on the scores in the assessment tool and the nurse's oral report (Lilja Andersson et al., 2013). In Turkey, nursing school faculty members accompany students one on one throughout their contact with patients (Elcigil and Yıldırım Sarı, 2007). A number of authors emphasized that the preceptor only conveys information and provides a recommendation, while the clinical lecturer from the university decides the final grading (Levett-Jones et al., 2011; Ulfvarson and Oxelmark, 2012). Struksnes et al. (2012) proposed an alternative supervision model in which the lecturer does not attend the three formal assessment meetings with the student and preceptor. The results of this study indicated that structured written information with group supervision developed preceptors' competence in assessment. O'Connor et al. (2009) suggested that students and preceptors need to follow a protocol, the Shared Specialist Placement Document (SSPD). The SSPD is a generic assessment tool that encompasses standards of practice in a range of clinical settings and indicators of student competence.

Issues with learning and assessment

In total, 12 papers explored issues with learning and assessment. The study conducted by Elcigil and Yıldırım Sarı (2007) reported that nursing students expressed anxiety over the assessment, receiving negative feedback, and insufficient guidance from preceptors.
The preceptor's role is to facilitate learning, build a supportive clinical learning environment, assess the clinical competency of nursing students, review student progress, and provide effective feedback to students (Cassidy et al., 2012; McSharry et al., 2010). A common challenge faced by preceptors is unfamiliarity with the theoretical knowledge and skills taught in the academic setting (Cassidy et al., 2012). McSharry et al. (2010) elaborated that the role of academics is to visit the students and preceptors at the hospital regularly, discuss their learning goals, review their progress, and provide support to both students and preceptors. McSharry et al. (2010) also expressed concern that the roles of academics in clinical education were not well defined and varied among institutions and countries.

McSharry et al. (2010) and Struksnes et al. (2012) reported that increasing teaching, research commitments, and administrative duties have created additional pressure for academics, which has resulted in their roles focusing on clinical liaison rather than direct clinical teaching and patient care.

Development of assessment tools

Twelve papers reported on the development of assessment tools. The majority of the studies discussed the domains of the tool, with reference to the American Nurses Association, the American Association of Colleges of Nursing, the Irish Nursing Board, Swedish higher education, the Taiwan Nursing Accreditation Council, and the literature (Bradshaw et al., 2012; Cassidy et al., 2012; Hsu and Hsieh, 2013; Lee-Hsieh et al., 2003; Levett-Jones et al., 2011; Lilja Andersson et al., 2013; Löfmark and Thorell-Ekstrand, 2014; McSharry et al., 2010; Nielsen et al., 2013; Norman et al., 2002; O'Connor et al., 2009; Ulfvarson and Oxelmark, 2012). In general, assessment tools encompass the following overarching domains: professional attributes, ethical practices, communication and interpersonal relationships, nursing processes, critical thinking, and reasoning. The domains mostly focused on identifying the general attributes of a professional registered nurse. The Competency Assessment Tool (CAT) modified the novice-to-expert model of skill acquisition (Benner, 1984) and levels of learning (Steinaker and Bell, 1979) to reflect the expected level of competency for students (Bradshaw et al., 2012; Cassidy et al., 2012). The revised AssCE and the Assessment of Clinical Education (ACIEd) indicate how and to what extent the student is expected to fulfil the intended learning outcomes, with clear illustration of a scale with three grades (Löfmark and Thorell-Ekstrand, 2014; Ulfvarson and Oxelmark, 2012). The Competency Inventory of Nursing Students (CINS), developed from eight core values, consists of 43 items. Each item is rated on a seven-point Likert scale, with higher scores representing higher student competencies (Hsu and Hsieh, 2013).

Reliability and validity of assessment tools

Six studies reported the process of ensuring the face and content validity of assessment tools through discussions with various nursing experts (Hsu and Hsieh, 2013; Lee-Hsieh et al., 2003; Löfmark and Thorell-Ekstrand, 2014; Norman et al., 2002; O'Connor et al., 2009; Ulfvarson and Oxelmark, 2012). Of the 14 studies reviewed, only Hsu and Hsieh (2013) reported a scale-content validity index (S-CVI), of 0.99, and an item-content validity index (I-CVI), ranging from 0.83 to 1.00, for the CINS. For the CINS, the content validity index was taken as a barometer for item and instrument clarity, homogeneity and relevance. It is noted that the content validity of most of the instruments was determined through the agreement of expert panels rather than statistical methods. Thirteen studies reported the criterion validity of the instruments, since the assessment tools are criterion-referenced (Bradshaw et al., 2012; Cassidy et al., 2012; Elcigil and Yıldırım Sarı, 2007; Hsu and Hsieh, 2013; Lee-Hsieh et al., 2003; Levett-Jones et al., 2011; Lilja Andersson et al., 2013; Löfmark and Thorell-Ekstrand, 2014; McSharry et al., 2010; Nielsen et al., 2013; Norman et al., 2002; O'Connor et al., 2009; Struksnes et al., 2012; Ulfvarson and Oxelmark, 2012). The construct validity of the assessment tool was discussed in two studies. Both studies performed exploratory factor analysis (EFA), and the results indicated total variance explained above 60 percent (Hsu and Hsieh, 2013; Lee-Hsieh et al., 2003). Hsu and Hsieh (2013) performed EFA using the completed questionnaires, and the final 43-item CINS has satisfactory psychometric properties and could be a useful instrument for measuring the learning outcomes of nursing students. Lee-Hsieh et al. (2003) conducted EFA for the Clinical Nursing Competence Questionnaire (CNCQ) and identified 22 items to evaluate the clinical competence of nursing students.
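The content validity indices cited above are typically derived from expert relevance ratings. As an illustrative sketch only, with hypothetical panel ratings and the common convention that ratings of 3 or 4 on a 4-point relevance scale count as "relevant" (the included studies do not report their exact formulas), the item- and scale-level indices could be computed as follows:

```python
# Illustrative only: item- and scale-level content validity indices (I-CVI, S-CVI)
# from hypothetical expert relevance ratings on a 4-point scale
# (1 = not relevant, 4 = highly relevant). Not data from the included studies.

# ratings[item] = one rating per expert panellist
ratings = {
    "item_01": [4, 4, 3, 4, 4],
    "item_02": [3, 4, 4, 3, 4],
    "item_03": [2, 4, 3, 4, 4],
}

def item_cvi(item_ratings):
    # Proportion of experts rating the item 3 or 4 ("relevant").
    relevant = sum(r >= 3 for r in item_ratings)
    return relevant / len(item_ratings)

i_cvis = {item: item_cvi(r) for item, r in ratings.items()}
s_cvi_ave = sum(i_cvis.values()) / len(i_cvis)   # scale-level CVI, averaging method

for item, value in i_cvis.items():
    print(f"{item}: I-CVI = {value:.2f}")
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
```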


The reliability of the assessment tools was reported in three studies (Hsu and Hsieh, 2013; Lee-Hsieh et al., 2003; Norman et al., 2002). Hsu and Hsieh (2013) reported Cronbach's α = 0.98 for the CINS, which indicated high internal consistency among the items. Lee-Hsieh et al. (2003) reported Cronbach's α = 0.93 for the CNCQ. These results, in combination with validity testing, confirmed that the CINS and CNCQ are valid and reliable assessment tools. In Norman et al.'s (2002) study, α = 0.96 for the 78-item Nursing Competencies Questionnaire (NCQ), which indicates a homogeneous scale. Alphas for the Key Areas Assessment Instrument (KAAI) from lecturers and practice assessors all exceeded 0.80, while the alpha from students was 0.75, which indicates good internal consistency.
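For context, Cronbach's alpha summarizes how consistently a set of items measures a single construct. The following minimal sketch (with hypothetical Likert responses, not data from the included studies) shows one way coefficients like those reported above can be computed:

```python
# Illustrative only: Cronbach's alpha for a small set of hypothetical Likert items.
def cronbach_alpha(items):
    """items: list of equal-length score lists, one per questionnaire item."""
    k = len(items)
    n = len(items[0])

    def variance(xs):                       # sample variance (n - 1 denominator)
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical responses: 3 items rated by 6 students on a 5-point scale.
responses = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```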
Discussion

This systematic review aims to investigate the current assessment process and practice, the development of assessment tools, and the validity and reliability of assessment instruments. The literature reviewed was of high quality. Despite the heterogeneity of methodologies and samples, the data presented a broad understanding of current clinical assessment practices, processes and tools.

Current assessment practices and processes

The results of the review indicate clearly that the assessment process has certain similarities across countries. The majority of clinical assessments focus on collaboration among academics, nursing students, preceptors and hospitals, and provide timely, constructive feedback to students. Nevertheless, Bradshaw et al. (2012) and O'Connor et al. (2009) reinforced that the assessment process is a collaborative exercise between the student and preceptor. Watson et al. (2002) explored the issue of who should carry out clinical assessment. Preceptors become familiar with the work of each student over a period of time; this socialization process may bias the assessment. Lecturers, on the other hand, make a judgment based on a snapshot of observation, which may not be representative, particularly as the demonstration of competence could be affected by 'stage fright', local circumstances and resource deficiencies. Assessment processes guided by assessment tools provide an objective and fair assessment for students. Ulfvarson and Oxelmark (2012) justified that assessment tools serve as a basis to discuss students' development based on the traditional image of the nurse's professional function. Lilja Andersson et al. (2013) reiterated the importance of structured assessment tools in guiding preceptors through the assessment process.

Issues with learning and assessment

Students, preceptors and academics are important parties in learning and assessment. Not surprisingly, each has its own concerns and challenges in the assessment process. In general, the undergraduate curriculum equips nursing students with theoretical knowledge and skill competency, although they may still lack confidence and clinical exposure (Hickey, 2010). A meta-summary of the literature on students' clinical experiences identified four major themes: a fear of harming patients, a desire to help people, a need to integrate theory and clinical practice, and a desire to master psychomotor skills (O'Connor, 2006). Preceptors, on the other hand, reported that new graduate nurses lacked psychomotor skills, critical thinking, time management, communication, and teamwork (Hickey, 2009). Hengstberger-Sims et al. (2008) explained that new graduate nurses gain confidence through the patient workload and the time spent with a preceptor during undergraduate clinical placements, which consistently improve time management, competence, and confidence with nursing tasks. Besides the dynamic clinical situations, the approach and teaching experience of the preceptor also play an important part in the learning experiences of the students (Nahas and Yam, 2001).


Preceptors need to possess a strong familiarity with the principles of teaching and learning to effectively help students reach their learning goals (O'Connor, 2006). The meta-review by Tang et al. (2005) highlighted four categories of effective clinical teaching behaviors: professional competence, interpersonal relationship skills, personality characteristics, and teaching ability. However, many preceptors may not have exposure to formal academic training, which limits their opportunity to nurture the students (Ehrenberg and Haggblom, 2007). In addition, unfamiliarity of the assessment system may influence the preceptor’s ability to help students to bridge the gap between theory and clinical experience. Dolan’s (2003) study discovered that there were inconsistencies in preceptors’ interpretations of competency statements. This suggested the existence of variations in preceptors’ approaches to assessment. Moreover, preceptors had a dual role to guide the student and provide high-quality patient care. This dual role was highlighted as frustrating, due to the demand of clinical commitments and lack of time for students (Ehrenberg and Haggblom, 2007; Neary, 2001). Kristofferzon et al. (2013) advocated the multifaceted, crucial role of academics, which includes supporting, directing, motivating, facilitating, problem-solving, trouble-shooting, advocating, and monitoring. Furthermore, academics maintain clinical credibility while engaging in real contact with clinical practice (Henderson, 2002). Indeed, academics plan the assessment system and preceptors are the assessors to implement the assessment. A strong collaboration and partnership between the academics and the preceptors is essential in the clinical setting. It is only possible to achieve an unbiased and objective assessment of students when the preceptors are supported by the academics (Carlisle et al., 1997) and reach the required competence in assessment. Development of assessment tools In fact, majority of the assessment tools are developed with reference to the competency standard stated by national board of nursing. Studies on the competence model in nursing focus on three broad conceptualizations. First, a behavioral approach measures behaviors for the purpose of assessment of competence. Second, generic model targets for identifying general attributes of the practitioner, such as knowledge and critical thinking skills. Third, a holistic approach addresses the complex combinations of knowledge, attitudes, values, and skills used by professionals to function in various clinical situations (Bradshaw et al., 2012; Eraut, 1994; Gonczi, 1994; Watson et al., 2002). Authors strongly support the holistic approach as the most appropriate for nursing practice (Hanley and Higgins, 2005; O’Connor et al., 2009). As Dolan (2003) rightfully pointed out, the aim of training competent nurses is to ensure that patients receive a high standard of care. The professional bodies state the basic competency of a registered nurse. It is justifiable to use predefined standards to measure the competence of students, as criterion-referenced assessment facilitates a fair and reliable test (Dunn et al., 2002). A model of holistic competence in clinical practice consists of a combination of these domains: knowledge and understanding, clinical skills, interpersonal skills, problemsolving skills, clinical judgment, and management skills (Hanley and Higgins, 2005). 
An Bord Altranais (Nursing and Midwifery Board of Ireland) (2000) expanded on the domains of competence with performance criteria, and recommended that critical elements be further developed to suit local situations. The grading scale provides an indication of the level of competence and gives students clear direction on possible opportunities for progression (Wu, 2012).

Reliability and validity of assessment tools

It is always a challenge for nurse educators to develop a reliable and valid assessment tool. Neary (2001) identified that lecturers, preceptors, and students had different interpretations of the system or process of assessment. Therefore, it is crucial to ensure the reliability and validity of the assessment tool.


Many studies ensured face and content validity of the assessment tool by obtaining consensus through discussion with nursing experts. However, most of the studies reported this process without statistical data to support the content validity test. Criterion validity is described in the process of tool development, as most of the studies map the expected learning outcomes to the national standards of nursing competency and the extensive literature. In fact, criterion validity can be tested by correlating the scores of an instrument with an outside reference (Kellar and Kelvin, 2013). Kellar and Kelvin (2013) recommended factor analysis to test the validity of the items and to decide how items should be grouped into subscales, which provides justification for the use of summated scales. In addition, internal consistency reliability evaluates the degree to which different test items that probe the same construct produce similar results (Kellar and Kelvin, 2013). In this review, the construct validity of the assessment tools was reported in two studies, and the reliability of the assessment tools was reported in only three studies. Although authors reiterated that the use of established tools reasonably increases the validity of the instrument, it is worth noting that very few assessment tools have been tested rigorously for their psychometric properties (Watson et al., 2002). The instruments could be further evaluated to verify their validity and reliability, especially when used in different contexts and with different populations of students.
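To make two of these psychometric checks concrete, the following is a minimal, hypothetical sketch in Python, not drawn from any of the reviewed studies: Cronbach's alpha for internal consistency and a Pearson correlation with an outside reference for criterion validity. The data, rating scale, and variable names are illustrative assumptions only.

```python
# Illustrative sketch only: item-level scores from a hypothetical assessment
# tool (rows = students, columns = items rated 1-5) and an assumed outside
# reference such as a preceptor's global rating.
import numpy as np


def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal consistency: items probing the same construct should
    produce similar results across students."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def criterion_validity(total_scores: np.ndarray, criterion: np.ndarray) -> float:
    """Criterion validity: correlate instrument totals with an outside reference."""
    return float(np.corrcoef(total_scores, criterion)[0, 1])


scores = np.array([[4, 5, 4, 4],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 5]], dtype=float)      # hypothetical item ratings
external_rating = np.array([4.0, 3.0, 5.0, 2.5, 4.5])  # hypothetical criterion

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
print(f"Criterion validity (Pearson r): "
      f"{criterion_validity(scores.sum(axis=1), external_rating):.2f}")
```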
Limitations

This review included both qualitative and quantitative studies, and their heterogeneous nature highlighted a wide range of variation in the designs of the clinical assessment studies. In addition, the heterogeneity of the included studies limited the feasibility of a funnel plot interpretation; thus, meta-synthesis could not be performed in this review. The reviewers were not blinded to the authorship of the studies during critical appraisal; nevertheless, the reviewers were not affiliated with any authors of the included studies. This review included only English-language studies, owing to limited resources for translation services; restricting the review to English-language papers may therefore introduce some bias into the results. The studies reported are mainly from Europe, Asia and Australia, and the clinical assessment literature obtained from North American countries is limited. Further investigation of the literature indicated that North America emphasizes the use of the Objective Structured Clinical Examination to assess the clinical competencies of students in a simulated environment (Cant et al., 2013), as well as the use of the National Council Licensure Examination-Registered Nurses (NCLEX-RN, 2013) to measure the competencies required to provide quality nursing care and ensure the public's safety.

Conclusions

This paper applies a systematic approach to reviewing clinical assessment in nursing education. The results of the systematic review offer insights into the current assessment process and related concerns. There is an increasing demand on clinical nurses to mentor and assess students in clinical practice, and the preceptorship model supports this form of clinical nursing education. The review indicated that both preceptors and students require support and guidance from academics in the assessment process. Most assessment tools are criterion-referenced, using the standards from the national board of nursing. The current trend of moving from a generic to a holistic model of clinical assessment supports the nurturing and development of competent nursing professionals. However, few studies adequately evaluated the psychometric properties of the assessment instruments. From the review, it is concluded that there is a need to develop a holistic clinical assessment tool with a reasonable level of validity and reliability. In addition, both preceptors and students must be prepared for the use of such tools, and a support system needs to be established. This collaborative approach is valuable in enhancing the learning experiences of the students, the professional development of preceptors in terms of pedagogical approaches and competency as assessors, and the clinical credibility of academics.

As most of the studies reported the use of a cross-sectional approach, it is recommended that longitudinal studies be conducted to monitor the individual development of clinical competence as the student progresses through nursing education.

Acknowledgement

This study was funded by the Research Grant for Doctoral Studies, Institute of Adult Learning, Singapore (IAL_RGDS_VW_01), and the Teaching Enhancement Grant, Centre for Development of Teaching and Learning, National University of Singapore. The authors would like to thank the Alice Lee Centre for Nursing Studies, National University of Singapore, and the School of Health Sciences, Jonkoping University, Sweden, for providing resources for this systematic review.

References

American Association of Colleges of Nursing, 2008. The essentials of baccalaureate education for professional nursing practice. American Association of Colleges of Nursing, Washington, D.C.
ANMC, 2005. National competency standards for the registered nurse. Australian Nursing and Midwifery Council (ANMC), Canberra, Australia.
Bartlett, H., Westcott, L., Hind, P., Taylor, H., 1998. An evaluation of pre-registration nursing education: a literature review and comparative study of graduate outcomes. Oxford Centre for Health Care Research & Development, Oxford Brookes University, Oxford.
Bartlett, H.P., Simonite, V., Westcott, E., Taylor, H.R., 2000. A comparison of the nursing competence of graduates and diplomates from UK nursing programmes. J. Clin. Nurs. 9 (3), 369–379.
Benner, P., 1982. From novice to expert. Am. J. Nurs. 82 (3), 402–407.
Benner, P., 1984. From novice to expert: Excellence and power in clinical nursing practice. Addison-Wesley, Menlo Park, California.
Bergjan, M., Hertel, F., 2013. Evaluating students' perception of their clinical placements – Testing the clinical learning environment and supervision and nurse teacher scale (CLES + T scale) in Germany. Nurse Educ. Today 33 (11), 1393–1398.
Bourbonnais, F.F., Langford, S., Giannantonio, L., 2008. Development of a clinical evaluation tool for baccalaureate nursing students. Nurse Educ. Pract. 8 (1), 62–71.
Bradshaw, C., O'Connor, M., Egan, G., Tierney, K., Butler, M.P., Fahy, A., Tuohy, D., Cassidy, I., Quillinan, B., McNamara, M.C., 2012. Nursing students' views of clinical competence assessment. Br. J. Nurs. 21 (15), 923–927.
Cant, R., McKenna, L., Cooper, S., 2013. Assessing preregistration nursing students' clinical competence: A systematic review of objective measures. Int. J. Nurs. Pract. 19, 163–176.
Carlisle, C., Kirk, S., Luker, K.A., 1997. The clinical role of nurse teachers within a Project 2000 course framework. J. Adv. Nurs. 25 (2), 386–395.
Cassidy, I., Butler, M.P., Quillinan, B., Egan, G., Mc Namara, M.C., Tuohy, D., Bradshaw, C., Fahy, A., Connor, M.O., Tierney, C., 2012. Preceptors' views of assessing nursing students using a competency based approach. Nurse Educ. Pract. 12 (6), 346–351.
Chow, F.L.W., Suen, L.K.P., 2001. Clinical staff as mentors in pre-registration undergraduate nursing education: students' perceptions of the mentor's roles and responsibilities. Nurse Educ. Today 21, 350–358.
Deback, V., Mentkowski, M., 1986. Does the baccalaureate make a difference? Differentiating nurse performance by education and experience. J. Nurs. Educ. 25, 275–285.
Diekelmann, N.L., 1992. Learning as testing: a Heideggerian hermeneutical analysis of the lived experiences of students and teachers in nursing. Adv. Nurs. Sci. 14, 72–83.
Dolan, G., 2003. Assessing student nurse clinical competency: will we ever get it right? J. Clin. Nurs. 12 (1), 132–141.
Dunn, L., Parry, S., Morgan, C., 2002. Seeking quality in criterion referenced assessment. Learning Communities and Assessment Cultures Conference, University of Northumbria.
Ehrenberg, A., Haggblom, M., 2007. Problem-based learning in clinical nursing education: integrating theory and practice. Nurse Educ. Pract. 7 (2), 67–74.
Elcigil, A., Yıldırım Sarı, H., 2007. Determining problems experienced by student nurses in their work with clinical educators in Turkey. Nurse Educ. Today 27 (5), 491–498.
Eraut, M., 1994. Developing professional knowledge and competence. The Falmer Press, London.
Girot, E.A., 1993. Assessment of competence in clinical practice: A phenomenological approach. J. Adv. Nurs. 18, 114–119.
Gonczi, A., 1994. Competency based assessment in the professions in Australia. Assess. Educ. 1 (1), 27–44.
Grabbe, L.L., 1988. A comparison of clinical evaluation tools in hospitals and baccalaureate nursing programs. J. Nurs. Educ. 27.
Gravetter, F.J., Wallnau, L.B., 2009. Statistics for the behavioral sciences. Wadsworth/Cengage Learning, Belmont, CA.
Hanley, E., Higgins, A., 2005. Assessment of clinical practice in intensive care: a review of the literature. Intensive Crit. Care Nurs. 21, 268–275.
Henderson, S., 2002. Factors impacting on nurses' transference of theoretical knowledge of holistic care into clinical practice. Nurse Educ. Pract. 2 (4), 244–250.

Hengstberger-Sims, C., Eagar, S.C., Gregory, L., Andrew, S., 2008. Relating new graduate nurse competence to frequency of use. Collegian 15, 69–76.
Hickey, M., 2009. Preceptor perceptions of new graduate nurse readiness for practice. J. Nurses Staff Dev. 25, 35–41.
Hickey, M.T., 2010. Baccalaureate nursing graduates' perceptions of their clinical instructional experiences and preparation for practice. J. Prof. Nurs. 26 (1), 35–41.
Hsieh, L., Chihuikao, K., 2003. Clinical nursing competence of RN-to-BSN students in a nursing concept-based curriculum in Taiwan. J. Nurs. Educ. 12, 536.
Hsu, M.Y., Lin, C.Y., 1993. The level of nursing curricula and competencies. Ministry of Education, Taipei, Republic of China.
Hsu, L.-L., Hsieh, S.-I., 2013. Development and psychometric evaluation of the competency inventory for nursing students: A learning outcome perspective. Nurse Educ. Today 33 (5), 492–497.
ICN, 1997. Nursing education: past and present, current and future trends. ICN, Geneva.
Kellar, S.P., Kelvin, E.A., 2013. Munro's statistical methods for health care research. Lippincott Williams & Wilkins, Philadelphia.
Kim, K.H., 2007. Clinical competence among senior nursing students after their preceptorship experiences. J. Prof. Nurs. 23 (6), 369–375.
Kristofferzon, M.-L., Mårtensson, G., Mamhidir, A.-G., Löfmark, A., 2013. Nursing students' perceptions of clinical supervision: The contributions of preceptors, head preceptors and clinical lecturers. Nurse Educ. Today 33 (10), 1252–1257.
Larsen, K., Nielsen, C.M., 2006. Fundamental nursing/body care, testing a model of practical skill in relation to clinical supervision and assessment. VIA University College, Aarhus, Denmark.
Lee-Hsieh, J., Kao, C., Kuo, C., Tseng, H., 2003. Clinical nursing competence of RN-to-BSN students in a nursing concept-based curriculum in Taiwan. J. Nurs. Educ. 42 (12), 536–545.
Levett-Jones, T., Gersbach, J., Arthur, C., Roche, J., 2011. Implementing a clinical competency assessment model that promotes critical reflection and ensures nursing graduates' readiness for professional practice. Nurse Educ. Pract. 11 (1), 64–69.
Lewin, D., 2007. Clinical learning environments for student nurses: key indices from two studies compared over a 25 year period. Nurse Educ. Pract. 7 (4), 238–246.
Lilja Andersson, P., Ahlner-Elmqvist, M., Johansson, U.-B., Larsson, M., Ziegert, K., 2013. Nursing students' experiences of assessment by the Swedish National Clinical Final Examination. Nurse Educ. Today 33 (5), 536–540.
Löfmark, A., Thorell-Ekstrand, I., 2014. Nursing students' and preceptors' perceptions of using a revised assessment form in clinical nursing education. Nurse Educ. Pract. 14 (3), 275–280.
Mackenzie, K.M., 2009. Who should teach clinical skills to nursing students? Br. J. Nurs. 18, 395–398.
McSharry, E., McGloin, H., Frizzell, A.M., Winters-O'Donnell, L., 2010. The role of the nurse lecturer in clinical practice in the Republic of Ireland. Nurse Educ. Pract. 10 (4), 189–195.
Moher, D., Liberati, A., Tetzlaff, J., Altman, D.G., 2009. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 6 (6).
Nahas, V.L., Yam, B.M.C., 2001. Hong Kong nursing students' perceptions of effective clinical teachers. J. Nurs. Educ. 40, 233–237.


NCLEX-RN, 2013. National council licensure examination for registered nurses: NCLEX-RN examination test plan for the National Council Licensure Examination for Registered Nurses.
Neary, M., 2001. Responsive assessment: assessing student nurses' clinical competence. Nurse Educ. Today 21 (1), 3–17.
Nielsen, C., Sommer, I., Larsen, K., Bjørk, I.T., 2013. Model of practical skill performance as an instrument for supervision and formative assessment. Nurse Educ. Pract. 13 (3), 176–180.
Norman, I.J., Watson, R., Murrells, T., Calman, L., Redfern, S., 2002. The validity and reliability of methods to assess the competence to practise of pre-registration nursing and midwifery students. Int. J. Nurs. Stud. 39 (2), 133–145.
O'Connor, T., Fealy, G.M., Kelly, M., Guinness, A.M.M., Timmins, F., 2009. An evaluation of a collaborative approach to the assessment of competence among nursing students of three universities in Ireland. Nurse Educ. Today 29 (5), 493–499.
O'Connor, A., 2006. Clinical instruction and evaluation: A teaching resource. Jones and Bartlett, Boston.
Pearson, A., 2004. Balancing the evidence, incorporating the synthesis of qualitative data into systematic reviews. JBI Reports 2 (2), 45–64.
An Bord Altranais, 2000. Requirements and standards for nurse registration education programmes, 2nd ed. An Bord Altranais, Dublin.
Schwirian, P.M., 1978. Evaluating the performance of nurses: A multidimensional approach. Nurs. Res. 27, 347–351.
Salvage, J., Heijnen, S., 1997. Nursing in Europe: a resource for better health. WHO Regional Publications, WHO, Copenhagen.
SNB, 2012a. Core competencies for registered nurses. Singapore Nursing Board, Singapore, pp. 1–10.
SNB, 2012b. Standard for clinical nursing education. Singapore Nursing Board, Singapore.
Steinaker, N.W., Bell, M.R., 1979. The experiential taxonomy: A new approach to teaching and learning. Academic Press, New York.
Struksnes, S., Engelien, R.I., Bogsti, W.B., Moen, Ö.L., Nordhagen, S.S., Solvik, E., Arvidsson, B., 2012. Nurses' conceptions of how an alternative supervision model influences their competence in assessment of nursing students in clinical practice. Nurse Educ. Pract. 12 (2), 83–88.
Tang, F., Chou, S., Chiang, H., 2005. Students' perceptions of effective and ineffective clinical instructors. J. Nurs. Educ. 44, 187–192.
Ulfvarson, J., Oxelmark, L., 2012. Developing an assessment tool for intended learning outcomes in clinical practice for nursing students. Nurse Educ. Today 32 (6), 703–708.
Watson, R., Stimpson, A., Topping, A., Porock, D., 2002. Clinical competence assessment in nursing: a systematic review of the literature. J. Adv. Nurs. 39 (5), 421–431.
Wu, X.V., 2012. Redesigning authentic assessments in nursing education. In: Koh, K., Yeo, J. (Eds.), Mastering the art of authentic assessments, Vol. II: From challenges to champions. Pearson, Singapore.
Yu, S., Ku, N., 1998. Nursing competencies and their differences among different grade RN students in an RN-to-BSN program in Taiwan, ROC. Nurs. Res. 6, 121–136 (Chinese).
