
Continuous Curricular Feedback: A Formative Evaluation Approach to Curricular Improvement

Stanley Goldfarb, MD, and Gail Morrison, MD

Abstract

Curriculum evaluations are used to plan future revisions and other improvements in curriculum design. Most models are summative and occur at the end of a course, so improvements in instruction may be delayed. In this article, the authors describe the formative curriculum evaluation model adopted at the Raymond and Ruth Perelman School of Medicine at the University of Pennsylvania. In their model, representative student feedback is gathered in real time and used to modify courses and improve instruction. The central features of their continuous feedback model include developing a small cadre of preclinical and clinical student evaluators who are trained to obtain classwide input regarding all aspects of the curriculum, including teacher effectiveness, and who meet regularly (weekly or monthly) with relevant faculty and administrators. The authors show how this curriculum evaluation approach maximizes student involvement in course development and provides opportunities for rapid improvements in course content and instruction as well as for the identification of barriers to effective clinical and preclinical educational experiences.

Dr. Goldfarb is professor, Department of Medicine, and associate dean for curriculum, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania. Dr. Morrison is professor, Department of Medicine, and senior vice dean for education, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania. Correspondence should be addressed to Dr. Goldfarb, Academic Programs, 3450 Hamilton Walk, Stemmler Suite 100, Philadelphia, PA 19104; telephone: (215) 898-1530; e-mail: stanley.[email protected].

Acad Med. 2014;89:264–269. First published online December 19, 2013. doi: 10.1097/ACM.0000000000000103

Curriculum evaluation is widely practiced in various professional schools and is a bedrock principle of the approach taken by the Liaison Committee on Medical Education (LCME) to assess the quality of American medical schools. This includes both evaluation of faculty teaching by medical schools and evaluation of the medical school curriculum by the LCME.1 Typical curriculum evaluations are summative and focus on achieving a rigorous outcome analysis. Although such an analytic approach evaluates how a curriculum benefits student education, it does not focus significantly on the reasons why particular outcomes occurred.

The evaluation process, including assessment of student learning and faculty performance, typically clarifies the extent to which goals are attained, often based on some objective standard such as examination results. The assessment of faculty performance and of the effectiveness of educational materials almost always consists of a student-completed questionnaire.2 It usually provides faculty with an “end-of-course” set of evaluations of how well the goals and objectives of the course were achieved and how effective the faculty’s teaching efforts were.3

Recent data suggest that students’ summative course-end evaluations tend to be dominated by emotional experiences according to the “peak-end rule”—that is, by how the courses were at their peak (pleasant or unpleasant) and how they ended. Students’ impressions of both the peak and the end of a course typically override any continuous evaluations they made throughout its duration.4 If we could identify an absolute standard for course effectiveness, summative evaluations would suffice as the sole means of evaluating courses. However, standard setting in the preclinical curriculum, whether for complex, performance-based exams or even multiple-choice exams, remains an unresolved issue in education. The evaluation of medical school curricula therefore requires a component that is strongly process based. To achieve an optimum result from assessing the educational process, continuous evaluation, which is rarely employed in curriculum evaluation, becomes a powerful tool for determining overall efficacy.
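To make the peak-end effect concrete, the short sketch below contrasts the true mean of a student’s week-by-week course ratings with a peak-end estimate (the average of the most extreme rating and the final one). The 1-to-5 scale, the sample numbers, and the function names are our own illustrative assumptions, not data or methods from the cited study.

```python
# A minimal sketch of the "peak-end rule" applied to course ratings.
# The 1-5 scale, sample values, and function names are hypothetical.

def mean_rating(weekly_ratings: list[float]) -> float:
    """True average of a student's week-by-week ratings of a course."""
    return sum(weekly_ratings) / len(weekly_ratings)

def peak_end_rating(weekly_ratings: list[float], neutral: float = 3.0) -> float:
    """Heuristic retrospective rating: the average of the most extreme
    (peak) rating and the final (end) rating."""
    peak = max(weekly_ratings, key=lambda r: abs(r - neutral))
    end = weekly_ratings[-1]
    return (peak + end) / 2

# A course that ran well for most of the term but ended badly:
ratings = [4.5, 4.0, 4.5, 4.0, 4.5, 2.0, 1.5]
print(f"mean of all weeks: {mean_rating(ratings):.2f}")      # 3.57
print(f"peak-end estimate: {peak_end_rating(ratings):.2f}")  # 3.00
```

On these assumed numbers, the weight given to the poor ending pulls the retrospective estimate well below the course’s true average, which is precisely the distortion that motivates gathering feedback continuously rather than only at course end.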

In this article, we describe the continuous feedback model used in the curriculum evaluation process at the Raymond and Ruth Perelman School of Medicine at the University of Pennsylvania. This model, which is used in conjunction with summative evaluations, has been critical to the success of our curriculum. We begin by describing the radical reform of our curriculum, which led to the incorporation of continuous curricular feedback, and then discuss approaches to curriculum evaluation. We then describe our curricular components and feedback model in detail, sharing our experiences and offering recommendations for other institutions that may wish to adopt a similar model.

Curricular Reform

In 1997, the vice dean for education at the then-named University of Pennsylvania School of Medicine led the implementation of a curricular reform program, beginning with the entering class of 145 first-year students. The resulting modular system removed direct control over the curriculum from academic departments, integrating elements of traditional medical education across departments.


Multidisciplinary teams of faculty were created to construct courses, termed “modular blocks,” in the basic medical sciences (e.g., physiology, biochemistry, and genetics) as well as organ-based pathophysiology blocks. It was clear that, to monitor the curricula of such blocks, we needed both a rigorous summative evaluation tool to gauge students’ perceptions of the success of the curriculum5 and a continuous feedback model using a time-intensive, process-oriented approach.

Briefly, in the continuous feedback model we adopted, a cadre of students from each medical school class attends weekly (for the preclinical curriculum) or monthly (for the clinical curriculum) evaluation sessions with a representative group of faculty and medical school administrators who supervise the overall curriculum. The students, who serve as evaluators for at least six months, receive instruction in providing feedback as well as in the use of technology tools that ensure wide class participation in the formative evaluation. We believe that this approach has yielded the benefits of a continuous improvement model for curricular reform. Although our model of real-time feedback does not eliminate the need for a formal, quantitative evaluation at the end of each block, it provides qualitative information, a sense of security for students that they will not face retribution, and the opportunity to correct issues in the curriculum as they arise. We have continued to refine our continuous feedback model, first introduced in 1997, as new tools to enhance student communication, such as social media, have emerged.

Approaches to Curriculum Evaluation

As pointed out by Kogan and Shea,6 course evaluations are underused as a means of enhancing faculty teaching, even though this is probably their most effective use. The questionnaires typically developed and distributed to students at the end of a course to assess teaching effectiveness are viewed as formative as well as summative, in that teachers may use the data to improve their future educational efforts.7 However, input from such questionnaires often does not yield improvements until the following year’s version of the course. Although such standard evaluation methods have been shown to improve students’ perceptions of the quality of their medical education,8 a consequence of typical end-of-course evaluations is a lengthy delay in implementing improvements.

The effectiveness of course-end evaluations is also limited by students’ insufficient understanding of the evaluative process, as well as by their fear of retribution, particularly in clinical settings where course grades have not yet been finalized. In a study of anonymous evaluations, Afonso et al9 reported that students listed a lack of formal training in evaluating as their primary barrier to effective end-of-course evaluation; other barriers included frustration with the evaluation process and fear of future encounters with the same attending physician.

The timing of evaluation can also influence the utility and validity of the process. Although evaluations performed immediately after a learning experience, such as a lecture, correlate well with those in an end-of-course evaluation,10 there is also evidence that a continuous improvement model using more timely evaluations may substantially improve the reliability of ratings.6 Finally, the relative merits of quantitative evaluation using Likert-type scales and of qualitative evaluation using free-text comments or focus groups have been examined.11–13 Clearly, both quantitative and qualitative methodologies provide important information. Therefore, we adopted both into our overall curriculum evaluation.

Components of the Curriculum

The curriculum at the Perelman School of Medicine is integrated by topic and organized into six modules, as illustrated in Figure 1.

Figure 1. The Raymond and Ruth Perelman School of Medicine at the University of Pennsylvania’s modular curriculum system. Redesigned in 1997, the modular system removed direct control over the curriculum from academic departments and integrated elements of traditional medical education across departments. Modules 1 and 2 run concurrently with Module 3 from matriculation to the start of clinical clerkships (Module 4). Module 6 runs throughout the four years of medical school.

Some aspects of these modular blocks are taught by faculty of several departments; hence, no single department is wholly responsible for a module. Module 1, the first curricular block that students encounter, lasts about 14 weeks and focuses on cellular and molecular topics. About 200 faculty—predominantly faculty from the relevant basic science departments, augmented by clinical faculty—are involved in teaching this module. Supervision is provided by a faculty module leader and an associate dean for curriculum.

After completing Module 1, students begin Module 2, which is taught over 44 weeks and consists of interdisciplinary blocks examining all major clinical systems. Organ anatomy, physiology, pathophysiology, pathology, embryology, pharmacology, and major organ-specific disease manifestations are taught in an integrated fashion during each organ block. The breadth of topics covered in this module requires rather intense supervision of the entire module, as well as of each block, to achieve successful integration of all the components.

Module 3, which runs concurrently with Modules 1 and 2 from the time of matriculation to the start of clinical clerkships, occurs two afternoons per week for 18 months and covers three major interdisciplinary topics: Introduction to Clinical Medicine, a course in taking clinical histories, performing physical examinations, and developing an approach to clinical reasoning; Clinical Evaluative Sciences, a course in evaluating the medical literature; and Health Care Systems, a course in health policy. Module 3 allows for the introduction of early clinical experiences.

Following completion of the first three modules, second-year students begin Module 4, which consists of 48 weeks of required clinical rotations, divided into 12-week blocks. Module 5 runs for the last 18 months of medical school and consists of freely chosen electives; “selectives,” such as one month of one- or two-week “Frontier Courses” on basic science and advanced clinical science topics, including bioethics; a three-month research project; and required subinternship experiences in medicine, emergency medicine, or pediatrics. Module 6, our educational program in professionalism, humanism, and patient communication, runs throughout the four years of medical school.

All of the modules focus on small-group learning, using clinical vignettes to develop students’ skill in integrating information to solve clinical problems. These small groups may take the form of a pathology virtual laboratory, clinical case-based problem-solving sessions, journal clubs and literature evaluations, or introductions to clinical skills such as taking a history or performing examinations.

Continuous Feedback Model

Because of our curriculum’s highly integrated structure, its emphasis on small-group learning, and the possibility of heterogeneous experiences within different small groups, we developed a continuous feedback model that involves multiple representatives from each stakeholder group: faculty, administration, and students. This system, supervised by the leaders of the relevant modules and the associate dean for curriculum, allows real-time changes whenever stakeholders identify deficiencies in, or potential improvements to, individual curricular components that need to be addressed rapidly.


Evaluating the basic medical science curriculum

Our approach to continuous curricular evaluation in the preclinical curriculum involves just-in-time weekly meetings of students in Modules 1 and 2. Each weekly meeting includes a team of faculty members and supervisors of the relevant component of the medical curriculum. Students also evaluate Modules 3 and 6 with their respective faculty at these sessions. Additionally, at the end of each module, all students taking that module are required to complete a summative evaluation using an online system (Oasis, Paul Schilling Co., Madison, Wisconsin). Below, we describe in more detail the roles of each stakeholder group in the continuous evaluation process.

Student evaluators. Eight to 10 student evaluators represent each medical school class; they are selected through elections supervised by the medical student government, a student-led entity. Student evaluators are instructed to maintain e-mail contact with classmates to assess, in an ongoing fashion, the class’s evaluation of each instructional session. They are mentored during their initial efforts at providing feedback by the module director and the associate dean for curriculum. It is repeatedly stressed to them that their role is to represent their class rather than to provide their personal critiques. To help achieve this, student evaluators receive instruction in using electronic media to conduct student polls about curricular issues. They are frequently encouraged to resist any impulse to “tell us what we want to hear”; rather, they are told, they should be rigorous and fair in their analysis. This point cannot be stressed enough: student evaluators’ desire to please is well documented in the literature.14 Student evaluators serve for at least six months and up to one year. Approximately one-third of the evaluators “retire” after six months and are replaced by newly elected representatives at key breaks in the curriculum.
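As a concrete illustration of the classwide polling described above, the sketch below tallies a class’s poll responses per instructional session and flags low-scoring sessions for the weekly meeting’s agenda. Everything here, including the session names, the 1-to-5 scale, and the flagging threshold, is a hypothetical assumption of ours; the article specifies only that evaluators use electronic media to poll their classmates.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical poll responses gathered by a student evaluator:
# (session, rating) pairs on an assumed 1-5 scale.
responses = [
    ("Lecture: Renal physiology", 4), ("Lecture: Renal physiology", 5),
    ("Lecture: Acid-base disorders", 2), ("Lecture: Acid-base disorders", 3),
    ("Small group: Hyponatremia case", 5), ("Small group: Hyponatremia case", 4),
]

FLAG_THRESHOLD = 3.5  # assumed cutoff for raising a session at the meeting

def summarize(responses: list[tuple[str, int]]) -> dict[str, float]:
    """Average the class's ratings for each instructional session."""
    by_session: dict[str, list[int]] = defaultdict(list)
    for session, rating in responses:
        by_session[session].append(rating)
    return {session: mean(ratings) for session, ratings in by_session.items()}

# Sessions below the threshold go on the weekly meeting's agenda so the
# block director can follow up with the relevant faculty.
for session, avg in summarize(responses).items():
    marker = "FLAG" if avg < FLAG_THRESHOLD else "ok"
    print(f"{marker:>4}  {avg:.2f}  {session}")
```

The design point is modest: the evaluator reports an aggregate class view rather than a personal critique, which mirrors the role the article stresses.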

Other attendees. Each curricular block is supervised by a designated faculty member (the block director), who works with another faculty member who directs the entire module of preclinical instruction (the module leader). Key administrative staff, including the associate director of curriculum and the chief administrative officer and staff of the Curriculum Office, also attend and prepare minutes of each session. Each component of the preclinical curriculum—the basic cellular and molecular portion and the organ-based portion—has a module leader. The relevant block and module leaders participate in the weekly session, while the associate dean for curriculum and the administrative coordinator for the entire curriculum participate in all sessions. In addition, all block directors meet quarterly with the supervising module leaders to review issues of general curricular interest and to discuss best practices developed in individual blocks.

Content reviewed. Weekly calendars are reviewed in sequence, with students providing commentary on each lecture and on the style and content of small-group teaching. Teaching styles, adherence to the curriculum in individual small-group sessions, clarity of written materials, sequencing of information, and students’ comfort with the taught material are evaluated for each instructional component. If problems are identified, block directors immediately assess the issue and, if necessary, provide feedback to individual faculty instructors. Changes are made in real time so that any necessary remediation can be conducted before the completion of a block or any block component.

Evaluating the clinical curriculum

The clinical curriculum is evaluated through monthly conferences, which are structured so that each clinical clerkship is assessed. Because of the large number of simultaneous clerkships, 12 to 15 student representatives are elected to evaluate the clinical clerkships. Reports of students’ experiences during that month on each clinical clerkship are reviewed.


In addition to a general discussion of important issues that have arisen in any clerkship, each month a specific clinical clerkship director is invited to attend the meeting, both to hear directly the feedback regarding the content and teaching methods of his or her particular clerkship and to allow a more in-depth discussion of the clerkship’s effectiveness.

Student evaluators. The 12 to 15 student evaluators are selected by their respective classes through elections supervised by the medical student government. They are instructed to maintain e-mail contact with classmates to assess, in an ongoing fashion, the class’s evaluation of each instructional session. In the monthly meetings, they report on their own clinical assignments as well as their classmates’ experiences.

Faculty attendees. A faculty member is designated to supervise each clinical clerkship, and he or she works with a faculty member who serves as the module leader. As noted above, one clerkship is highlighted at each monthly session. The associate dean for curriculum and the administrative coordinator for the entire curriculum also participate in each session.

Results of Real-Time Continuous Evaluation of the Curriculum

Short-term effects

Real-time student feedback has led to innovations within each module, block, and clerkship. For example, we have instituted team-based examinations in our curriculum.15 This approach began in Module 1’s anatomy block and was piloted in the nephrology block of Module 2. During one of the weekly continuous feedback sessions for the organ-based Module 2, positive student commentary on the team-based exam in nephrology led the director of the immediately following pulmonary block to initiate what is now a highly popular team-based exam. Team-based exams have since become a standard component of assessing students’ acquisition of knowledge in the majority of the blocks of Modules 1 and 2.

The structure of review sessions highlighting critical learning points has also been changed on the basis of student input during continuous feedback sessions.


When the Module 1 leader learned that the embryology block had successfully used a structured question-and-answer format rather than a mini-lecture format for its review session, the module leader had the embryology block leader work with the genetics block leader to incorporate the new format into the genetics topic review session. In another example, faculty members’ inconsistent teaching and supervision in small-group, problem-based sessions in genetics were immediately addressed through faculty development sessions on the objectives of student interaction and self-directed learning. One block restructured its small-group sessions so that the 21 students in each classroom divided into groups of three or four to solve case-based problems; they then regrouped into seven-person teams to discuss their results with the entire small-group class. This successful strategy was rapidly emulated by the subsequent organ-based block. Digitized histology and pathology specimens that allow annotations (which can be “hidden” until students assess their knowledge of the material) were introduced in one block of Module 2 with great success and, because of the rapid feedback system, were incorporated into the succeeding block in less than 30 days. Recently, a block director used his Twitter account to inform students of a particular issue in one of the lectures. Feedback in the weekly student evaluator meetings affirmed the popularity of this approach and led to increased use of social media in later sessions of the same block, as well as to a recommendation to expand the activity to other blocks.

The monthly meetings with student evaluators in clinical clerkships often uncover important problems. Because it is usually impractical for the medical school or departmental administration to closely supervise students’ clerkship activities in outpatient, community practices, it is important to receive continuous feedback from these students. Communication between students at various community sites and course representatives has enabled rapid feedback from departmental clerkship supervisors to community faculty regarding student participation in the practice’s clinical activities. It has also ensured that students have a robust clinical experience and direct involvement with a wide variety of patients.

As these various examples illustrate, change has been made quickly in response to feedback. When an issue arises during the evaluation process, block or clerkship faculty administrators typically contact the relevant faculty within 24 hours and report to the associate dean for curriculum on the outcomes of any interventions. The impact of the implemented changes is assessed in subsequent meetings with the preclinical and clinical student evaluators. We have not found any detrimental aspects to this model. Occasionally, however, there are difficult discussions with faculty identified as problematic while they are still actively engaged in teaching a specific course.

Long-term effects

Our summative evaluation system (i.e., required course-end evaluations completed by all students) provides both quantitative evaluations of each block or clerkship and qualitative, highly useful student commentary regarding each instructor, lecture, small group, and overall course effectiveness. However, the more detailed and interactive real-time feedback from the student evaluators allows us to plan long-term adjustments several months before the full analysis of the summative evaluation. Moreover, the ability to question students while issues are fresh and recalled in detail significantly enhances the final evaluations.

Several specific examples illustrate our use of continuous evaluations of the curriculum to achieve longer-term enhancements. In one instance, students on required clinical clerkships reported that nursing staff were hesitant to permit student involvement in some clinical procedures, despite the students’ adequate training and certification. This report produced a nursing–student physician collaboration, involving nurse mentoring of students, that has enhanced students’ appreciation of nursing roles in clinical care and nurses’ appreciation of students’ capabilities.


In addition, feedback about the inconsistency of student participation in basic medical procedures, such as intravenous catheter placement, resulted in the rapid initiation of simulation training.

The changes that arise from our continuous feedback model are not only specific to the style or educational methods of individual instructors but also lead to revisions of the overall curriculum. The effects related to individual faculty are most important when those instructors have ongoing interactions with students during a particular block. At the overall curriculum level, the nurse-mentoring program described above and the creation of procedure simulations resulted directly from feedback at meetings with student evaluators and stand out as substantial changes in curricular design. Any substantive changes that require curricular revisions are reported to the executive committee of the Curriculum Committee and, if approved, are reported to the Curriculum Committee at its bimonthly meeting. The implemented changes are evaluated in real time and are also assessed in subsequent quantitative summative evaluations.

Discussion

Formative evaluation has long been the province of student–teacher interactions and serves a crucial purpose in avoiding delays in correcting errors or deficiencies.16 In curriculum evaluation, however, ratings of ineffective teaching and overall low course evaluations have been the traditional means of precipitating improvements or innovations in teaching,3 and these usually benefit only successive classes. Removing the pressure of grades from the continuous evaluation process creates opportunities for real-time improvement and enhanced performance.17

By creating a system of student evaluators who meet weekly with preclinical block and curriculum supervisors or monthly with clinical clerkship directors, we have established a continuous feedback mechanism for our curriculum that offers clear benefits for faculty improvement and student satisfaction.


Student evaluators, who present the feedback, develop real skill and diplomacy in presenting both praise and criticism to faculty. Any personal animosity is removed from the interaction; the focus becomes one of quality improvement rather than quality assurance. Including the curriculum supervisors in the process allows mediation of conflicts between course faculty and students, and the focus remains on improvement.

There are, of course, potential barriers to instituting this model of curricular evaluation. First, it is very time- and resource-intensive. For the basic science curriculum, block directors are required to attend a one-hour luncheon meeting each week to engage in the interactive feedback sessions with the student evaluators. At times, faculty members have been anxious about direct confrontations with class representatives, and frank discussions about course shortcomings have occasionally required extra meetings to reduce tensions and determine a path forward. Senior faculty and administrative staff have been able to manage such situations; these episodes have emphasized the need for these individuals to attend the sessions. Staff members participate in order to record comments and to help implement any recommended changes.

Second, and perhaps more important, a successful continuous feedback program requires skillful purveyors of feedback. The medical student evaluators need several weeks of experience in the evaluation process to learn to gather classmates’ perceptions and to provide feedback to faculty in a diplomatic and constructive fashion. The students must also commit to reflecting the feedback of the entire class and not merely their own views. A staggered turnover of the student evaluators is crucial to maintaining the collegiality of the process.

Finally, the faculty and administration must be willing to accept the feedback and act quickly to implement changes. Technical issues—like the quality of teaching materials, the timeliness of their transmission to students, and the uniformity of teaching activities among small-group sessions—should be rapidly addressed by faculty.

Although the barriers to implementing this model are substantial, they can be overcome without any technical expertise. Continuous evaluation simply requires faculty and administrator time and commitment.

Students enjoy the responsibility and the leadership aspects of their role and have come to see the activity as highly worthwhile. We do not see any real barriers to implementing this approach at institutions with more traditional curricular models than ours. The keys, in our judgment, are to develop the student evaluators and to have senior faculty and a member of the school administration present at all sessions.

The real-time, continuous curricular feedback model provides meaningful, timely opportunities to correct deficiencies both in overall course characteristics and in the teaching styles or approaches of individual faculty. We recommend it as a useful model in medical education.

Acknowledgments: The authors acknowledge the invaluable assistance of Mrs. Anna Delaney, chief administrative officer and director, Curriculum Office, in the Academic Programs office of the Perelman School of Medicine at the University of Pennsylvania, in providing superb administrative support.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: Reported as not applicable.

References

1 Liaison Committee on Medical Education. Functions and Structure of a Medical School: Standards for Accreditation of Medical Education Programs Leading to the MD Degree. June 2013. http://www.lcme.org/publications/functions.pdf. Accessed October 29, 2013.
2 Abrahams M, Friedman C. Preclinical course-evaluation at U.S. and Canadian medical schools. Acad Med. 1997;71:371–374.
3 Wilkes M, Bligh J. Evaluating educational interventions. BMJ. 1999;318:1269–1272.
4 Woloschuk W, Coderre S, Wright B, McLaughlin K. What factors affect students’ overall ratings of a course? Acad Med. 2011;86:640–643.
5 Gilbert S, Davidson JS. Using the World-Wide-Web to obtain feedback on the quality of surgical residency training. Am J Surg. 2000;179:74–75.
6 Kogan J, Shea JA. Course evaluation in medical education. Teach Teach Educ. 2007;23:251–264.
7 Elzubeir M, Rizk D. Evaluating the quality of teaching in medical education: Are we using the evidence for both formative and summative purposes? Med Teach. 2002;24:313–319.
8 Feinstein E, Levine HG. Impact of student ratings on basic science portion of the medical school curriculum. J Med Educ. 1980;55:502–512.


9 Afonso NM, Cardozo LJ, Mascarenhas OA, Aranha AN, Shah C. Are anonymous evaluations a better assessment of faculty teaching performance? A comparative analysis of open and anonymous evaluation processes. Fam Med. 2005;37:43–47.
10 Shores JH, Clearfield M, Alexander J. An index of students’ satisfaction with instruction. Acad Med. 2000;75(10 suppl):S106–S108.
11 Frasier PY, Slatt L, Kowlowitz V, Kollisch DO, Mintzer M. Focus groups: A useful tool for curriculum evaluation. Fam Med. 1997;29:500–507.


12 Lloyd-Jones G, Fowell S, Bligh JG. The use of the nominal group technique as an evaluative tool in medical undergraduate education. Med Educ. 1999;33:8–13.
13 Lewis BS, Pace WD. Qualitative and quantitative methods for the assessment of clinical preceptors. Fam Med. 1990;22:356–360.
14 Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: A systematic review. JAMA. 2009;302:1316–1326.

15 Morrison G, Goldfarb S, Lanken PN. Team training of medical students in the 21st century: Would Flexner approve? Acad Med. 2010;85:254–259.
16 Miller A, Archer J. Impact of workplace based assessment on doctors’ education and performance: A systematic review. BMJ. 2010;341:c5064.
17 Hauer KE, Kogan JR. Realising the potential value of feedback. Med Educ. 2012;46:140–142.
