2014 APDS SPRING MEETING

Reliability, Validity, and Feasibility of the Zwisch Scale for the Assessment of Intraoperative Performance

Brian C. George, MD,* Ezra N. Teitelbaum, MD,† Shari L. Meyerson, MD,† Mary C. Schuller, MSEd,† Debra A. DaRosa, PhD,† Emil R. Petrusa, PhD,* Lucia C. Petito, MA,‡ and Jonathan P. Fryer, MD†

*Department of Surgery, Massachusetts General Hospital, Harvard University, Boston, Massachusetts; †Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, Illinois; and ‡School of Public Health, University of California, Berkeley, California

PURPOSE: The existing methods for evaluating resident operative performance interrupt the workflow of the attending physician, are resource intensive, and are often completed well after the end of the procedure in question. These limitations lead to low faculty compliance and potentially significant recall bias. In this study, we deployed a smartphone-based system, the Procedural Autonomy and Supervision System, to facilitate assessment of resident performance according to the Zwisch scale with minimal workflow disruption. We aimed to demonstrate that this is a reliable, valid, and feasible method of measuring resident operative autonomy.

METHODS: Before implementation, general surgery residents and faculty underwent frame-of-reference training to the Zwisch scale. Immediately after any operation in which a resident participated, the system automatically sent a text message prompting the attending physician to rate the resident’s level of operative autonomy according to the 4-level Zwisch scale. Of these procedures, 8 were videotaped and independently rated by 2 additional surgeons. The Zwisch ratings of the 3 raters were compared using an intraclass correlation coefficient. Videotaped procedures were also scored using 2 alternative operating room (OR) performance assessment instruments (Operative Performance Rating System and Ottawa Surgical Competency OR Evaluation), against which the item correlations were calculated.

This work has been supported by funding from the State of Illinois via the Excellence in Academic Medicine Program (Grant #238), the Department of Surgery at Northwestern University’s Feinberg School of Medicine, and the Department of Surgery at Massachusetts General Hospital. We declare no financial relationships with any organizations that might have an interest in the submitted work and no other relationships or activities that could appear to have influenced the submitted work. Ethical and institutional board approval was granted before conducting this study. Information on the data used in this report is available on request.

Correspondence: Inquiries to Brian C. George, MD, Department of Surgery, Massachusetts General Hospital, 55 Fruit Street GRB 425, Boston, MA 02114; fax: +(617) 724-3499; e-mail: [email protected]


RESULTS: Between December 2012 and June 2013, 27 faculty used the smartphone system to complete 1490 operative performance assessments on 31 residents. During this period, faculty completed evaluations for 92% of all operations performed with general surgery residents. The Zwisch scores were shown to correlate with postgraduate year (PGY) levels based on sequential pairwise chi-squared tests: PGY 1 vs PGY 2 (χ² = 106.9, df = 3, p < 0.001); PGY 2 vs PGY 3 (χ² = 22.2, df = 3, p < 0.001); and PGY 3 vs PGY 4 (χ² = 56.4, df = 3, p < 0.001). Comparison of PGY 4 with PGY 5 scores showed no significant difference (χ² = 4.5, df = 3, p = 0.21). For the 8 operations reviewed for interrater reliability, the intraclass correlation coefficient was 0.90 (95% CI: 0.72-0.98, p < 0.01). Correlation of Procedural Autonomy and Supervision System ratings with both Operative Performance Rating System items (each r > 0.90, all p < 0.01) and Ottawa Surgical Competency OR Evaluation items (each r > 0.86, all p < 0.01) was high.

CONCLUSIONS: The Zwisch scale can be used to make reliable and valid measurements of faculty guidance and resident autonomy. Our data also suggest that Zwisch ratings may be used to infer resident operative performance. Deployed on an automated smartphone-based system, it can be used to feasibly record evaluations for most operations performed by residents. This information can be used to counsel individual residents, modify programmatic curricula, and potentially inform national training guidelines. (J Surg Educ 71:e90-e96. © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.)

KEY WORDS: graduate medical education, surgery, educational measurement, evaluation, surgical education

COMPETENCIES: Medical Knowledge, Practice-Based Learning and Improvement

Journal of Surgical Education. © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved. 1931-7204/$30.00. http://dx.doi.org/10.1016/j.jsurg.2014.06.018

OBJECTIVES

The central goal of surgical training must be to graduate residents who are competent to operate independently. Unfortunately, surgical fellowship directors report that 66% of residents cannot operate unsupervised for 30 minutes of a major procedure.1 This suggests that graduating residents are ill prepared for independent practice. This conclusion is supported by the opinion of many surgical residents themselves.2 Many investigators are working to understand the factors contributing to this problem, but it is already clear that residents must be provided more opportunities to gain progressive autonomy in preparation for their first day as an unsupervised attending physician.3,4

Although progressive resident autonomy is an educational imperative, we must grant autonomy in a way that is safe. Indeed, 9% of surgical technical errors are linked to poorly supervised residents.5 However, patient safety is but one factor that influences the amount of autonomy provided to residents in the operating room (OR). Increased productivity pressures,6 evolving ethical considerations, and increased concern regarding malpractice liabilities have the potential to incentivize faculty to limit resident operative autonomy. One solution is to transparently and deliberately grant progressive autonomy only to those residents who have demonstrated operative competence at a lower level of responsibility. In this type of system, it is imperative that resident autonomy and competence are accurately documented.

Valid and reliable quantitative resident performance data permit faculty surgeons to individualize the amount of autonomy granted to a resident for any given procedure and patient. However, collecting these data requires continuous evaluation of resident intraoperative performance. Although widely desired,4,7 surgical educators lack a universal method for feasibly collecting, analyzing, and distributing continuous resident performance data. Existing methods for evaluating resident operative performance interrupt the attending physician’s workflow, are resource intensive, and are often completed well after the end of the procedure in question. These limitations lead to low faculty compliance and potential for significant recall bias.

In this study, we aim to demonstrate that the Zwisch scale can be used to make valid and reliable measurements of both faculty operative guidance and resident intraoperative performance. Furthermore, we describe the feasibility of continuous evaluation by deploying the Zwisch instrument on a novel smartphone-based evaluation platform.

PARTICIPANTS AND SETTING

All categorical and undesignated preliminary general surgery residents in a single department of surgery participated in this study (n = 31). All attending faculty in the same department of surgery were eligible for this study and were invited to attend a rater training session. The training session used a frame-of-reference methodology and has been previously described.8 Between October 2012 and January 2013, a total of 27 faculty raters completed the training and were subsequently enrolled in the study. This study has institutional review board approval from Northwestern University, and all resident and faculty data were deidentified before analysis. All participants signed an institutional review board–approved consent form.

DESIGN

Rating Scales

The raters were asked to evaluate the amount of guidance they provided to the resident using the “Zwisch” scale, previously described in detail.9 Since the original theoretical description was published in 2013, the names attached to each level have been revised, but the scale otherwise remains unchanged. Briefly, this 1-dimensional, behaviorally anchored ordinal scale is used by raters to grade the degree of guidance the attending surgeon provides to the trainee during most of the critical portion of the procedure. At the lowest end of this 4-level scale, the attending physician performs the critical portion while explaining each step to the resident (termed “Show and Tell”). In the next level (“Active Help”), the attending physician actively guides the resident through the critical portion of the procedure. This is in contrast to the third level (“Passive Help”), where the resident performs critical portions of the operation independently while the attending physician passively provides skilled assistance and intervenes only when necessary to make an important teaching point or to optimize patient safety. At the most advanced level (“Supervision Only”), attending physician presence is necessary only to guarantee patient safety. At this level, the resident has enough proficiency to perform the procedure independently using a less skilled assistant, and the attending surgeon does not need to be directly involved in the procedure other than to provide close supervision. These levels were coded 1 to 4, with 4 representing the most advanced level.

In addition to the Zwisch scale, raters were also asked to rank the complexity of the procedure for which they were completing an evaluation. These ratings were made relative to the surgeon’s overall experience with that same procedure. We used a 3-level scale, anchored with prompts of “Easiest 1/3,” “Average,” and “Hardest 1/3.”

Lastly, for a subset of the data, modified versions of the previously described OPRS10 and O-SCORE11 resident performance rating instruments were used to assess resident performance. The OPRS instrument was modified to exclude items that pertain only to the key steps of specific procedures. The O-SCORE instrument was modified to exclude those items that did not pertain to intraoperative performance (e.g., those evaluating the resident’s preoperative evaluation and postoperative management).
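To make the two rating items concrete, a minimal sketch of how they could be encoded for analysis is shown below. The class and member names are ours, not part of the published instrument; only the 1-to-4 and 1-to-3 codings follow the text above.

```python
from enum import IntEnum

class ZwischLevel(IntEnum):
    """Amount of guidance the attending provides (higher = more resident autonomy)."""
    SHOW_AND_TELL = 1     # attending performs the critical portion, explaining each step
    ACTIVE_HELP = 2       # attending actively guides the resident through the critical portion
    PASSIVE_HELP = 3      # resident operates; attending assists and intervenes only when needed
    SUPERVISION_ONLY = 4  # attending present mainly to guarantee patient safety

class CaseComplexity(IntEnum):
    """Procedure complexity relative to the surgeon's experience with that procedure."""
    EASIEST_THIRD = 1
    AVERAGE = 2
    HARDEST_THIRD = 3
```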


Data Collection

Between December 3, 2012 and June 24, 2013, 2 overlapping samples were prospectively collected. For the first sample, all procedures performed by a participating faculty member and a participating resident during the given period were eligible for inclusion. For each procedure, an evaluation request was automatically sent to the smartphone of the attending physician immediately after the procedure was completed. This request, in turn, included a link that took the attending physician to a custom website optimized for smartphone access. The names of the attending physician and resident, the name of the procedure, and the date of the procedure were automatically retrieved from the hospital’s operative database and used to prepopulate the evaluation form. Faculty needed to complete only 2 questions: one for the Zwisch rating and another for the complexity rating (refer to the previous section, Rating Scales). The system used for this phase of data collection has been named the Procedural Autonomy and Supervision System.

The second sample was a subset of the first and included 8 operations that were also video and audio recorded for subsequent review. This was a convenience sample, although informal attempts were made to record a variety of procedure types for residents at varying levels of training. For laparoscopic cases, the laparoscopic camera image and an external video camera image with audio were both recorded and combined into a single “picture-in-picture” video file for subsequent review. The Zwisch ratings for the residents for these operations were determined by the operating attending surgeon in the normal fashion using the Procedural Autonomy and Supervision System. Additionally, a research fellow observed the operations in person and determined a Zwisch rating for the procedure, without knowledge of the rating recorded by the attending surgeon. A separate surgery faculty member, who was blinded to both the ratings assigned by the operating attending physician and by the in-person observer, viewed the video recordings of the 8 procedures at a later date and assigned a Zwisch rating.

To test relationships between the Zwisch scale and other measures of resident operative performance, the 8 recorded procedures described previously were reviewed by 2 additional surgery faculty. One of the faculty rated each resident’s performance using the 6 general items (degree of prompting, instrument handling, respect for tissue, time and motion, operation flow, and overall performance) from the Operative Performance Rating System (OPRS).10 Other OPRS items that pertain to the key steps of the specific procedure were excluded. The other faculty rated the resident performance using the 5 intraoperative items (knowledge of procedural steps, technical performance, visuospatial skills, efficiency and flow, and communication) from the Ottawa Surgical Competency OR Evaluation (O-SCORE).11 The O-SCORE items that did not pertain to intraoperative performance (e.g., those evaluating the resident’s preoperative evaluation and postoperative management) were not assessed.

Data on the level of training (year of residency) for each resident were collected using department records. The Accreditation Council for Graduate Medical Education (ACGME) case log data were downloaded from the ACGME for each participating resident and used to determine prior resident experience.
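As an illustration only, the notification step of the smartphone workflow described above might look something like the sketch below. This is not the authors’ implementation; the function names, the URL, and the SMS-sending callback are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CompletedCase:
    """Fields pulled from the hospital's operative database once a case closes (illustrative)."""
    case_id: str
    attending_name: str
    attending_phone: str
    resident_name: str
    procedure_name: str
    date: str

def evaluation_link(case: CompletedCase) -> str:
    # The linked form is prepopulated from the operative record, so the attending
    # only answers 2 items: Zwisch level (1-4) and case complexity (1-3).
    return f"https://eval.example.org/case/{case.case_id}"  # hypothetical URL

def notify_attending(case: CompletedCase, send_sms: Callable[[str, str], None]) -> None:
    """Text the attending an evaluation request immediately after the case is completed."""
    message = (
        f"Please rate {case.resident_name}'s autonomy for "
        f"{case.procedure_name} on {case.date}: {evaluation_link(case)}"
    )
    send_sms(case.attending_phone, message)
```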

Statistical Analysis

Descriptive statistics were used to describe the distribution of faculty evaluations, of ratings for each PGY level, of case complexity, and of prior resident experience. The chi-squared test was used to test for differences in Zwisch ratings between sequential PGY, case complexity, and prior experience groups. Interrater reliability between the Zwisch scores determined by the 3 raters was tested using an intraclass correlation coefficient. The relationships between the Zwisch rating originally assigned by the operating attending surgeon and the individual item scores on the OPRS and O-SCORE scales from these video ratings were tested using Spearman’s rank correlation coefficient.
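For readers who want to reproduce this style of analysis, a minimal sketch in Python follows. It is not the authors’ code: it assumes the pandas and SciPy packages, uses the PGY 1 and PGY 2 counts from Table 2 for the chi-squared example, and uses made-up paired ratings for the Spearman example.

```python
import pandas as pd
from scipy.stats import chi2_contingency, spearmanr

# Sequential pairwise chi-squared test: rows are adjacent PGY groups, columns are
# Zwisch levels 1-4. The counts below are the PGY 1 and PGY 2 rows of Table 2 and
# should approximately reproduce the reported chi-square of 106.9 (df = 3).
pgy1_vs_pgy2 = pd.DataFrame(
    [[89, 65, 6, 0],      # PGY 1
     [35, 166, 78, 3]],   # PGY 2
    index=["PGY 1", "PGY 2"],
    columns=["Show and Tell", "Active Help", "Passive Help", "Supervision Only"],
)
chi2, p, dof, _ = chi2_contingency(pgy1_vs_pgy2)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.2g}")

# Spearman rank correlation between the attending's Zwisch ratings and a blinded
# video rater's scores on one OPRS item (hypothetical paired ratings for 8 cases).
zwisch = [2, 3, 3, 4, 2, 3, 4, 1]
oprs_item = [3, 4, 4, 5, 2, 4, 5, 2]
rho, p_rho = spearmanr(zwisch, oprs_item)
print(f"rho = {rho:.2f}, p = {p_rho:.2g}")

# The interrater reliability across the 3 raters could be computed as an intraclass
# correlation coefficient, e.g., with pingouin.intraclass_corr on a long-format table
# of (procedure, rater, zwisch_rating); omitted here for brevity.
```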

RESULTS

During the study period, 27 faculty completed 1490 operative performance assessments on 31 residents. Additional summary demographic data are included in Table 1. The number of procedures evaluated per faculty member ranged from 1 to 185, with a mean of 39.7. The distribution is skewed to the right, with a median of 26.5. The sample included 127 unique procedure types, with many infrequently performed procedures (median 2 performances). The most common procedures were laparoscopic cholecystectomy (n = 201) and laparoscopic appendectomy (n = 152). With respect to case complexity, 193 cases (13.0%) were rated as being in the “Easiest 1/3,” with 895 (60.0%) rated as “Average,” and 402 (27.0%) as “Hardest 1/3.”

TABLE 1. Demographic Data

Number of residents                   31
  By year of residency
    Year 1                            9
    Year 2                            6
    Year 3                            5
    Year 4                            5
    Year 5                            6
Number of attending physicians        27
Number of procedures                  1490
Number of types of procedures         127


TABLE 2. Zwisch Levels by PGY*

PGY      Show and Tell    Active Help     Passive Help    Supervision Only
1        89 (55.6)        65 (40.6)       6 (3.8)         0 (0.0)
2        35 (12.4)        166 (58.9)      78 (27.7)       3 (1.1)
3        75 (24.1)        142 (45.7)      80 (25.7)       14 (4.5)
4        17 (7.9)         76 (35.3)       78 (36.3)       44 (20.5)
5        32 (6.2)         147 (28.7)      215 (41.9)      119 (23.2)
Totals   248 (16.7)       596 (40.2)      457 (30.9)      180 (12.2)

*Number of procedures (percentage of total for the given PGY).

Faculty completed evaluations for 92% of all operations performed with general surgery residents at the following levels: 248 (16.7%) Zwisch ratings at “Show and Tell,” 596 (40.2%) at “Active Help,” 457 (30.9%) at “Passive Help,” and 180 (12.2%) at “Supervision Only.” For the video sample, we collected data on 4 residents operating with 2 faculty surgeons during 8 total procedures encompassing 5 different types of procedures. Each resident was in a different postgraduate year (PGY 2 through 5). The recorded procedures consisted of two laparoscopic cholecystectomies, two open inguinal hernia repairs, two parathyroidectomies, one total thyroidectomy, and one laparoscopic ventral hernia repair.

With a single exception, increasing Zwisch scores correlated with increasing PGY levels, based on sequential pairwise chi-squared tests: PGY 1 vs PGY 2 (χ² = 106.9, df = 3, p < 0.001); PGY 2 vs PGY 3 (χ² = 22.2, df = 3, p < 0.001); and PGY 3 vs PGY 4 (χ² = 56.4, df = 3, p < 0.001). Comparison of PGY 4 with PGY 5 scores showed no significant difference (χ² = 4.5, df = 3, p = 0.21). No first-year residents received a supervision score of 4 (“Supervision Only”). However, even at PGY 5, only 23.2% of the observed operations were supervised at level 4. (Refer to Table 2 for details and Fig. 1 for a graphical summary.)

Similarly, increasing operative complexity was associated with more guidance being provided to the resident, based on sequential pairwise chi-squared tests: Easiest 1/3 vs Average (χ² = 51.3, df = 3, p < 0.001) and Average vs Hardest 1/3 (χ² = 87.0, df = 3, p < 0.001). The residents were provided with minimal guidance (“Supervision Only”) in just 4.1% of operations deemed to be in the “Hardest 1/3” category. (These results are summarized in Table 3 and graphically in Fig. 2.)

The ACGME case log data were used to calculate how many times a resident had performed a procedure of a given type before receiving a Zwisch evaluation for a procedure of the same type. When a resident had performed 5 or fewer prior operations of the given type, the median Zwisch level was “Active Help,” whereas if a resident had performed more than 5 prior operations of the given type, the median Zwisch level was “Passive Help” (χ² = 46.0, df = 3, p < 0.001). Both distributions had an interquartile range of one Zwisch level (Fig. 3).

The interrater reliability for the Zwisch ratings assigned by the operating attending physician, in-person observer, and video rater for the 8 operations was high (intraclass correlation coefficient = 0.90, 95% CI: 0.72-0.98, p < 0.001). The correlations between the Zwisch ratings and individual OPRS and O-SCORE scale items are shown in Table 4. The Zwisch ratings given by the operating attending physicians were significantly correlated to a high degree with each of the OPRS and O-SCORE item scores assigned by the blinded video raters (ρ between 0.86 and 0.94, all p < 0.01). For the OPRS item that specifically measures guidance, the correlation with the Zwisch level was -0.92 (p < 0.01), where the negative sign simply indicates that the scales are inverted.

FIGURE 1. Zwisch levels by resident postgraduate year (PGY).


TABLE 3. Zwisch Levels by Rated Complexity*

Complexity     Show and Tell    Active Help     Passive Help    Supervision Only
Easiest 1/3    47 (24.4)        35 (18.1)       63 (32.6)       48 (24.9)
Average        109 (12.2)       351 (39.2)      318 (35.5)      117 (13.1)
Hardest 1/3    98 (24.4)        213 (53.0)      76 (18.9)       15 (3.7)
Totals         254 (17.0)       599 (40.2)      457 (30.7)      180 (12.1)

*Number of procedures (percentage of total for the given level of complexity).

DISCUSSION

Our results demonstrate the construct validity of the Zwisch instrument when used to measure faculty guidance. Specifically, the amount of guidance provided by the faculty decreases with an increasing level of training for all but the PGY 4 and PGY 5 levels. The lack of differentiation seen at the senior resident levels is similar to the results seen by Gofton et al with the O-SCORE11 and may reflect increased case complexity for chief residents. There is also increased guidance when operative complexity increases, and decreased guidance with increasing experience with a given operation. All of these results are as theoretically predicted. Validity is further supported by the high correlation between Zwisch ratings and the OPRS guidance item.

Importantly, we have also shown that Zwisch levels are not only a measure of supervision and guidance but also correlate strongly with resident intraoperative performance. Although the relationships between Zwisch levels, PGY, case complexity, and procedure-specific experience suggest increasing resident performance with increased Zwisch levels, it is impossible to exclude an unmeasured confounding variable as the cause of these correlations. For this reason, we also scored all the procedures using instruments that have already been demonstrated to provide valid measurement of resident performance. The high correlation between Zwisch scores and scores from OPRS and O-SCORE items can be interpreted to mean that Zwisch levels can be used to measure intraoperative performance.

Furthermore, these data empirically support the hypothesized link between faculty guidance and resident performance.

In this study, we have also shown that the Zwisch scale can be reliably used in everyday practice by faculty, provided they have undergone a small amount of rater training. The high interrater agreement is particularly notable given the different perspectives of each of the raters: attending physician, in-room observer, and video rater. This suggests that Zwisch ratings can be accurately assessed via a variety of methods. For example, one does not need to be in the OR to accurately assign a Zwisch score as long as an audiovisual recording of the faculty-resident interaction is available for review. This further supports the conclusion that the Zwisch instrument does indeed measure the desired construct. Additionally, the OPRS and O-SCORE forms were completed by raters different from those who completed the Zwisch ratings, thereby minimizing the possibility of bias.

Deployed on an automated smartphone-based system, the Zwisch scale can be used to feasibly rate resident intraoperative performance for most operations in which residents participate. Our response rate of 92% suggests that reducing the complexity of the evaluation instrument can, for summative assessment, provide nearly continuous evaluation of resident intraoperative performance.

Although we have shown that the Zwisch instrument can be used to provide valid and reliable performance assessments, the global nature of this rating scale makes it suitable only for summative assessment.

FIGURE 2. Zwisch levels by complexity.


FIGURE 3. Zwisch levels grouped by the prior experience a resident has had with any rated procedure before assessment.

Other instruments (especially the OPRS) include more fine-grained evaluation questions that are more readily useful for formative assessment. We are actively working to address this situation in the next version of our software by providing a dictation feature. With this feature, faculty can optionally provide specific verbal (formative) feedback for the resident in a manner that is quick and intuitive. It remains to be seen whether this unstructured feedback mechanism will adequately supplement the usual intraoperative feedback from faculty.

Limitations

Our large group evaluation asked raters to score just 2 items: Zwisch level and complexity. We used ACGME case log data and resident PGY data to supplement the prospectively collected data. However, one assumes that there are multiple other possible confounders for assessing faculty guidance. We chose not to try to measure those confounders given our focus on feasibility, which would necessarily be compromised by increasing complexity. It is also unknown how much rater bias may influence these results, owing to the limited number of procedures that were evaluated by multiple raters. The other major limitation of this study is that it represents the experience of a single institution. The case mix, the culture around teaching and resident operative autonomy, resident innate abilities, and faculty willingness to complete resident performance evaluations may not be representative of other surgical training programs. Therefore, it may be the case that the Zwisch instrument will not provide valid data in other settings.

TABLE 4. Spearman Correlations Between Zwisch Ratings Assigned by the Operating Surgeon and Individual Operative Performance Rating System (OPRS) and Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) Item Scores Determined by Blinded Faculty Video Raters

Item                                                  ρ       p value
Operative Performance Rating System (OPRS)
  Degree of prompting or direction                    0.92    0.001
  Instrument handling                                 0.94    0.005
  Respect for tissue                                  0.94    0.005
  Time and motion                                     0.94    <0.001
  Operation flow                                      0.95    <0.001
  Overall performance                                 0.95    <0.001
Ottawa Surgical Competency OR Evaluation (O-SCORE)
  Knowledge of procedural steps                       0.94    <0.001
  Technical performance                               0.93    0.001
  Visuospatial skills                                 0.92    0.001
  Efficiency and flow                                 0.86    0.007
  Communication                                       0.92    0.001

Potential Benefits

There are multiple levels at which this quantity of data might be beneficial. Firstly, historical resident performance data provided to faculty in the immediate preoperative period allow them to individualize the supervision and teaching provided to each resident. For example, if a faculty member can see how much guidance was required by a resident in the last 5 times they performed a similar procedure, they can more confidently adjust how much autonomy is granted on the sixth repetition. The availability of risk-related resident performance data permits faculty to optimize resident autonomy while maintaining patient safety.

In addition to the local benefits, widespread measurement of resident operative performance also promises to support more far-reaching reform efforts. One could feasibly track the progression of individual residents and use these data to tailor a curriculum of progressive autonomy based on competency instead of fixed time periods. Taken in aggregate, data on the progression of resident performance can also be used to predict how many procedures most residents must complete to achieve competency for any specific procedure.


These data could be compared with the actual exposure of residents to various procedures, with the goal of identifying those areas in which simulation might be most beneficial. They might also be used to inform the ongoing revision of national training guidelines. Lastly, performance data can be used to quantify the effect of specific educational interventions at defined time points, investigations that are currently limited owing to a lack of intraoperative performance metrics. This might aid educational researchers as they work to identify best practices and to further delineate the strengths and limitations of existing programmatic curricula.
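As a hypothetical sketch of the “last 5 similar cases” summary mentioned above, the following assumes a pandas table of evaluations with resident, procedure, date, and zwisch columns; none of these names come from the study database.

```python
import pandas as pd

def recent_guidance(evals: pd.DataFrame, resident: str, procedure: str, n: int = 5) -> float:
    """Median Zwisch level (1-4) over the resident's most recent n evaluations
    for the given procedure type (NaN if there is no history)."""
    history = (
        evals[(evals["resident"] == resident) & (evals["procedure"] == procedure)]
        .sort_values("date")
        .tail(n)
    )
    return history["zwisch"].median()
```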

CONCLUSION

This study demonstrates that a 1-dimensional global rating scale can be used to collect faculty guidance data that accurately and reliably measure resident operative performance. Furthermore, we have demonstrated an integrated smartphone-based method that makes it feasible to perform continuous resident intraoperative performance evaluation. We hypothesize that large-scale continuous collection of resident performance data could provide a quantitative basis for ongoing local and national efforts to improve the current system of surgical education.

ACKNOWLEDGMENTS

The authors wish to thank Vanessa Viggiano of the School of Public Health at the University of California, Berkeley, for her help during the preparation of this manuscript.

REFERENCES

1. Mattar SG, Alseidi AA, Jones DB, et al. General surgery residency inadequately prepares trainees for fellowship: results of a survey of fellowship program directors. Ann Surg. 2013;258(3):440-449.

2. Yeo H, Viola K, Berg D, et al. Attitudes, training experiences, and professional expectations of US general surgery residents: a national survey. J Am Med Assoc. 2009;302(12):1301-1308.

3. Klingensmith ME, Lewis FR. General surgery residency training issues. Adv Surg. 2013;47(1):251-270.

4. Johns M. Ensuring an Effective Physician Workforce for the US: Summary of a Conference sponsored by the Josiah Macy Jr. Foundation, held in Atlanta, GA, 2011.

5. Regenbogen SE, Greenberg CC, Studdert DM, Lipsitz SR, Zinner MJ, Gawande AA. Patterns of technical error among surgical malpractice claims: an analysis of strategies to prevent injury to surgical patients. Ann Surg. 2007;246(5):705-711.

6. Podnos YD, Wilson SE. Threats to the surgical residency in the academic medical center. Arch Surg. 2001;136(2):161.

7. Bhatti NI, Cummings CW. Viewpoint: competency in surgical residency training: defining and raising the bar. Acad Med. 2007;82(6):569-573.

8. George BC, Teitelbaum EN, DaRosa DA, et al. Duration of faculty training needed to ensure reliable OR performance ratings. J Surg Educ. 2013;70(6):703-708.

9. DaRosa DA, Zwischenberger JB, Meyerson SL, et al. A theory-based model for teaching and assessing residents in the operating room. J Surg Educ. 2013;70(1):24-30.

10. Larson JL, Williams RG, Ketchum J, Boehler ML, Dunnington GL. Feasibility, reliability and validity of an operative performance rating system for evaluating surgery residents. Surgery. 2005;138(4):640-649.

11. Gofton WT, Dudek NL, Wood TJ, Balaa F, Hamstra SJ. The Ottawa Surgical Competency Operating Room Evaluation (O-SCORE): a tool to assess surgical competence. Acad Med. 2012;87(10):1401-1407.

