The Laryngoscope
© 2015 The American Laryngological, Rhinological and Otological Society, Inc.

Identifying Quality Indicators of Surgical Training: A National Survey

Nasir I. Bhatti, MD, MHS; Aadil Ahmed, MD; Sukgi S. Choi, MD

Objectives/Hypothesis: Evidence shows a positive association between the quality of surgical training received and patient outcomes. Traditionally, improved patient outcomes have been linked with increased operative volume; however, whether this finding generalizes to surgeons in training is unclear. In addition, reduced exposure due to work-hour restrictions calls for alternative methods of determining the quality of training. The purpose of this study was to identify indicators of high-quality training by surveying trainees and trainers.

Methods: A questionnaire was developed based on input from faculty and previous studies. The survey was divided into three sections asking about indicators of quality training, methods of measuring them, and interventions for improvement. The questionnaire was administered to program directors (PDs) and senior residents of otolaryngology training programs nationwide.

Results: The strongest indicators of quality training, agreed upon by both residents and PDs, were faculty development toward being an ideal trainer, a balanced level of supervision and independence, logbooks documenting exposure to a high volume and variety of pathology, and continuous evaluation with provision of feedback. However, structured teaching, simulation-based training, and trainee examination scores failed to reach agreement as metrics of high-quality surgical training.

Conclusion: Measuring the quality of a residency training program is imperative to producing competent surgeons and ensuring patient safety. The results of this study will help residency programs better train their residents and improve the quality of their teaching.

Key Words: Quality indicators, surgical training, otolaryngology residency.
Level of Evidence: N/A.
Laryngoscope, 125:2685–2689, 2015

INTRODUCTION

The growing spotlight on patient safety and performance ratings in recent years has increased pressure on the health care industry to deliver high-quality patient care. Ensuring high-quality care requires high-quality training to produce competent physicians. A retrospective analysis of almost 5 million cases involving more than 4,000 clinicians from 107 US obstetrics and gynecology residency programs provided empirical support for this notion by establishing an association between complication rates and the learning environment from which the treating physician graduated. The study showed that patients had an approximately 30% higher complication rate when treated by a graduate of a bottom-quintile residency program than when treated by graduates of a top-quintile program.1

From the Department of Otolaryngology–Head and Neck Surgery, The Johns Hopkins School of Medicine (N.I.B., A.A.), Baltimore, Maryland; and the Division of Pediatric Otolaryngology, Children's Hospital of Pittsburgh (S.S.C.), Pittsburgh, Pennsylvania, U.S.A.
Editor's Note: This manuscript was accepted for publication February 19, 2015.
Nasir I. Bhatti, MD, MHS, and Aadil Ahmed, MD, contributed equally to this work as co-first authors.
This article was accepted for oral presentation at the Triological Society Combined Sections Meeting in San Diego, California, January 22–24, 2015.
The authors have no funding, financial relationships, or conflicts of interest to disclose.
Send correspondence to Sukgi S. Choi, MD, 4401 Penn Avenue, Faculty Pavilion, Pittsburgh, PA 15224. E-mail: [email protected]
DOI: 10.1002/lary.25262


Although no official top-to-bottom quintile ranking of residency programs exists for any specialty, such ratings are generally based on a number of factors. In 2014, Doximity, an online network with more than 250,000 physician members, released a list of top-performing programs across select specialties based on over 50,000 peer nominations from board-certified US physicians.2 Similarly, the eliteness of a program has been attributed to its ability to recruit the most talented individuals, as well as to its residents' national ranking on board examinations. Traditionally, however, the volume of training, whether measured as time in training or as caseload, has served as a surrogate indicator of quality for both physicians and patients when selecting a surgeon or a hospital.3 Increased operative volume does correlate with improved patient outcomes,3 but how far these results apply to trainees is not clear. In addition, residents' work-hour restrictions have already reduced exposure and case load.4 This calls for the identification of alternative indices of training quality beyond volume alone. Currently, no defined standards for high-quality surgical training in otolaryngology exist.

In this study, we administered a survey to both trainers and trainees nationwide to identify the elements of high-quality surgical training. The aim was to develop a set of consensus quality metrics that can be further tested and related to better outcomes, ultimately allowing hospitals and residency programs to provide evidence of training quality and take measures for further improvement.


TABLE I. Survey Statements and the Percentage of Respondents Who Agreed With Them as Quality Metrics.*

| Metric | Statement | Combined | Program Directors | Residents | P Value |
|---|---|---|---|---|---|
| Quality Indicators of Surgical Training | | | | | |
| Ideal trainer | Approachability, engagement, and willingness of attending surgeons to teach | 97% | 97% | 98% | 0.7 |
| Exposure | a) Variety of cases | 97% | 95% | 99% | 0.1 |
| | b) Volume of cases | 89% | 85% | 93% | 0.08 |
| Learner's autonomy | A balance between supervision and independence and a step-wise increase in autonomy for trainees | 94% | 94% | 94% | 0.9 |
| Evaluation and feedback | Continuous evaluation and provision of immediate and ongoing formative feedback | 88% | 91% | 86% | 0.4 |
| Organization | Balance between time constraints and training | 82% | 75% | 90% | 0.01 |
| Structure | Structured teaching program and didactics | 69% | 64% | 75% | 0.1 |
| Personalized training | Training to meet educational needs of the trainees | 74% | 71% | 78% | 0.2 |
| Simulation | Opportunity to perform on simulators | 38% | 25% | 52% | 0.001 |
| Methods of Measuring Quality in Training | | | | | |
| Evaluation and feedback | Both trainees' and trainers' evaluation and feedback to each other | 86% | 88% | 85% | 0.6 |
| Exposure | a) Trainee logbook for case load and variety | 79% | 81% | 78% | 0.7 |
| | b) Benchmark minimum number of cases | 65% | 54% | 76% | 0.004 |
| Progress | a) Evidence of trainee improvement (in-service scores, board scores, OSATS) | 65% | 66% | 65% | 0.8 |
| | b) Fellowship placements and appointments of recent graduates | 63% | 60% | 66% | 0.4 |
| Interventions to Improve Quality of Training | | | | | |
| Train the trainers | Faculty development | 86% | 86% | 86% | 0.9 |
| Evaluation and feedback | a) Encouragement of trainees' feedback | 83% | 88% | 79% | 0.1 |
| | b) Ongoing and continuous methods of objective evaluation and provision of feedback | 83% | 83% | 84% | 0.8 |
| Structure and organization | Implementing structured and organized methods of training; minimizing trainees' fatigue | 65% | 55% | 75% | 0.01 |
| Personalized training and simulation | a) Continuity of exposure to same attending | 65% | 61% | 70% | 0.2 |
| | b) Opportunity for deliberate practice on simulation and use of a learner-centered approach for residents' training and placement in different rotations | 43% | 28% | 58% | 0.0001 |

*Percentages are respondents rating the item 4 or 5. Statements achieving the consensus threshold (agreement from at least 75% of respondents) appear in boldface in the original. OSATS = objective structured assessment of technical skills.


MATERIALS AND METHODS

A 20-item questionnaire (Table I) was developed based on faculty input and previous studies on quality indicators of surgical training.5,6 The survey was divided into three parts, collecting participants' responses on their preferred quality indicators of surgical training, methods of measuring quality in training, and interventions that could be applied to improve training quality. A Likert scale from 1 (strongly disagree) to 5 (strongly agree) was attached to each statement to quantify the participants' level of agreement with that item as a quality indicator. Any item rated 4 or 5 by at least 75% of respondents was considered to have reached the consensus threshold and was regarded as an important metric of quality training.
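To make the consensus rule concrete, the following minimal sketch computes the share of respondents rating an item 4 or 5 and applies the 75% threshold. Python is an assumption here (the authors report tabulating in Microsoft Excel), and the rating vector is hypothetical.

```python
# Minimal sketch of the consensus rule described above: an item counts
# as an important quality metric if at least 75% of respondents rated
# it 4 or 5 on the 5-point Likert scale. Ratings below are hypothetical.

CONSENSUS_THRESHOLD = 0.75

def agreement_rate(ratings: list[int]) -> float:
    """Fraction of respondents who rated the item 4 or 5."""
    return sum(1 for r in ratings if r >= 4) / len(ratings)

def reaches_consensus(ratings: list[int]) -> bool:
    """True if the item meets the 75% consensus threshold."""
    return agreement_rate(ratings) >= CONSENSUS_THRESHOLD

# Hypothetical responses to one survey item
# (1 = strongly disagree ... 5 = strongly agree).
item_ratings = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]
print(f"Agreement: {agreement_rate(item_ratings):.0%}")         # 80%
print(f"Consensus reached: {reaches_consensus(item_ratings)}")  # True
```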


The survey was sent as a Web link, via SurveyMonkey (a Web-based survey tool), to the program directors (PDs) of 106 Accreditation Council for Graduate Medical Education (ACGME)-accredited otolaryngology residency programs. They were asked to complete the survey themselves and to forward it to their final-year residents. No identifying information about the participants was collected, and an automatic weekly reminder to complete the survey was sent to nonresponders during November and December 2014. Participation in the study was voluntary, with no financial or other remuneration offered. The survey was expected to take 2 to 3 minutes to complete. This study was approved by the institutional review board of the Johns Hopkins University School of Medicine, and completion of the survey was considered consent for the study.

The data were imported into Microsoft Excel 2010 (Microsoft Corp., Redmond, WA), in which all calculations were performed and tables generated. The chi-squared test was used to calculate P values; P < .05 was considered statistically significant.
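As a sketch of the between-group comparison, the snippet below runs a chi-squared test on a 2×2 table of agree (rating 4–5) versus not-agree counts for PDs and residents. It assumes Python with SciPy rather than the Excel workflow the authors describe; the counts are back-calculated approximately from the reported percentages for the simulation item (25% of 65 PDs, 52% of 88 residents) and are illustrative only.

```python
# Sketch of the chi-squared comparison between program directors (PDs)
# and residents for a single survey item. Counts are approximate,
# back-calculated from the reported percentages for the simulation item;
# columns are (rated 4-5, rated 1-3).
from scipy.stats import chi2_contingency

observed = [
    [16, 49],  # PDs: ~25% of 65 respondents agreed
    [46, 42],  # residents: ~52% of 88 respondents agreed
]
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p_value:.4f}")
# A P value below .05 would be reported as a significant difference
# in agreement between the two groups.
```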


TABLE II. Metrics of Quality in Order of Agreement.

| | Quality Indicators | Measurement of Quality | Interventions |
|---|---|---|---|
| Strong | Ideal trainer; Exposure; Learner's autonomy; Evaluation and feedback; Organization | Evaluation and feedback; Case volume and variety | Faculty development; Evaluation and feedback |
| Weak | Personalized training; Structured teaching; Simulation | Benchmark minimum number of cases; Improvement in scores and fellowship placements | Structure and organization; Personalized training and simulation |


RESULTS

Quality Indicators
Sixty-five of 106 (61%) program directors and 88 of 310 (28%) final-year residents responded. Table I shows the percentage of respondents who agreed with each of the quality metrics queried in our survey. The two top-ranked quality indicators, agreed upon by both residents and PDs, were the role of the attending as an ideal trainer and exposure to a variety of cases during training, as shown in Table II. Ninety-three percent of residents also ranked volume of cases as a strong indicator of quality, which was likewise rated highly by 85% of PDs. Although the majority of participants supported the role of faculty as ideal trainers, this supervision should be balanced with increasing responsibility for residents: 94% of both residents and PDs favored learner's autonomy. Objective evaluation and provision of feedback was also considered a strong indicator of quality training, supported by 88% of respondents (91% of PDs, 86% of residents).

A significant difference (P = .01) between residents and PDs was noted in responses regarding an organized training schedule as a quality indicator. Interestingly, whereas 90% of residents responded that a balanced training schedule that kept time constraints to a minimum was an indicator of quality training, PDs did not agree, with only 75% marking it as such. Similarly, personalized training was supported by more residents (78%) than PDs (71%); however, the difference was not significant.

Structured teaching and training on simulators were not considered good quality indicators by the survey participants overall, as shown in Table II. Seventy-five percent of residents favored structured teaching and didactics, in contrast to 64% of PDs. Similarly, simulation was marked as a quality indicator by 52% of residents compared with only 25% of PDs, a significant difference (P = .001). Somewhat surprisingly, 16% of residents and 28% of PDs rated simulation as 1 or 2 on the Likert scale, disagreeing or strongly disagreeing with it as a quality indicator of surgical training, as shown in Figure 1.

Fig. 1. Varied response to simulation-based training as a quality indicator by residents and program directors (PDs). More residents than PDs consider simulation-based training a strong quality metric on a 5-point Likert scale. Each area of the pie corresponds to a rating of 1 to 5 on the Likert scale; the percentages shown are the proportion of responses received for each rating.



Measurement of Quality
Objective, ongoing evaluation and feedback was again ranked high, endorsed by both residents (85%) and PDs (88%) as a metric for measuring the quality of surgical training. This was followed by 79% of participants favoring a trainee logbook of case load and case variety, in agreement with the results above supporting case load and variety as a quality indicator. A benchmark minimum number of cases received a divided response, favored by 76% of residents but only 54% of PDs (P = .004). Examination scores and fellowship placements did not achieve the consensus threshold among either residents or PDs to be regarded as strong metrics for measuring a program's performance; however, 66% of residents did consider fellowship placements important and ranked them as a strong quality metric.

Interventions for Improving Quality
Among interventions for improving training quality, faculty development ranked first, supported by 86% of participants. Evaluation and feedback was also rated highly, supported by 83% of respondents as an intervention to improve training quality. Here again, structured teaching and an organized schedule to prevent time constraints were not considered an important intervention to improve quality, and a significant difference of opinion between the two groups was apparent (P = .01): 75% of residents supported it as a step toward better quality, in contrast to 55% of PDs. Personalized training in the form of continuity of assignment to the same attending received a similar response; 70% of residents preferred working with the same attending, as opposed to 61% of PDs. Deliberate practice and the opportunity for simulation training had only 43% of respondents (28% of PDs, 58% of residents; P = .0001) favoring it as a strategy for better training.

DISCUSSION

To recruit capable trainees and maintain accreditation, residency programs need to establish high-quality training. Traditionally, training through case load has served as the indicator of high-quality training. However, programs with smaller volumes and ACGME work-hour limitations require exploring additional avenues of quality training that are applicable to all programs. We elected to survey PDs and residents because they are the main stakeholders in a training program and bear more responsibility than anyone else for ensuring quality.6 The resulting feedback from trainers and trainees may not only help identify metrics that can be regarded as indices of quality in surgical training but also allow a comparison of differences between the two groups.


In addition to identifying standards of quality, we explored metrics to measure it and subsequent focused interventions to improve it. It should be noted that quality is very difficult to quantify in real-life settings; the focus of this study was to discover alternative metrics of quality that are easy to measure and exportable to programs lacking larger volumes. Similar to the Delphi technique, this study collected the opinions of both learners and teachers on what they felt was the most effective way to learn and teach, respectively. However, these consensus metrics need further validation and testing, similar to the study by Asch et al.,1 to prove better training outcomes (patient morbidity and mortality rates, patient satisfaction, board certification rates, etc.). To our knowledge, this is the first study in otolaryngology addressing these issues of quality in education and training.

The results of this survey support the notion that high-volume hospitals are indicative of better quality. Both residents and PDs agreed that exposure to case load and diversity of pathology is one of the strongest quality indicators of surgical training. High-volume programs tend to have better facilities and more experienced surgeons,3 providing high-quality training. Both residents and PDs also agreed that trainee logbooks of case load and variety should serve as a metric to measure quality. Surprisingly, benchmarking a minimum number of cases was not supported by as many PDs as residents. It may be argued that faculty surgeons are focused on timeliness and efficiency in the operating room, whereas residents primarily hope to use these cases as learning opportunities. A possible solution for this discordance may be a modular approach, in which the faculty work collaboratively with the resident, focusing on the steps of the procedure the trainee has not yet mastered.

However, having experienced surgeons in a program may not suffice; those surgeons must also be able to transfer their knowledge and experience to trainees. This is probably why the most agreed-upon quality indicator was having attendings who are able to teach and engage with the residents, and why the majority of respondents also called for faculty development programs as a major intervention to enhance the quality of training.

The respondents felt that supervision is necessary but also highlighted a gradual increase in residents' independence. Increased responsibility challenges residents to develop greater motivation and perceived competence in their assigned activities, which ultimately produces positive outcomes.7 This was supported empirically by a study in which otolaryngology residents showed improved performance on in-service examinations when a resident-controlled curriculum was implemented.8 Learner's autonomy is part of a learner-centered approach that focuses on the learner's educational needs and provides training to cater to those needs. Not surprisingly, this form of need-based training

was more agreeable to residents than to PDs. A learner-centered approach, or personalized training, is a relatively new concept, and more evidence is needed to convince faculty of its efficacy, the majority of whom may be slow to adopt novel methods of teaching.

A similar response was seen for the opportunity for simulation training, which failed to achieve agreement as an effective quality indicator or as an intervention to improve quality. This may be because simulation-based training (SBT) has yet to be routinely incorporated into otolaryngology training. Although various simulators have been developed and validated for different otolaryngology procedures, demonstrating the acquisition of skills and their transfer to the operating room in a safe and time-effective manner, very few academic institutions have been able to adopt such a curriculum.9 The main barriers cited are cost, inadequate human resources, difficult integration of SBT into educational strategy, and logistics.10 In addition, the lack of coordinated effort, flaws in study design, changes in simulator-validation concepts, and limited attention to skill retention are obstacles to implementing SBT.10 Those who have already acquired simulators identify lack of free time, the financial consequences of missing work, lack of standardized courses, fear of an inaccurate reflection of one's own clinical ability, and the absence of policies promoting this type of education as the main barriers to practicing on simulators.11 Adoption will be a slow process; however, once adopted, SBT will be a powerful tool for training and evaluation in a time- and resource-limited environment. This trend is already reflected in our results, in which twice as many residents as PDs favored SBT.

It was not surprising that there was agreement on objective evaluation and feedback as a quality indicator as well as a metric to evaluate and improve training. The usefulness of objective evaluation and provision of formative feedback is well established, and a number of objective tools to measure surgical skills have been developed and are in practice.12,13 An ongoing assessment process continually identifies areas in which the trainee is lacking so that immediate remediation is possible. It was also interesting that, whereas residents supported an organized and structured training curriculum to keep fatigue and time constraints at a minimum, PDs did not reach agreement in identifying it as a way to improve training quality. It should be kept in mind that almost all PDs trained in more traditional systems, with long hours and an abundance of operative cases; their main concern, which may explain these results, is clinical exposure and training opportunities already reduced by ACGME duty-hour rules.

Although this study provides important information about quality metrics as perceived by the otolaryngology community, it has certain limitations. This survey


sampled the views of both PDs and residents; however, the response rate was not high, especially among residents. It is understandable that PDs have time constraints and may not have had a chance to complete such surveys. The lower response from residents is probably because they were invited to complete the survey via their PDs, and no reminder could be sent to them directly. In addition, only final-year residents were invited to participate, so an opportunity to assess the effect of experience on opinions was missed. To keep the survey short and feasible, some metrics were combined, which could have affected respondents' opinions. Finally, the results of the survey are perceived opinions; the actual correlation of these metrics with training outcomes remains to be measured. Despite these limitations, the findings of this study provide a framework for further defining quality standards of training in otolaryngology.

CONCLUSION

Reduced clinical exposure and the drive for increased competency across health care require enhanced quality of training. The results presented here identify indicators of high-quality surgical training in otolaryngology that will enable programs to take measures to maximize the quality of their training.

Acknowledgment
The authors wish to thank all the program directors and residents who took time to complete this survey.

BIBLIOGRAPHY
1. Asch DA, Nicholson S, Srinivas S, Herrin J, Epstein AJ. Evaluating obstetrical residency programs using patient outcomes. JAMA 2009;302:1277–1283.
2. Harder B. Doctors name America's top residency programs. U.S. News & World Report. February 20, 2014. http://health.usnews.com/health-news/top-doctors/articles/2014/02/20/doctors-name-americas-top-residency-programs. Accessed January 12, 2015.
3. Birkmeyer JD, Stukel TA, Siewers AE, Goodney PP, Wennberg DE, Lucas FL. Surgeon volume and operative mortality in the United States. N Engl J Med 2003;349:2117–2127.
4. Rosenbaum L, Lamas D. Residents' duty hours—toward an empirical narrative. N Engl J Med 2012;367:2044–2049.
5. Singh P, Aggarwal R, Zevin B, Grantcharov T, Darzi A. A global Delphi consensus study on defining and measuring quality in surgical training. J Am Coll Surg 2014;219:346–353.e7.
6. Singh P, Aggarwal R, Pucher PH, Duisberg AL, Arora S, Darzi A. Defining quality in surgical training: perceptions of the profession. Am J Surg 2014;207:628–636.
7. Williams GC, Deci EL. The importance of supporting autonomy in medical education. Ann Intern Med 1998;129:303–308.
8. Reh DD, Ahmed A, Li R, Laeeq K, Bhatti NI. A learner-centered educational curriculum improves resident performance on the otolaryngology training examination. Laryngoscope 2014;124:2262–2267.
9. Arora A, Lau LY, Awad Z, Darzi A, Singh A, Tolley N. Virtual reality simulation training in otolaryngology. Int J Surg 2014;12:87–94.
10. Stefanidis D, Sevdalis N, Paige J, et al. Simulation in surgery: what's needed next? Ann Surg 2014. Epub ahead of print.
11. Savoldelli GL, Naik VN, Hamstra SJ, Morgan PJ. Barriers to use of simulation-based education. Can J Anaesth 2005;52:944–950.
12. Francis HW, Masood H, Chaudhry KN, et al. Objective assessment of mastoidectomy skills in the operating room. Otol Neurotol 2010;31:759–765.
13. Ahmed A, Ishman SL, Laeeq K, Bhatti NI. Assessment of improvement of trainee surgical skills in the operating room for tonsillectomy. Laryngoscope 2013;123:1639–1644.

