Journal of Clinical Anesthesia (2015) 27, 290–295

Original Contribution

Feasibility of patient and peer surveys for Maintenance of Certification among diplomates of the American Board of Anesthesiology☆

David O. Warner MD (Professor of Anesthesiology; Director, the American Board of Anesthesiology)a,b,⁎, Huaping Sun PhD (Manager of Psychometrics and Research)b, Ann E. Harman PhD (Chief Assessment Officer)b, Deborah J. Culley MD (Associate Professor of Anesthesiology; Director, the American Board of Anesthesiology)b,c

a Department of Anesthesiology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55905, USA
b The American Board of Anesthesiology, Inc, 4208 Six Forks Road, Suite 1500, Raleigh, NC 27609, USA
c Department of Anesthesiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA

Received 7 November 2014; revised 15 December 2014; accepted 2 March 2015

Keywords: American Board of Anesthesiology (ABA); Maintenance of Certification in Anesthesiology Program (MOCA); MOCA Patient Care Survey; MOCA Peer Survey

Abstract

Study objective: The initial developmental standards for Maintenance of Certification programs proposed by the American Board of Medical Specialties included the administration of patient and peer surveys by the diplomate every 5 years. The aim of this pilot study was to determine the feasibility of Maintenance of Certification in Anesthesiology Program (MOCA) patient and peer surveys in a selected group of American Board of Anesthesiology (ABA) diplomates.
Design: Pilot test of two survey instruments, the MOCA Patient Care Survey and the MOCA Peer Survey.
Setting: The ABA, Raleigh, NC.
Subjects: ABA-certified anesthesiologists who were active examiners for the primary certification oral examination as of January 2013.
Measurements: Fifty-one participating physicians in the patient survey group distributed brochures, which included a link to the MOCA Patient Care Survey, to up to 100 consecutive patients at the point of care. Fifty-one participating physicians in the peer survey group distributed invitations to the MOCA Peer Survey via e-mail to 20 peers in a variety of roles. Participants developed and evaluated a practice improvement plan based on survey results. Participants were also surveyed on their opinions on the feasibility of implementing the piloted survey instrument in their practices.
Main results: Response rates for the patient care and the peer surveys were 15% and 75%, respectively. Both surveys indicated a high level of satisfaction with the diplomates; approximately two-thirds of physicians could not identify practice areas in need of improvement.
Conclusions: These results suggest that threats to the validity of these surveys include distribution bias for peer surveys and response bias for patient surveys, and that the surveys often do not provide actionable information useful for practice improvement. Alternative approaches, such as including anesthesiologists within an integrated institutional evaluation system, could be explored to maximize the benefits of physician assessments provided by peers and patients.

© 2015 Elsevier Inc. All rights reserved.

☆ This study was supported by the ABA, Raleigh, NC.
⁎ Corresponding author. Department of Anesthesiology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55905, USA. Tel.: +1 507 205 4288; fax: +1 507 255 7300. E-mail address: [email protected] (D.O. Warner).
http://dx.doi.org/10.1016/j.jclinane.2015.03.002
0952-8180/© 2015 Elsevier Inc. All rights reserved.

1. Introduction

The American Board of Medical Specialties (ABMS, Chicago, IL) and its 24 member boards have modified the certification process for physicians by requiring, after initial certification, an ongoing program of lifelong learning and continual professional development in the 6 core competencies of patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice. This process is known as Maintenance of Certification (MOC) and has 4 required components: part I, professional standing; part II, lifelong learning and self-assessment; part III, cognitive expertise; and part IV, practice performance assessment and improvement. For the American Board of Anesthesiology, Inc (ABA, Raleigh, NC), all certificates issued before 2000 were non-time limited, whereas all certificates issued in or after 2000 are time limited and require participation in the MOC in Anesthesiology Program (MOCA) to maintain certification.

The initial developmental standards for MOC programs proposed by the ABMS included the administration of patient and peer surveys by the diplomate every 5 years; these standards are still under consideration and have not been mandated. The patient survey is intended to aid diplomates in assessing their patient communication skills and includes a group of "communication core" survey questions recommended by the standards, in addition to specialty-specific survey questions. The peer survey is intended to aid diplomates in evaluating their communication skills with members of the health care team. The initial ABMS standards recommended a minimum of 45 completed patient surveys and 10 completed peer surveys per diplomate.

However, there are several potential barriers to survey completion in anesthesiology practice, including the relatively brief encounters that most patients have with their anesthesiologists during the stressful perioperative period and an ongoing decline in overall survey response rates related to "survey fatigue" and other factors [1]. Thus, it is not clear whether these developmental MOC standards are feasible in the practices of anesthesiologists. The aim of this pilot study was to determine the feasibility of MOCA patient and peer surveys in a selected group of ABA diplomates.

2. Materials and methods

This study was deemed exempt from review by the Mayo Clinic Institutional Review Board.

2.1. Participants

Invitations to participate in this pilot study were sent via e-mail to all 201 ABA-certified anesthesiologists who were active examiners for the primary certification oral examination as of January 2013. This population was chosen because its members are highly invested in the ABA certification process and thus motivated to improve the MOCA process. They were offered MOCA part IV credit (for a case evaluation) if they completed the study. Of the 201 invited anesthesiologists, 147 (73%) responded, with 102 (51%) agreeing to participate. Of these, 80 (78%) were enrolled in MOCA, and the other 22 had been recertified within the previous 10 years. Those who consented were randomized in equal numbers to distribute patient or peer surveys using the random sampling procedure in SPSS version 20.0 (IBM Corporation, Armonk, NY).
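For illustration only, an equal-split random assignment of this kind can be sketched as below; the physician identifiers, the fixed seed, and the use of Python are assumptions made for the example and do not represent the actual SPSS procedure used by the ABA.

```python
import random

# Hypothetical identifiers for the 102 consenting physicians (names assumed for the example).
consenting = [f"physician_{i:03d}" for i in range(1, 103)]

random.seed(2013)           # fixed seed only so this illustrative split is reproducible
random.shuffle(consenting)  # random permutation of the consenting physicians

half = len(consenting) // 2
patient_survey_group = consenting[:half]  # 51 physicians distribute the patient survey
peer_survey_group = consenting[half:]     # 51 physicians distribute the peer survey

print(len(patient_survey_group), len(peer_survey_group))  # 51 51
```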

2.2. Survey instruments

2.2.1. MOCA Patient Care Survey

The MOCA Patient Care Survey focuses on physicians' communication skills with patients, which are at the core of the interpersonal aspects of patient care. The rationale for assessing this domain is supported by evidence that effective physician-patient communication is positively correlated with patient adherence to treatment regimens [2], improved patient health outcomes [3], and patient satisfaction [4]. This survey has not been specifically validated for anesthesiologists.

The ABMS standards recommended 7 "common core" questions across the ABMS Member Boards (Q1-Q4 and Q7-Q9 in Table 1). The other questions were added by the ABA and were related to perioperative care. For example, if patients had an outpatient procedure, they were asked whether the physician or a health care provider from the facility warned them of any signs or symptoms that would require immediate medical attention. All the questions were framed in the context of a specific visit to the physician. Wherever applicable, a 3-point rating scale of "yes, definitely," "yes, somewhat," and "no" was used.

Table 1  Patient assessments of 33 ABA-certified physicians, using the MOCA Patient Care Survey a

Question number   Questions in MOCA Patient Care Survey                                          Mean (SD)
1                 Explain things in a way that was easy to understand                            2.99 (0.14)
2                 Listen carefully to you                                                        2.98 (0.14)
3                 (Did you) talk with this physician about any health problems or concerns       2.68 (0.68)
4                 Talk with you about how they were going to manage your health problems or
                  concerns that are of particular relevance to your care and management          2.82 (0.52)
5                 Adequately explain to you your options for care                                2.84 (0.47)
6                 Talk with you about the risks and benefits of your options                     2.79 (0.54)
7                 Seem to know important information about your medical history                  2.84 (0.41)
8                 Demonstrate respect for what you had to say during your interactions with
                  this physician                                                                 2.98 (0.17)
9                 Spend enough time with you                                                     2.94 (0.25)
10 b              Warn you about any signs or symptoms that would require immediate medical
                  attention during your recovery period                                          2.93 (0.26)
11                (Would you) recommend this physician to a family member, a colleague, or
                  other people you know?                                                         2.98 (0.13)

a Patient was asked "Did this physician…" Ratings: 3, yes, definitely; 2, yes, somewhat; 1, no.
b This question only applies to those patients who had an outpatient procedure.

To accommodate patients who do not speak English or who speak limited English, a Spanish version of the MOCA Patient Care Survey was made available; the translation was done by a certified translation agent. Patients were also given the opportunity to provide demographic information, such as ethnicity, race, age, sex, highest education level completed, and overall health status, and were asked 2 proxy questions: whether they got help from others in completing the survey and, if so, what assistance they received. If patients identified an area as problematic, they were given the opportunity to provide a free response on how the physician could improve in that area.

Surveys were administered in a Web-based format (SurveyMonkey), with links to the survey provided in a brochure distributed to patients by the participating physicians. Each physician in the patient survey group (n = 51) received 100 copies of an invitation brochure and 100 copies of a reminder brochure that could be distributed to patients. Both brochures included a link to the MOCA Patient Care Survey that was uniquely tied to the individual physician. Patient responses were not identifiable. The physicians were asked to distribute the invitation brochures to consecutive patients at the point of care and to follow up with the reminder brochures a week later, until 100 patients had been invited or 3 months of distribution had elapsed, whichever came first.

Once the surveys were closed, the ABA sent each physician his or her aggregated survey results. The physicians were then asked to identify areas in need of improvement, develop a practice improvement plan (PIP), and provide feedback on the survey distribution process. Three months after the physicians submitted their PIP, they were queried regarding whether they had successfully implemented their PIP and completed a survey soliciting their opinions on the feasibility of the patient survey.

2.2.2. MOCA Peer Survey

The MOCA Peer Survey focuses on physicians' communication with members of the health care team, professionalism, and coordination of care. An initial screening question asked whether the peer would recommend the physician to a family member, a colleague, or a peer, in addition to asking respondents to identify their professional relationship to the anesthesiologist. If they answered negatively, further questions probed 4 areas: whether the physician effectively communicates with the respondent, whether the physician effectively communicates with other professional staff, whether the physician treats the respondent with professional courtesy and respect, and whether the physician impairs coordination of patient care. If the peers identified an area as problematic, they were given the opportunity to provide a free response on how the physician could improve in that area.

Each physician in the peer survey group (n = 51) was asked to e-mail a survey invitation to 20 peers and to follow up with an e-mail reminder a week later. These messages included a link to the MOCA Peer Survey uniquely tied to the physician. Physicians were asked to choose peers in a variety of roles, including anesthesiologists, nonanesthesiologist physicians, certified registered nurse anesthetists, perioperative nurses, anesthesiologist assistants, and technicians, with no more than 10 people in any group. After the survey period was completed, the physicians were provided aggregated survey results and asked to identify areas in need of improvement, develop a PIP, and provide feedback on the peer survey distribution process. Three months later, the physicians were asked to document whether they had successfully implemented their PIP and to complete a survey soliciting their opinions on the feasibility of peer surveys.

2.3. Data analysis

Descriptive statistics were calculated for the quantitative data. Free-form comments from the participating physicians were grouped into similar categories or themes.

3. Results

3.1. Patient survey ratings

Six physicians dropped out of the study because their institutions did not permit the distribution of patient surveys or because their institutional review boards required patient enrollment and consent. In addition, 5 physicians were considered to have dropped out because they stopped communicating with the ABA during the study. Of the remaining 40 physicians, each distributed 61 ± 35 (mean ± SD) surveys and received 10 ± 10 completed surveys (range, 0-36). Seven physicians did not receive any patient responses; 6 of these confirmed to the investigators that they did distribute invitation brochures to patients (number distributed ranging from 3 to 65). Of the 2444 survey invitations distributed by the physicians, 374 responses were received, for an overall response rate of 15%. None of the physicians achieved the target number of 45 completed patient surveys.

Three (1%) patients who responded did not remember their anesthesiologist and were excluded from further analysis. Most of the remaining 371 patients were White (85%) and female (58%), with a mean age of 50 ± 19 years. Self-rated patient health was generally good (24%, excellent; 35%, very good; 31%, good; 7%, fair; and 1%, poor). Overall, the patients were well educated, with 75% reporting at least some postsecondary education. Fifteen percent of the patients got help from others in completing the survey.

Virtually all patients (99%) would recommend their physician to a family member, colleague, or others. The average overall patient rating was 2.91 ± 0.09 (range, 2.68-3.00) on a 3-point scale. As shown in Table 1, responses to all items were strongly positive. Of the 33 physicians, only 1 received a mean rating more than 2 SDs below the overall mean. Four physicians received 4 free-text comments, with 2 mentioning not spending enough time with patients, 1 mentioning not getting enough information during the preoperative visit, and 1 complaining of the side effects of anesthesia.

The aggregated survey results sent to participating physicians presented the frequencies of each option selected for the main survey questions (patients' demographic information was not included) and the free comments, if applicable, from their patients. Thirty-seven physicians reviewed their aggregated survey results: 21 (57%) did not identify any area in need of improvement, 11 (30%) identified 1 area, 3 (8%) identified 2 areas, and 2 (5%) identified 3 or more areas in need of improvement. The most frequently identified need was communication with patients (n = 17), followed by patient logistics (n = 3), patient perception of how much the physician knows about their medical history (n = 2), and communication with peers (n = 1). All the physicians who submitted a PIP based on these results considered themselves to have successfully implemented their plan.
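The quantitative summaries reported above (overall response rate, per-physician mean ratings, and flagging of physicians whose mean rating falls more than 2 SDs below the overall mean) involve only simple descriptive statistics. The following minimal sketch, using entirely hypothetical data and not the ABA's analysis code, shows one way such an aggregation could be computed.

```python
from statistics import mean, stdev

# Hypothetical per-physician records: invitations distributed and ratings (1-3) received.
physicians = {
    "physician_001": {"invited": 65, "ratings": [3, 3, 2, 3, 3]},
    "physician_002": {"invited": 40, "ratings": [3, 2, 3, 3]},
    "physician_003": {"invited": 90, "ratings": []},  # no responses received
}

invited = sum(p["invited"] for p in physicians.values())
responses = sum(len(p["ratings"]) for p in physicians.values())
print(f"Overall response rate: {responses / invited:.0%}")

# Mean rating for each physician who received at least one response.
per_physician_mean = {name: mean(p["ratings"])
                      for name, p in physicians.items() if p["ratings"]}

overall_mean = mean(per_physician_mean.values())
overall_sd = stdev(per_physician_mean.values())

# Flag physicians whose mean rating is more than 2 SDs below the overall mean.
flagged = [name for name, m in per_physician_mean.items()
           if m < overall_mean - 2 * overall_sd]
print("Physicians flagged for low ratings:", flagged)
```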

Table 2 presents the physicians' opinions on the feasibility and utility of the MOCA Patient Care Survey. Although most of the physicians in the patient survey group felt that the number of survey invitations was reasonable and thought that their patients understood the purpose of the brochure and were willing to consider completing the survey, the process of brochure distribution interfered with the clinical flow of the physicians' practice. In addition, there was no optimal method of sending the reminders. Two-fifths of the physicians thought that the time and effort invested in the survey were excessive. Overall, only approximately one-quarter of the physicians distributing MOCA Patient Care Surveys found them useful for improving their practice, and only 1 in 7 thought that the patient survey would be feasible in their practice setting.

3.2. Peer survey ratings

Five physicians did not send out any surveys to their peers and dropped out of the study. Of the remaining 46, 16 ± 4 completed surveys were received for each physician (range, 5-23). Of the 973 survey invitations distributed by the physicians, 732 responses were received, for an overall response rate of 75%. Forty-three (93%) physicians achieved the target number of 10 completed peer surveys. Responses were received from anesthesiologist colleagues (28%), nurses (23%), clinical surgeon (or other procedural) physician colleagues (18%), office administrative staff (5%), trainees (5%), and anesthesiologist assistants (3%).

Almost all peers (97%) would definitely recommend the physician they were evaluating to a family member, colleague, or peer. On a 3-point scale, the average rating for this question was 2.97 ± 0.08 (range, 2.50-3.00). Of the 46 physicians, only 2 received a mean rating more than 2 SDs below the overall mean. Six physicians received 16 free-text comments, mostly concerning communication issues.

Forty-five (98%) physicians reviewed their aggregate results: 35 (78%) did not identify any area in need of improvement, 8 (18%) identified 1 area, and 1 (2%) identified 2 areas in need of improvement. The most frequently identified need was communication with peers (n = 6), followed by patient logistics (eg, waiting times and administrative procedures; n = 1), patient comfort (eg, nausea and vomiting and postoperative delirium; n = 1), difficult working environment (n = 1), and supervision of residents (n = 1). All but 1 physician who submitted a PIP considered themselves to have successfully implemented the plan after 3 months; the physician who did not cited "hospital administration's unwillingness to consider alternative approaches" as the obstacle.

Table 2 presents the physicians' opinions on the feasibility and utility of the MOCA Peer Survey.

Table 2  Physicians' opinions on the feasibility and utility of the MOCA Patient Care Survey and the MOCA Peer Survey

Percentage of physicians who agreed that…                                                  Patient survey     Peer survey
                                                                                           group (n = 37)     group (n = 45)
Feasibility of the surveys
Patients (or peers) seemed to understand the purpose of the survey.                        81%                84%
Patients (or peers) seemed willing to consider completing the survey.                      73%                87%
The no. of survey invitations that I distributed was reasonable.                           54%                80%
The procedure used to distribute surveys in this study would be feasible in my practice.   14%                87%
I devised a convenient mechanism to mail the reminder brochures.                           19%                _
I devised a convenient mechanism to e-mail the reminders.                                   6%                62%
It would be better to use phone reminders.                                                 19%                11%
Distribution of the brochure (or survey) did not interfere with the clinical flow of
my practice.                                                                               41%                87%
The amount of time and effort needed to complete the survey and practice improvement
plan process was excessive.                                                                41%                13%

Utility of the surveys
The survey results were useful in helping me improve my practice.                          24%                33%

Note: _, this question was not asked in the peer survey group.

Similar to the patient survey group, most of the physicians in the peer survey group thought that the number of survey invitations was reasonable and that their peers understood the purpose of the survey. The survey and reminder distributions posed less of a problem for the peer surveys and did not interfere with the clinical flow of the physicians' practice. Although almost 90% of the physicians in the peer survey group considered the peer survey feasible in their practice, only one-third of them found the results useful in helping them identify areas in need of improvement in their practice.

4. Discussion

Patient-centered care, which involves perceiving and evaluating health care from the patients' perspective and adapting care to meet the needs and expectations of patients [5,6], has become an essential component of efforts to improve health care quality. Several validated surveys are available to evaluate health care experiences from the patients' perspective (eg, the Hospital Consumer Assessment of Healthcare Providers and Systems 1). Such surveys may be applied both to facilities and to individual physicians, nurses, and other providers, primarily in outpatient settings. Recognizing their potential utility, the ABMS desires to incorporate patient (and peer) surveys into the MOC process. However, compared with physicians who provide continuing care, anesthesiologists often have only a single encounter with patients in a time-pressured environment, which may pose challenges to patient surveys and motivated the current pilot study.

1 http://www.hcahpsonline.org. Centers for Medicare & Medicaid Services, Baltimore, MD. Accessed June 20, 2014.

A major finding was that the response rate to the peer surveys was higher than that to the patient surveys. It is not surprising that peers who have a long-term relationship with the physicians would be more likely to complete the survey. Nonetheless, the response rate to the patient survey in this study was much lower than previously reported in the literature concerning other patient surveys (range, 30%-77% [7-11]), which likely reflects that these patients generally have only brief interactions with their anesthesiologists while conscious. This low response rate threatens survey validity because of potential response bias; patients who are extremely satisfied or upset with their physician encounter, or who hold extreme opinions, may be more likely to respond. It is also interesting to speculate that the low response rate may indicate that patients do not view the anesthesiologist's role as pivotal to their episode of surgical care, which itself might be worth considering as a topic for a PIP.

The free-response comments from the physicians detailed additional challenges to the patient survey. First, it is quite common for patients to receive team care during their hospital stay and possibly be seen by an anesthesiologist, a certified registered nurse anesthetist, an anesthesiologist assistant, and an anesthesiology resident; it is difficult for patients to single out the anesthesiologist for an evaluation and not confound it with the outcome of the overall care. Second, patients may already be burdened with other surveys related to their hospital experience, and some patient populations, such as the elderly, the indigent, or the critically ill, may have physical limitations, resource constraints, and/or limited literacy skills. Third, many of these selected and motivated physicians expressed the concern that handing out the surveys at the preoperative evaluation interferes with the clinical flow and that sending out the reminders is cumbersome and requires excessive administrative effort not supported in many practices.

Finally, the questions asked in the patient survey may be more useful for some settings, such as outpatient surgery and pain clinics. Some of these challenges could be overcome with the integrated institutional evaluation systems used by many facilities, which attempt to evaluate the overall patient experience within an episode of care; however, this could make it more difficult to attribute results to individual physician performance.

Compared with patient surveys, fewer logistic concerns were identified with peer surveys. Some physicians were concerned about distribution bias (ie, only sending the surveys to those peers who will rate them well) and suggested a better sampling system, such as requiring that chairs/chiefs be included or having chairs/chiefs select the peer sample. These concerns could also be addressed through an integrated institutional evaluation system.

Another major finding of this pilot study was that those patients and peers who did respond were very satisfied with their physicians and colleagues, respectively, and most of the study participants who reviewed their results (56 of 82, 68%) could not identify areas of their practices in need of improvement based on these survey results. The high levels of satisfaction expressed by both patients and peers may reflect excellent care, but the lack of variability in the ratings raises questions regarding the surveys' ability to discriminate among levels of performance and to provide useful information for practice improvement. Although it could be that no improvement is needed for this selected group of anesthesiologists, it is also possible that the surveys do not provide meaningful feedback because of the reluctance of colleagues to provide honest critiques. Practically, the high satisfaction levels make it difficult for physicians to devise improvement plans. Almost all of those who did identify areas for improvement reported that they successfully carried out their improvement plan. The primary domain for the improvement plans was communication, a domain of particular concern in the design and conception of the survey process by the ABMS.

This pilot study has at least 2 limitations. First, the physicians in this study were not representative of the general population of anesthesiologists. These results are likely more favorable regarding the potential utility of surveys than results that would be obtained from a more representative population and may represent the best possible results. Second, these surveys were distributed on an ad hoc basis by individual physicians rather than through a systematic process embedded in the clinical flow of practice. If such a systematic approach were adopted by physician practices or health care systems, results may differ.

In summary, when selected diplomates of the ABA distributed surveys to individual patients and peers, the response rate was much higher for peer surveys than for patient surveys. Reported satisfaction levels for physician performance were very high for both, indicating that the method did not consistently provide actionable information for these physicians and making it difficult for them to devise PIPs. These results suggest that threats to the validity of these surveys include distribution bias for peer surveys and response bias for patient surveys. These data can inform decisions by the ABMS and the ABA regarding whether the developmental standards for the role of surveys within MOC should be modified. Alternative approaches, such as including anesthesiologists within an integrated institutional evaluation system, could be explored to maximize the benefits of physician assessments provided by peers and patients.

Acknowledgments

We thank those ABA oral board examiners who participated in this study and helped evaluate the feasibility of the MOCA patient and peer surveys.

References

[1] Sprague S, Quigley L, Bhandari M. Survey design in orthopaedic surgery: getting surgeons to respond. J Bone Joint Surg Am 2009;91(Suppl 3):27-34.
[2] Zolnierek KB, Dimatteo MR. Physician communication and patient adherence to treatment: a meta-analysis. Med Care 2009:826-34.
[3] Stewart MA. Effective physician-patient communication and health outcomes: a review. CMAJ 1995:1423-33.
[4] Ong LM, de Haes JC, Hoos AM, Lammes FB. Doctor-patient communication: a review of the literature. Soc Sci Med 1995:903-18.
[5] Gerteis M, Edgman-Levitan S, Daley J, Delbanco TL. Through the patient's eyes: understanding and promoting patient-centered care. San Francisco: Jossey-Bass Publishers; 1993.
[6] Mead N, Bower P. Patient-centeredness: a conceptual framework and review of the empirical literature. Soc Sci Med 2000;51:1087-110.
[7] White B. Measuring patient satisfaction: how to do it and why to bother. Fam Pract Manag 1999;6:40-4.
[8] Campbell JL, Ramsay J, Green J. Practice size: impact on consultation length, workload, and patient assessment of care. Br J Gen Pract 2001;51:644-50.
[9] Sitzia J, Wood N. Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care 1998;10:311-7.
[10] Spooner SH. Survey response rates and overall patient satisfaction scores: what do they mean? J Nurs Care Qual 2003;18:162-74.
[11] Gayet-Ageron A, Agoritsas T, Schiesari L, Kolly V, Perneger TV. Barriers to participation in a patient satisfaction survey: who are we missing? PLoS One 2011;6:e26852.
