Survey of Faculty Perceptions Regarding a Peer Review System

Ronald L. Eisenberg, MD, JD, Meredith L. Cunningham, BA, Bettina Siewert, MD, Jonathan B. Kruskal, MD, PhD

Department of Radiology, Harvard Medical School, Beth Israel Deaconess Medical Center, Boston, Massachusetts. Corresponding author and reprints: Ronald Eisenberg, MD, JD, Department of Radiology, Harvard Medical School, 330 Brookline Avenue, Boston, MA 02215; e-mail: [email protected].

http://dx.doi.org/10.1016/j.jacr.2013.08.011

Purpose: Virtually all radiologists participate in peer review, but to our knowledge, this is the first detailed study of their opinions toward various aspects of the process.

Methods: The study qualified for quality assurance exemption from the institutional review board. A questionnaire sent to all radiology faculty at our institution assessed their views about peer review in general, as well as case selection and scoring, consensus section review for rating and presentation of errors, and impact on radiologist performance.

Results: Of 52 questionnaires sent, 50 were completed (response rate, 96.2%). Of these, 44% agreed that our RADPEER-like system is a waste of time, and 58% believed it is done merely to meet hospital/regulatory requirements. Conversely, 46% agreed that peer review improves radiologist performance, 32% agreed that it decreases medical error, and 42% believed that peer review results are valuable to protect radiologists in cases referred to the medical board. A large majority perform all peer reviews close to the deadline, and substantial minorities frequently or almost always select more than one previous examination for a single medical record number (28%), consciously select “less time intensive” cases (22%), and intentionally avoid cases requiring more time to peer review (30%).

Discussion: Almost one-half of respondents agreed that peer review has value, but as currently performed is a waste of time. The method for selecting cases raises serious questions regarding selection bias. A new approach is needed that stresses education of all radiologists by learning from the mistakes of others.

Key Words: Peer review, performance improvement

J Am Coll Radiol 2014;11:397-401. Copyright © 2014 American College of Radiology

INTRODUCTION

The Joint Commission guidelines [1] state that practitioners are expected to “demonstrate knowledge of established and evolving biomedical, clinical, and social sciences, and the application of their knowledge to patient care and the education of others.” At most institutions, ongoing professional practice evaluation of radiologist performance includes a process of peer review based on a template first described by Donnelly [2]. Peer review should provide an unbiased, fair, and balanced evaluation of radiologist performance to identify opportunities for additional education, error reduction, and self-improvement [3]. Ideally, it should be nonpunitive, have minimal effect on workflow, and allow easy participation [3].

Although one article [4] reported that a “significant percentage” of faculty members viewed peer review as a “time-consuming bureaucratic process to create more paperwork” rather than a means to improve medical care, to our knowledge there has been no detailed study of the opinions of radiologists toward various specific aspects of peer review. Therefore, we undertook a study to assess the views of radiologists at a large urban medical center toward our peer review discrepancy system, which has been mandatory for more than 6 years.

METHODS

The institutional review board determined that this study qualified for the quality assurance exemption. A questionnaire was sent to 52 members of the radiology faculty to determine their views about our local peer review system. Very similar to the ACR’s RADPEER product, the system has been in place for more than 6 years and mandates that each radiologist submit a number of cases equal to 2.5% of his or her prior year’s volume (with a maximum of 300 cases). Questions for the survey were generated by the authors to assess views related to peer review in general, as well as methods used for case selection and scoring, opinions regarding consensus section review conferences for rating and presentation of errors, communication and management of detected errors, and the impact of peer review on individual radiologist performance. Anonymous responses were collected through SurveyMonkey (http://www.surveymonkey.com). Many questions consisted of statements with possible ratings using a 5-point Likert scale (1 = strongly disagree, 2 = mildly/moderately disagree, 3 = neither agree nor disagree, 4 = mildly/moderately agree, 5 = strongly agree; or 1 = never, 2 = rarely, 3 = sometimes, 4 = frequently, 5 = almost always). The survey also included multiple-choice questions, several of which permitted checking all answers that apply, so that the totals add up to more than 100%; free-text answers; and optional questions seeking demographic information.

Descriptive statistics were calculated for all questions in the survey. For Likert scales, calculations were made of the percentages of those who agreed (categories 4 and 5), disagreed (categories 1 and 2), or were neutral (category 3) regarding each statement. A mean rating was calculated for each statement by summing the products of each category value and the number of respondents who selected it, and then dividing by the total number of respondents.
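As a concrete illustration of the calculations just described, the brief sketch below computes the annual submission quota (2.5% of the prior year's volume, capped at 300 cases) and the Likert summary statistics for a single statement. It is a minimal example written for this article, not the department's actual software; the function names and the 15,000-study volume in the usage example are hypothetical.

```python
# Minimal sketch (not the authors' actual code) of the quota and Likert
# summary calculations described in the Methods; names are hypothetical.

def annual_quota(prior_year_volume, rate=0.025, cap=300):
    """Peer review quota: 2.5% of the prior year's case volume, capped at 300 cases."""
    return min(round(rate * prior_year_volume), cap)

def summarize_likert(responses):
    """Summarize one statement's 1-5 Likert ratings as agreed/disagreed/neutral and mean."""
    n = len(responses)
    return {
        "agreed_pct": 100 * sum(r >= 4 for r in responses) / n,     # categories 4 and 5
        "disagreed_pct": 100 * sum(r <= 2 for r in responses) / n,  # categories 1 and 2
        "neutral_pct": 100 * sum(r == 3 for r in responses) / n,    # category 3
        "mean_rating": sum(responses) / n,  # sum of (category value x count) / respondents
    }

# Usage: a hypothetical radiologist who read 15,000 studies last year owes
# min(round(0.025 * 15000), 300) = 300 peer reviews.
print(annual_quota(15000))
print(summarize_likert([5, 4, 4, 3, 2, 1, 3, 4, 5, 2]))
```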

RESULTS

Of 52 questionnaires sent, 50 were completed (response rate, 96.2%). Of the 50 respondents, 44% agreed with the statement that peer review as performed using the RADPEER-like process in our department is a waste of time, 58% thought that peer review is done merely to meet hospital/regulatory requirements, and 42% believed that peer review results are valuable to protect radiologists when there is an issue requiring reporting to the local State Board of Registration in Medicine (Table 1). Also, 46% of respondents thought that peer review improves radiologist performance. However, only 32% agreed that peer review decreases medical error, whereas 40% disagreed with that statement. Smaller percentages agreed that peer review results could lead to decreased compensation (24%) or loss of a job (16%) or be used against them if they were ever a defendant in a malpractice suit (30%). Finally, 46% of respondents agreed that they only participated in the peer review program because they were forced to do so, and only 8% thought that their section colleagues liked the current system.

Case Submission

Only 34% of respondents were satisfied with the process of secure electronic case submission, with 52% thinking it not user-friendly. Just 12% have developed a personal reminder system to review cases, with 40% depending on an e-mail reminder and 36% admitting that they attend to peer review duties only after receiving a warning letter from the department chair to all faculty members who are not on pace to meet their quotas. Although 20% perform peer review on a regular basis by looking at a small number of cases at the start of each day, 76% admit to peer reviewing a substantial number of cases only when close to the annual deadline for case submissions.

Substantial minorities of respondents frequently or almost always select more than one previous examination for a single medical record number (28%), consciously select certain types of “less time-intensive” cases (plain films, ultrasound, screening mammography) to peer review (22%), and intentionally avoid more time-consuming cases, such as body MRI and torso CT (30%) (Table 2). When peer reviewing a cross-sectional study, 84% look at more than one series (one-third evaluate all series), and the same percentage looks at more than one window setting (one-quarter evaluate all window settings). Only 8% report frequently or almost always reviewing their own cases or entering a peer review of one (agree with interpretation and report/no discrepancy noted) without looking at the previous dictation. Although only 4% of respondents admitted to sometimes purposely targeting another faculty member to give a bad review, 24% thought that colleagues had submitted peer review cases to hurt them. Of the 35 respondents who had been informed of an error by someone peer reviewing one of their cases, 88.6% (n = 31) reported that it had been done in an instructive manner, whereas 34.3% (n = 12) related that they had been informed of an error at least once in an unprofessional way.

Table 1. Responses to general statements about peer review

Statement About Peer Review                                      Agreed (%)   Disagreed (%)   Neutral (%)   Mean Rating
Waste of time                                                        44             22             34           3.30
Merely done to meet hospital and regulatory requirements             58             30             12           3.42
Protective of radiologists before state medical board                42             24             34           3.30
Improves radiologist performance                                     46             28             26           3.23
Decreases medical error                                              32             40             28           2.94
Could lead to decreased compensation                                 24             60             16           2.38
Could lead to loss of job                                            16             56             28           2.54
Could be used against me if a defendant in a malpractice suit        30             48             22           2.65
Only participate because forced to                                   46             26             28           3.24
Think my colleagues like the current system                           8             62             30           2.14


Table 2. Case selection

Statement                                                     Frequently/Almost Always (%)   Rarely/Never (%)   Sometimes (%)   Mean Rating
Select more than one exam for single medical record number                28                       40                 32             2.70
Consciously select less time-intensive cases                              22                       56                 22             2.43
Consciously avoid time-consuming cases                                    30                       50                 20             2.52
Review cases as thoroughly as if primary reader                           48                       36                 16             3.28
Evaluate/comment on official report                                       30                       40                 30             2.89
Enter peer review rating of 1 without looking at case                      8                       82                 10             1.48
Review my own cases                                                        8                       74                 18             1.85
If detecting an error, I tend to err on side of
  Category 2 (undercall)                                                  50                       16                 34             3.40
  Category 4 (overcall)                                                    2                       88                 10             1.84
I have purposely targeted someone to give a bad review                     0                       96                  4             1.02

Case Scoring

Case Scoring

Regarding the statement that there could be a better scoring system (we use one similar to RADPEER), 62% of respondents agreed and only 8% disagreed. Recommendations for a better system included the following: (1) an assessment based on the clinical significance of an error, rather than the likelihood that other radiologists would interpret the study incorrectly; and (2) not grouping cases that reflect differences in “management decisions” together with those involving “misses” or “misinterpretations.”
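For orientation, the sketch below models the four-point, RADPEER-style discrepancy scale referred to throughout this article (a rating of 1 for concurrence; category 3 and 4 discrepancies trigger section consensus review). The category wording follows the standard ACR RADPEER definitions and may differ from the exact language of our local variant; the class and function names are illustrative only.

```python
# Illustrative model of a RADPEER-style 1-4 score; category wording follows the
# standard ACR RADPEER definitions and may differ from our local scoring system.
from enum import IntEnum

class PeerReviewScore(IntEnum):
    CONCUR = 1                        # agree with interpretation; no discrepancy noted
    UNDERSTANDABLE_MISS = 2           # discrepancy; diagnosis not ordinarily expected to be made
    SHOULD_USUALLY_BE_MADE = 3        # discrepancy; diagnosis should be made most of the time
    SHOULD_ALMOST_ALWAYS_BE_MADE = 4  # discrepancy; diagnosis should be made almost every time

def needs_consensus_review(score):
    """Category 3 and 4 discrepancies are presented at the section consensus conference."""
    return score >= PeerReviewScore.SHOULD_USUALLY_BE_MADE

print(needs_consensus_review(PeerReviewScore.UNDERSTANDABLE_MISS))           # False
print(needs_consensus_review(PeerReviewScore.SHOULD_ALMOST_ALWAYS_BE_MADE))  # True
```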

Consensus Section Review Conferences

Overall, 40% of respondents were satisfied with the process for section consensus conference review, with 60% agreeing that their section reviewed all category 3 and 4 cases and 34% relating that they frequently or almost always were able to see the name of the radiologist who originally read the case under discussion (Table 3). A majority (68%) of respondents reported that the radiologist presenting cases frequently or almost always discusses them primarily as a learning experience, with only 12% relating that the presenter often or usually criticizes the errors made. Equal numbers (36%) agreed and disagreed with the statement that peer review cases are generally organized by topic or body part.

Personal Review of Peer Review Data and Effects on Individual Performance

Respondents did not often review their own peer review data. Although 20% do this on a monthly basis, the rest were almost evenly split among reviewing their personal peer review data every 6 months, yearly, or not at all. Identical percentages (30%) agreed that the peer review process has improved their own performance and that of other radiologists. In free text, almost all of these respondents stated that it forced them to pay more attention to specific areas in their personal interpretation process.

Demographic Data

Only 64% of respondents were willing to answer the optional question regarding gender (21 men, 11 women). Fewer than 60% answered optional questions regarding academic rank, years since completing fellowship, and section within the department. Those who refused to divulge their gender or academic rank had a mean rating of agreement with the statement “peer review as performed in our department is a waste of time” that was substantially higher than that of those who were willing to provide this information (3.70 versus 2.84, and 3.60 versus 2.90, respectively).

Table 3. Case consensus conference and effect of peer review on performance

Statement                                                                    Agreed (%)   Disagreed (%)   Neutral (%)   Mean Rating
Satisfied with current system                                                    40             22             38           3.31
Review all category 3 and 4 cases                                                60             18             22           3.78
Can see name of radiologist who originally read case                             34             36             30           2.80
Presenter primarily discusses cases as learning experience                       68              2             30           4.26
Presenter frequently/almost always criticizes errors made                        12             52             36           2.26
Presenter frequently/almost always organizes cases by topic or body part         36             36             28           3.05
Peer review feedback has improved my performance                                 30             40             30           2.83
Peer review feedback has improved performance of others                          30             30             40           3.02

DISCUSSION

At most institutions, peer review is commonly used for assessing radiologist performance in terms of medical and clinical knowledge and for remediating any deficiencies that are detected. In our study, however, 44% of respondents agreed with the statement that peer review, as done in our institution using a system similar to RADPEER, is a waste of time; 58% agreed it was performed merely to meet hospital/regulatory requirements; and 46% participate in peer review only because they are forced to do so. Only 8% thought that their section colleagues liked the current system of peer review. Conversely, 46% agreed that peer review improves radiologist performance, and 42% thought it is valuable to protect radiologists when there is an issue requiring reporting to the local State Board of Registration in Medicine. Although 32% agreed that peer review decreases medical errors, 40% disagreed with this statement. More than half (52%) complained that the electronic system for peer review was not user-friendly, and 62% agreed that there could be a better scoring system, though only 10% offered any recommendations for change.

These and other faculty responses in our survey must be viewed in light of the avowed purposes of peer review. In assessing radiologist performance and the need for educational improvements, the fundamental challenge in determining competency is to develop a standard or threshold that defines the limits of acceptable performance, but such absolute criteria do not yet exist [5].

Selection bias is a major issue in the peer review process. What percentage of reported cases should be selected for review? Too few cases may not achieve statistical significance, but obtaining sufficient cases for statistical significance could cause such an arduous extra workload that radiologists might refuse to participate in the peer review process [4] or do so with little enthusiasm or effort. Although some institutions and organizations arbitrarily select 5% of the annual radiology workload for peer review purposes, we have reduced the required number of cases to 2.5% of each radiologist’s prior year’s volume, with a maximum of 300 cases for those who interpret large numbers of plain radiographs. Our study reveals that 22% of respondents frequently or almost always consciously select certain “less time-intensive” cases (eg, plain radiographs, ultrasound, screening mammography) to peer review, whereas 30% intentionally avoid certain types of cases that require more time to peer review (eg, body MRI, torso CT). This raises serious questions regarding the randomness of the sample of cases subjected to peer review. Although few respondents reported purposely underreporting a case more than rarely, 44% tend to err on the lower category of error by rating a discrepancy as 2 rather than 3 (which would trigger presentation of the case to a section consensus conference). Moreover, physicians are often reluctant to document mistakes by colleagues [4]. Although only 2 respondents admitted to targeting someone to give them a bad review, 24% thought that peer review cases had been submitted by colleagues to hurt them.

Our department utilizes an online quality assurance reporting system, in which radiologists and referring clinicians can electronically enter apparent errors and complications. Although not random, this approach appears to collect far more relevant cases, as well as valuable feedback from referring physicians. In an era of increasing performance evaluation and the requirement for multisource feedback, peer review systems of the future should be expanded to provide a more comprehensive and reflective evaluation of each radiologist.

The results of our study question the randomness of the peer review process. Only 20% of respondents perform peer review on a regular basis by looking at a small number of cases at the start of each day. A large majority (76%) admit to peer reviewing a very large number of cases only when close to the annual deadline for case submissions. A substantial minority (28%) frequently or almost always select more than one previous examination for a single medical record number, which is especially unhelpful if the most recent prior examination is normal. Another example of sample bias occurs in patients who have undergone multiple studies within a few days or during a single hospital admission (eg, chest radiographs in ICU patients). Not infrequently, a retrospective review reveals that the same abnormality was missed by virtually all staff members. However, failure to detect a parenchymal lesion, thyroid mass, or sheared-off opaque tip of a vascular catheter will be attributed only to the radiologist who missed it on the examination immediately preceding the one on which it was finally detected.

All currently applied peer review methods assess interpretive disagreement between readers. However, variability is not always the consequence of error and may merely represent a genuine difference of opinion regarding the correct interpretation of an image or appropriate recommendation for follow-up [6]. Moreover, because most cases undergoing peer review do not have pathologic or surgical confirmation or adequate clinical follow-up, disagreements in interpretation can only be resolved on an unreliable subjective basis [4].

About one-quarter of respondents mistakenly believed that peer review data are used to determine faculty bonus incentives. A similar percentage was convinced that peer review results could be used against them if they were ever a defendant in a malpractice suit, despite federal and state laws providing legal privileges and immunities to performance-related data extracted or documented in the course of peer review and related proceedings [3].


More than one-third of radiology faculty refused to answer optional questions about gender, academic rank, years since completing fellowship, and section within the department, all presumably because of the fear that these data would permit their otherwise anonymized answers to be identified with them. Those individuals who refused to provide personal data had substantially higher agreement with the statement that “peer review as performed in our department is a waste of time.” This suggests that those with more negative views toward the peer review process were afraid that providing demographic information would permit their being personally identified with this opinion and possibly lead to adverse repercussions from the department administration.

Currently, the focus of most peer review systems appears to be ensuring that each faculty member has viewed and submitted ratings on a predetermined number of cases and identifying any radiologist whose performance appears to deviate substantially from the norm. Relatively little attention is paid to analyzing the reasons for errors and using them to drive efforts to improve performance. As Halsted [7] wrote, the intent of peer review should be to improve the performance of all radiologists, not merely the one who made an error. This can be achieved by openly discussing mistakes as a learning opportunity for all members of the department, so that they become more aware of the common diagnostic and technical errors that they are most likely to face in their daily practice. Rather than basing peer review on a set number of cases per radiologist, Halsted uses “the review of prior studies, whether for the interpretation of new cases, the review of prior cases at clinicians’ requests, or in preparing for or presenting case conferences, as potential peer review events. Any discrepancy in interpretation, whether based on imaging findings or follow-up clinical information, triggers the peer review process.” This system frees radiologists from the tedious task of peer reviewing randomly assigned and typically uninteresting cases, thus allowing them to focus their efforts solely on clinical care.

Our study is limited in that all opinions and free-text comments come from a single institution. However, it does reflect what we believe to be a comprehensive peer review structure that is very similar to the popular and widespread RADPEER and has assessed more than 60,000 cases in over 6 years of operation.

In summary, the results of our study support the suggestion in a recent article [8] of a need for a “natural evolution of peer review away from measurement and error identification toward the goal of performance improvement for the entire profession by allowing everyone to learn from the mistakes of everyone else.”

TAKE-HOME POINTS

• Almost half of radiologists agreed that peer review improves radiologist performance and is valuable to protect radiologists in cases referred to the Medical Board, while almost one-third agreed that it decreases medical error.
• Conversely, almost half agreed that our RADPEER-like system is a waste of time, and a majority believe it is done merely to meet hospital and regulatory requirements.
• The method for selecting cases raises serious questions regarding selection bias.
• A new approach to peer review is needed that stresses education of all radiologists by learning from the mistakes of others.

REFERENCES

1. Joint Commission. Comprehensive accreditation manual for hospitals: the official handbook. Oakbrook Terrace, Ill: Joint Commission; 2007.
2. Donnelly LF. Performance-based assessment of radiology practitioners: promoting improvement in accordance with the 2007 Joint Commission standard. J Am Coll Radiol 2007;4:699-703.
3. Mahgerefteh S, Kruskal JB, Yam CS, Blachar A, Sosna J. Peer review in diagnostic radiology: current state and a vision for the future. RadioGraphics 2009;29:1221-31.
4. Lee JKT. Quality—a radiology imperative: interpretation accuracy and pertinence. J Am Coll Radiol 2007;4:162-5.
5. Landon BE, Normand S-LT, Blumenthal D, Daley J. Physician clinical performance assessment: prospects and barriers. JAMA 2003;290:1183-9.
6. Alpert HR, Hillman BJ. Quality and variability in diagnostic radiology. J Am Coll Radiol 2004;1:127-32.
7. Halsted MJ. Radiology peer review as an opportunity to reduce errors and improve patient care. J Am Coll Radiol 2004;1:984-7.
8. Butler GJ, Forghani R. The next level of radiology peer review: enterprise-wide education and improvement. J Am Coll Radiol 2013;10:349-53.
