Canadian Association of Radiologists Journal 65 (2014) 94–95 www.carjonline.org

Guest Editorial / Éditorial sollicité

Peer Review in British Columbia: A General Radiologist's Perspective

Errors in medicine have always occurred, but, with respect to radiologic studies, the public often expects only correct answers: "This is reinforced by the media, describing every error in film interpretation as a scandal" [1]. However, in many cases of perceived radiologic error, the true answer remains unknown, and such errors are therefore best described as "discrepancies." In an attempt to deal with such discrepancies, many radiology departments are now required to have peer review (PR) as part of their quality improvement (QI) programs [2]. Under similar circumstances, PR has recently been mandated across British Columbia. Many radiologists believe that we are being unfairly singled out, but PR is here to stay, and the real question is how best to do it.

We need a PR model that can achieve our goals. As in any QI initiative, PR should focus on quality, on meeting a standard of care, and on continuous improvement. In radiology, the goal of PR is to increase mean diagnostic ability and improve patient care. Prior research in industry and medicine can help us select the right model [3]. In the past, 2 models have been evaluated [3,4]. One focuses on the retrospective measurement of individual errors and on remediation; the other focuses on learning about errors and prospectively preventing them. Analysis of the research reveals that QI focused on measuring errors and punishing individuals never achieves dramatic improvements in quality, because it is not designed to do so [3]. However, QI focused on the objective study of errors can improve quality. As the causes of errors are understood, solutions can be developed and used to improve individual results. Everyone can perform better, and, by design, overall improvement is expected [3,5].
In other words, we can either try to learn from our past mistakes and prevent errors, or focus on the individuals who make the most mistakes [6]. "The latter approach was tried for decades in aviation and failed, before it was realized that human errors are largely systematic problems and should be dealt with as such" [1,3,7,8]. Yet most PR programs focus on the retrospective measurement of individual errors, even though this strategy does not specify what needs to be improved, how to improve it, or whether it has improved [3]. Furthermore, error reporting is inherently punitive and leads to the underreporting of errors. Reports that use such data are biased, flawed, and give a false impression of accuracy. Not surprisingly, individual performance is not reliably measured by this method [3,4,9–12].

It is also not surprising that errors occur in radiology, given the complex technology and high workload [1,2]. The causes of error are beyond the scope of this article, but research suggests that some errors are unavoidable, whereas others can be prevented by changing systems and processes [2]. Major discrepancy rates in radiology have been reported in the range of 2%–20%; the results are conflicting and, although comparable with those of other medical specialties, further study is needed [1,2,13–15]. "The point is that some errors are inevitable and a system that punishes unavoidable errors will end up hiding the avoidable ones" [2].

A prospective system of PR offers the potential for substantial improvements in radiologic performance and patient care. It stresses a team approach but includes the individual feedback necessary to change behaviors that cause errors. Those who have never heard of their mistakes will repeat them [3]. Individuals can also learn from the mistakes of others and can use cases for practice and testing. Areas for study can be highlighted by identifying the errors most likely to cause harm; finding the causes of such errors can lead to behaviors that mitigate the harm [3]. A nonpunitive system encourages the reporting of errors and will provide better data for research [3,4,6].

The choice among PR models is clear, and it is important. A proper PR program can provide assurance that radiologists are practicing quality radiology and continuing to improve. Ideally, PR should be an ongoing, unbiased evaluation of randomly selected cases by a physician's peers to identify opportunities for additional education, error reduction, and self-improvement [4]. PR should also have a minimal effect on workflow and allow easy participation [4]. For this to happen, it will require the "buy-in" of radiologists, but many are hesitant, fearing that the information obtained from error reporting might be used against them.
Unfortunately, many administrators and politicians also have this in mind [2]. A critical challenge for radiologists is to educate these groups that such ideas will not improve performance or patient care and may put them in direct conflict with radiologists, with no clear way forward. The Screening Mammography Program of British Columbia provides a useful example of how to secure the cooperation of radiologists and of how diagnostic performance can improve over time. It is also worth noting that government and administration do not receive individual radiologist data from this program but do get a detailed annual report of overall program performance, along with the knowledge that their radiologists are meeting the standard of care. This helps avoid many potential conflicts.

Developing a province-wide PR program in British Columbia presents many hurdles. Major issues to be dealt with include standards of performance, definition of peer groups, method of peer review, governance, handling of errors and remediation, education, access to data, research, anonymity, and reporting requirements. Radiologists should lead the effort to address these issues. Only we can ensure that PR is done fairly and properly and that its benefits are realized [4]. We need to work with administration, government, and our peers. We must help them understand that radiologic PR is in its early days, that we do not have all the answers, and that these answers must come from research [3,4]. PR can improve public confidence in our ability, earn us greater respect within the medical profession, and add value to what we do [3,16]. It can be a win-win scenario.

Experience tells us that we need to choose a PR model of cooperation and education and focus on what went wrong and how to correct it [17]. Individual error-rate models do not improve quality or accurately measure individual performance. Time, effort, and money are best spent developing methods to improve performance and prevent errors [1,6]. In British Columbia, a number of initiatives are underway, focusing on cost, error measurement, administrative structure, and information technology needs. Although much has been learned, we need a comprehensive plan and the critical commitment to support a modern PR program. It is time for radiologists in British Columbia to advocate for such a program, one that can succeed from the outset. We have history on our side, and if we choose correctly, history will not repeat itself. The public and our patients are counting on us to do so.

0846-5371/$ - see front matter © 2014 Canadian Association of Radiologists. All rights reserved. http://dx.doi.org/10.1016/j.carj.2013.12.004
David Coupland, MD
Nanaimo Regional General Hospital
Nanaimo, BC, Canada
E-mail address: [email protected]


References

[1] Brady A, Laoide RO, McCarthy P, et al. Discrepancy and error in radiology: concepts, causes and consequences. Ulster Med J 2012;81:3–9.
[2] Holt J, Goddard P. Editorial: discrepancy and error in diagnostic radiology. WEMJ 2012;111.
[3] Larson DB, Nance JJ. Rethinking peer review: what aviation can teach radiology about performance improvement. Radiology 2011;259:626–32.
[4] Mahgerefteh S, Kruskal JB, Yam CS, et al. Peer review in diagnostic radiology: current state and a vision for the future. Radiographics 2009;29:1221–31.
[5] Guest CB, Regehr G, Tiberius RG. The life long challenges of expertise. Med Educ 2001;35:78–81.
[6] Swanson JO, Thapa MM, Iyer RS, et al. Optimizing peer review: a year of experience after instituting a real-time comment-enhanced program at a children's hospital. AJR Am J Roentgenol 2012;198:1121–5.
[7] Reason J. The nature and varieties of human error. In: The Human Contribution: Unsafe Acts, Accidents and Heroic Recoveries. Burlington, VT: Ashgate; 2008. pp. 29–47.
[8] Park K. Human error. In: Salvendy G, editor. Handbook of Human Factors and Ergonomics. New York, NY: Wiley; 1997. p. 163.
[9] Dekker S. You can't count errors. In: The Field Guide to Understanding Human Error. Burlington, VT: Ashgate; 2006. pp. 65–72.
[10] Cascade PN. Comment on "RADPEER quality assurance program: a multifacility study of interpretive disagreement rates." J Am Coll Radiol 2004;1:295–6.
[11] Farley DO, Haviland A, Champagne S, et al. Adverse event reporting practices by US hospitals: results of a national survey. Qual Saf Health Care 2008;17:416–23.
[12] Fitzgerald R. Performance based assessment of radiology faculty [letter]. AJR Am J Roentgenol 2006;186:265.
[13] Briggs GM, Flynn PA, Worthington M, et al. The role of specialist neuroradiology second opinion reporting: is there added value? Clin Radiol 2008;63:791–5.
[14] Goddard P, Leslie A, Jones A, et al. Error in radiology. Br J Radiol 2001;74:949–51.
[15] Abujudeh HH, Boland GW, Kaewlai R, et al. Abdominal and pelvic computed tomography (CT) interpretation: discrepancy rates among experienced radiologists. Eur Radiol 2010;20:1952–7.
[16] Kruskal JB, Eisenberg R, Sosna J, et al. Quality initiatives: quality improvement in radiology: basic principles and tools required to achieve success. Radiographics 2011;31:1499–509.
[17] Murphy JF. Root cause analysis of medical errors. Ir Med J 2008;101:36.
