The Laryngoscope © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

Commentary on the Role of Expert Opinion in Developing Evidence-Based Guidelines

David Eibling, MD; Marvin Fried, MD; Andrew Blitzer, MD; Gregory Postma, MD

Evidence-based clinical practice guidelines (CPGs) help guide busy practitioners in clinical decision making. CPGs are evidence-based in that their recommendations rest on available knowledge derived from published clinical trials. The tasks of finding, assessing, interpreting, and assembling the information in these reports are herculean. Missing or imperfect evidence may lead to the publication of suboptimal guidelines, even when every other component of the development process has been flawlessly performed. This commentary highlights the requirement that expert opinion be explicitly recognized by CPG authoring groups when the published evidence is missing or inadequate.

Key Words: Evidence-based medicine, clinical practice guidelines.

Laryngoscope, 124:355–357, 2014

From the Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, U.S.A.; the Department of Otolaryngology, Albert Einstein College of Medicine, Bronx, New York, U.S.A.; the Department of Otolaryngology, Columbia University College of Physicians and Surgeons, New York, New York, U.S.A.; and the Department of Otolaryngology, Georgia Regents University, Augusta, Georgia, U.S.A.

Editor's Note: This manuscript was accepted for publication March 27, 2013.

All authors are members of the American Broncho-Esophagological Association (ABEA). A.B. and G.P. are past presidents of the ABEA. A.B. and M.F. are members and past presidents of the American Laryngological Association. The authors have no funding, financial relationships, or conflicts of interest to disclose.

Send correspondence to David Eibling, MD, Department of Otolaryngology, 200 Lothrop Street, Pittsburgh, PA 15213. E-mail: [email protected]

DOI: 10.1002/lary.24175

INTRODUCTION

"Information is not knowledge … Knowledge is information that is experienced and interpreted by humans …"
Erik Andriessen and Babette Fahlbruch.1

Decision making is the most critical cognitive task required for the healing arts. The uncertainty that pervades this human endeavor means that optimal choices for a specific patient are frequently unclear. Subtle and not-so-subtle differences exist among humans and situations, leading to wide variability in the diagnostic and therapeutic strategies selected for seemingly identical disease processes.2 Implicit in this variability is that some strategies chosen by practitioners are not just different, but likely to lead to better—or worse—outcomes for the patient, whether due to the disease itself or the selected intervention.

The public, as well as policy makers, have focused ever-increasing attention on the effects of physician decision making on health care costs for society and our nation.3 The United States is now in the midst of a paradigm shift in health care delivery, one widely accepted goal of which is to increase the consistency (and predictability) of clinical decision making. It is widely assumed (without strong evidence) that achieving this goal of reducing variability will result in improved health benefits for individual patients, as well as reduced overall health care costs for society.

CLINICAL PRACTICE GUIDELINES

Evidence-based clinical practice guidelines (CPGs) are viewed as integral to achieving the goal of practice improvement through reduction of variability. CPGs seek to guide decision making through the identification, articulation, and dissemination of "best practices." CPGs are public documents that carry the imprimatur of major organizations or governmental agencies; hence, they exert broad effects on clinical practice, often indirectly, such as through their influence on payers. The American Academy of Otolaryngology–Head and Neck Surgery (AAO-HNS) has assumed a leadership role in the authoring and dissemination of CPGs, with the goal of improving the quality of care for patients with disorders of the ear, nose, and throat in the United States and around the world.

The CPG authoring process detailed by Rosenfeld and Shiffman in 20094 has been widely quoted and is viewed as an authoritative treatise on the topic. This "guideline for guidelines" emphasizes that recommendations within CPGs must be actionable, thereby translating specific knowledge into recommendations for specific actions. Despite careful attention to compliance with ideal CPG authoring processes, imperfections can slip into the document. Responding to stakeholder concerns regarding CPGs, Congress instructed the Agency for Healthcare Research and Quality to evaluate the processes used in identifying the evidence and forming recommendations. In his foreword to the resultant 2011 Institute of Medicine (IOM) report Clinical Practice Guidelines We Can Trust, committee chair Sheldon Greenfield acknowledged that "…there has been considerable concern expressed by physicians, consumer groups, and other stakeholders about the quality of the processes supporting development of CPGs, and the resulting questionable validity of many CPGs…."5 More recently, Kung et al. reviewed 130 CPGs and noted that only 44% of the specific standards recommended by the IOM were satisfied in the sampled CPGs,6 implying that there remains considerable room for improvement.

One evidence-based CPG, the Hoarseness CPG published by the AAO-HNS in 2009,7 illustrated several challenges encountered in the creation of such guidelines when insufficient evidence exists to address critical questions. Other authors have commented on the specific issues posed by this guideline,8 and in the years since its publication the AAO-HNS has substantially improved the authoring process for future CPGs.

IDENTIFYING BEST PRACTICE

Strategies to identify "best practice" depend on examining correlations between specific actions and associated outcomes, correlations that drive (nearly) all human endeavors. Inherent defects in human cognition, such as limitations of memory and attention, inevitably introduce biases into the conclusions—biases that degrade the likelihood that such "new knowledge" actually represents the truth, or "gold standard." Sophisticated strategies to objectively examine correlations, the paragon of which is the well-designed and well-conducted randomized clinical trial (RCT), most effectively free this knowledge from the constraints of human cognition.

Conceptualizing, designing, performing, and reporting clinical trials is not a new endeavor. Such investigations date from biblical times,9 and the findings of these trials fill our professional journals. Identifying the "best practice" hidden in this plethora of information nonetheless remains elusive, because collecting, categorizing, and quantifying the reliability of information is not straightforward. Reporting the statistical measures of information reliability is integrated into the ethos of science itself and is a requirement for publication in all scientific domains. Assessment of information reliability is a core requirement of evidence-based medicine (EBM) and represents an initial step in the CPG authoring process.

Even well-designed and well-conducted RCTs fail to guide many (in fact, most) clinical practice decisions, because many clinical questions have not been investigated rigorously, or even at all. Moreover, RCT findings often do not translate easily into practice, because study subjects are typically unrepresentative of actual patients. We posit that the greatest challenge faced by authors of evidence-based CPGs is the failure of published reports to include all available knowledge.

Knowledge generation is more complex than simply accumulating, collating, evaluating, and stringing together packets of information. These processes represent some of the most perfect, and paradoxically, some of the most imperfect, characteristics of human performance. Translation of domain-specific experiential human knowledge into a form usable by others challenges all collaborative human activity, and medicine is no exception.1 EBM seeks to compensate for the limitations of experiential knowledge, which, even when derived from the entirety of one's experience, is intrinsically flawed by its reliance on human cognition. Experiential knowledge is understood to be subject to higher levels of bias and inaccuracy than published information,10 but in the absence of high-quality published reports ("evidence") it may represent the best knowledge available. The challenge lies in how to assess and report the accuracy of this knowledge.

Within the highly structured domain of EBM, expert opinion is assigned a lower level of evidence in rating schemata than peer-reviewed published reports. EBM ratings, however, fail to acknowledge potential differences between varying levels of expertise, employing a "one size fits all" stratagem. "Expert knowledge" has served our profession well over the years, and even in 2013 it constitutes the basis of most medical practice; hence it cannot be summarily dismissed. The opinions of experts are based not only on their personal clinical experiences, but also on their accumulated knowledge from a wide range of sources. These include the expert's personal assessment of the validity of published reports, new knowledge learned at meetings and symposia, awareness of unpublished studies with "negative" results, and knowledge of the (often unreported) practice styles of colleagues in their field of expertise. The breadth and depth of such knowledge are often difficult to capture and may not be appreciated by those outside the field of expertise, but they are typically recognized by other domain experts.

As in any human endeavor, fundamental conflicts often exist between the opinions of experts due to differences of interpretation. In healthy organizations, these conflicts lead to more in-depth exploration, ideally including efforts to seek objective data to support one contention over another. Modern society expects experts to possess a greater ability to recognize uncertainty than nonexperts.11 Moreover, experts are expected to embrace uncertainty and to articulate their assessment of its significance when stating their opinions (although admittedly this openness may lead to misinterpretation as "theory" by nonscientists). Implicit is the responsibility of experts to seek objective data when uncertainty exists. Paradoxically, when uncertainty is nonexistent (as in the often-cited parachute EBM paper12), it is unlikely that resources will be invested in the search for objective evidence.

Congruent with their role as surrogates for busy practitioners, CPG authoring groups must acknowledge and embrace uncertainty when it exists. In an editorial commenting on the Kung report, entitled "In Guidelines We Cannot Trust," Shaneyfelt observed that uncertainty, as evidenced by disagreement within the guidelines committee, was rarely reported in the final document, leading to the illusion that uncertainty or disagreement did not exist.13

The IOM report also emphasized that evidence-based guidelines should acknowledge uncertainties where they exist.5 By identifying and prioritizing critical unanswered questions, CPG authoring groups can provide a valuable service by highlighting opportunities for future investigation.

An effective strategy to improve the value of expert opinion is to combine the opinions of multiple experts through consensus panels. Accessing collective opinion through senior societies is another strategy to assist in the validation of expert opinion. Membership in senior societies provides opportunities to share information and build collective knowledge, and concomitantly serves the purpose of internal peer review. Introducing collective expert opinion strengthens the credibility of the resultant CPG by acknowledging that the final document is based on both an exhaustive evaluation of the published literature ("evidence") and the wealth of experience possessed by expert members of senior societies. Resultant guidelines should perhaps be termed "hybrid" rather than purely "evidence-based" guidelines. The AAO-HNS has introduced a process to involve senior societies (with their subject matter experts) through representation on the Specialty Societies Advisory Council for current and future CPGs. This strategy ensures greater transparency and expert review of the CPG throughout the authoring process, and it will inevitably lead to improvements in the resultant document.

In conclusion, the authors of this commentary salute the AAO-HNS leadership and the CPG Task Force for their contributions in defining the CPG authoring process, as well as their willingness to revise the process as the need to do so became evident. We recommend that future CPGs explicitly clarify areas of uncertainty and disagreement in the final document. Expert opinion should be considered in CPG recommendations (and clearly identified as such), particularly when high-level evidence is nonexistent. Finally, as recommended by the IOM report,5 we recommend that CPGs be reviewed every 3 to 5 years and revised as indicated.

BIBLIOGRAPHY

1. Andriessen JHE, Fahlbruch B, eds. How to Manage Experience Sharing: From Organisational Surprises to Organisational Knowledge. Bingley, UK: Emerald Group Publishing; 2004.
2. Dartmouth Atlas of Health Care. Available at: www.Dartmouthatlas.org. Accessed March 21, 2013.
3. Sutherland J, Fisher E, Skinner J. Getting past denial—the high cost of health care in the United States. N Engl J Med 2009;361:1227–1230.
4. Rosenfeld R, Shiffman R. Clinical practice guideline development manual: a quality-driven approach for translating evidence into action. Otolaryngol Head Neck Surg 2009;140:S1–S43.
5. Greenfield S, Steinberg EP; Institute of Medicine Committee on Standards for Developing Trustworthy Clinical Practice Guidelines. Clinical Practice Guidelines We Can Trust. Washington, DC: National Academies Press; 2011.
6. Kung J, Miller RR, Mackowiak PA. Failure of clinical practice guidelines to meet Institute of Medicine standards. Arch Intern Med 2012;172:1628–1633.
7. Schwartz SR, Cohen SM, Dailey SH, et al. Clinical practice guideline: hoarseness (dysphonia). Otolaryngol Head Neck Surg 2009;141(3 suppl 2):S1–S31.
8. Johns MM, Sataloff RT, Merati AL, Rosen CA. Shortfalls of the American Academy of Otolaryngology–Head and Neck Surgery's clinical practice guideline: hoarseness (dysphonia). Otolaryngol Head Neck Surg 2010;143:175–177.
9. Daniel 1:8–16. Holy Bible, New International Version; 1984.
10. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003;78:775–780.
11. Rosenfeld R. Uncertainty-based medicine. Otolaryngol Head Neck Surg 2003;128:5–7.
12. Smith G, Pell J. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ 2003;327:1459–1460.
13. Shaneyfelt TM. In guidelines we cannot trust. Arch Intern Med 2012;172:1633–1634.

