SURGICAL PERSPECTIVE

Ensuring Excellence in Centers of Excellence Programs

Ateev Mehrotra, MD, MPH,∗ and Justin B. Dimick, MD, MPH†

Annals of Surgery • Volume 261, Number 2, February 2015

From the ∗Department of Health Care Policy, Harvard Medical School, Boston, MA; and †Centers for Health Outcomes and Policy, University of Michigan, Ann Arbor, MI.
Disclosure: The authors declare no conflicts of interest.
Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.annalsofsurgery.com).
Reprints: Ateev Mehrotra, MD, MPH, Department of Health Care Policy, Harvard Medical School, 180 Longwood Ave, Boston, MA 02115. E-mail: mehrotra@hcp.med.harvard.edu.
Copyright © 2014 Wolters Kluwer Health, Inc. All rights reserved.
ISSN: 0003-4932/14/26102-0237
DOI: 10.1097/SLA.0000000000001071

Improving patient outcomes by directing patients to hospitals designated as "Centers of Excellence" has intuitive appeal. However, several recent empirical evaluations of such programs have found that designated hospitals are, at best, only modestly better than nondesignated hospitals. In this perspective, we explain these surprising findings and propose an alternative method of designating Centers of Excellence.

The premise behind Centers of Excellence programs is logical: prior research has documented wide variation in outcomes across hospitals. Other studies have found associations between better outcomes and a variety of structural and process criteria, including higher volume of cases,1 computerized physician order entry,2 intensivist staffing,3 and high nursing-to-patient ratios.4 Together, this evidence raises the possibility that hospitals selected on some or all of these structural and process criteria will have better outcomes.

On the basis of this foundation, a number of Centers of Excellence programs have been developed by private health plans, federal and state payers, and specialty societies. To note a few examples, Aetna has identified Aetna Institutes, United designates United Health Premium providers, the Blue Cross Blue Shield plans distinguish Blue Distinction Centers, and the American College of Surgeons and the American Society for Metabolic and Bariatric Surgery accredit bariatric surgery centers. Designation criteria vary across programs but typically include volume of cases or local clinical resources such as having an intensive care unit. The hope is that patients will voluntarily seek care at designated hospitals, but in some cases financial incentives are used to encourage their use. For example, WalMart waives all copayments if its employees receive care for selected high-cost surgical procedures at certain hospitals.

Despite designation criteria that should result in better outcomes, evaluations of these programs have yielded disappointing results. Comparisons of bariatric surgery, hip and knee replacement, and spine surgery at designated and nondesignated hospitals find little difference in costs and outcomes (Fig. 1). On the basis of this evidence, Medicare recently reversed its decision to reimburse for bariatric surgery only at Centers of Excellence.

WHY DO CENTERS OF EXCELLENCE PROGRAMS FAIL TO IDENTIFY BETTER HOSPITALS?

The lack of discrimination is largely driven by the selection criteria. Hospital structural measures, the most commonly used selection criteria, are relatively fixed attributes of the hospital, including case volume, intensivist staffing, use of electronic health records, and availability of advanced clinical capacity (eg, intensive care units). These measures are attractive because they are easy and inexpensive to evaluate, but they have important limitations.

First, structural measures may have a weak or nonexistent relationship with outcomes. Although volume-outcome relationships are strong for rare, high-risk surgical conditions, they are much weaker for the common conditions that are often the focus of these programs (eg, bariatric surgery). Moreover, any relationship between structural variables and outcomes may wane over time: the niche capabilities necessary for performing a particular surgery might spread quickly and become less important as training and experience change and care becomes generally safer.

Second, using a single cutoff on a structural measure as a designation criterion limits the difference between designated and nondesignated hospitals. With hospital volume, for example, there are often large differences between the highest- and lowest-quintile volume centers, but a much smaller difference exists between hospitals just above and just below a single cutoff.1 (The simulation sketch at the end of this section illustrates this point.)

Third, the ecological fallacy is also at play. For example, although hospitals with computerized physician order entry may have superior outcomes on average, many hospitals with poor outcomes also have such systems, and vice versa.

Finally, the structural criteria used for designation (eg, intensivist staffing of intensive care units) are associated with better outcomes in cross-sectional studies, but whether hospitals that adopt intensivist staffing subsequently improve their outcomes is less clear.

Processes of care are the other main criteria for Centers of Excellence programs. Processes of care are the details of the clinical care provided to the patient, such as adherence to recommended medications after surgery. Using processes of care for designation has clear advantages, especially for data collection, because there is no need to collect detailed patient data for risk adjustment. Perhaps most importantly, process measures create a prescriptive recipe for how to provide optimal care. However, problems also exist with using processes of care to identify top-performing hospitals: a growing body of evidence, for both medical and surgical conditions, suggests that hospital variation in important outcomes is not explained by differences in compliance with processes of care. For example, hospital adherence to the Surgical Care Improvement Project measures has only a modest relationship to postoperative infections.5 Therefore, using adherence to processes of care as a criterion for Centers of Excellence programs may not be as useful as once hoped.
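To make the single-cutoff point concrete, the following is a minimal simulation sketch (our illustration; every parameter value is an assumption chosen for illustration, not an estimate from any cited study). Under a deliberately weak volume-outcome relationship, the lowest and highest volume quintiles show a visible gap in expected complication rates, whereas hospitals just above and just below one threshold look nearly identical.

```python
# Illustrative simulation: why a single volume cutoff separates hospitals far
# less than a top-versus-bottom quintile comparison does. All parameters are
# assumptions chosen for illustration, not estimates from any cited study.
import numpy as np

rng = np.random.default_rng(0)
n_hospitals = 2000
volume = rng.lognormal(mean=4.0, sigma=0.8, size=n_hospitals)  # annual cases

# Assume a weak volume-outcome relationship: risk falls slightly with log
# volume, plus hospital-level variation unrelated to volume.
base_rate = 0.06
true_rate = (base_rate
             * np.exp(-0.10 * (np.log(volume) - 4.0))
             * np.exp(rng.normal(0.0, 0.25, size=n_hospitals)))

cutoff = np.median(volume)                    # a single designation threshold
q20, q80 = np.quantile(volume, [0.20, 0.80])

print("lowest vs highest volume quintile:",
      true_rate[volume <= q20].mean(), true_rate[volume >= q80].mean())
print("just below vs just above the cutoff (within 10%):",
      true_rate[(volume <= cutoff) & (volume > 0.9 * cutoff)].mean(),
      true_rate[(volume > cutoff) & (volume < 1.1 * cutoff)].mean())
```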

HOW TO MOVE FORWARD WITH CENTERS OF EXCELLENCE PROGRAMS?

It is important to recognize the circularity in the current system of using structural and process criteria to designate Centers of Excellence: these criteria, such as volume of care or performance on process measures, were chosen because of their perceived relationships to outcomes. Why not just select on outcomes? The common barriers to using outcomes are statistical concerns, such as controlling for differences in patient severity of illness, but in many cases these barriers can be overcome. Methodology for risk adjustment is increasingly well developed and can be refined with more clinical data. The growing use of all-payer state databases and clinical registries, and the exchange of data across electronic medical records, allow for easier capture of all cases and for richer clinical data. Moving forward, such efforts will also facilitate the capture of important outcomes beyond short-term complications and mortality, including patient-reported outcomes such as functional status.

In using outcomes to designate hospitals, a key statistical concern is the random year-to-year variation in outcomes, particularly among low-volume hospitals.6 However, newer statistical methods using empirical Bayes techniques help address this variation and are more predictive of future outcomes at a hospital.7 The ability to predict future performance is the key consideration, because the goal is to encourage patients to switch to hospitals with better outcomes.
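As a rough sketch of the reliability-adjustment idea, a simplified empirical Bayes shrinkage is shown below. This illustrates the general technique, not the exact models used in the cited studies, and the between-hospital (signal) variance is assumed rather than estimated: each hospital's observed rate is pulled toward the overall mean in proportion to how noisy its estimate is, so one unlucky year at a low-volume hospital moves its estimate far less than the same observed rate at a high-volume hospital.

```python
# Minimal sketch of empirical Bayes "reliability adjustment" for hospital
# outcome rates. Illustration of the general technique only; the signal
# (between-hospital) variance is assumed, not estimated from data.
import numpy as np

def reliability_adjust(events, cases, signal_var):
    """Shrink each hospital's observed event rate toward the overall mean.

    events, cases : per-hospital counts of adverse events and total cases
    signal_var    : assumed between-hospital variance of the true rates
    """
    events = np.asarray(events, dtype=float)
    cases = np.asarray(cases, dtype=float)
    observed = events / cases
    overall = events.sum() / cases.sum()
    # Sampling (noise) variance of each observed rate, using the overall rate
    # as a plug-in for the binomial variance.
    noise_var = overall * (1.0 - overall) / cases
    reliability = signal_var / (signal_var + noise_var)   # between 0 and 1
    return reliability * observed + (1.0 - reliability) * overall

# Example: a 40-case hospital with one bad year versus a 2000-case hospital.
print(reliability_adjust(events=[4, 100], cases=[40, 2000], signal_var=0.0004))
# The small hospital's 10% observed rate is pulled strongly toward the mean;
# the large hospital's 5% rate barely moves.
```

In practice, the signal variance would itself be estimated from the data (eg, within a hierarchical model) rather than fixed by assumption.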

FIGURE 1. Studies comparing outcomes at COE and non-COE (the Supplemental Digital Content reference list for references in this figure is available at http://links.lww.com/SLA/A737). The original figure is a forest plot; risk/hazard ratios below 1.0 favor the COE and ratios above 1.0 favor the non-COE (plotted on a 0.5 to 2.0 scale).

Study                      Clinical condition and outcome*                     Non-COE rate (%)†   RR/HR (95% CI)

Bariatric surgery
Livingston et al,1 2009    Short-term complications                                  0.09          1.76 (0.73-4.25)‡
Nguyen et al,5 2012        Overall complications                                     0.21          0.96 (0.77-1.20)
Dimick et al,2 2013        Any complications                                         0.11          0.98 (0.90-1.06)

Orthopedic surgery
Mehrotra et al,3 2013      Cervical spinal fusion, short-term complications          1.38          0.90 (0.72-1.12)‡
Mehrotra et al,3 2013      Lumbar spinal fusion, short-term complications            4.14          0.98 (0.86-1.12)‡
Mehrotra et al,3 2013      Lumbar discectomy, short-term complications               3.42          0.95 (0.83-1.07)‡
Mehrotra et al,4 2013      Knee replacement, short-term complications                2.45          0.90 (0.80-1.00)‡
Mehrotra et al,4 2013      Hip replacement, short-term complications                 3.12          0.81 (0.71-0.92)‡

Cancer surgery
Birkmeyer et al,6 2005     Colectomy, operative mortality                            5.50          0.81 (0.66-0.98)‡
Merkow et al,7 2013        Colorectal cancer surgery, 30-d mortality                 1.90          0.76 (0.61-0.95)
Paulson et al,8 2008       Colorectal cancer surgery, operative mortality            6.70          0.60 (0.41-0.88)‡§
Birkmeyer et al,6 2005     Cystectomy, operative mortality                           3.60          0.81 (0.59-1.09)‡
Birkmeyer et al,6 2005     Esophagectomy, operative mortality                       12.30          0.73 (0.54-0.97)‡
Merkow et al,7 2013        Esophagogastric surgery, 30-d mortality                   3.70          0.88 (0.55-1.41)
Birkmeyer et al,6 2005     Gastrectomy, operative mortality                         10.50          0.66 (0.50-0.79)‡
Merkow et al,7 2013        Pancreatic surgery, 30-d mortality                        2.60          0.79 (0.60-1.05)
Birkmeyer et al,6 2005     Pancreatic resection, operative mortality                 7.10          0.87 (0.63-1.19)‡
Birkmeyer et al,6 2005     Lung resection, operative mortality                       5.60          0.79 (0.65-0.94)‡
Paulson et al,8 2008       Rectal surgery, operative mortality                       5.00          0.51 (0.26-1.00)‡§

*Studies often reported multiple outcomes; for consistency across studies we focus on short-term or 30-day outcomes.
†Short-term mortality rate reported among non-COE.
‡Studies varied in how results were reported across odds ratios, hazard ratios, and risk ratios. In studies where odds ratios were reported, the odds ratios were converted to rate ratios to ease comparison, using the method outlined in Zhang and Yu,10 recognizing the controversy over this method.
§Paulson et al do not report confidence intervals. Confidence intervals were estimated using the reported P values and the common method for estimating the P value from an odds ratio standard error.
COE indicates Centers of Excellence; HR, hazard ratio; RR, risk ratio.
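For readers who want to reproduce the conversion described in footnote ‡, the Zhang and Yu10 correction derives an approximate risk ratio from an odds ratio and the outcome rate in the reference (non-COE) group. The sketch below applies the published formula to illustrative values; the inputs are assumptions, not figures taken from any study in Figure 1.

```python
# Zhang and Yu (JAMA 1998) approximation for converting an odds ratio (OR)
# into a risk ratio (RR), given the outcome probability p0 in the reference
# (non-COE) group: RR = OR / (1 - p0 + p0 * OR).
def odds_ratio_to_risk_ratio(odds_ratio: float, p0: float) -> float:
    """Approximate RR from an OR and the baseline (non-COE) outcome rate p0."""
    return odds_ratio / (1.0 - p0 + p0 * odds_ratio)

# Illustrative values only (a hypothetical OR of 0.75 with a 5.5% baseline rate):
print(round(odds_ratio_to_risk_ratio(odds_ratio=0.75, p0=0.055), 3))  # ~0.76
```

For rare outcomes (small p0) the OR and RR nearly coincide, so the correction matters most for the higher-mortality procedures in the figure; as the footnote notes, the method itself remains controversial.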

OTHER CONSIDERATIONS

We recognize that outcomes cannot be used as designation criteria for low-volume procedures. Here, structural and process criteria themselves are more appropriate designation criteria for several reasons. Because the number of cases might be very small at many hospitals, the knowledge and expertise associated with higher volume may be particularly important. Also, the higher morbidity of many low-volume procedures (eg, pancreatic resection) might mean that the availability of other hospital capabilities, such as intensive care, is critical. This is supported by the limited evidence: in evaluations of National Cancer Institute-designated centers for some types of cancer surgery, designated hospitals seem to have lower mortality (Fig. 1).

Another potential goal for Centers of Excellence programs is to lower costs. The hope is that encouraging patients to receive care at hospitals with higher quality will lead to fewer costly hospitalizations. However, the weak relationship between quality and costs in general makes this unlikely.8 Therefore, if reducing costs is a goal, it is important to explicitly include costs as another outcome in the designation criteria. For example, the Blue Cross Blue Shield plans have created a new designation in their Centers of Excellence program, Blue Distinction Plus, where the "plus" indicates that the hospital is also a low-cost center.

CONCLUSIONS

Improving Centers of Excellence programs is critical because such programs can also be harmful. The goal of these programs is to encourage patients to change hospitals. Having patients travel far from their homes and their social support systems may be costly in terms of patient time and emotional well-being. Separating a patient from his or her usual physicians could have a negative impact on coordination of care. Finally, such programs could have unintended consequences: a recent evaluation suggests that Medicare's policy of reimbursing bariatric surgery only at Centers of Excellence had a negative impact on access for racial minorities.9 To outweigh these potential harms, the outcomes at Centers of Excellence must be clearly superior. Moving forward, we need to change how hospitals are designated and, most importantly, provide evidence that the Centers of Excellence are truly excellent.


REFERENCES

1. Halm EA, Lee C, Chassin MR. Is volume related to outcome in health care? A systematic review and methodologic critique of the literature. Ann Intern Med. 2002;137:511–520.
2. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med. 2003;163:1409–1416.
3. Pronovost PJ, Angus DC, Dorman T, et al. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA. 2002;288:2151–2162.
4. Aiken LH, Clarke SP, Sloane DM, et al. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288:1987–1993.
5. Stulberg JJ, Delaney CP, Neuhauser DV, et al. Adherence to surgical care improvement project measures and the association with postoperative infections. JAMA. 2010;303:2479–2485.
6. Dimick JB, Welch HG, Birkmeyer JD. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA. 2004;292:847–851.
7. Dimick JB, Staiger DO, Birkmeyer JD. Ranking hospitals on surgical mortality: the importance of reliability adjustment. Health Serv Res. 2010;45:1614–1629.
8. Hussey PS, Wertheimer S, Mehrotra A. The association between health care quality and cost: a systematic review. Ann Intern Med. 2013;158:27–34.
9. Nicholas LH, Dimick JB. Bariatric surgery in minority patients before and after implementation of a Centers of Excellence program. JAMA. 2013;310:1399–1400.
10. Zhang J, Yu KF. What's the relative risk? A method of correcting the odds ratio in cohort studies of common outcomes. JAMA. 1998;280:1690–1691.
