Patient Outcomes Research Teams and the Agency for Health Care Policy and Research

Marcel E. Salive, M.D., Jennifer A. Mayfield, M.D., and Norman W. Weissman

HSR: Health Services Research 25:5 (December 1990)

Marcel E. Salive, M.D., M.P.H. is Medical Officer, Center for Medical Effectiveness Research, Agency for Health Care Policy and Research (AHCPR); Jennifer A. Mayfield, M.D., M.P.H. is Medical Officer, Center for General Health Services Extramural Research, AHCPR; and Norman W. Weissman, Ph.D. is Director, Center for General Health Services Extramural Research, AHCPR. Address correspondence and requests for reprints to Dr. Salive, Epidemiology, Demography and Biometry Program, National Institute on Aging, National Institutes of Health, 7550 Wisconsin Avenue, Room 612, Bethesda, MD 20892.

For over 20 years the Agency for Health Care Policy and Research (AHCPR) and its predecessor, the National Center for Health Services Research and Health Care Technology Assessment (NCHSR), have supported research on the quality of health care. The Agency for Health Care Policy and Research was established by Congress as the eighth agency of the U.S. Public Health Service (PHS), to highlight a new emphasis on medical effectiveness research (PL 101-239). In carrying out the law, the Department of Health and Human Services created the Medical Treatment Effectiveness Program (MEDTEP), which is coordinated by AHCPR but also involves other agencies of the PHS, the Health Care Financing Administration (HCFA), and other governmental entities. MEDTEP expands upon and joins several earlier programs: NCHSR's (1988) Patient Outcome Assessment Research Program, HCFA's Medical Treatment Effectiveness Initiative (Roper et al. 1988), and the congressional emphasis on the development of practice guidelines (Agency for Health Care Policy and Research 1990). MEDTEP consists of four elements: medical treatment effectiveness research, development of data bases for such research, development of clinical guidelines, and the dissemination of research findings and clinical guidelines.

This article focuses on the background and evolution of one element of the effectiveness research program, the Patient Outcomes Research Teams (PORTs). AHCPR, specifically the newly established Center for Medical Effectiveness Research, has recently awarded grants to several multidisciplinary PORTs (described in the accompanying articles). The teams, which include health services researchers together with community and academic physicians, will identify and analyze the outcomes of alternative practice patterns and will develop and test methods to reduce inappropriate variations (National Center for Health Services Research and Health Care Technology Assessment 1988; Bowen and Burke 1988). Medical effectiveness research is a direct extension of the work previously supported by AHCPR and NCHSR on quality of care, variations in medical practice, health status measures, decision analysis, and technology assessment. This research holds great potential to improve the quality of health care and the allocation of health care resources.

QUALITY OF CARE

Donabedian (1988) has provided a widely accepted definition of quality of medical care: "the ability to achieve desirable objectives [states of health] using legitimate means [various aspects of health care]." He has also developed one of the most widely accepted systems for assessing the quality of care, based on the measurement of structure, process, and outcome of care (Donabedian 1966).

The first method, structural evaluation of medical care quality, deals with stable resources needed to provide care, such as provider qualifications, administrative organization, and facilities. Examples include school accreditation, licensure, specialty board certification, and continuing medical education credits. Although structural elements form the basis of the health care system, research has found only a weak correlation between structural assessment and other measures of the quality of care a physician or hospital provides.

In the late 1960s, the federal government sought better methods to evaluate the quality of care being provided through the newly established Medicare and Medicaid programs. NCHSR, after its establishment in 1968, developed a research and demonstration program called the Experimental Medical Care Review Organization (EMCRO) (Sanazaro, Goldstein, Roberts, et al. 1972). This program used volunteer physicians to develop methods of evaluating physician performance that would meet scientific and technical standards of objectivity and reliability. The majority of these centers developed process measures of quality of care, which compared actual care delivered with relevant standards. NCHSR funded 16 EMCRO centers, many of which later became professional standards review organizations (PSROs).

The quality assurance systems using process measures developed in the past two decades are now widespread throughout the medical care system, and theoretically function both to control costs and to ensure quality care. Nevertheless, the PSROs and subsequent peer review organizations (PROs) have focused primarily on the former (Dans, Weiner, and Otter 1985). Although the criteria they use were developed carefully, thoroughly, and with scientifically sound methods, some of the criteria made little sense to clinicians. Differences in physician practice by specialty, among patient populations, and among regions also thwarted attempts to develop a valid and generalizable set of rules for patient management.

A major criticism of process criteria is that they could increase delivery of care that does not affect the ultimate outcome (Brook and Appel 1973). Furthermore, following optimal process does not always assure an optimal outcome. Studies have shown no relationship or only a weak association between process of care and outcome for some common medical problems (Brook and Appel 1973). An example of a successful process measure with a failed outcome is the statement "the surgery was a success, but the patient died." Obviously, if we want to improve the outcome of health care, we need to measure it directly in terms of death, disability, and quality of life, rather than measuring the structure or process of care. A healthy patient outcome, since it is the general goal of medical care, has face validity as a quality measure. Some limitations remain since, for example, an adverse outcome may have multiple causes, possibly including the care delivered.

Florence Nightingale, who kept careful records of her charges, their care, and the outcomes, was one of the first to conduct outcomes research. Ernest Codman, a Boston surgeon, was one of the earliest proponents of quality evaluation of medical care.
In the early twentieth century, Codman developed the End Result System, "a medical records system which allowed for a medical audit to measure surgical and medical outcomes" (Reverby 1981, 158). Many modern leaders in medical records, quality assurance, and outcomes research view Codman as a major figure and precursor to current work. Although Codman proposed outcome-based quality measures in 1910, structural measures held ascendancy until the 1960s, when the "health accounting" system of quality assurance was begun (Williamson 1971; Williamson, Aronovitch, Simonson, et al. 1975). Health accounting initially sets priorities, then assesses outcomes, determines and implements actions to ensure improvement, and finally reassesses outcomes.

Another method of outcome measurement uses the "sentinel event" as described by Rutstein and co-workers (1976) and recently developed for in-hospital deaths (Hannan et al. 1989). This system flags clearly adverse outcomes for further review to determine whether process problems might have led to the event. Brook and others (1977) reviewed a number of these methods and developed outcome criteria for eight conditions.
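The sentinel-event logic described above is simple enough to sketch in a few lines. The following is a toy illustration only; the field names, the list of sentinel outcomes, and the example records are hypothetical and are not drawn from any system cited in the text.

```python
# Toy sketch of sentinel-event screening: clearly adverse outcomes are
# flagged so the underlying records can be pulled for chart review.
# Outcome labels and record fields are hypothetical.

SENTINEL_OUTCOMES = {"in-hospital death", "unplanned return to surgery"}

def flag_for_review(discharges):
    """Return the discharge records whose outcome is a sentinel event."""
    return [d for d in discharges if d["outcome"] in SENTINEL_OUTCOMES]

discharges = [
    {"id": 1, "outcome": "routine discharge"},
    {"id": 2, "outcome": "in-hospital death"},
    {"id": 3, "outcome": "unplanned return to surgery"},
]

flagged = flag_for_review(discharges)
print([d["id"] for d in flagged])  # ids of records pulled for review
```

The point of the method is that the flag itself asserts nothing about quality; it only selects cases where a process review is warranted.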

EFFECTIVENESS RESEARCH

Two words often used in describing evaluations of medical practice outcomes are "efficacy" and "effectiveness." As generally used, efficacy refers to what a method can accomplish in expert hands when correctly applied to an appropriate patient, while effectiveness refers to its performance in customary practice (Institute of Medicine 1985). Although this may appear to be a semantic quibble (Bunker 1988), the difference between initial use by experts and diffusion into daily practice is a major focus of both patient outcomes research and quality assurance activities.

The most unequivocal method of evaluating efficacy is a controlled clinical trial. The experimental design involves selecting representative subjects, randomizing them to treatment and control groups, and following the groups for the outcomes of interest, such as prevention, cure, or death. These studies often require a large sample size for conclusive results, are very expensive and time-consuming to mount, and may not contain a widely representative sample. "Double-blinding" the study subjects and physicians through the use of placebos is possible with studies of medications but much more difficult with procedures. Ethical problems may also arise, such as giving a placebo in a severe illness or when a technology or procedure has already become the standard method of care prior to rigorous evaluation. Alternatives to the controlled clinical trial have been developed to address such problems, but designs such as cohort studies, case-control studies, surveys, and case reports are less persuasive.

Direct evidence of efficacy, such as a randomized clinical trial, is not available for many types of treatments. Some evidence of efficacy comes from reviewing available data. This may be done through group judgment (e.g., consensus development) or, more quantitatively, through meta-analysis. Consensus development appears to be better at identifying grossly inappropriate care, such as outliers (e.g., Chassin, Kosecoff, Park, et al. 1987), than at defining appropriate or optimal treatment (Park, Fink, Brook, et al. 1989). Meta-analysis is a formal, quantitative, and uniform evaluation of dimensions across multiple studies, useful in clinical research, epidemiology, and health services research (for example, Louis, Fineberg, and Mosteller 1985).

Large data bases and management information systems have facilitated the development of patient outcomes research. Claims data from the Medicare program and other payers, statewide hospital discharge abstracts, and other large sources of data have been used to describe variations in medical practices (Connell, Diehr, and Hart 1987). Linking data in these data bases has facilitated the longitudinal study of the outcomes of medical care. Computerized medical records, such as the COSTAR system, have facilitated the gathering of large, clinically relevant data bases; these have also been analyzed longitudinally to assess outcomes of care (Barnett 1984; Pryor, Califf, Harrell, et al. 1985).

Decision analysis is a quantitative modeling technique for evaluating alternative treatment options for clinical problems when uncertainty is present. A key issue in these analyses is the value of outcomes to the patients and physicians (utility), which may vary depending on the framing of the question (Tversky and Kahneman 1981). Outcomes can also be compared for an individual patient; for example, some patients may feel that three months of home confinement is approximately equivalent to eight years of home dialysis (Sackett and Torrance 1978).
This type of analysis has been used to analyze treatments for asymptomatic prostatic hypertrophy (Barry et al. 1988) and gallstones (Ransohoff et al. 1983). Although many clinical trials have used mortality, morbidity, or physiologic changes as outcome measures, health services research has led to the development of broader measures of health. A number of these functional status measures have been proved reliable and valid, for example, the Sickness Impact Profile (Bergner, Bobbitt, Pollard, et al. 1976), the Index of Activities of Daily Living (Katz, Ford, Moskovitz, et al. 1963), and the General Well-Being Schedule (Monk 1981). These measures allow clinical trials to determine if functioning can be improved by an intervention, and are now widely employed.
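The core arithmetic of a decision analysis like those cited above is an expected-utility comparison across treatment options. The sketch below is illustrative only: every probability and utility value is a hypothetical placeholder invented for the example, not a figure from the prostatic hypertrophy or gallstone analyses.

```python
# Minimal expected-utility comparison of two hypothetical treatment options.
# All probabilities and utilities are illustrative placeholders.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * u for p, u in outcomes)

# Option A: surgery -- small operative risk, high utility on success.
surgery = [(0.02, 0.00),   # operative death
           (0.90, 0.95),   # symptom relief
           (0.08, 0.60)]   # persistent symptoms

# Option B: watchful waiting -- no operative risk, moderate utility.
watchful_waiting = [(0.70, 0.75),   # symptoms remain tolerable
                    (0.30, 0.50)]   # symptoms worsen

eu_surgery = expected_utility(surgery)          # 0.903 with these numbers
eu_waiting = expected_utility(watchful_waiting) # 0.675 with these numbers
print(f"surgery: {eu_surgery:.3f}, watchful waiting: {eu_waiting:.3f}")
```

The framing effects noted by Tversky and Kahneman enter through the utility values themselves: the same patient may assign different utilities to "90 percent symptom relief" and "10 percent chance of no relief," which can reverse the ranking of the options.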


SMALL-AREA VARIATIONS RESEARCH

Small-area variation studies (Wennberg and Gittelsohn 1973; Wennberg, Bunker, and Barnes 1980; Paul-Shaheen, Clark, and Williams 1987) have played a leading role in the move to outcomes evaluation. Since the early 1970s, Wennberg and colleagues have reported wide variation in the numbers and types of procedures performed by physicians on apparently similar patients, even within small and comparable communities. Even when the researchers attempted to control for differences in age, race, and other demographic features, major differences appeared in the medical care received (Wennberg, Freeman, and Culp 1987).

The economic impact of practice variations may be considerable. The comparison of health care in Boston and New Haven revealed that per capita costs were 50 percent lower in New Haven, although the outcomes of care seemed to be equivalent. If the New Haven costs were extrapolated to the population of Boston, a savings of $300 million in hospital expenditures could be predicted (Wennberg, Freeman, and Culp 1987). However, the practice changes required to achieve these savings must be based on more rigorous assessment of outcomes, to determine that such practices do not represent the denial of effective care.

Physician uncertainty about which of two or more treatment patterns is optimal may be the factor that underlies the regional and small-area variations in rates of surgery and medical admission to the hospital (Eddy 1984). For example, change in tonsillectomy rates was found after variation studies were conducted and their results disseminated. This change in rates may support the notion that treatment differences are due to physician practice style rather than innate differences between the two populations (Wennberg et al. 1977).

The widespread documentation of variations in medical care led Wennberg to ask the question: Which rate is right? That is, which rate leads to the best possible patient outcomes? For a given condition in a given patient, is one type of operation more effective than another? When is it appropriate to choose medical treatment rather than surgery? When is it appropriate to watch and wait? In 1987, NCHSR and HCFA joined together to award Wennberg a grant to extend his methodology through an investigation of the treatment of prostatic hypertrophy, using claims data and primary data collected to determine outcomes of alternative treatments for this condition (Wennberg, Mulley, Hanley, et al. 1988; Barry et al. 1988; Fowler, Wennberg, Timothy, et al. 1988; Roos, Wennberg, Malenka, et al. 1989). This project demonstrated the feasibility and productivity of forming multidisciplinary teams to address areas of uncertainty through patient outcomes research.

PATIENT OUTCOMES RESEARCH TEAMS (PORTs)

In 1986, Congress established NCHSR's Patient Outcome Assessment Research Program through the Omnibus Reconciliation Act of 1986 (PL 99-509), with funding from the Medicare Trust Fund. Following a program announcement (National Center for Health Services Research 1986), NCHSR funded studies of the treatment of prostatic hypertrophy, heart disease, hypertension, diabetes, and rheumatoid arthritis; coronary artery bypass surgery; and intensive care therapy. Methodology research is being supported on the development of better outcome measures; meta-analysis; small-area variation; market-area determination; and the evaluation of methods for effective dissemination. The applications for research funds were evaluated for scientific merit by the usual peer review process (National Center for Health Services Research 1988).

The Patient Outcomes Research Teams (PORTs) represent the next phase of medical treatment effectiveness research at AHCPR. Four PORTs were funded in September 1989 to study acute myocardial infarction; benign prostatic hyperplasia and locally invasive prostatic carcinoma; low back pain; and cataracts. Seven planning grants were awarded at that time for preliminary work to assist in further development of PORTs to study the treatment of biliary tract disease, colon cancer, hip fracture, peripheral vascular disease, chronic ischemic heart disease, and stroke. Through August 1990, additional PORT grants were awarded for the study of total knee replacement, chronic ischemic heart disease, and biliary tract disease. Additional PORTs will be awarded as applications and funding permit. Congressional authorizations through 1994 provide for increases in the Medical Treatment Effectiveness Program up to $185 million in fiscal year 1994.
Other PORT activity under MEDTEP will include the study of health care treatment effectiveness issues for the general population (i.e., not limited to the Medicare population), for example, cesarean delivery, otitis media, dental implants, lens extraction, and some procedures that, although used on patients of all ages, might have different risks and benefits in younger populations. About one-third of the awards will focus on non-Medicare concerns and two-thirds on Medicare concerns, reflecting the division between general revenue and Medicare trust funds in AHCPR's authorizing legislation.

AHCPR intends to expand the research programs to include other investigator-initiated assessment projects, controlled trials and prospective studies as required, data development and maintenance, training of research manpower, and demonstrations of the effectiveness of the research products. Controlled clinical trials will be supported to answer important questions raised by the research that cannot be answered using less rigorous methods.

DISSEMINATION

As research is completed under MEDTEP, the results will be widely disseminated. A key component of the projects funded is the demonstration of new methods of informing clinicians of the results of patient outcomes research, and assessment of their effect on clinical practices and patient outcomes. A recent review highlighted the need for evaluation of better strategies to improve clinical practice and bring it closer to the highest quality of care (Lomas and Haynes 1988). The results from the research teams will be transmitted to the Health Resources and Services Administration for implementation through medical education. It is anticipated that this knowledge base of patient outcomes research will be useful to practicing physicians, patients, and those who pay for health care services, including the Health Care Financing Administration (Roper et al. 1988) and private third-party payers. All groups could utilize these results to provide the highest quality of care with the optimal outcome in a cost-effective manner.

CONCLUSIONS

Relman (1988) has described these recent developments as three revolutions in medicine. First came the Era of Expansion, from World War II through the late 1960s; then the Era of Cost Containment; and, just beginning, the Era of Assessment and Accountability, in which we refocus on the quality and effectiveness of health care. He describes the goal of the current era as "to achieve an equitable health care system, of satisfactory quality, at a price we can afford" (p. 1222). Health services researchers will be part of the leading edge of Relman's third era, and must expand their role in performing and disseminating community-based and academic research on patient outcomes when controversy exists.

ACKNOWLEDGMENTS

We are indebted to James McAllister, Larry Patton, Ira E. Raskin, Lawrence Rose, and Steven Woolf for helpful comments.

REFERENCES

Agency for Health Care Policy and Research. "Medical Treatment Effectiveness Research." Agency for Health Care Policy and Research Program Note. Rockville, MD: Department of Health and Human Services, Public Health Service, March 1990.

Barnett, G. O. "The Application of Computer-Based Medical Record Systems in Ambulatory Practice." New England Journal of Medicine 310, no. 25 (21 June 1984):1643-50.

Barry, M. J., A. G. Mulley, F. J. Fowler, and J. W. Wennberg. "Watchful Waiting vs. Immediate Transurethral Resection for Symptomatic Prostatism: The Importance of Patients' Preferences." Journal of the American Medical Association 259, no. 20 (27 May 1988):3010-17.

Bergner, M., R. A. Bobbitt, W. E. Pollard, D. P. Martin, and B. S. Gilson. "The Sickness Impact Profile: Validation of a Health Status Measure." Medical Care 14, no. 1 (January 1976):57-67.

Bowen, O. R., and T. R. Burke. "New Directions in Effective Quality of Care: Patient Outcome Research." Federation of American Health Systems Review 21, no. 5 (September-October 1988):50-53.

Brook, R. H., and F. A. Appel. "Quality-of-Care Assessment: Choosing a Method for Peer Review." New England Journal of Medicine 288, no. 25 (21 June 1973):1323-29.

Brook, R. H., A. Davies-Avery, S. Greenfield, L. J. Harris, T. Lelah, N. E. Solomon, and J. E. Ware, Jr. "Assessing the Quality of Medical Care Using Outcome Measures: An Overview of the Method." Medical Care 15, no. 9 (1977, Supplement):1-65.

Bunker, J. P. "Is Efficacy the Gold Standard for Quality Assessment?" Inquiry 25, no. 1 (Spring 1988):51-58.

Chassin, M. R., J. Kosecoff, R. E. Park, C. M. Winslow, K. L. Kahn, N. J. Merrick, J. Keesey, A. Fink, D. H. Solomon, and R. H. Brook. "Does Inappropriate Use Explain Geographic Variations in the Use of Health Care Services? A Study of Three Procedures." Journal of the American Medical Association 258, no. 18 (1987):2533-37.

Connell, F. A., P. Diehr, and L. G. Hart. "The Use of Large Data Bases in Health Care Studies." Annual Review of Public Health 8 (1987):51-74.

Dans, P. E., J. P. Weiner, and S. F. Otter. "Peer Review Organizations: Promises and Pitfalls." New England Journal of Medicine 313, no. 18 (31 October 1985):1131-37.

Donabedian, A. "Evaluating the Quality of Medical Care." Milbank Memorial Fund Quarterly 44, no. 3, Part 2 (1966):166-206.

Donabedian, A. "Quality Assessment and Assurance: Unity of Purpose, Diversity of Means." Inquiry 25, no. 1 (Spring 1988):173-92.

Eddy, D. M. "Variations in Physician Practice: The Role of Uncertainty." Health Affairs 3, no. 2 (Summer 1984):74-89.

Fowler, F. J., Jr., J. E. Wennberg, R. P. Timothy, M. J. Barry, A. G. Mulley, and D. Hanley. "Symptom Status and Quality of Life following Prostatectomy." Journal of the American Medical Association 259, no. 20 (27 May 1988):3018-22.

Hannan, E. L., H. R. Bernard, J. F. O'Donnell, and H. Kilburn, Jr. "A Methodology for Targeting Hospital Cases for Quality of Care Record Reviews." American Journal of Public Health 79, no. 4 (April 1989):430-36.

Institute of Medicine. Assessing Medical Technologies. Washington, DC: National Academy Press, 1985.

Katz, S., A. B. Ford, R. W. Moskowitz, B. A. Jackson, and M. W. Jaffe. "Studies of Illness in the Aged. The Index of ADL: A Standardized Measure of Biological and Psychosocial Function." Journal of the American Medical Association 185, no. 12 (21 September 1963):914-19.

Lomas, J., and R. B. Haynes. "A Taxonomy and Critical Review of Tested Strategies for the Application of Clinical Practice Recommendations: From 'Official' to 'Individual' Clinical Policy." In Implementing Preventive Services. Edited by R. N. Battista and R. S. Lawrence. American Journal of Preventive Medicine 4, no. 4 (1988 Supplement):77-94.

Louis, T. A., H. V. Fineberg, and F. Mosteller. "Findings for Public Health from Meta-Analysis." Annual Review of Public Health 6 (1985):1-20.

Monk, M. "Blood Pressure Awareness and Psychological Well-Being in the Health and Nutrition Examination Survey." Clinical and Investigative Medicine 4, no. 3/4 (1981):183-89.

National Center for Health Services Research and Health Care Technology Assessment. "NCHSR Solicits Proposals for Research in Medical Practice Variations and Patient Outcomes." National Center for Health Services Research and Health Care Technology Assessment Program Note. Rockville, MD: Department of Health and Human Services, Public Health Service, September 1986.

National Center for Health Services Research and Health Care Technology Assessment. "Patient Outcome Assessment Research Program Extramural Assessment Teams." National Center for Health Services Research and Health Care Technology Assessment Program Note. Rockville, MD: Department of Health and Human Services, Public Health Service, November 1988.

Park, R. E., A. Fink, R. H. Brook, M. R. Chassin, K. L. Kahn, N. J. Merrick, J. Kosecoff, and D. H. Solomon. "Physician Ratings of Appropriate Indications for Three Procedures: Theoretical Indications vs. Indications Used in Practice." American Journal of Public Health 79, no. 4 (April 1989):445-47.

Paul-Shaheen, P., J. D. Clark, and D. Williams. "Small Area Analysis: A Review and Analysis of the North American Literature." Journal of Health Politics, Policy and Law 12, no. 4 (Winter 1987):741-809.

Pryor, D. B., R. M. Califf, F. E. Harrell, Jr., M. A. Hlatky, K. L. Lee, D. B. Mark, and R. A. Rosati. "Clinical Data Bases: Accomplishments and Unrealized Potential." Medical Care 23, no. 5 (May 1985):623-47.

Ransohoff, D. F., W. A. Gracie, L. B. Wolfenson, and D. Neuhauser. "Prophylactic Cholecystectomy or Expectant Management for Silent Gallstones: A Decision Analysis." Annals of Internal Medicine 99, no. 2 (February 1983):199-204.

Relman, A. S. "Assessment and Accountability: The Third Revolution in Medical Care." New England Journal of Medicine 319, no. 18 (3 November 1988):1220-22.

Reverby, S. "Stealing the Golden Eggs: Ernest Amory Codman and the Science and Management of Medicine." Bulletin of the History of Medicine 55, no. 2 (Summer 1981):156-71.

Roos, N. P., J. E. Wennberg, D. J. Malenka, E. S. Fisher, K. McPherson, T. F. Anderson, M. M. Cohen, and E. Ramsey. "Mortality and Reoperation after Open and Transurethral Resection of the Prostate for Benign Prostatic Hyperplasia." New England Journal of Medicine 320, no. 17 (27 April 1989):1120-24.

Roper, W. L., W. Winkenwerder, G. M. Hackbarth, and H. Krakauer. "Effectiveness in Health Care: An Initiative to Evaluate and Improve Medical Practice." New England Journal of Medicine 319, no. 18 (3 November 1988):1197-1202.

Rutstein, D. D., W. Berenberg, T. C. Chalmers, C. D. Child, III, A. P. Fishman, and E. B. Perrin. "Measuring the Quality of Medical Care: A Clinical Method." New England Journal of Medicine 294, no. 11 (11 March 1976):582-88.

Sackett, D. L., and G. W. Torrance. "The Utility of Different Health States as Perceived by the General Public." Journal of Chronic Diseases 31, no. 11 (November 1978):697-704.

Sanazaro, P. J., R. L. Goldstein, J. S. Roberts, D. B. Maglott, and J. W. McAllister. "Research and Development in Quality Assurance. The Experimental Medical Care Review Organization Program." New England Journal of Medicine 287, no. 22 (30 November 1972):1125-31.

Tversky, A., and D. Kahneman. "The Framing of Decisions and the Psychology of Choice." Science 211 (30 January 1981):453-58.

Wennberg, J., and A. Gittelsohn. "Small Area Variations in Health Care Delivery." Science 182 (14 December 1973):1102-1108.

Wennberg, J. E., L. Blowers, R. Parker, and A. M. Gittelsohn. "Changes in Tonsillectomy Rates Associated with Feedback and Review." Pediatrics 59, no. 6 (June 1977):821-26.

Wennberg, J. E., J. L. Freeman, and W. J. Culp. "Are Hospital Services Rationed in New Haven or Over-Utilised in Boston?" Lancet 1, no. 8543 (23 May 1987):1185-89.

Wennberg, J. E., J. P. Bunker, and B. Barnes. "The Need for Assessing the Outcome of Common Medical Practices." Annual Review of Public Health 1 (1980):277-95.

Wennberg, J. E., A. G. Mulley, D. Hanley, R. P. Timothy, F. J. Fowler, Jr., N. P. Roos, M. J. Barry, K. McPherson, E. R. Greenberg, D. Soule, T. Bubolz, E. S. Fisher, and D. J. Malenka. "An Assessment of Prostatectomy for Benign Urinary Tract Obstruction." Journal of the American Medical Association 259, no. 20 (27 May 1988):3027-30.

Williamson, J. W. "Evaluating Quality of Patient Care. A Strategy Relating Outcome and Process Assessment." Journal of the American Medical Association 218, no. 4 (25 October 1971):564-69.

Williamson, J. W., S. Aronovitch, L. Simonson, C. Ramirez, and D. Kelly. "Health Accounting: An Outcome-Based System of Quality Assurance: Illustrative Application to Hypertension." Bulletin of the New York Academy of Medicine 51, no. 6 (June 1975):727-38.
