GOVERNMENT, LAW, AND PUBLIC HEALTH PRACTICE

and data analysis. M. Stefanak served as the practitioner principal investigator of the project and was involved in the conceptualization of the project, the development of the study design, qualitative data collection, and interpretation of results, as well as the drafting of the article. J. Filla served as the project coordinator and was involved in the development of the study design, qualitative data collection, interpretation of results, and drafting of the article. R. Pradhan was involved in the development of the study design, administrative data collection, analysis of data, and interpretation of results, as well as drafting of the article. S. A. Smith was involved in the development of the study design, administrative data collection, preparation of data for analysis, and review and editing of the article.

Acknowledgments
Funding supporting this project was provided through the Ohio Research Association for Public Health Improvement by the Practice Based Research Network National Coordinating Center at the University of Kentucky College of Public Health and the Robert Wood Johnson Foundation.

We also acknowledge Ken Slenkovich, Michelle Menegay, Scott Frank, Tegan Beechey, and Joe Mazzola of the Ohio Department of Health, all of whom provided key assistance at various points in this project. We also acknowledge and thank the Ohio local health officials who shared their time and insights with us and, in so doing, assisted in critical ways with the research underlying this article.

Human Participant Protection
This study was approved by the institutional review boards of Kent State University and the University of Arkansas for Medical Sciences.


Measuring Public Health Practice and Outcomes in Chronic Disease: A Call for Coordination

Deborah S. Porterfield, MD, MPH, Todd Rogers, PhD, LaShawn M. Glasgow, DrPH, and Leslie M. Beitsch, MD, JD

A strategic opportunity exists to coordinate public health systems and services researchers’ efforts to develop local health department service delivery measures and the efforts of divisions within the Centers for Disease Control and Prevention’s National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) to establish outcome indicators for public health practice in chronic disease. Several sets of outcome indicators developed by divisions within NCCDPHP and intended for use by state programs can be tailored to assess outcomes of interventions within smaller geographic areas or intervention settings. Coordination of measurement efforts could potentially allow information to flow from the local to the state to the federal level, enhancing program planning, accountability, and even subsequent funding for public health practice. (Am J Public Health. 2015;105:S180–S188. doi:10.2105/AJPH.2014.302238)

In 2004, the current director of the Centers for Disease Control and Prevention (CDC) published a commentary sounding an alarm about the lack of activity in and measurement of chronic disease prevention and control in local public health practice.1 Since then, several small studies have examined practice in local health departments (LHDs) for specific chronic diseases, such as diabetes control and obesity prevention.2---4


Other large-scale studies have examined associations between aspects of practice such as health department spending and varying sets of outcome indicators for chronic disease practice.5---7 More recently, investigators in public health systems and services research (PHSSR) have initiated the development of sets of comprehensive service delivery and activities measures in maternal and child health, infectious disease, and chronic disease for LHDs.8,9



TABLE 1—Definitions of Key Terms

Practice: Structures and processes in public health agencies, including service delivery and program implementation. Also includes outputs of public health practice.

Outcomes: Short-, intermediate-, and long-term outcomes of public health practice.

Construct: A theoretical or hypothetical concept that is not directly observable. In most logic models, which are graphic representations of the theory of change that links antecedent and consequent constructs, the box labels are typically constructs (e.g., proper nutrition).

Indicator: Most commonly defined as specific, observable, and measurable characteristics or changes.

Measure: The attribute being observed, such as responses to an item on a survey or a biometric characteristic, used to assess indicator status.

Note. The terms indicator and measure are often used interchangeably. In referencing other work, we have attempted to use the terminology used in the original documents.

These activities have evolved within the context of several national efforts to develop and catalog community health indicators more generally, such as Healthy People 2020, Chronic Disease Indicators, County Health Rankings and Roadmaps, Community Health Status Indicators, and the Health Indicator Warehouse.10---15

A strategic opportunity exists for closer coordination among researchers and practitioners conducting individual studies, as well as those developing sets of “dashboard” measures at the local level, and the efforts of divisions within the CDC’s National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) to establish outcome indicators for public health practice in chronic disease. These indicators, although developed by divisions within CDC and intended for use by funded state programs, can be tailored to assess outcomes of interventions in smaller geographic areas (e.g., county) or intervention settings (e.g., health system or worksite). The Institute of Medicine has commented on the proliferation of indicator sets, which can “cause confusion, is inefficient, and impairs valid comparisons.”16(p55)

For PHSSR researchers and practitioners developing service delivery and activities measures for LHDs, we provide an overview of the federal efforts in chronic disease indicator development, highlight methods used, and catalog relevant resources, some of which are only found in the fugitive literature. Alignment and coordination of activities to measure public health practice and outcomes could lead to the development of more standardized measurement, which in turn could allow information and data to flow from the local to the state to the federal level to enhance program planning, program impact, accountability, and even subsequent funding for chronic disease prevention and control. It may also allow for a common use of language that could facilitate greater collaboration and resource sharing during this era of resource constraints.

FIGURE 1—Relationship between public health practice indicators and outcome indicators.

Note. NCCDPHP = National Center for Chronic Disease Prevention and Health Promotion. Public health practice indicators include measures of structure, process, and outputs; outcome indicators include short-term, intermediate, and long-term outcomes. In comparison, the service delivery and program implementation indicators also include structures, processes, and outputs, and the outcomes indicator sets of the National Center for Chronic Disease Prevention and Health Promotion described in this article include outputs as well as outcomes.

MEASUREMENT OF PUBLIC HEALTH PRACTICE AND OUTCOMES

Broadly speaking, measurement of public health practice can be at the level of structure, process, or outcome17; in this article, we use the term practice to denote structures and processes in public health agencies, including service delivery and program implementation, and use outcomes to denote the short-, intermediate-, and long-term results of public health practice. Table 1 displays definitions of these and other key terms used. We distinguish between the terms indicator and measure, as noted in Table 1, but because the terms are often used interchangeably, for the work described here we attempted to use the terminology used in the referenced documents. Figure 1, a schematic of measurement of public health practice and outcomes, displays the potential overlap and relationship among the sets of indicators discussed in this article.



In particular, overlap can exist between indicators considered outputs of a program versus short-term outcomes.

Appropriate indicators vary with the purpose of the measurement activity, whether research, evaluation, or performance measurement.18 Indicators used in research and evaluation tend to be outcomes, whereas performance measures usually include both processes and outcomes. Because performance measurement can be considered a subset of evaluation studies,19,20 performance measures in a specific content area in public health practice are likely to be a subset of a larger set of indicators developed for evaluation, research, performance management, or all 3. Although all types of indicators share the requirements for validity and reliability, several authors have detailed additional desirable characteristics of performance measures, many of which seem useful to consider for other types of measurement as well. Derose et al.17 specified that if the measure is an outcome, it should be linked (logically, if not with scientific evidence) to processes (public health services or programs) and vice versa. Performance measures should also be properly calibrated or sensitive enough to pick up important changes in public health processes. Roper and Mays,18 though discussing characteristics of performance measures that may make these measures more suitable for research, added several useful items to consider for all types of measurement: measures should reflect a structure, process, or outcome with a large expected impact on health; should reflect a process or condition within the organization’s control (not that of other organizations or outside forces) and specific to public health; and should demonstrate substantial variation reflecting meaningful underlying differences. Lichiello and Turnock21 delineated the following characteristics of performance measures, emphasizing the practicality of performance measurement, distinct from research or evaluation: validity, reliability, responsiveness, functionality, credibility, understandability, availability, and being abuse proof. McDavid and Hawthorn19 also emphasized the need, specifically for performance measurement, for accessible, ongoing sources of data. These criteria provide useful context for the following review of recent research and evaluation efforts to measure chronic disease practice and outcomes and relevant measurement efforts at the federal level.

EXAMPLES OF CHRONIC DISEASE MEASUREMENT EFFORTS IN RESEARCH

Recent research and evaluation studies of chronic disease public health practice have used varying indicators. Most of the work described in this section has attempted to measure practice (structure or processes) and associated LHD or community characteristics, although 1 group of studies has examined relationships between indicators of public health structure or processes and chronic disease outcomes. The first set of related studies has measured processes in public health practice in chronic diseases and explored associated variables. Two studies3,4 used single-item questions from the National Association of County and City Health Officials’ Profile Survey to assess the offering of services for diabetes or obesity versus the absence of services, exploring trends in these services as well as health department characteristics associated with offering services. Another study2 developed indicators of LHD performance related to diabetes based on the 10 essential services22 and explored associated LHD and community characteristics. A second set of related studies used chronic disease outcomes as indicators of public health performance, examining levels of public health spending as predictors.5---7 These studies included the following chronic disease outcomes: age-adjusted mortality rates for heart disease, cancer, and diabetes;6 coronary heart disease and colon cancer death rates from the Community Health Status Indicators project;7 and smoking prevalence, obesity prevalence, and cardiovascular disease and cancer mortality.5 In addition, at least 2 new related large-scale efforts to measure public health practice in standardized national samples will incorporate chronic disease measures as dashboard sets of measures.


The Multi-Network Practice and Outcome Variation Examination Study, funded by the Robert Wood Johnson Foundation and led by the University of Kentucky, is a measure development and research activity that will identify service delivery measures for selected high-value public health services in 3 domains: communicable disease control, chronic disease prevention, and environmental health protection.8 Its goals are to create a standardized registry of these measures and to examine the associations between these delivery measures and population health outcomes. The project has established criteria to rate and select measures. In addition to criteria for the 3 domains and the measure’s dimension (such as capacity, reach, or volume of a service), other rating criteria are relevance and control (defined as the degree to which the measure reflects an activity that local public health agencies, their partners, or both have the authority and organizational responsibility to implement), expected health impact, expected economic impact, expected variation, feasibility to collect, expected validity, and expected reliability. A second project, the Public Health Activities and Services Tracking Study, also funded by the Robert Wood Johnson Foundation, is collecting multistate administrative data from LHDs with a similar goal of measuring practice variation and change and its impact.9 It has focused on maternal and child health initially, but plans are to collect and analyze environmental health and chronic disease data.

CHRONIC DISEASE MEASUREMENT EFFORTS WITHIN THE NCCDPHP



Over the past decade, divisions within the NCCDPHP have initiated and begun implementation of several distinct evaluation and performance measurement projects. We highlight 3 major efforts to develop comprehensive sets of evaluation indicators for tobacco, cardiovascular disease, and diabetes programs.

Overview

Work in this area predates the publication of the referenced sets of indicators11,23 and includes current work in other divisions of the CDC; however, we focus on efforts to which we have directly contributed. The 3 sets of indicators covered in this article share similar methods and products because both the Division for Heart Disease and Stroke Prevention (DHDSP) and the Division of Diabetes Translation (DDT) explicitly modeled their development methods on those first developed by the Office on Smoking and Health (OSH), with modifications benefiting from lessons learned in the earlier process. Coordination of these efforts was facilitated by internal communication among NCCDPHP staff, the use of staff from OSH or DHDSP on advisory groups for the subsequent efforts, and continuity in the contractors involved in development and implementation of the indicators.

TABLE 2—Evaluation and Performance Measure Efforts Within the National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention

Office on Smoking and Health24–26
Where indicators can be found: Published document available on CDC Web site. Created as a companion to Best Practices for Comprehensive Tobacco Control Programs, Introduction to Program Evaluation for Comprehensive Tobacco Control Programs, and Surveillance and Evaluation Data Sources for Comprehensive Tobacco Control Programs.
Purpose: To help state and territorial health departments plan and evaluate state tobacco control programs. Primary audiences are (1) planners, managers, and evaluators of state programs to prevent or control tobacco use and (2) CDC’s national partners in the fight against tobacco use.
Logic model: Yes.
No. and characteristics of indicators: 120; organized by 3 goals (preventing initiation, eliminating exposure to secondhand smoke, and promoting quitting). Represent only outcomes (short term, intermediate, long term). Presented with Consumer Reports–style ratings on criteria as rated by expert panel.a

Division for Heart Disease and Stroke Prevention27,28
Where indicators can be found: Documents describing sets of indicators organized by priority areas, all under development (high blood pressure, cholesterol, quality of care, ER). Within each priority area are a full set of indicators for program planning and evaluation and a subset of core indicators to allow for assessment of the national HDSP program.
Purpose: To identify outcome evaluation indicators that states and DHDSP can use to monitor activity in priority areas relevant to heart disease and stroke prevention and to identify a set of core outcome indicators that DHDSP will use to assess the impact of the national HDSP program.
Logic model: Yes.
No. and characteristics of indicators: 63 (high blood pressure); organized by 4 priority areas and, as applicable, 4 settings (health care providers, health care systems, worksites, communities). 57 (cholesterol); represent outcomes (short-term, intermediate, long-term) only. Short-term outcomes represent policy and system change. Intermediate outcomes represent behavior change and risk reduction. 42 (ER); presented with Consumer Reports–style ratings on criteria as rated by expert panel.

Division of Diabetes Translation
Where indicators can be found: Not applicable. Indicators used internally to inform the development of some indicators contained in the subsequently released funding opportunity announcement, State Public Health Actions to Prevent and Control Diabetes, Heart Disease, Obesity and Associated Risk Factors and Promote School Health.29
Purpose: To support evaluation and program improvement at the grantee level as well as evaluation of the collective impact of state-based diabetes prevention and control programs and the National Diabetes Prevention Program.
Logic model: Yes.
No. and characteristics of indicators: 74; represent outcomes (short-term, intermediate, long-term, and impact) only. Presented with Consumer Reports–style ratings on criteria as rated by expert panel.

Note. CDC = Centers for Disease Control and Prevention; DHDSP = Division for Heart Disease and Stroke Prevention; ER = emergency response; HDSP = heart disease and stroke prevention.
a With the exception of the strength-of-evidence criterion.




TABLE 3—Details of Methods Used in Evaluation and Performance Measure Efforts Within the National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention

Office on Smoking and Health30
Formal literature review to assess link to distal outcomes: Yes.
Review by expert panel: Yes.
Expert review criteria: Strength of the evaluation evidence; resources needed for data collection; utility; face validity; uniqueness; conformity with accepted practice; summary rating; overall quality.
Subset identified as core indicators: Yes.

Division for Heart Disease and Stroke Prevention27,29
Formal literature review to assess link to distal outcomes: Yes.
Review by expert panel: Yes.
Expert review criteria: Strength of the evaluation evidence; intensity of resources to collect and analyze data; utility of indicator to answer key evaluation questions; face validity; use of indicator in real-world practice; overall quality.
Subset identified as core indicators: Yes.

Division of Diabetes Translation
Formal literature review to assess link to distal outcomes: No.
Review by expert panel: Yes.
Expert review criteria: Strength of the evaluation evidence; intensity of resource utilization to collect and analyze data; utility; face validity; conformity with accepted practice; overall quality.
Subset identified as core indicators: Not as of date of publication.

As shown in Table 2, these efforts to develop chronic disease indicators have similarly stated purposes: to help CDC grantees, local communities, or both plan and evaluate their chronic disease programs. In addition, the indicators are also meant to facilitate evaluation of the national program through identification of a subset of core indicators and development of data collection methods.

Description of Methods

Table 3 provides an overview of the methods used by divisions within NCCDPHP to develop indicators. All 3 programs relied heavily on logic models to develop indicators and to organize guidance documents. More important, the sets of indicators all focus on outcomes and are termed outcome indicators by these divisions (although a small number of them may be considered outputs rather than outcomes of public health practice, as indicated by overlapping squares in Figure 1). OSH, DHDSP, and DDT indicators were all reviewed by expert panels, which rated each indicator on a set of similar criteria (Table 3). The total number of indicators in each set ranges from 74 to more than 160. With few exceptions, the respective divisions present all indicators that underwent expert panel review; there was no cutoff for an acceptable rating, no weighting of criteria, and summary rating and overall quality criteria were also rated by the expert panelists and not derived from other ratings. Consumer Reports–style ratings for each set of criteria are provided for indicators to help users choose the indicators that best fit their needs.


OSH. OSH and its contractor first reviewed and refined 3 goal area logic models (preventing initiation, eliminating exposure to secondhand smoke, and promoting quitting), then reviewed the published and fugitive literature to identify candidate indicators for each outcome component (or box) in each goal logic model.30 For each candidate indicator, the team reviewed the scientific evidence for an association between the indicator and the more distal outcome components in the logic models. Where possible, the team identified data sources and survey questions for each indicator. A panel of 16 experienced content experts from academia, state tobacco control programs, nonprofit research organizations, and the CDC evaluated 136 candidate indicators on 8 equally weighted criteria (see Starr et al.,30 Appendices B and C, for a detailed description of the methods).



Indicators were merged when the uniqueness ratings suggested redundancy (n = 31), dropped (n = 4) because expert reviewers suggested that the indicators did not align with the goal area logic model, or added (n = 7) when experts identified gaps and offered new indicator suggestions. In the final document, 2 criteria the expert panel rated were not presented: uniqueness, which was used to identify redundancy in the original set of indicators, and summary rating, intended to be a reviewer’s “opinion of how essential a particular indicator is for the evaluation of comprehensive, statewide tobacco control programs” but that turned out to be highly correlated (r = .75; P ≤ .001) with overall quality, defined as “a global rating that reflects the reviewer’s opinion of the overall quality of the indicator.”30(pp280–281) Also, instead of using the experts’ ratings of strength of the evaluation evidence, the team presented data from an additional literature review to rate the strength of evidence linking each indicator to a downstream outcome in the logic model.30 Of the efforts described in this article, OSH was the first division to develop the core indicator concept, that is, using a subset of indicators for the purpose of national program evaluation.31

DHDSP. Methods for DHDSP indicator development32---37 were modeled on those of OSH; here, we highlight only differences as well as information about the final output of DHDSP efforts. The DHDSP indicators team reviewed practice and research literature to generate candidate indicators for the 4 priority areas (high blood pressure, cholesterol, quality of care [hospital based], and emergency response). The literature reviews also supported a logic link process to identify and document the degree of association between candidate indicators and downstream logic model components (boxes). DHDSP convened expert panels for each priority area, including state and federal program staff and content experts; criteria for the review are shown in Table 3. Final sets of comprehensive indicators were produced after DHDSP review and are in varying stages of CDC clearance before release to grantees and the public. DHDSP leadership selected subsets of core indicators from the full set of indicators related to high blood pressure control and high cholesterol control, which may be used to assess the impact of the National Heart Disease and Stroke Prevention Program. DHDSP convened communities of practice to operationalize the selected core indicators, which involved refining indicator language and definitions, identifying data sources, and resolving measurement issues.34,36,37 DHDSP has also conducted extensive trainings and prepared guidance materials for grantees on using and operationalizing indicators for program planning and evaluation.32,33


DDT. Methods for the DDT indicator development38 were modeled on the OSH and DHDSP indicator development process; here we highlight only differences as well as information about the output of DDT’s work. Candidate indicators were identified through literature reviews for the recommended evidence-based interventions for Diabetes Prevention and Control Programs, review of other chronic disease measurement systems, and interviews with these programs. Candidate indicators were mapped to the DDT Program Logic Model for Diabetes Prevention and Control Projects and screened to ensure alignment with the core interventions and grantee funding requirements. An expert panel of content and measurement experts and practitioners rated 100 indicators on 6 criteria (Table 3). Two internal CDC working groups (primary prevention and diabetes care) reviewed expert panel comments and recommended changes to the indicators (deleting, merging, adding). The DDT indicators development workgroup made final decisions on indicator inclusion and wording. Notably, this set of indicators has not been published; DDT is now focused on indicators contained in a recently released funding announcement, which build on the indicators described in the article but are considered a different set.29

COORDINATION OF MEASUREMENT EFFORTS

We have described sets of chronic disease indicators developed by divisions within the NCCDPHP to inform researchers and practitioners engaged in similar or related work.

Grounded in scientific evidence and capturing opinions from content experts in the field, these measurement sets represent perspectives of divisions within the NCCDPHP on outcomes for public health practice measurement in the areas of tobacco, cardiovascular disease, and diabetes and demonstrate the great strides achieved by the CDC in the past decade to address the gaps in chronic disease public health measurement. However, we did not provide a comprehensive assessment of these sets of outcome indicators, their implementation, or their impact. Little work has been done to understand indicator adoption by practitioners or researchers; 1 limited assessment of indicator utilization has been reported by OSH for its indicator set.39 For example, respective divisions within the NCCDPHP will be interested in having a more complete understanding of the extent to which the dissemination of outcome indicators has enhanced grantees’ capacity to conduct evaluations, implement evaluations, or improve their programs. Further assessment of these indicators should include whether practitioners and researchers have identified gaps in the indicator sets, whether specific indicators have proven less useful than others, and whether new data sources have been developed to support use of individual indicators.



Although similarities likely exist among measures in use or being developed by PHSSR researchers and those of NCCDPHP divisions described here, given that all efforts have involved literature review and have benefited from comprehensive sets of community health indicators,10---15 an opportunity for coordination exists, and coordination may be imperative because of resource limitations, time constraints, and the risk of confusing end users of these materials. Alignment is necessary in several critical areas. When both service delivery measures and outcome indicator sets include outputs of public health practice (Figure 1), examining the overlap of indicators in use may be beneficial. If research efforts to validate service delivery measures examine associations between those measures and outcomes, having knowledge of those outcome indicators proposed by divisions in the NCCDPHP and, where feasible, using them is beneficial. To the extent that NCCDPHP divisions’ indicators can inform research and evaluation efforts being conducted by PHSSR, research will be more credible and findings more easily synthesized across individual studies. Use of standardized outcome indicators may eventually facilitate data collection if data sources become more available. Beyond the indicators themselves, some features of the methods used by NCCDPHP divisions may be of interest to PHSSR researchers as they develop and implement their own methods. No gold-standard method exists for developing evaluation or performance measures, although several authors have outlined steps.19,21,40

The 3 NCCDPHP divisions described here used nearly identical methods to produce their sets of indicators; future work should capture the lessons learned from these and other efforts to further researchers’ and practitioners’ understanding of best practices. The criteria for expert panel rating of indicators (Table 3) are generally similar across these initiatives and reflect what several authors have listed as key features of good evaluation or performance measures.17---19,21 Of note, the Multi-Network Practice and Outcome Variation Examination Study will move beyond these more standard criteria to use estimations of health and economic impact in indicator rating, thereby advancing the field by elevating the evidence base underlying the metrics. Additional aspects of the methods of the divisions in NCCDPHP may be of interest to researchers engaged in similar efforts. These 3 initiatives provide to grantees essentially all the indicators reviewed by the experts, omitting only those judged irrelevant to the program goals during expert review. Indicators were not eliminated on the basis of the expert panel ratings but are presented with the full range of ratings with the expectation that practitioners are informed consumers who will choose indicators on the basis of relevance to their programs as well as the indicator ratings. Notably, none of the 3 sets of indicators is restricted to known or existing data sources. One objective has been to stimulate development of measurement and data sources to measure outcomes for which no data sources currently exist. If an indicator could be mapped to a construct in the relevant program logic model, OSH, DHDSP, and DDT included it regardless of measurement concerns.

In some instances, the lack of data on the indicator has stimulated development of measurement methods (e.g., new items developed and added to population surveys, such as the National Adult Tobacco Survey).41 The absence of data sources from published sets of outcome indicators highlights a very real difference between these initiatives and current efforts to develop service delivery measures, which are capturing real-world experiences in, and real data from, LHDs. Yet, the absence of data sources also presents an opportunity for collaboration because practitioners and researchers can help divisions in the NCCDPHP operationalize these indicators to make them practical and feasible at all levels of practice. Federal agencies, PHSSR researchers, and local or state-level practitioners may certainly operate with different purposes (enhancing evaluation of state-level program grantees and national programs or collecting a standardized data set from real-world settings to understand practice variation and study the effect of variation on health outcomes); however, more coordination may benefit all. Ideally, within each content area in public health (diseases, risk factors, or other areas such as administrative domains), a broadly agreed-on larger pool of indicators will be established from which practitioners, researchers, and evaluators can draw for individualized studies. This will enable more of a menu, with selection of the actual indicators driven by intent of the study, data availability, and a variety of other considerations.


In addition, coordination will need to happen across content areas, as suggested by recent actions of the NCCDPHP to fund coordinated chronic disease efforts in all states.29,42 In this need for coordination, public health can learn from the field of quality measurement in health care, which progressed from isolated research studies to the existence and preeminence now of the National Quality Forum.43 Looking within public health for guidance, there is also the example of recent coordination by state, local, and national governance organizations to harmonize their data dictionaries and data collection efforts for their respective surveys of state and local public health agencies. Ideally, more widespread use of standardized indicators will allow information and data to flow bidirectionally, from the local to the state to the federal level,16 to enhance program planning, program impact, accountability, and even subsequent funding for public health practice. If the field is to achieve these long-term goals of establishing how best to measure public health practice and outcomes, a priority is to maximize communication and coordination among federal staff responsible for evaluation and performance measurement and PHSSR researchers working with state and local agencies on measurement projects.



Coordination can be facilitated by foundations (Robert Wood Johnson Foundation, Public Health Foundation), stakeholder organizations (Association of State and Territorial Health Officials, National Association of County and City Health Officials, National Network of Public Health Institutes), and the PHSSR Interest Group of AcademyHealth. There is also a need to reach consensus on such basic concepts as the terminology of measurement and definitions, an area with which we struggled. Strategies to achieve coordination could include measurement summits in which researchers, federal staff, and local staff participate or the development of a process (with a responsible body, similar to the National Quality Forum) for review and approval of measures to be used in public health performance measurement and evaluation. Benefits of more active communication and coordination with divisions within the NCCDPHP could include engaging PHSSR researchers to pilot test and validate indicators, to develop data sources, to study indicator dissemination and implementation, and to generate the evidence for associations between indicators and more distal outcomes for which those studies are needed. Finally, an understanding of the efforts of PHSSR researchers and practitioners to develop measures of public health practice, based on real data that LHDs are collecting and that are suitable for routine measurement, can inform future iterations of outcome indicators developed by divisions of the NCCDPHP.

About the Authors
Deborah S. Porterfield, Todd Rogers, and LaShawn M. Glasgow are with RTI International, Research Triangle Park, NC. Deborah S. Porterfield is also with the School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill. Leslie M. Beitsch is with the Florida State University College of Medicine, Tallahassee.
Correspondence should be sent to Deborah S. Porterfield, MD, MPH, RTI International, 3040 East Cornwallis Road, PO Box 12194, Research Triangle Park, NC 27709-2194 (e-mail: dporterfield@rti.org). Reprints can be ordered at http://www.ajph.org by clicking the “Reprints” link.
This article was accepted July 22, 2014.

Contributors
D. S. Porterfield conceptualized the study with input from coauthors and drafted the article. D. S. Porterfield, T. Rogers, and L. M. Glasgow contributed to the efforts described in the article. All authors contributed to editing and revisions and approved the final version of the article.

Acknowledgments
We acknowledge the work of the respective divisions described in this article and, in particular, the following individuals: Erika Fulmer, MHA, and Terry Pechacek, PhD (Office on Smoking and Health); Susan Ladd, MS, Hilary Wall, MPH, Michael Schooley, MPH, and Eileen Chappelle, MPH (Division for Heart Disease and Stroke Prevention); and Barbara Park, MPH, Pat Shea, MPH, Bina Jayapaul-Philip, PhD, and Stephanie Gruss, PhD, MSW (Division of Diabetes Translation).
Note. D. S. Porterfield, T. Rogers, and L. M. Glasgow have worked under contract for the Centers for Disease Control and Prevention on activities described in this work; however, the views expressed here are solely those of the authors.

Human Participant Protection
Institutional review board approval was not needed because this work did not involve human participants.

References
1. Frieden TR. Asleep at the switch: local public health and chronic disease. Am J Public Health. 2004;94(12):2059---2061.
2. Porterfield DS, Reaves J, Konrad TR, et al. Assessing local health department performance in diabetes prevention and control—North Carolina. Prev Chronic Dis. 2009;6(3):A87.
3. Stamatakis KA, Leatherdale ST, Marx CM, Yan Y, Colditz GA, Brownson RC. Where is obesity prevention on the map? Distribution and predictors of local health department prevention activities in relation to county-level obesity prevalence in the United States. J Public Health Manag Pract. 2012;18(5):402---411.
4. Zhang X, Luo H, Gregg EW, et al. Obesity prevention and diabetes screening at local health departments. Am J Public Health. 2010;100(8):1434---1441.
5. Erwin PC, Greene SB, Mays GP, Ricketts TC, Davis MV. The association of changes in local health department resources with changes in state-level health outcomes. Am J Public Health. 2011;101(4):609---615.
6. Mays GP, Smith SA. Evidence links increases in public health spending to declines in preventable deaths. Health Aff (Millwood). 2011;30(8):1585---1593.
7. Ingram RC, Scutchfield FD, Charnigo R, Riddell MC. Local public health system performance and community health outcomes. Am J Prev Med. 2012;42(3):214---220.
8. Mays GP. Developing Research-Tested Measures of Quality in Public Health Practice: Novel Methodological Approaches in the MPROVE Study. Orlando, FL: AcademyHealth Public Health Services and Systems Interest Group; 2012.
9. Bekemeier B; PHAST Team. Constructing the Public Health Activities and Services Tracking (PHAST) Database. Orlando, FL: AcademyHealth Public Health Services and Systems Interest Group; 2012.
10. US Department of Health and Human Services. Healthy People 2020. Available at: http://www.healthypeople.gov/2020. Accessed October 6, 2012.
11. Centers for Disease Control and Prevention, Council of State and Territorial Epidemiologists, Association of State and Territorial Chronic Disease Program Directors. Indicators for chronic disease surveillance. MMWR Recomm Rep. 2004;53(RR-11):1---6.
12. Kindig DA, Booske BC, Remington PL. Mobilizing Action Toward Community Health (MATCH): metrics, incentives, and partnerships for population health. Prev Chronic Dis. 2010;7(4):A68.
13. Robert Wood Johnson Foundation, University of Wisconsin Public Health Institute. County health rankings and roadmaps. Available at: http://www.countyhealthrankings.org/about-project. Accessed January 17, 2013.
14. Metzler M, Kanarek N, Highsmith K, et al. Community Health Status Indicators Project: the development of a national approach to community health. Prev Chronic Dis. 2008;5(3):A94.
15. National Center for Health Statistics. Health indicator warehouse. Available at: http://healthindicators.gov. Accessed October 6, 2012.
16. Institute of Medicine. For the Public’s Health: The Role of Measurement in Action and Accountability. Washington, DC: National Academies Press; 2011.
17. Derose SF, Schuster MA, Fielding JE, Asch SA. Public health quality measurement: concepts and challenges. Annu Rev Public Health. 2002;23:1---21.
18. Roper WL, Mays GP. Performance measurement in public health: conceptual and methodological issues in building the science base. J Public Health Manag Pract. 2000;6(5):66---77.
19. McDavid JC, Hawthorn LRL. Program Evaluation and Performance Measurement: An Introduction to Practice. Thousand Oaks, CA: Sage; 2006.
20. DeGroff A, Schooley M, Chapel T, Poister TH. Challenges and strategies in applying performance measurement to federal public health programs. Eval Program Plann. 2010;33(4):365---372.
21. Lichiello P, Turnock BJ. The turning point guidebook for performance measurement. Available at: http://www.turningpointprogram.org/Pages/pdfs/perform_manage/pmc_guide.pdf. Accessed September 14, 2012.
22. Public Health Functions Steering Committee. Public health in America. Available at: http://web.health.gov/phfunctions/public.htm. Accessed September 14, 2012.
23. Pluto DM, Phillips MM, Matson-Koffman D, Shepard DM, Raczynski JM, Brownstein JN. Policy and environmental indicators for heart disease and stroke prevention: data sources in two states. Prev Chronic Dis. 2004;1(2):A05.
24. Centers for Disease Control and Prevention. Best Practices for Comprehensive Tobacco Control Programs. Atlanta, GA: Centers for Disease Control and Prevention; 1999.
25. MacDonald G, Starr G, Schooley M, Yee SL, Klimowski K, Turner K. Introduction to Program Evaluation for Comprehensive Tobacco Control Programs. Atlanta, GA: Centers for Disease Control and Prevention; 2001.
26. Yee SL, Schooley M. Surveillance and Evaluation Data Resources for Comprehensive Tobacco Control Programs. Atlanta, GA: Centers for Disease Control and Prevention; 2001.
27. Division for Heart Disease and Stroke Prevention. Draft policy and system outcome indicators for controlling high cholesterol. Available at: http://c.ymcdn.com/sites/www.chronicdisease.org/resource/resmgr/CVH/draft-high-cholesterol-indic.pdf. Accessed January 18, 2013.
28. Rogers T, Fulmer E. Policy and system outcome indicators for state heart disease and stroke prevention. Priority area: high blood pressure control. Paper presented at: National Heart Disease and Stroke Prevention Grantee Meeting; September 9, 2008; Atlanta, GA.
29. Centers for Disease Control and Prevention. State and public actions to prevent and control diabetes, heart disease, obesity, and associated risk factors and promote school health. Available at: http://www.cdc.gov/chronicdisease/about/statepubhealthactions-prevCD.htm. Accessed May 5, 2013.
30. Starr G, Rogers T, Schooley M, Porter S, Wiesen E, Jamison N. Key outcome indicators for evaluating comprehensive tobacco control programs. Available at: http://www.cdc.gov/tobacco/tobacco_control_programs/surveillance_evaluation/key_outcome/index.htm. Accessed September 11, 2012.
31. Porter S, Rogers T, Jamison N, Engstrom M. Core outcome indicator measurement development for the National Tobacco Control Program. Paper presented at: Annual Meeting of the American Evaluation Association; November 6, 2008; Denver, CO.
32. Division for Heart Disease and Stroke Prevention. Indicator spotlights. Available at: http://www.cdc.gov/dhdsp/evaluation_resources.htm. Accessed January 25, 2013.
33. Rogers T, Chappelle EF, Wall HK, Barron-Simpson R. Using DHDSP Outcome Indicators for Program Planning and Evaluation. Atlanta, GA: Centers for Disease Control and Prevention; 2011. Available at: http://www.cdc.gov/dhdsp/programs/nhdsp_program/evaluation_guides/docs/Using_Indicators_Evaluation_Guide.pdf. Accessed February 6, 2015.
34. Wall H, Rogers T, Ladd S. Using a modified community of practice approach to operationalize indicators for heart disease and stroke prevention. Paper presented at: Annual Meeting of the American Evaluation Association; November 12, 2010; San Antonio, TX.
35. Rogers T, Fulmer E. HDSP evaluation indicators: how and why to use them. Paper presented at: National Heart Disease and Stroke Prevention Grantee Meeting; September 10, 2008; Atlanta, GA.
36. Wall HK. I’ve chosen my HDSP indicators, now what? Paper presented at: National Heart Disease and Stroke Prevention Grantee Meeting; September 15, 2010; Atlanta, GA.
37. Jernigan J, Ladd S, Rogers T, Fulmer E, Matson-Koffman D. Identifying core evaluation indicators to support strategic program needs. Paper presented at: Annual Meeting of the American Evaluation Association; November 8, 2008; Denver, CO.
38. Division of Diabetes Translation. Proximal indicators/effective strategies overview. Available at: http://www.nacddarchive.org/nacdd-initiatives/diabetes/monthly-conference-calls/2011-conferencecalls/DC_DPCP_DDT_Proximal_Indicators_Jan_2011.pdf. Accessed September 10, 2012.
39. Hunting P. Outcome indicators for planning and evaluating state and national tobacco control programs. Paper presented at: National Conference on Tobacco or Health; June 10, 2009; Phoenix, AZ.
40. Poister TH. Measuring Performance in Public and Nonprofit Organizations. San Francisco, CA: Wiley; 2003.
41. King BA, Shanta RD, Michael AT. Current tobacco use among adults in the United States: findings from the National Adult Tobacco Survey. Am J Public Health. 2012;102(11):e93---e100.
42. Centers for Disease Control and Prevention. Coordinated Chronic Disease Prevention and Health Promotion Program. Available at: http://www.cdc.gov/coordinatedchronic/index.htm. Accessed January 25, 2013.
43. Kizer KW. The National Quality Forum enters the game. Int J Qual Health Care. 2000;12(2):85---87.


