Quality Indicators for Primary Health Care: A Systematic Literature Review

Effie Simou, PhD; Paraskevi Pliatsika, MSc; Eleni Koutsogeorgou, MA; Anastasia Roumeliotou, PhD
Data have indicated that countries with a strong system of Primary Health Care (PHC) are more likely to have efficient health systems and better health outcomes than countries that focus strongly on hospital services. The aim of this article was to systematically review implemented quality projects used for the evaluation of quality in PHC services. A systematic literature review was conducted via MEDLINE to identify papers referring to international or national PHC quality assessment projects, published in English from 1990 to 2010. Projects were included if they had been implemented, had a holistic approach, and reported specifications of the quality indicators used. Sixteen publications were considered eligible for further analyses, referring to 10 relevant projects and a total of 556 indicators. The number and content of indicators and their domains varied across projects. Regarding raw data, the lack of standardization of collection tools between projects could lead to invalid comparisons. In areas where international projects operate in parallel with national initiatives, there may be problems regarding the expense and burden of data collection, which might create competing interests and low quality of information. Further actions to align quality projects on primary health care are required for future results to become comparable.
KEY WORDS: health information systems, primary health care, quality indicators, systematic review

Author Affiliation: Department of Epidemiology and Biostatistics, National School of Public Health, Athens, Greece.

This work was part of the “Health Monitoring Indicators System: Health Map” project funded by the European Social Fund in the framework of Axes 5.1, 5.2, and 5.3 of the European Project “Development of Human Resources” (2007-2013). The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the sponsor.

J Public Health Management Practice, 2013, 00(00), 1–9. Copyright © 2013 Wolters Kluwer Health | Lippincott Williams & Wilkins.

Primary Health Care (PHC) stands at the center of medical care systems, although the term has various meanings across countries. The key elements of PHC include the provision of patient-centered rather than disease-centered, coordinated, and accessible services, and the integration of biomedical, psychological, and social dimensions of the presentation and management of presenting problems. Moreover, PHC focuses on health promotion and disease prevention, alongside cure and care for established health problems and the provision of palliative and end-of-life care. Continuity of care, at the level of a personal and longitudinal relationship between patients and their health providers and between delivery settings, is also a highly valued principle.1-3

A well-designed and functioning PHC system is instrumental for improving health outcomes and cost performance,4 reducing disparities in care, and increasing the population’s opportunities to lead healthy and productive lives. International observational data have indicated that countries with a strong system of PHC are more likely to have efficient health systems and better health outcomes than countries that focus strongly on hospital services.2 Nevertheless, even countries with long-established and reputable systems of PHC have identified problems with the quality of these services. In particular, there is a growing body of evidence of wide variations in the quality of clinical care provided in the primary care sector.5,6 Within the past several decades, considerable efforts have been made toward improving systems of accountability and quality
No conflicts of interest have been reported by the authors or by any individuals in control of the content of this article.

Supplemental digital content is available for this article. Direct URL citation appears in the printed text and is provided in the HTML and PDF versions of this article on the journal’s Web site (http://www.JPHMP.com).

Correspondence: Effie Simou, PhD, Department of Epidemiology and Biostatistics, National School of Public Health (ESDY), 196, Leoforos Alexandras, 11521, Athens, Greece ([email protected]).

DOI: 10.1097/PHH.0000000000000037
Copyright © 2013 Lippincott Williams & Wilkins. Unauthorized reproduction of this article is prohibited.
within health care systems in a number of countries. More recently, increased emphasis on PHC funding and service delivery has produced a greater focus on the development of performance indicators in PHC. Conceptually, the rationale for the introduction of performance indicators assumes that their presence in an organization will foster change in the quality of processes within that organization, which will ultimately produce better outcomes at either the population or the cost-saving level.

To be considered important, health indicators need to refer to conditions that matter for health status or health expenditure, be policy-important, and deal with topics that can be directly affected by the health system. Health indicators also need to deliver scientific soundness in terms of validity, reliability, and clarity, while their implementation and the cost of collecting internationally comparable data must be feasible.7 Quality health indicators that evaluate PHC system performance focus on evaluating the accessibility of health services, continuity of provided care, comprehensiveness and a holistic approach to care with a family- and community-based orientation, and coordination, as these principles, once fulfilled, lead to successful primary care provision and better health outcomes, while human and technical resources are valued highly to provide the best possible quality of care.2

Most of the leading health and social organizations, such as the World Health Organization, the Organisation for Economic Co-operation and Development (OECD), and the European Commission, have developed and implemented systems of health monitoring and quality health indicators to assess the performance of health services provided at regional, national, and international levels.8-11 Many quality-related projects, studies, and frameworks have reported specifically on issues relating to the assessment of the performance of PHC systems, each according
to specific aims and challenges at hand; to date, however, no universal classification, listing, or review of such efforts has been published. On this basis, the purpose of the current study was to present a systematic overview of the quality health indicators, and their domains, that have been included in implemented international and national projects (between 1990 and 2010) aiming to assess the quality of PHC resources and services provided.
● Methods

A systematic literature review was conducted to identify international and national PHC quality assessment projects, from which quality indicators were then extracted.
Search strategy and selection criteria

An electronic literature search was conducted via MEDLINE to identify articles referring to international and national PHC quality assessment projects. Articles in peer-reviewed journals, published in English from January 1990 to December 2010, were searched. The search was performed using various combinations and forms of the following search terms: “quality indicator(s)” AND “primary healthcare” AND [“assess” OR “evaluate”]. The reference lists of selected studies were also reviewed to find relevant information on the projects selected, and an online search via a search engine (ie, Google) was performed to identify additional relevant projects and grey literature. The official Web sites of all selected projects were later checked, while extracting data, to retrieve updated information on the definitions and lists of indicators of the respective projects.

All publication types were included in the search, apart from editorials and letters, because these were considered to lack detailed information on implemented projects. Studies that referred to projects related to PHC settings were further analyzed. Specifically, studies were included if they explicitly referred to projects that had been broadly implemented at least at the national level and had a holistic approach toward provided PHC services. Only implemented projects that used sets of quality indicators were included, and only if the definitions and specifications of the indicators were available to the reader. Studies were excluded if they did not explicitly relate to primary care settings, if they related to disease-, discipline-, sector-, or population-specific quality projects, or if they referred to regional or jurisdictional projects (not meeting the national-level criterion). The assessment of identified studies against the aforementioned criteria was conducted independently by 2 reviewers (E. S. and P. P.).
For studies on which the 2 reviewers disagreed or could not reach a definite decision, a third assessor (A. R.) was consulted and, when necessary, discussion among the 3 researchers took place to reach a final decision.
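The boolean combination of terms described above can be sketched programmatically. The snippet below is an illustrative reconstruction only: the authors do not report their exact MEDLINE syntax (field tags, truncation, or MeSH mappings), so this merely shows how the reported term variants compose into one query string.

```python
# Illustrative sketch only: composes the reported search terms into a
# boolean MEDLINE-style query; the authors' exact syntax is not reported.
quality_terms = ["quality indicator", "quality indicators"]
action_terms = ["assess", "evaluate"]

query = (
    "(" + " OR ".join(f'"{t}"' for t in quality_terms) + ")"
    + ' AND "primary healthcare" AND '
    + "(" + " OR ".join(f'"{t}"' for t in action_terms) + ")"
)
print(query)
# ("quality indicator" OR "quality indicators") AND "primary healthcare" AND ("assess" OR "evaluate")
```

In practice, "various combinations and forms" of the terms would mean iterating over additional spellings (eg, "primary health care") in the same way.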
Data extraction, analysis, and synthesis

A data extraction form was designed and developed by the authors in line with the aim of the current review. The form included the geographical and institutional characteristics of the selected quality measures, the country and year of their first implementation, the domains/indicator sets of each selected project, and the number of indicators per project. Given that data extraction needs to be as unbiased and reliable as
possible, because subjective decisions are often required, 2 reviewers performed the data extraction independently (E. S. and P. P.). Any disagreement between the data extracted by the 2 reviewers was resolved by consensus with a third reviewer (A. R.). For the selection of indicators and domains, there was an initial pairwise interrater agreement between authors of 80% to 90%, with overall interrater agreement being 70% to 80%; this was deemed acceptable.

The selected indicators were later judged on the basis of the 6 domains of the Institute of Medicine (IOM).12 Specifically, the IOM outlined 6 main domains of health care quality in Crossing the Quality Chasm (2001): safety, effectiveness, patient-centeredness, timeliness, efficiency, and equitability. All of these domains except the last were identified within the selected projects, and additional domains were identified as described by the selected projects (such as continuity, health promotion, immunizations, etc). Because of heterogeneity among the extracted indicators, along with their features and the domains used by each of the quality frameworks, a descriptive approach to analyzing the selected projects was preferred. The reviewers compared each project’s volume and dimensional composition with the overall trend for quality assessment in PHC services, while discussing the possible data sources and scopes of each project, which might have led to the variety of preferences in quality assessment.
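The pairwise percent-agreement figure quoted above is simply the proportion of items on which two raters made the same include/exclude call. The function and toy data below are hypothetical illustrations of that computation, not the authors' actual selection records.

```python
# Hypothetical sketch of pairwise percent agreement between two raters;
# the decision lists are invented toy data, not the study's records.
def percent_agreement(rater_a, rater_b):
    """Share (in %) of items on which two raters made the same call."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Toy decisions for 10 candidate indicators (1 = include, 0 = exclude).
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
print(percent_agreement(a, b))  # 80.0
```

Chance-corrected statistics such as Cohen's kappa would be stricter, but the article reports raw percent agreement, which is what this sketch mirrors.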
● Results

The electronic search in MEDLINE yielded a total of 403 citations, plus 2 relevant publications identified via online search engines and screening of reference lists. After applying the inclusion criteria, 16 studies were considered eligible for further analyses (as shown in Figure 1), referring to 10 major and distinct quality indicator projects with accompanying quality indicator sets in the field of performance of PHC systems and providers (Table 1). A total of 556 indicators were selected from the identified projects that met the criteria of the current literature review (see Supplemental Digital Content, available at: http://links.lww.com/JPHMP/A63, for a full list of the 556 indicators selected). All selected indicators were extracted from implemented measurements from the globally identified quality frameworks, which were considered of importance and scientific soundness. Regarding the domains of the selected indicators, most belonged to the “Quality/Safety” group of domains (13.85%), followed by the indicator sets of “Access/Facilities/Care Redesign” (9.89%) and “Comprehensiveness or Compre-
FIGURE 1 ● Process of Inclusion of Studies in the Systematic Literature Review (PRISMA Flow Chart, 2009)a

- Records identified through database searching (MEDLINE): n = 403
- Additional records identified through other sources (ie, online search engines and reference lists): n = 2
- Records after duplicates removed: n = 341
- Records screened: n = 341
- Records excluded (did not match selection criteria): n = 286
- Full-text articles assessed for eligibility: n = 55
- Full-text articles excluded, owing to lack of specifications for indicators or because they referred to the same project as other articles (duplication): n = 38
- Studies included in qualitative synthesis: n = 17

a Adapted from Moher et al.13 (Available at: http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1000097)
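The PRISMA flow counts reported in Figure 1 can be re-derived arithmetically; the sketch below simply checks that each stage follows from the one before it (variable names are the author's own shorthand, not part of the original figure).

```python
# Re-deriving the Figure 1 PRISMA flow counts (hypothetical variable names).
identified = 403 + 2          # MEDLINE hits plus records from other sources
after_dedup = 341             # reported after duplicate removal
screened_out = 286            # records excluded at title/abstract screening
full_text = after_dedup - screened_out       # articles assessed in full text
excluded_full_text = 38       # lacked indicator specifications or duplicated a project
included = full_text - excluded_full_text    # qualitative synthesis
print(full_text, included)    # 55 17
```

The arithmetic also implies 64 duplicates were removed (405 identified vs 341 remaining), a figure the flow chart leaves implicit.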
hensive care or Equipment for Comprehensive care” (5.76%) (Table 2).

Only one of the identified projects (the OECD’s Health Care Quality Indicator [HCQI] project30) was developed and implemented at the international level.31 Two projects, the quality indicators for general practice management introduced by the 6 European Countries expert panel and the European Primary Care Monitoring System introduced by the European Commission, were applied at the European level.32,33 The rest of the projects were designed and implemented at the national level: 1 in Canada (CIHI),18 2 in the United States,19-22 3 in Australia,25-29 and 1 in New Zealand.23,24

In the United States, since 2003, the Agency for Healthcare Research and Quality has been responsible for monitoring the field of public health services; for consecutive years, the National Healthcare Quality Report,19,20 paired with the National Healthcare Disparities Report,20 has reported data on services from public sources, such as the Medicaid and Medicare Centers. The 2007 National Healthcare Quality Report reported a shortened set of 41 core indicators, which referred to PHC services but also to other sectors of public health services. In contrast, one of the most useful systems for collecting information particularly on PHC services in the United States lies in the private sector: the Practice Partner Research Network (PPRNet), composed of practices that use a common Electronic Medical Record tool, in their ‘Accelerating the Translation of Research into Practice’ (A-TRIP) quality-improving
TABLE 1 ● Selected Primary Health Care Quality-Assessing Frameworks, Country and Year of Their First Implementation, and Number and Domains of Indicators (Total N = 556)

Publication: Marshall et al, 2006.14
Organization and framework: OECD Health Care Quality Indicators Project: The expert panel on primary care
Country and year of first implementation: International, 2004
Number of indicators: 27
Domains: Health promotion; Preventive Care; Diagnosis and Treatment: Primary Care

Publication: Engels et al, 2005;15 Engels et al, 2006.16
Organization and framework: The 6 European Countries expert panel: Quality indicators for general practice management
Country and year of first implementation: Europe, 2003
Number of indicators: 62
Domains: Infrastructure; Staff; Information; Finance; Quality and Safety

Publication: Kringos et al, 2010.17
Organization and framework: The European Commission: The European Primary Care Monitoring System (PC Monitor)
Country and year of first implementation: Europe, 2009
Number of indicators: 99
Domains: Governance; Economic Conditions; Workforce Development; Access; Continuity; Coordination; Comprehensiveness; Quality; Efficiency

Publication: Sullivan-Taylor et al, 2009.18
Organization and framework: CIHI: Pan-Canadian Primary Health Care Indicators
Country and year of first implementation: Canada, 2005
Number of indicators: 105
Domains: Access; Quality; Supports; Continuity; Population Orientation; Patient-centeredness; Comprehensive care

Publication: Agency for Healthcare Research and Quality, 2007;19 Agency for Healthcare Research and Quality, 2010.20
Organization and framework: Agency for Healthcare Research and Quality (AHRQ): National Healthcare Quality Report (NHQR)
Country and year of first implementation: USA, 2003a
Number of indicators: 31
Domains: Effectiveness; Safety; Timeliness; Patient-centeredness

Publication: Feifer et al, 2006;21 Wessell et al, 2008.22
Organization and framework: PPRNet: Accelerating the Translation of Research into Practice (A-TRIP)
Country and year of first implementation: USA, 2003
Number of indicators: 54
Domains: Diabetes Mellitus; Heart Disease and Stroke; Cancer Screening; Immunizations; Respiratory/Infectious Disease; Mental Health and Substance Abuse; Nutrition and Obesity; Inappropriate Prescribing

Publication: Perera et al, 2007;23 New Zealand Ministry of Health, 2006.24
Organization and framework: New Zealand Ministry of Health: PHO Performance Management Program
Country and year of first implementation: New Zealand, 2002
Number of indicators: 14
Domains: Clinical; Administrative

Publication: Sibthorpe and Gardner, 2007;25 Gardner et al, 2008;26 Primary Health Care Research and Information Service, 2008.27
Organization and framework: The Australian Primary Health Care Research Institute: National Performance Indicators
Country and year of first implementation: Australia, 2008
Number of indicators: 11
Domains: Access; Prevention; Chronic Disease Management; National initiatives

Publication: RACGP, 2009.28
Organization and framework: Royal Australian College of General Practitioners (RACGP): Standards for General Practices (4th edition)
Country and year of first implementation: Australia, 2005b
Number of indicators: 139
Domains: Access; Information; Health Promotion and Prevention; Specific Health Problems; Continuity; Coordination; Patient Health Records; Collaborating with Patients; Safety and Quality; Education and Training; Practice Systems; Management of Health Information; Facilities and Access; Equipment for Comprehensive care; Clinical Support processes

Publication: Ford and Knight, 2010.29
Organization and framework: The Improvement Foundation (Australia): Australian Primary Care Collaboratives Program
Country and year of first implementation: Australia, 2004
Number of indicators: 14
Domains: Diabetes Mellitus; Coronary Heart Disease; Access and Care Redesign

Abbreviation: PPRNet, Practice Partner Research Network.
a The NHQR has been released annually since 2003 by the AHRQ; however, its compact form of quality indicators was first released in 2007.
b The initial year of the RACGP Standards’ reviewed indicators was 2005; however, the latest (4th) edition was published in 2010. In the 4th edition of the RACGP Standards, although the domain of prevention and promotion is valued for accreditation, no specific indicator was proposed.
TABLE 2 ● Frequencies (N; %) of Indicators per Domain and Group of Domains (or Indicator Sets) Found in the Selected Studies/Projects

Domain(s) and group of domains/indicator sets: N (%)
Quality/Safety: 77 (13.85)
Access/Facilities/Care redesign: 55 (9.89)
Comprehensiveness or comprehensive care or equipment for comprehensive care: 32 (5.76)
Infrastructure: 27 (4.86)
Patient health records or management of health information: 26 (4.68)
Continuity: 24 (4.32)
Information: 24 (4.32)
Effectiveness: 23 (4.14)
Health promotion/Prevention or preventive care: 23 (4.14)
Workforce development or staff: 23 (4.14)
Governance or coordination: 22 (3.96)
Heart disease and stroke or coronary heart disease: 22 (3.96)
Supports: 21 (3.78)
Diabetes mellitus: 18 (3.24)
Clinical support processes: 16 (2.88)
Economic conditions or finance: 15 (2.70)
Collaborating with patients: 13 (2.34)
Clinical: 9 (1.62)
Education and training: 9 (1.62)
Practice systems: 9 (1.62)
Diagnosis and treatment: primary care: 8 (1.44)
Patient-centeredness: 8 (1.44)
Population orientation: 7 (1.26)
Immunizations: 6 (1.08)
Respiratory/Infectious disease: 6 (1.08)
Administrative: 5 (0.90)
Efficiency: 5 (0.90)
Mental health and substance abuse: 5 (0.90)
Specific health problems: 4 (0.72)
Cancer screening: 3 (0.54)
Chronic disease management: 3 (0.54)
Inappropriate prescribing: 2 (0.36)
National initiatives: 2 (0.36)
Nutrition and Obesity: 2 (0.36)
Timeliness: 2 (0.36)
Total: 556 (100.0)
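The percentages in Table 2 are each domain's indicator count over the 556-indicator total; the sketch below re-derives the three largest shares as a sanity check (the dictionary keys are abbreviated labels, not the table's exact wording).

```python
# Re-deriving the Table 2 percentage shares from the raw counts
# (three largest groups shown; labels abbreviated for illustration).
TOTAL = 556
counts = {
    "Quality/Safety": 77,
    "Access/Facilities/Care redesign": 55,
    "Comprehensiveness": 32,
}
shares = {domain: round(100 * n / TOTAL, 2) for domain, n in counts.items()}
print(shares["Quality/Safety"])  # 13.85
```

Applied to the full table, the 35 counts sum to 556 and the rounded shares to approximately 100%, matching the reported total row.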
project,21 starting in 2003, used a set of quality indicators computed from the Electronic Medical Record data, based on evidence-based treatment, preventive care guidelines, relevance to primary care, and acceptance among health professionals. In 2006, the A-TRIP quality indicator list was set to 54 indicators
for the years to follow.22 Data from the network are methodically collected, analyzed, and presented in report form, providing clinical benchmarking information useful to primary care providers, on a quarterly basis and at a national scale.

On the opposite side of the Pacific, also at the national level, considerable attention has been given to standards of accreditation for primary care practices and organizations through small sets of mandatory quality indicators, addressing basic administrative and clinical subjects, mostly regarding preventive and chronic disease management care, and emphasizing the unmet needs of indigenous populations. In New Zealand, in 2003, the Ministry of Health proposed a national performance set of quality indicators for Primary Health Organisations (PHOs), which was later reduced to a set of 14 indicators; this was recommended as only the minimum, and PHOs were prompted to include more measurements accordingly.23,24 In Australia, a set of compulsory quality National Performance Indicators (NPIs) was proposed and implemented through the National Quality and Performance System for Divisions of General Practice for the years 2005 to 2008,25 along with a quality system leading to accreditation.26
A shorter list of 11 compulsory quality NPIs was employed in 2008 for future use.27 Other existing sets of quality indicators focusing on primary care in Australia included the Royal Australian College of General Practitioners (RACGP) Standards for General Practices set of accreditation criteria, based on evidence-based medical practice, with an extensive, multilevel, voluntary set of 139 indicators aiming at quality improvement, accreditation, and evaluation of nation-wide clinical practices,28 and the Australian Improvement Foundation’s set, based on the organization’s aim of developing tools for quality improvement in systems, which produced a list of 14 quality clinical measures for the Australian Primary Care Collaboratives Program.29

The inclusion criteria employed for the development of indicator sets varied across these projects, with importance and feasibility being the main and most commonly used criteria. For instance, in the HCQI project, the set of indicators was developed on the basis of their impact on health status, policy relevance and susceptibility to the health system, scientific soundness, and feasibility.31 In the quality indicators for general practice management, the 6 European countries’ expert panel selected indicators according to their clarity and usefulness, while in the European Primary Care Monitoring System (PC Monitor), indicators were selected according to relevance, precision, flexibility, and discriminating power.33 In any case, quality indicators were validated after initial selection and utilized through a supporting network of interlacing PHC
practices and services. All indicator sets were developed by expert groups comprising different stakeholders (PHC staff, PHC management, etc). The expert groups contributed to the scientific analysis, as well as to systematic literature reviews and rating/consensus methods, such as the Delphi method or nominal group techniques.

The number of indicators for each project ranged from 11 (NPIs) to 139 (RACGP) (Table 1). With respect to the domains covered in each project, high variability was detected. The European Primary Care Monitoring System, the Pan-Canadian PHC Indicators, and the RACGP covered the most domains compared with the rest of the projects. For example, the indicators proposed by the HCQI expert panel focused on functional and clinical aspects, regardless of the setting or presence of general practitioners, while disease-specific indicators (eg, cardiac and diabetes care, cancer screening) were decided by other expert panels, potentially for use in community settings where data from patients’ medical records would be universally obtainable; emphasis was given to health promotion and prevention (the latter strongly dominated by prenatal care). The 6 European countries’ set of indicators focused on the organization of primary practice rather than clinical performance, as improved management was considered a priority for the provision of proper clinical care; because of the apparent worldwide overrepresentation of clinical indicators versus practice management indicators, emphasis was given to accessibility and infrastructure readiness, to information flow within and between practices, and to safety of procedures.
The European Primary Care Monitoring System (PC Monitor)17 was implemented later; its health indicators were rated according to relevance, precision, flexibility, and discriminating power, and a final set of 99 indicators was selected through expert consensus, with attention given to 3 levels of domains (ie, structure, process, and outcome). The Canadian set of indicators summarized the overall domains of PHC services; through its extensive list of proposed indicators, it resulted in a multilevel PHC assessment tool, with the overarching aim of ensuring equity in the use of primary care services.
● Discussion

Given that evaluation of the quality of provided health care services has received considerable international attention in the past few years, primary health care, being the foundation of efficient health care provision, was expected to receive a large portion of this consideration. Indeed, while reviewing the current literature, we noticed that a substantial number of
quality-monitoring studies and projects have been used since 2000 to investigate and promote the possible benefits of, and expectations from, the implementation of recommended practices in primary care with the help of quality performance indicators. Quality-assessing organizations and committees usually function as the link between the evaluation or benchmarking demands of key stakeholders, or a greater community call for improvement in the health care system, and health care providers or professionals; previously unmet needs are thus recognized, and the selected standards are then disseminated for implementation, resulting in a variety of quality indicator sets.

It was encouraging that, during the last decade, at least 1 method or system of assessing primary care performance has been developed in technologically advanced countries, even if only regionally. Also encouraging was the fact that large international organizations (OECD, European Commission, etc) have been implementing evaluation tools at the international level, highlighting the broader role of well-organized primary health care. Furthermore, extended indicator sets that cover most domains of primary care seem to offer flexibility, permitting further alterations according to specific requirements, regionally or according to major subjects at hand.

Of all the efforts in the area of evaluating the quality of PHC service provision, 10 implemented projects seemed to be the most universal, in terms of overall coverage of PHC services, wide acceptance, and the possibility of (at least) nation-wide comparison of results; these were the projects reviewed in our study. These projects varied considerably in geographical areas and settings of implementation, in volume and the specific measures preferred, and, most importantly, in the theoretical basis of their development and the quantification of the domains evaluated.
Overall, the number and specific selection of indicators were found to depend largely on the focus (clinical, managerial, or multilevel), the available data sources (administrative data, medical records, or surveys), and the general purpose (certification, health screening, or quality improvement incentives) of each quality project regarding PHC services. In general, the number and content of indicators included in the projects varied according to the specific aims of implementation, with some projects aiming at overall, multilevel assessment of PHC services and involving flexibility in the selection of available indicators, such as the 3 most extensive indicator sets: the RACGP Standards set (139 indicators), the Canadian set (105 indicators), and the European PC Monitor set (99 indicators). These 3 extensive sets were also the ones that evaluated all recognized domains in primary care provision. Small indicator sets were developed for accreditation purposes, by projects usually aiming
at mandatory participation, the 3 smallest being the Australian set of NPIs (11 indicators), the New Zealand PHO set (14 indicators), and the (voluntary) Australian Improvement Foundation set for the APCC program (14 indicators); these were the sets that mostly used structure indicators and administrative data.

The identified projects, which could be used for evaluating quality in PHC services, were generally developed on the basis of different scopes and followed different strategies for motivating health professionals to participate and for disseminating results. It is therefore no surprise that the sets of quality indicators employed did not align with a common conceptual framework. However, basic domains of primary care services, such as access, comprehensiveness (including promotion and prevention health plans), and continuity (including chronic disease management), were evaluated in most quality projects, even if the classification of the corresponding indicators differed, the domains were phrased differently, or the definitions were not identical.
In particular, access was evaluated, even if in slightly different terms (ie, timeliness), across all quality projects except the OECD’s HCQI project for primary care and PPRNet’s A-TRIP quality indicators. The OECD project referred to different settings internationally and, as already mentioned, aimed at prevention and promotion strategies, so priority was given to assessing clinical effectiveness in these domains; the A-TRIP indicators were meant for internal use by private practices, which assumed that patients were able to seek care at these practices, so evaluating access was probably out of focus. Evaluation of comprehensiveness of care was evident in all but one of the projects, as indicators covering a wide range of clinical entities were present in most sets, usually referring to domains of promotion and prevention, either primary or secondary. The only set that did not evaluate comprehensiveness of care in clinical terms was the one proposed by the experts in the 6 European countries which, as mentioned previously, aimed at managerial purposes only; but even this set managed to address global and holistic health care requirements (eg, presence of “essential” equipment, “completeness” of the content of the “doctor’s bag,” etc). Continuity, mostly in terms of chronic disease management and informational continuity, was assessed in all projects, although it received different attention depending on the focus and scale of the set. For instance, the use of medical records and the information in them was the main focus of continuity in the managerial set proposed by the 6 European countries, while, in the opposite direction, PPRNet’s A-TRIP confined the evaluation of continuity to appropriate care for chronic conditions, as all patients were expected to be covered by the same medical record system.

Many projects, mainly those that approached quality either broadly or administratively, included indicators dedicated to resources of personnel, finance, infrastructure, or all 3 categories.16-18,26,28 Data for such indicators are theoretically the easiest to obtain, although their correlation with the success of health care processes and health outcomes is not always attainable.31 Another topic addressed globally was safety within primary care, with many quality indicator sets evaluating the protection of patients and staff from harmful methods,16,20,22,28 reflecting a global trend in the hierarchy of stakeholders’ needs for safe practices. Also, although all sets have been proposed as “quality” indicator sets, many included quality improvement as a distinct domain,16-18,28 demonstrating the importance of initiatives directed toward progress in the quality of procedures, as opposed to the simple adoption of techniques. As mentioned earlier, the IOM’s 6 domains were considered of great importance in the current study, and the indicators referring to the IOM domains are suggested by the authors as most useful for evaluation purposes; they were later included in the final version of the pilot study.

A limitation of the current article is that projects relevant to the aim of the study and implemented within the time range explored might have been omitted, because information on them might not yet have been published at the time the current review was conducted.
Another limitation is that, for comparability reasons, the indicators' domains were grouped by the authors of this article (as presented in Table 2) on the basis of the description of indicators provided by each project; although some degree of heterogeneity existed across the definitions of the grouped indicators, the authors considered the grouped indicators to represent the same meaningful concepts.

During the investigation of the international literature on methods and projects for the evaluation of PHC services, we encountered limits to the extent to which distinct projects could be compared and associated with one another. As already mentioned, the number and content of the domains evaluated and the measurements used varied between projects. Even similarly termed indicators, dedicated to the same subject and described in similar phrases, did not always correspond to the same measurement method across projects, as the definitions of the indicators differed in target populations (eg, regarding age, specific conditions), in the duration and time periods implied (eg, outcomes measured within different time periods after acute care, different periods of follow-up, prevalence indicators named similarly to rate indicators), and in objectives (eg, hypertension cutoff limits). Another inherent problem, regarding raw data in general, was the lack of standardization of collection tools, which could in turn lead to the gathering of insufficient or invalid measurements; even the depth and quality of administrative data differed between regions, and this should be accounted for when implementing broadly effective quality projects. This lack of standardization may therefore make it difficult to compare projects and results from different countries, while in areas where international projects operate in parallel to national initiatives (eg, Europe), it might cause problems regarding expenses and the burden on personnel of collecting data suitable for all projects, might create competing interests,35 and, most importantly, might result in misunderstanding, improper duplication, and, consequently, low quality of the information gathered. It is therefore important that quality projects provide suitable tools for data gathering, along with official specifications, but also align to a more generic, common framework and strive to overcome difficulties deriving from differences in cultural, technical, and nationwide system-related principles.
● Conclusion

In recent years, many frameworks across different countries and organizations have dealt with the evaluation of quality in primary health care services. Up to 2010, however, no methodological, universal assessment of the values and measurements embraced by quality projects for PHC services had been published; the purpose of this study was therefore to systematically review existing projects that target the performance of primary care services through the use of quality health indicators. We identified several contemporary, widespread projects and sought both to describe their individual traits and to relate their shared characteristics. The latter proved to be a difficult task, as the theoretical background and realization of each quality framework varied greatly, even within the specific field of primary health care performance, while the noted lack of standardization of data collection and of alignment of indicator definitions across projects poses risks for attempts at appraisal between areas using alternative quality-assessment projects, or for cases in which different projects operate in parallel in the same area. Further discussion and action toward the alignment of quality projects focusing on primary care are required, so that future evaluation efforts become comparable and contribute substantially and systematically to the mapping of progress in primary care, internationally.
REFERENCES

1. World Health Organization (WHO). Declaration of Alma-Ata. http://www.who.int/publications/almaata_declaration_en.pdf. Published 1978. Accessed June 21, 2013.
2. Vanselow NA, Donaldson MS, Yordy KD. From the Institute of Medicine. JAMA. 1995;273:192.
3. World Organization of National Colleges, Academies and Academic Association of General Practitioners/Family Physicians (WONCA). The Role of the General Practitioner/Family Physician in Health Care Systems. Victoria, Australia: WONCA; 1991.
4. Starfield B. Primary Care: Balancing Health Needs, Services, and Technology. New York, NY: Oxford University Press; 1998.
5. Starfield B, Shi L. Policy relevant determinants of health: an international perspective. Health Policy. 2002;60:201-218.
6. Starfield B, Shi L, Macinko J. Contribution of primary care to health systems and health. Milbank Q. 2005;83:457-502.
7. Macinko J, Starfield B, Shi L. The contribution of primary care systems to health outcomes within Organization for Economic Cooperation and Development countries 1970-1998. Health Serv Res. 2003;38:831-865.
8. Shi L, Macinko J, Starfield B, Wulu J, Regan J, Politzer R. The relationship between primary care, income inequality and mortality in the United States 1980-1995. J Am Board Fam Pract. 2003;16:412-422.
9. World Health Organization (WHO). Health Promotion Glossary. Geneva, Switzerland: WHO. http://whqlibdoc.who.int/hq/1998/WHO_HPR_HEP_98.1.pdf. Published 1998. Accessed June 21, 2013.
10. Ibrahim JE. Performance indicators from all perspectives. Int J Qual Health Care. 2001;13:431-432.
11. Donabedian A. An Introduction to Quality Assurance in Health Care. Oxford, UK: Oxford University Press; 2003.
12. Institute of Medicine. Envisioning the National Health Care Quality Report. Washington, DC: National Academies Press; 2001.
13. Moher D, Liberati A, Tetzlaff J, Altman DG; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6:e1000097.
14. Marshall M, Klazinga N, Leatherman S, et al. OECD health care quality indicator project. The expert panel on primary care prevention and health promotion. Int J Qual Health Care. 2006;18:21-25.
15. Engels Y, Campbell S, Dautzenberg M, et al. Developing a framework of, and quality indicators for, general practice management in Europe. Fam Pract. 2005;22:215-222.
16. Engels Y, Dautzenberg M, Campbell S, et al. Testing a European set of indicators for the evaluation of the management of primary care practices. Fam Pract. 2006;23:137-147.
17. Kringos DS, Boerma WG, Bourgueil Y, et al. The European primary care monitor: structure, process and outcome indicators. BMC Fam Pract. 2010;11:81.
18. Sullivan-Taylor P, Webster G, Mukhi S, Sanchez M. Development of electronic medical record content standards to collect pan-Canadian primary health care indicator data. Stud Health Technol Inform. 2009;143:167-173.
19. Agency for Healthcare Research and Quality. Research findings, quality and patient safety, measuring healthcare quality. NHQR report 2007. http://www.ahrq.gov/. Published 2007. Accessed November 15, 2011.
20. Agency for Healthcare Research and Quality. Research findings, quality and patient safety, measuring healthcare quality. NHQR report 2010. http://www.ahrq.gov/. Published 2010. Accessed November 15, 2011.
21. Feifer C, Ornstein SM, Jenkins RG, et al. The logic behind a multimethod intervention to improve adherence to clinical practice guidelines in a nationwide network of primary care practices. Eval Health Prof. 2006;29:65-88.
22. Wessell AM, Liszka HA, Nietert PJ, Jenkins RG, Nemeth LS, Ornstein S. Achievable benchmarks of care for primary care quality indicators in a practice-based research network. Am J Med Qual. 2008;23:39-46.
23. Perera R, Dowell T, Crampton P, Kearns R. Panning for gold: an evidence-based tool for assessment of performance indicators in primary health care. Health Policy. 2007;80:314-327.
24. New Zealand Ministry of Health. PHO Performance Management Programme: Summary Information for PHOs. Wellington, New Zealand: New Zealand Ministry of Health; 2006.
25. Sibthorpe B, Gardner K. Conceptual framework for performance assessment in primary health care. Aust J Prim Health. 2007;13:96-103.
26. Gardner KL, Sibthorpe B, Longstaff D. National quality and performance system for divisions of general practice: early reflections on a system under development. Aust New Zealand Health Policy. 2008;5:8.
27. Primary Health Care Research and Information Service. Divisions Network Reporting: Support Pages. Adelaide, Australia: Primary Health Care Research and Information Service; 2008.
28. Royal Australian College of General Practitioners (RACGP). RACGP Policy: Clinical Indicators and the RACGP Policy. South Melbourne, Australia: RACGP; 2009.
29. Ford DR, Knight AW. The Australian Primary Care Collaboratives: an Australian general practice success story. Med J Aust. 2010;193:90-91.
30. Organization for Economic Co-operation and Development (OECD). Health care quality indicators (HCQI). http://www.oecd.org/health/hcqi/. Accessed November 15, 2011.
31. Iezzoni LI. Assessing quality using administrative data. Ann Intern Med. 1997;127:666-674.
32. World Health Organization (WHO). WHO Statistical Information System (WHOSIS). http://apps.who.int/whosis/. Accessed November 15, 2011.
33. World Health Organization (WHO). The Global Health Observatory (GHO). http://www.who.int/gho/. Accessed November 15, 2011.
34. European Community Health Indicators and Monitoring project (ECHIM). http://www.healthindicators.eu/. Accessed November 15, 2011.
35. Groene O, Skau JK, Frolich A. An international review of projects on hospital performance assessment. Int J Qual Health Care. 2008;20:162-171.
Copyright © 2013 Lippincott Williams & Wilkins. Unauthorized reproduction of this article is prohibited.