

Roadmap for developing a national quality indicator set for general practice


Ailis ni Riain Education & Research Solutions, Wicklow, Ireland

Received 1 September 2014 Revised 30 November 2014 Accepted 22 December 2014

Catherine Vahey and Conor Kennedy Research Department, Irish College of General Practitioners, Dublin, Ireland

Stephen Campbell

National Primary Care Research and Development Centre, University of Manchester, Manchester, UK, and

Claire Collins
Research Department, Irish College of General Practitioners, Dublin, Ireland

Abstract
Purpose – The purpose of this paper is to describe a national, comprehensive quality indicator set to support delivering high-quality clinical care in Irish general practice.
Design/methodology/approach – Potential general practice quality indicators were identified through a literature review. A modified two-stage Delphi process was used to rationalise international indicators into an indicator set, involving both experts from key stakeholder groups (general practitioners (GPs), practice nurses, practice managers, patient and health policy representatives) and predominantly randomly selected GPs. An illuminative evaluation approach was used to road test the indicator set and supporting materials.
Findings – In total, 80 panellists completed the two Delphi rounds and staff in 13 volunteer practices participated in the road test. The original 171 indicators were reduced to 147 during the Delphi process and further reduced to 68 indicators during the road test. The indicators were set out in 14 sub-domains across three areas (practice infrastructure, practice processes and procedures, and practice staff). Practice staff planned 77 quality improvement activities after their assessment against the indicators; 31 (40 per cent) were completed, 44 (57 per cent) were ongoing and two (3 per cent) were not advanced after a six-month road test. A General Practice Indicators of Quality indicator set and support materials were produced at the conclusion.
Practical implications – It is important and relatively easy to customise existing quality indicators to a particular setting. The development process can be used to raise awareness, build capacity and drive quality improvement activity in general practices.
Originality/value – The authors describe in detail a method to develop general practice quality indicators for a regional or national population from existing validated indicators using consensus, action research and an illuminative evaluation.
Keywords Ireland, Evaluation, Quality indicators, Quality improvement, Primary healthcare, General practice, Health systems
Paper type Research paper

International Journal of Health Care Quality Assurance Vol. 28 No. 4, 2015 pp. 382-393 © Emerald Group Publishing Limited 0952-6862 DOI 10.1108/IJHCQA-09-2014-0091

The authors thank the Delphi panellists and Indicator Working Group members; Royal College of General Practitioners; Royal Australian College of General Practitioners; Royal New Zealand College of General Practitioners and the European Practice Assessment Collaboration Group for giving permission to include their indicators. This study was supported by a grant from the Health Information and Quality Authority.

Introduction
Objective measurement and improvement of quality is more important than simple activity measures (Campbell et al., 2003). It is important to develop and use appropriate methods, tools and indicators that are relevant to primary care settings. Quality indicators, widely accepted as such a measure, are defined by the European Working Party on Quality in Primary Care as:


Measurable elements of practice performance for which there is evidence or consensus that they can be used to assess quality, and hence change the quality of care provided (Lawrence and Olesen, 1997, p. 104).

In recent years, quality indicators have proliferated across diverse healthcare areas, from hospital clinical care (Mattke et al., 2006; Thomson et al., 2004) to mental health (Shield et al., 2003), nursing home care (Zimmerman et al., 1995) and prescribing (Avery et al., 2011). In general practice, quality indicators have been developed for national accreditation schemes (Royal New Zealand College of General Practitioners, 2002; Royal Australian College of General Practitioners, 2005), in quality improvement initiatives (Royal College of General Practitioners, 2006), in prevention and health promotion (Marshall et al., 2006), in general practitioner (GP) contractual agreements (Shekelle, 2003) and also in smaller-scale projects to improve specific aspects of care, e.g. prescribing and chronic disease management (Gribben et al., 2002; Jeacocke et al., 2002). A cross-country collaboration developed quality indicators for practice management in European general practices (Engels et al., 2005). Irish general practices, like those elsewhere, have seen an increased emphasis on delivering quality care and ensuring patient safety. The Irish College of General Practitioners (ICGP) supports this through continuous medical education and initiatives that mirror quality improvement trends in other European countries (Baker et al., 2006). The ICGP recognised the need to provide practical quality improvement assistance to general practices and initiated a project to develop comprehensive quality indicators to achieve high-quality clinical care for Irish general practice. Adhering to published recommendations (Mainz, 2003; Marshall et al., 2002; Campbell et al., 2003), the focus was on developing indicators recognised as important to quality care and within practice control. Heath et al. (2009) posit that effective primary care depends on integrating vertical and horizontal services. The former refers to managing specific diseases across the healthcare system, while the latter emphasises coordinating and personalising care around individual needs. Our project focused on developing a formative, as opposed to summative, qualitative tool (Campbell et al., 2010) that addresses the horizontal component in this healthcare model. We report on this development (General Practice Indicators of Quality (GP-IQ)) using a modified Delphi technique that incorporated relevant stakeholder group views and a subsequent road test in volunteer general practices.

Method
The Delphi method is a common group decision-making technique that uses structured questionnaires to explore consensus about a topic among a panel (Linstone and Turoff, 1975). It is used to develop indicators for measurement in areas where research evidence is lacking or contested (Campbell et al., 2003). Our key stakeholders were GPs, practice managers (PMs), practice nurses (PNs), patient representatives and health policy representatives. The Delphi method generally involves nominated experts as panellists, who are selected because they are interested or have a professional role in improving service quality.
Additionally, we included a randomly selected GP cohort alongside the core stakeholder group in the development process, which has been shown to improve the relevance of the indicators and to increase the likelihood that they will be adopted (Campbell et al., 2003; Royal Australian College of General Practitioners, 2005). Project oversight was provided by an Advisory Group (nine experts) drawn from the stakeholder groups, including two international experts in quality indicator development from the UK and Denmark. Ethical approval was obtained from the ICGP Research Ethics Committee.

Modified Delphi process
The Irish College of General Practitioners adapted and adopted the European definition of general practice, which outlines six core competencies and associated general practice characteristics (www.icgp.ie/curriculum/definition.php). Using this definition as a guiding principle, an Indicator Working Group (IWG; 13 individuals), drawn from all stakeholder groups, prioritised care domains for indicator development. A focused literature review identified validated quality indicators within these domains, predominantly drawn from the UK (Royal College of General Practitioners, 2006), Europe (Engels et al., 2005), New Zealand (Royal New Zealand College of General Practitioners, 2002) and Australia (Royal Australian College of General Practitioners, 2005). The relevant bodies gave permission to use these indicators. In total, 171 indicators were listed once duplicates and those that did not apply to the Irish general practice setting were removed. The Delphi panel completed two questionnaire rounds, rating the indicators for importance to quality care (Rounds 1 and 2), wording clarity (Round 1) and measurability (Round 2). Ratings were made on nine-point Likert scales, based on the RAND appropriateness method (Campbell et al., 2000). Indicators that received an overall rating ⩾7 in Round 1 were retained for Round 2 and those rated <7 were excluded (see the illustrative sketch below). In Round 1, panellists could also comment and suggest new indicators. In Round 2, panellists were given the mean rating and range for each indicator from Round 1 and re-rated the indicators for importance to quality. In this round, panellists also rated indicator measurability using the same nine-point scale. An indicator was considered measurable if indicator data could be reliably collected in the practice (overall panel rating ⩾7). Panellists without general practice experience were given the option of not rating on this scale.

Road testing the preliminary indicator set
The indicator set identified through the Delphi process, and its merit as a quality improvement tool, were tested in 13 volunteer general practices broadly representative of Irish general practice. Practice staff measured themselves against the indicators, identified five areas for quality improvement, undertook those improvements and participated in the evaluation, which resulted in further refinements to the indicator set. During the Delphi process it became apparent, from panellists' comments, that the road test would benefit from supporting materials to facilitate and quantify practice improvements: a handbook detailing how practice staff should self-assess against each indicator and signposting relevant resources such as web sites, articles and books; a rating book; a quality improvement template; and a waiting room notice.
A GP-IQ logo was designed by the research team and applied to all project documents to give the quality indicator set a distinct identity.
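To make the Delphi retention rule concrete, the sketch below shows how round-one ratings might be aggregated. It is an illustration added here, not the study's own analysis code: the indicator names and ratings are invented, and the rule is simplified to the overall panel mean, whereas the study also took sub-group (e.g. GP) ratings into account when excluding indicators.

```python
# Illustrative sketch only: aggregating hypothetical Delphi round-one ratings.
from statistics import mean

RETAIN_THRESHOLD = 7  # overall panel rating >= 7 retains an indicator

def round_one_filter(ratings_by_indicator):
    """Split indicators into retained and excluded sets after round one."""
    retained, excluded = {}, {}
    for indicator, ratings in ratings_by_indicator.items():
        (retained if mean(ratings) >= RETAIN_THRESHOLD else excluded)[indicator] = ratings
    return retained, excluded

def round_two_feedback(ratings_by_indicator):
    """Summary fed back to panellists before re-rating: mean and range."""
    return {name: {"mean": round(mean(r), 1), "range": (min(r), max(r))}
            for name, r in ratings_by_indicator.items()}

# Invented ratings for three hypothetical indicators (1-9 Likert scale)
ratings = {
    "Premises are accessible to people with disabilities": [9, 8, 9, 7, 8],
    "The practice has a written complaints procedure": [7, 7, 8, 9, 6],
    "The waiting room stocks current magazines": [5, 6, 4, 7, 5],
}
kept, dropped = round_one_filter(ratings)
print(sorted(kept))              # first two indicators retained
print(sorted(dropped))           # third indicator excluded (mean 5.4 < 7)
print(round_two_feedback(kept))  # mean and range shown to panellists in round two
```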


Evaluating the road test
The evaluation aimed to determine whether the indicators were fit for purpose and to identify changes needed to the indicators and the GP-IQ support materials. An illuminative evaluation approach, described by Parlett and Hamilton (1972) and used by Macfarlane et al. (2004), was selected. This approach focuses on description and interpretation rather than measurement and prediction. The multi-modal evaluation included immediate and reflective feedback, collected the views of all practice staff and took individual and collective input into account. Postal surveys, telephone feedback, practice visits and a focus group discussion with staff from all participating practices were used during this process.


Results
Indicator working group
This group identified 16 subdomains classified into three broad domains (practice infrastructure, practice processes and procedures, and practice staff).

Panellist response rates
Round one questionnaires were sent to 332 panellists; 125 (38 per cent) responded. In total, 80 (64 per cent) of these completed the second round questionnaire (Table I). As expected, the nominated panellists who had given prior agreement to participate responded better than randomly selected GPs in round one (73 per cent compared with 26 per cent), but the groups had more evenly matched response rates in round two (72 per cent compared with 57 per cent). The panel breakdown for both rounds was approximately 60 per cent GPs and 40 per cent other stakeholders.

Table I. Delphi panels and response rates
Panel | Target for completion | Identified | Agreed to participate | Completed Round 1 | Completed Round 2
Nominated panellists | 50 | 120 | 82 | 60 | 43
  GPs | | | | 14 | 11
  Practice nurses | | | | 14 | 8
  Practice managers | | | | 10 | 9
  Patient representatives | | | | 10 | 6
  Policy representatives | | | | 12 | 9
Random GPs | 50 | 250 | n/a | 65 | 37
All panellists | 100 | 370 | n/a | 125 | 80
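As a quick arithmetic check, the response rates quoted above can be reproduced from the counts in Table I. The short script below is illustrative only (it is not part of the original study) and simply re-derives the percentages from the table's denominators.

```python
# Illustrative re-derivation of the response rates reported in the text,
# using the panel counts from Table I.
panels = {
    # panel: questionnaires sent in round 1, completed round 1, completed round 2
    "Nominated panellists": {"sent": 82, "round1": 60, "round2": 43},
    "Random GPs": {"sent": 250, "round1": 65, "round2": 37},
}

def pct(part, whole):
    """Percentage rounded to the nearest whole number."""
    return round(100 * part / whole)

for name, p in panels.items():
    print(f"{name}: round one {pct(p['round1'], p['sent'])}%, "
          f"round two {pct(p['round2'], p['round1'])}%")
# Nominated panellists: round one 73%, round two 72%
# Random GPs: round one 26%, round two 57%

sent = sum(p["sent"] for p in panels.values())   # 332 questionnaires sent
r1 = sum(p["round1"] for p in panels.values())   # 125 round-one responses
r2 = sum(p["round2"] for p in panels.values())   # 80 round-two responses
print(sent, r1, f"{pct(r1, sent)}%", r2, f"{pct(r2, r1)}%")  # 332 125 38% 80 64%
```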


Questionnaire round one
The 171 indicators were rated by 125 panellists (60 nominated individuals from all stakeholder groups and 65 randomly selected GPs). In total, 15 indicators were excluded because they were rated <7 for importance to quality either overall, or by GPs and at least one other stakeholder group. All remaining indicators were rated ⩾7 for clarity, although 19 indicators had minor modifications made to their wording following suggestions from panellists. Six additional indicators were added at the panellists' suggestion.

Questionnaire round two
The 162 indicators were rated by 80 panellists (43 nominated panellists and 37 randomly selected GPs). Overall, all indicators were rated ⩾7 for importance to quality care, with 100 (61.7 per cent) rated 9, 50 (30.9 per cent) rated 8/8.5 and 12 (7.4 per cent) rated 7/7.5. Five indicators were rated <7 by the GP sub-group in both rounds; given this consistency, and the borderline rating of 7 from the overall group, they were excluded. All but three indicators were rated ⩾7 for measurability by the panel. The three indicators rated low for measurability were retained because they had received high ratings for importance; their merit was discussed at a subsequent IWG meeting.

Indicator working group and advisory group input
The 157 indicators resulting from the Delphi process were considered at an IWG meeting and by the Advisory Group. Following this consultation, nine indicators were deleted, three others were amalgamated into one and one was divided into two, resulting in 147 indicators distributed across 15 care subdomains. One subdomain (quality improvement) was deleted, as its indicators had either been excluded or moved to other subdomains. Support materials were developed at this stage.

Road test and evaluation
The indicator set, its presentation and support materials, and its ability to drive quality improvement activity were evaluated simultaneously, and the results were mapped onto a matrix to ensure all elements were evaluated (Table II). There was high agreement throughout all evaluation components that the GP-IQ domains and subdomains were appropriate and relevant. Subsequent changes to the indicators necessitated amalgamating two subdomains (patient information and patient involvement). The GP-IQ domains and subdomains are listed as follows:
(1) Practice infrastructure:

• premises and facilities;
• medical equipment;
• information technology;
• health and safety; and
• financial management.
(2) Practice processes and procedures:
• service availability and access;
• patient records management;
• medicines management;
• continuity and coordination; and
• patient information and involvement.

(3) Practice staff:
• qualifications and training;
• learning organisation;
• teamwork and team support; and
• human resource management.

Table II. GP-IQ evaluation matrix with each element's primary purpose highlighted
Evaluation elements (rows): review of self-assessment; review of QIP report; researcher contact/support; immediate feedback; practice visit; staff survey; focus group. Note: QIP = quality improvement plan.
Evaluation purposes (columns): indicators; domains; rating scales; support materials; logistics; impact on patients; involvement and impact on staff; practical examples; motivation; further development.
Each element was marked against all the purposes it addressed, with its primary purpose highlighted; the individual cell markings are not reproduced here.


The most common feedback was that the number of indicators was overwhelming and needed to be drastically reduced. Group discussion, input from Advisory Group experts and a literature review resulted in several strategies to achieve this reduction. Indicators that had been easily achieved by all practices were deleted. Indicators that measured the same aspect of care were grouped, with some being subsumed as criteria under broader indicators. This resulted in a final GP-IQ set of 68 indicators, some with subsidiary criteria. The rating scale was simplified to a four-point scale and it was recommended that rating should be exclusively electronic. The handbook was considered informative, comprehensive and useful by participating staff; its layout was altered to reflect the final indicator set. The quality improvement template was modified slightly, and additional support materials (patient and staff questionnaires with associated instructions) were developed.

Quality improvement activity
Practice staff were instructed to identify five areas for improvement arising from their rating against the indicators and to carry out those improvements. In total, 77 quality improvements were planned (staff in seven practices planned five improvements each, four planned six, one eight and one ten). These planned improvements spanned 13 of the 15 quality subdomains, with the majority in the practice infrastructure domain. The top ten quality improvements were undertaken in multiple practices (Table III). After nine months, 31 of the 77 improvements were reported to be complete, 44 were ongoing and two were not possible. However, it is possible that more improvements were completed, as a final report was not received from six practices; because this request fell outside our research agreement, we did not follow up the non-responders. Improvements that were specific and linked to a single indicator or a few indicators were more likely to be completed.

Table III. Top ten quality improvements in the GP-IQ road test
Quality improvement | Practices | GP-IQ related subdomain
Develop/update health and safety statement | 7 | Health and safety
Offer Hepatitis B vaccination to staff | 5 | Health and safety
Develop hygiene/infection control protocols | 5 | Health and safety
Develop/update patient complaints procedure | 5 | Patient information and involvement
Develop an incident report sheet | 3 | Health and safety
Update doctor's bag procedures | 3 | Medical equipment
Establish regular practice meetings | 3 | Teamwork and team support
Undertake staff appraisals | 3 | Human resource management
Update patient information leaflet and web site | 3 | Patient information and involvement
Improve practice signs | 3 | Premises and facilities

Outputs
The project's outputs included a validated quality indicator set for Irish general practice (GP-IQ) with support materials; recommendations for further development; raising awareness about measuring quality improvements amongst GPs and general practice staff; and building capacity for introducing compulsory general practice accreditation in Ireland (www.icgp.ie/go/research/reports statements/5281D64DE9B5-E54201D6B39051C2 E0AA.html) (Table IV).


Table IV. GP-IQ project outputs
Validated quality indicator set for Irish general practice: 68 indicators over 3 domains and 14 subdomains.
GP-IQ support materials: rating book; handbook (instructions, examples and hyperlinks to templates and resources); patient survey instrument and instructions; staff survey instrument and instructions; quality improvement plan template and instructions.
Recommendations for further development: develop a national GP-IQ scheme; integrate GP-IQ into the Irish College of General Practitioners quality agenda; recognise GP-IQ as a component of a broader governance and regulatory framework.
Raising awareness and building capacity in general practice: 112 general practice staff members involved in the Indicator Working Group and Delphi process (85 GPs, 15 practice nurses, 12 practice managers); 156 general practice staff members involved in the road test (53 GPs, 31 practice nurses, 53 administrative staff and 19 other clinical professionals).

Discussion
We developed quality indicators for Irish general practice, where none previously existed, in an environment where a national quality and safety framework is being developed for healthcare.

Selecting quality domains
Our decision to focus on non-clinical indicators in the practice infrastructure, practice processes and procedures, and practice staff domains was informed by reviewing existing general practice indicators previously developed in several countries. The Royal College of General Practitioners (2002, p. 3) policy argues against using clinical indicators alone in general practice because "the application of quality indicators for specific clinical conditions within a generalist discipline will yield information that relates to only a relatively small part of the generalist's work", while focusing on improving the generic practice systems that support patient care is recognised as a beneficial activity (Edwards et al., 2010). While good organisational processes do not necessarily ensure good quality clinical care, it is unlikely that good quality clinical care can be consistently delivered without sound organisational structures and good practice management processes (Donabedian, 1980).

Reducing the indicators
This process reduced more than 500 indicators identified in the literature to 171 for the Delphi process, to 147 at the end of the Delphi process and to 68 in the final indicator set. Most indicators were rated both important to care and measurable in practice by the panel, as they were selected from validated indicator sets. Allowing Delphi panellists to suggest new indicators ensured that the individuality of Irish general practice was retained. Keeping the number of indicators manageable encourages better engagement with the process by practice staff, which is important when participation is voluntary rather than compulsory. Retaining indicators in almost all of the original areas defined by the IWG meant that the scope of the GP-IQ initiative was also retained, despite the reduction in the total number of indicators.


Importance
A good quality indicator should define care that is attributable to, and within the control of, the person delivering the care (Marshall et al., 2002; Campbell et al., 2003). The value of grassroots involvement in indicator development is widely acknowledged (Royal Australian College of General Practitioners, 2005; Royal College of General Practitioners, 2002; Marshall et al., 2003), and a limitation identified in the European Practice Assessment was that many indicators that were important to individual participating countries could not be included because they had not been highly rated by all countries (Engels et al., 2005). Developing a quality indicator set specifically for Ireland provided us with the opportunity to incorporate local feedback and knowledge, assuring acceptance and relevance, which should encourage widespread use. Hence our project included all stakeholder groups at all project stages, with GPs forming the majority of the Delphi panel. Providing Irish GPs and other stakeholders in Irish general practice with the opportunity to contribute to the development was a particular strength. Grassroots involvement in developing an Irish indicator set ensures that it reflects what is important in the Irish setting and is likely to promote acceptance amongst the broader GP population. It is also likely to prove more cost and time effective in the long run.

Driving quality improvement activity
One major limitation, even among indicators that are rigorously developed, is that they do not define the cause of a quality problem; rather, they identify an issue that may require further investigation (Campbell et al., 1998). The Royal College of General Practitioners Policy Statement (2002) asserts that using quality indicators should stimulate discussion about improving services and patient care. Requiring participating practice staff to use the initial measurement against the indicators to identify five quality improvements required within their practice, and then providing them with the tools to make the changes, meant that the evaluation encompassed the quality indicators' purpose. It was important to allow staff in each practice to select areas for formative improvement so that meaningful and relevant interventions could be undertaken. This is reflected in the relatively high rate of completed (40 per cent) and ongoing (57 per cent) improvements within a short six-month timeframe. A strong feedback theme was how the process had led to more discussions among practice staff about quality improvements. There was universally positive feedback from staff about the support materials, and they also greatly valued having a designated contact person in the ICGP to troubleshoot problems and provide advice. Several potential roles for the GP-IQ were identified by road-test practice staff. It could be promoted as a quality improvement tool to interested practices in general or, specifically, to assist staff in new or struggling practices. It was also suggested that it could be useful in directing newly appointed practice managers, provide a structure for risk management and improve patient involvement and satisfaction with the practice.


Limitations
The low response rate among randomly selected GPs is a limitation; 24 per cent of those who received the questionnaire in round one completed the Delphi process. However, this response rate mirrors other Delphi studies that sampled individuals who had not given prior agreement to participate (Jeacocke et al., 2002; Campbell et al., 2000). We attempted to mitigate this by deliberately oversampling randomly selected GPs. Morris et al. (2001) determined that questionnaire size was one of two main factors influencing GP decisions to complete questionnaires. Considering that the first Delphi questionnaire was 20 pages long, the response rate was perhaps affected more by questionnaire length than by disinterest in quality indicators. The major limitation identified by participating practices was the time and resources required to undertake the assessment and quality improvements, which is consistent with other indicator evaluations. The time-frame for the road test (two months for assessment and six months for quality improvements) was also judged to be too short. It is unlikely that GP-IQ will be widely adopted in its entirety as a purely voluntary activity because, as one participating practice manager stated, "everything else is a higher and more urgent priority than this". However, it provides validated indicators that can be incorporated into evolving health and safety frameworks in Ireland.

Conclusions
It is important and relatively easy to customise existing quality indicators for particular settings. The volunteer practices in the road test embraced quality improvement and delivered real results. They were enthusiastic about the structures provided to enable them to undertake this work.

References
Avery, A.J., Dex, G., Mulvaney, G., Serumaga, B., Spencer, R., Lester, H. and Campbell, S.M. (2011), "Development of prescribing safety indicators for GPs using the RAND appropriateness method", British Journal of General Practice, Vol. 61 No. 589, pp. 526-536.
Baker, R., Wensing, M. and Gibis, B. (2006), "Improving the quality and performance of primary care", in Saltman, R.B., Rico, A. and Boerma, W. (Eds), Primary Care in the Driver's Seat? Organizational Reform in European Primary Care, Open University Press, Maidenhead, pp. 203-225.
Campbell, S.M., Braspenning, J., Hutchinson, A. and Marshall, M.N. (2003), "Improving the quality of health care: research methods used in developing and applying quality indicators in primary care", British Medical Journal, Vol. 326 No. 7393, pp. 816-819.
Campbell, S.M., Cantrill, J.A. and Roberts, D. (2000), "Prescribing indicators for UK general practice: Delphi consultation study", British Medical Journal, Vol. 321 No. 7258, pp. 425-428.
Campbell, S.M., Chauhan, U. and Lester, H.E. (2010), "Primary medical care provider accreditation (PMCPA): pilot evaluation", British Journal of General Practice, Vol. 60 No. 576, pp. 295-304.
Campbell, S.M., Roland, M.O., Quayle, J.A., Buetow, S.A. and Shekelle, P.G. (1998), "Quality indicators for general practice: which ones can general practitioners and health authority managers agree are important and how useful are they?", Journal of Public Health Medicine, Vol. 20 No. 4, pp. 414-421.
Donabedian, A. (1980), Explorations in Quality Assessment and Monitoring. Vol. 1. The Definition of Quality and Approaches to its Assessment, Health Administration Press, Ann Arbor, MI.


Edwards, A., Rhydderch, M., Engels, Y., Campbell, S., Vodopivec-Jamsek, V., Marshall, M., Grol, R. and Elwyn, G. (2010), "Assessing organisational development in European primary care using a group-based method: a feasibility study of the Maturity Matrix", International Journal of Health Care Quality Assurance, Vol. 23 No. 1, pp. 8-21.
Engels, Y., Campbell, S., Dautzenberg, M., van den Hombergh, P., Brinkmann, H., Szécsényi, J., Falcoff, H., Seuntjens, I., Kuenzi, B. and Grol, R. (2005), "Developing a framework of, and quality indicators for, general practice management in Europe", Family Practice, Vol. 22 No. 2, pp. 215-222.
Gribben, B., Coster, G., Pringle, M. and Simon, J. (2002), "Quality of care indicators for population-based primary care in New Zealand", New Zealand Medical Journal, Vol. 115 No. 1151, pp. 163-166.


Heath, I., Rubinstein, A., Stange, K.C. and van Driel, M.L. (2009), "Quality in primary health care: a multi-dimensional approach to complexity", British Medical Journal, Vol. 338 No. 1242, pp. 1242-1243.
Jeacocke, D., Heller, R., Smith, J., Anthony, D., Stewart Williams, J. and Dugdale, A. (2002), "Combining quantitative and qualitative research to engage stakeholders in developing quality indicators in general practice", Australian Health Review, Vol. 25 No. 4, pp. 12-18.
Lawrence, M. and Olesen, F. (1997), "Indicators of quality in health care", European Journal of General Practice, Vol. 3 No. 3, pp. 103-108.
Linstone, H.A. and Turoff, M. (1975), The Delphi Method: Techniques and Applications, Addison Wesley Advanced Book Program, Reading, MA, available at: http://is.njit.edu/pubs/delphibook/ (accessed August 2014).
Macfarlane, F., Greenhalgh, T., Schofield, T. and Desombre, T. (2004), "RCGP quality team development programme: an illuminative evaluation", Quality and Safety in Health Care, Vol. 13 No. 5, pp. 356-362.
Mainz, J. (2003), "Developing evidence-based clinical indicators: a state of the art methods primer", International Journal for Quality in Health Care, Vol. 15 No. S1, pp. i5-i11.
Marshall, M., Campbell, S., Hacker, J. and Roland, M. (2002), Quality Indicators for General Practice: A Practical Guide for Health Professionals and Managers, Taylor & Francis, London.
Marshall, M., Klazinga, N., Leatherman, S., Hardy, C., Bergmann, E., Pisco, L., Mattke, S. and Mainz, J. (2006), "OECD health care quality indicator project. The expert panel on primary care prevention and health promotion", International Journal for Quality in Health Care, Vol. 18 No. S1, pp. 21-25.
Marshall, M.N., Shekelle, P.G., McGlynn, E.A., Campbell, S., Brook, R.H. and Roland, M.O. (2003), "Can healthcare quality indicators be transferred between countries?", Quality and Safety in Health Care, Vol. 12 No. 1, pp. 8-12.
Mattke, S., Epstein, A.M. and Leatherman, S. (2006), "The OECD health care quality indicators project: history and background", International Journal for Quality in Health Care, Vol. 18 No. S1, pp. 1-4.
Morris, C.J., Cantrill, J.A. and Weiss, M.C. (2001), "GP survey response rate: a miscellany of influencing factors", Family Practice, Vol. 18 No. 4, pp. 454-456.
Parlett, M. and Hamilton, D. (1972), "Evaluation as illumination: a new approach to the study of innovatory programmes", in Hamilton, D. (Ed.), Beyond the Numbers Game: A Reader in Educational Evaluation, Macmillan, London.
Royal Australian College of General Practitioners (2005), Standards for General Practices, 3rd ed., The Royal Australian College of General Practitioners, Melbourne.


Royal College of General Practitioners (2002), Policy Statement: Quality Indicators in General Practice, Royal College of General Practitioners, London.
Royal College of General Practitioners (2006), Quality Team Development, Royal College of General Practitioners, London.
Royal New Zealand College of General Practitioners (2002), Aiming for Excellence: An Assessment Tool for General Practice, 2nd ed., Royal New Zealand College of General Practitioners, Wellington.
Shekelle, P. (2003), "New contract for general practitioners", British Medical Journal, Vol. 326 No. 7387, pp. 457-458.
Shield, T., Campbell, S., Rogers, A., Worrall, A., Chew-Graham, C. and Gask, L. (2003), "Quality indicators for primary care mental health services", Quality and Safety in Health Care, Vol. 12 No. 2, pp. 100-106.
Thomson, R., Taber, S., Lally, J. and Kazandjian, V. (2004), "UK quality indicator project (UK QIP) and the UK independent health care sector: a new development", International Journal for Quality in Health Care, Vol. 16 No. S1, pp. i54-i56.
Zimmerman, P.R., Karon, S.L., Arling, G., Clark, B.R., Collins, T., Ross, R. and Sainfort, F. (1995), "Development and testing of nursing home quality indicators", Health Care Financing Review, Vol. 16 No. 4, pp. 107-127.
