Journal of Hospital Infection 89 (2015) 328–330


How infection control teams can assess their own performance and enhance their prestige using activity and outcome indicators for public reporting

P. Parneix*

South-West France Healthcare-Associated Infection Control Centre, Bordeaux, France

Article info

Article history: Received 29 October 2014; Accepted 23 December 2014; Available online 6 January 2015

Keywords: France; Infection control; Performance indicators

Summary

In France, infection control (IC) practitioners appeared in the late 1970s. In 1995, French health authorities formally introduced the concept of IC teams, which became mandatory in November 1999. Confidential IC annual reports for each hospital became mandatory in 2000. Under pressure from consumer associations, the Ministry of Health introduced IC performance indicators for public reporting in 2004, the first being known as ICALIN. Although the annual IC report was intended to be a hospital report, in practice it was often considered to be the IC team's report, so IC teams began to be held accountable for the performance of their hospitals against IC indicators. Several IC teams thought that the report failed to reflect the volume and range of their activities, especially in terms of counselling. However, most of them recognized the benefit of public reporting, as their work was at least under scrutiny and recognized as useful. Using indicators to evaluate IC performance thus provided a real boost for IC teams in France. Indicators must be refined periodically if they are to be sustainable. Further work on core competencies for hospital hygiene professionals is needed in France to improve their performance and credibility. Should an IC team be accountable for nosocomial infection in its hospital? The answer 'no' might seem strange, but so might an unqualified 'yes'. © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

Introduction

The first infection control practitioners (ICPs) in France appeared in the late 1970s. The South-West France healthcare-associated infection control centre (CCLIN SO) was created in 1992. Its mission was to implement national policy in the 473 healthcare facilities located in seven administrative regions, including three overseas (Guadeloupe, Martinique, and Guyane). It was only in 1995 that the French health authorities officially introduced the concept of infection control (IC) teams and defined their composition and purpose. IC teams became mandatory in public and private French healthcare facilities in November 1999, with two years allowed for implementation. In 2000 a new official circular refined the concept, requiring IC teams to specialize in the management of infectious risk and describing how hospital IC should be configured. On this foundation, the health minister introduced a mandatory requirement for each hospital to produce an annual report on nosocomial infection. The minister's circular also set out how IC was expected to be organized and carried out, and how IC data and reports were to be sent to the health authorities in electronic format. However, until 2004 there was only limited use of these data.

* Address: Centre Hospitalier Universitaire de Bordeaux, Groupe Hospitalier Pellegrin, 33076 Bordeaux, France. Tel.: +33 05 56 79 60 58; fax: +33 05 56 79 60 12. E-mail address: [email protected].

http://dx.doi.org/10.1016/j.jhin.2014.12.009

In 1998, under pressure from consumer associations, a 'health safety' law established transparency and made the prevention of nosocomial infection a public health priority. The Ministry of Health established indicators for public reporting of IC performance. The two main objectives of this programme were to increase transparency and to improve quality in all settings, not just those engaged in voluntary IC projects.

In February 2006, after more than two years of preparation and negotiation, data based on the annual IC reports of 2004 were released. The first set of indicators, called ICALIN, was similar to the set then used in the UK National Health Service (NHS), which relied on 16 items, but ICALIN used 34 items to calculate a score on a 100-point scale. Agreeing on this scoring system proved the most difficult task, but fortunately the NHS's percentiles method was ready to use, being both suitable for ICALIN and trusted by IC specialists. The year 2003 was used as the baseline by which to assess annual progress.

Although the annual IC report was intended to be that of the hospital as a whole, in practice it was often considered, with some justification, as the report of the IC team. The IC teams therefore began to be held accountable for the performance of their hospital against the published indicators. This also gave the IC team leverage in negotiations with management, not least because the composition and size of the IC team were themselves performance indicators. Year after year, new IC indicators were introduced and included in the aggregated score. Several IC teams complained that the report failed to reflect the volume and range of their activities, especially in terms of counselling. Even so, most of them recognized the benefits of public reporting. Not only was their work now under scrutiny, but it was also recognized as important and useful.
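The general shape of such an indicator can be sketched roughly as follows: weighted items are aggregated into a 100-point score, and hospitals are then classed by percentile rank. This is a minimal illustration only; the item weights and the five A-to-E class labels below are assumptions, not the published ICALIN scheme.

```python
def aggregate_score(item_scores, item_weights):
    """Aggregate per-item scores (each between 0.0 and 1.0) into a score
    on a 100-point scale, assuming the weights sum to 100."""
    assert len(item_scores) == len(item_weights)
    return sum(s * w for s, w in zip(item_scores, item_weights))

def percentile_class(score, all_scores, labels="EDCBA"):
    """Assign a class (E = worst, A = best) from the score's percentile
    rank among all hospitals' scores."""
    below = sum(1 for s in all_scores if s < score)
    pct = below / len(all_scores)
    return labels[min(int(pct * len(labels)), len(labels) - 1)]

# A hospital scoring fully on a 50-point item, half on a 30-point item
# and zero on a 20-point item gets 65/100:
score = aggregate_score([1.0, 0.5, 0.0], [50, 30, 20])
```

The appeal of a percentile-based classing, as the article notes, is that it was already familiar from the NHS and therefore trusted by IC specialists.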
Public reporting can have various drawbacks, such as 'tunnel vision' or 'sub-optimization', where there is a focus on optimizing one component to the detriment of others. IC teams had to focus on, and prioritize, performance indicators, but at the same time resist the temptation of 'gaming'.1 (Marshall et al. describe gaming as 'alter[ing] behaviour to gain strategic advantage'.) As well as collecting IC indicator data alongside all their other activities, IC teams became responsible for establishing the accuracy of those indicator data. IC teams now, in effect, publicly assess their own performance as well as that of their hospital.

Many IC teams found this difficult. On one hand, they are under pressure from hospital managements that are eager to publish favourable accounts of IC performance (and such managements are not always receptive to warnings that they may not get the best ranking, or at least not as quickly as they would like). On the other hand, IC teams and their annual reports are subject to scrutiny from the health authorities, which undertake detailed reviews of 10% of institutions annually. When the health authorities review an institution, it is often the IC team that has to meet them. So IC teams are not usually eager to overestimate the performance of their hospitals.

IC team performance depends on resources, and establishing recommended staff ratios is an ongoing fight. Moreover, action in line with national policy and programmes is expected. That much is widely understood. More controversial, but more interesting, is the question of results. Should an IC team be accountable for nosocomial infection rates in its hospital? To answer 'no' might seem strange, but so might an unqualified 'yes'. If 'no' is applicable, it applies to all the results, however good or bad they are. The prime purpose of IC is the reduction


of preventable infection, so the results are important. Moreover, analysis not only of the 'what' but also of the 'why' is an essential component of the work of an ICP. ICPs are increasingly aware of procedures for root cause analysis and how to use them in the field of healthcare-associated infections. But root cause analysis is also perfectly suited to investigating their own shortcomings in failing to hit a performance indicator target. Suppose my hospital undertook surgical site infection surveillance in only 50% of surgical specialties when 100% coverage was expected. I could list, step by step, my strategy for reaching that objective. Then I could look at the differences between my strategy and the recommended one and, by digging as deep as possible, seek the root causes. Corrective action could then be taken in the light of those causes. Root cause analysis is most effectively done by the hospital's entire IC team, assuming of course that the 'team' consists of more than one person.

To further illuminate the utility of indicators, consider consumption of alcoholic hand-rub products. The most difficult part was determining how to compare settings on the basis of their consumption. In the end, it was decided to set consumption targets for each setting according to its annual activity. Individual targets were chosen by reference to arbitrary standards that varied by specialty according to the burden of healthcare: from seven hand rubs per patient-day for a medical unit, rising to 48 for an intensive care unit. This allowed the calculation of a percentage achievement of the target (actual consumption × 100 divided by expected consumption), and it emerged as an example of an indicator that changed behaviour, first of the ICPs themselves and later of other healthcare workers (HCWs). Many French ICPs had been reluctant to use alcoholic hand-rub products for hand hygiene. It will come as no surprise that other HCWs shared that reluctance.
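The target-achievement calculation described in this paragraph can be sketched as follows. The per-rub volume is an assumption added for illustration (3 mL per rub is a common planning figure), since the targets are expressed as rubs per patient-day while consumption is usually measured by volume.

```python
# Specialty-specific targets from the text: 7 hand rubs per patient-day
# for a medical unit, 48 for an intensive care unit.
TARGET_RUBS_PER_PATIENT_DAY = {"medical": 7, "intensive_care": 48}

ML_PER_RUB = 3  # assumed volume of one hand rub, in millilitres

def achievement_pct(actual_ml, patient_days, specialty):
    """Percentage achievement of the hand-rub consumption target:
    actual consumption x 100 / expected consumption."""
    expected_ml = (TARGET_RUBS_PER_PATIENT_DAY[specialty]
                   * patient_days * ML_PER_RUB)
    return actual_ml * 100 / expected_ml

# A medical unit using 10,500 mL over 1,000 patient-days has an
# expected consumption of 7 * 1000 * 3 = 21,000 mL, i.e. 50% achievement.
result = achievement_pct(10500, 1000, "medical")
```

Expressing the indicator as a percentage of a specialty-adjusted target is what makes consumption comparable across settings with very different care burdens.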
By making alcoholic hand rub the reference method for hand hygiene in 2001, the French national IC committee took the decision that launched the change. At that time, an audit performed in CCLIN SO healthcare facilities showed that no action was taken before or after 38% of hand hygiene opportunities. In addition, disinfection with a hand-rub product was used in only 10% of hand hygiene procedures; handwashing with soap and water was, by subtraction, calculated to account for the remaining 90%. Regional campaigns to promote hand hygiene started in 2005. For example, in 2005, 64% of health facilities of the CCLIN SO organized a training day, with 28,632 HCWs participating, whereas in 2007, 61% of facilities trained 24,299 HCWs and 6099 consumers. In 2009, the national WHO hand hygiene day was implemented in France. In 2005, the year of the first evaluation of the national indicator, another region-wide audit showed that missed hand hygiene opportunities had fallen to 24%, with hand rubbing reaching 39%; in 2009, the equivalent results were 10% and 71%, respectively.2 Missed opportunities for hand hygiene of only 10% may well represent a 'rose-tinted vision', but it is at least a trend in the right direction. Further evidence of this trend is provided by the three-fold increase in hand-rub product consumption in French healthcare facilities between 2005 and 2012.

There are at least two ways of dealing with bad news. On one hand, an IC team with poor results in its hospital could say that the pharmacist was reluctant to buy enough hand rub, that management was not interested in infection control, that the safety officer invoked fire risk as an obstacle, that HCWs were tired of hand hygiene messages and dissatisfied with hand-rub products that led to skin intolerance, sticky



hands, and so forth. On the other hand, the IC team could think about how to improve its impact on behaviour: What about using this new monthly consumption monitoring software? What about changing my way of teaching by introducing short, appealing messages and prompts? What about exploring psychological commitment theory? What about seeking the help of consumer representatives?

In south-western French healthcare facilities, the proportion of meticillin-resistant Staphylococcus aureus (MRSA) strains was 41.4% in 1999, decreasing steadily to 37.8% in 2005 and to 31.5% in 2008. The prevalence of nosocomial MRSA infection decreased from 0.63% of hospitalized patients in 1999 to 0.30% in 2008, with a fall in the prevalence of nosocomial infection from 5.5% to 3.8% over the same period.

Whether or not you are a music lover, you should take a close look at the work published by Kaplan et al.3 They describe a model for understanding success known as MUSIQ. At the core of the model is the quality improvement team, analogous to the IC team. The three key ingredients of success are team leadership, team diversity, and expertise. The authors expand on the last of these thus: 'One or more team members is knowledgeable about the outcome, process, or system being changed'. Other factors highlighted by Kaplan et al. are: team decision-making process, team norms (strong norms of behaviour about how work is to be carried out and how goals are to be achieved), and team skill (the ability to use improvement techniques to make changes). These would all contribute to the efficacy of IC measures. It is not easy to measure these skills in a standardized way, but it is prudent to keep them in mind when considering the extent to which an IC team is able to lead its institution to the standards of performance expected of it.

The story of endoscope disinfection in France is another illustration of the interaction between guidelines, legislation and implementation by IC teams.
In its July 1982 bulletin, the Société Française d'Hygiène Hospitalière (SF2H) published the first national guidelines for endoscope disinfection. Three years later the CCLIN SO performed an audit of practice in this field. Alarming as it might look today, 8% of endoscopic procedures at that time were found to be done without cleaning or disinfection. Surprisingly few people expressed any concern, despite the huge associated risk. To promote implementation of the SF2H guidelines, the French Health Ministry then incorporated them into an official circular. The following year, a fresh audit by CCLIN SO showed a significant improvement in practice. The dangerous lack of endoscope disinfection fell to 3% of procedures in 1997, but it took a further ten years to reduce it to zero. That does not prove that there is no risk of transmission of infection via endoscopes in France, but it does show that it is possible to establish a solid foundation for patient safety. Today, with increasingly sophisticated endoscope designs, new problems have to be solved and IC teams have found new ways to solve them. For example, a risk assessment tool has been developed to check an endoscopy unit's quality management with respect to ten

key points. These points cover the main causal factors in previously identified episodes of endoscope contamination. Preventive maintenance of endoscopes is a real challenge in healthcare facilities: a curative rather than preventive approach is often taken, but curative action frequently comes too late. Thus, performance assessment of an IC team could include either a simple 'yes' or 'no' statement as to whether it has checked these critical points, or go further and describe the outcome of its action. The purpose is not to put all the responsibility on the shoulders of the IC team, since the cooperation of the endoscopy unit is crucial. But if the IC team feels accountable for the outcomes, it will be more inclined to investigate failures in a positive way and to find more effective ways to implement national guidelines.

Another possible approach to assessing the performance of ICPs is to review their training and the ongoing development of their competencies. This approach has not been fully explored in France, unlike countries such as the UK and the USA, where the Infection Prevention Society and the Association for Professionals in Infection Control and Epidemiology have developed detailed standards.

In conclusion, the introduction and publication in France of national IC performance indicators gave a real boost both to IC teams and to IC itself.4 The use of these data to evaluate the performance of IC teams was not originally intended and has not always been fair, but it now appears both logical and useful. To be sustainable, the indicators must be refined periodically as national strategy, concepts, and tools evolve. Nevertheless, France has still to work on core competencies for infection control and hospital hygiene professionals in order to improve their performance and credibility. This is a goal for the future.

Conflict of interest statement
None declared.

Funding sources
None.

References

1. Marshall MN, Romano PS, Davies HT. How do we maximize the impact of the public reporting of quality of care? Int J Qual Health Care 2004;16(Suppl. 1):i57–i63.
2. Venier AG, Zaro-Goni D, Pefau M, et al. Performance of hand hygiene in 214 healthcare facilities in South-Western France. J Hosp Infect 2009;71:280–282.
3. Kaplan HC, Provost LP, Froehle CM, Margolis PA. The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf 2012;21:13–20.
4. French Society for Hospital Hygiene (SF2H), Lucet JC, Parneix P, Grandbastien B, Berthelot P. Should public reporting be made for performance indicators on healthcare-associated infections? Med Mal Infect 2013;43:108–113.
