Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science
Volume 12, Number 3, 2014
© Mary Ann Liebert, Inc. DOI: 10.1089/bsp.2014.0007

A Public Health Emergency Preparedness Critical Incident Registry

Rachael Piltch-Loeb, John D. Kraemer, Christopher Nelson, and Michael A. Stoto

Health departments use after-action reports to collect data on their experience in responding to actual public health emergencies. To address deficiencies in the use of such reports revealed in the 2009 H1N1 influenza pandemic, and to develop an effective approach to learning from actual public health emergencies, we sought to understand how the concept and operations of a "critical incident registry," commonly used in other industries, could be adapted for public health emergency preparedness. We conducted a workshop with public health researchers and practitioners, reviewed the literature on learning from rare events, and sought to identify the optimal characteristics of a critical incident registry (CIR) for public health emergency preparedness. Several critical characteristics are needed for a CIR to be feasible and useful. A registry should: (1) include incidents in which public health agencies played a substantial role in the response, that are "meaningful," that test one or more emergency preparedness capabilities, and that are sufficiently limited in scope to isolate specific response issues; (2) be supported by a framework and standard protocols for including reports based on rigorous analysis of individual incidents, and by methods for cross-case analysis; and (3) include explicit incentives for reporting, to overcome intrinsic disincentives. With proper incentives in place, a critical incident registry can be a useful tool for improving public health emergency preparedness. Standard protocols for reporting critical events and probing analysis are needed to enable identification of patterns of successes and failures.

The Institute of Medicine (IOM) has defined the public health system as the "complex network of individuals and organizations that have the potential to play critical roles in creating the conditions for health."1(p28) For public health emergency preparedness (PHEP), the critical components of this system include not only federal, state, and local health departments, but also hospitals and healthcare providers, fire departments, schools, the media, and many other public and private organizations.2

In the past decade, health departments and other organizations involved in emergency preparedness have worked with determination to innovate and to improve their processes, but most have not systematically analyzed why these practices work or do not work, and they do not have a framework for disseminating lessons learned from their experience. As a result, lessons from public health emergencies often remain unlearned, or at least untranslated to new incidents and new organizations. This can result in avoidable morbidity and deaths or, at best, inefficient use of resources at a time when public health budgets are small and often shrinking.

Rachael Piltch-Loeb is a research assistant, and Michael A. Stoto, PhD, is a Professor, both in the Department of Health Systems Administration, Georgetown University, Washington, DC. John D. Kraemer, JD, MPH, is Assistant Professor, Department of Health Systems Administration, Georgetown University, and O'Neill Institute for National and Global Health, Georgetown University Law Center. Christopher Nelson, PhD, is Senior Political Scientist, RAND Corporation, and Professor, Pardee RAND Graduate School, Santa Monica, California.


To address these problems, and ultimately to improve the public health system's ability to respond effectively to emergencies, the federal government's National Health Security Strategy (NHSS) calls on the public health system, defined broadly as per the IOM, to adapt systematic quality improvement (QI) methods and a culture of quality improvement to learn from experience in order to enhance our nation's health security.3 As in healthcare more broadly and in other industries, effective quality improvement in the public health emergency preparedness system requires rigorous analytical methods that allow the system's performance to be assessed and compared over time and across jurisdictions. However, standard quality improvement methods such as learning collaboratives, widely used in healthcare settings, may not be appropriate in the context of emergency preparedness because of the lack of established evidence-based outcome and performance measures, as well as the difficulty of carrying out rapid plan-do-study-act (PDSA) cycles and measuring associated outcomes.4

One challenge is that public health emergencies are singular events. While routine hospital services can be studied and improved with statistical process and outcome measures, system improvement for rare events requires the in-depth study of individual cases.5 Similarly, effective public health emergency preparedness system improvement requires systematic methods for learning from individual organizations' experience in unusual situations. While responding to public health emergencies requires a core set of emergency preparedness capabilities, such as surveillance and communication with the public, the context, the challenge, and the specific ways in which these capabilities are employed vary considerably from incident to incident.

Building on approaches developed by the military and the US Forest Service for managing wildland fires, health departments now routinely prepare After-Action Report/Improvement Plans (AAR/IPs) to learn from their experience in responding to actual public health emergencies. However, in 2008, Nelson and colleagues noted that, despite efforts at standardizing formats, the structures of these reports are almost as varied as the individuals who produce them; they called for an investment to develop standardized data elements that can support comparisons across settings and over time.6 More recently, Savoia and colleagues7 examined after-action reports that described the response to the 2009 H1N1 pandemic as well as Hurricanes Katrina, Gustav, and Ike. The reports were drawn from the Department of Homeland Security's (DHS) Lessons Learned Information Sharing (LLIS) system, and the researchers found that these reports varied widely in their intended uses, users, scope, timing, and format. The DHS Homeland Security Exercise Evaluation Program (HSEEP) format commonly used in these reports permits but does not require root cause analyses, and such analyses are uncommon.8 Singleton and colleagues9 are more sanguine about these reports but still report significant difficulties in applying the HSEEP approach, especially in identifying root causes.

Other fields have found ways to learn from one-of-a-kind events, however, and they may provide a model for the public health emergency preparedness system. Aviation is probably the most prominent example. In the mid-1970s, almost 1,000 people died every year in air crashes around the world. Today, that number has been cut roughly in half, despite a dramatic increase in the number of flights.10 Air safety has improved in part because the aviation industry uses critical incident registries (CIRs), which enable the identification and systematic analysis of rare events, and of responses to them, to drive learning and systems improvement. Through the use of such registries, the airline industry has become adept at drawing system-wide lessons from single incidents and achieving system improvements from seemingly innocuous occurrences observed across multiple accidents or close calls.11

Critical incident registries have also been adopted in health care and in other sectors of transportation. Although these registries take different forms depending on the practical context, all are designed to facilitate learning from relatively infrequent events. The major applications of critical incident registries, regardless of sector, include: (1) understanding contexts and mechanisms that drive successful and unsuccessful practices; (2) identifying and sharing best practices; (3) driving individual and organizational improvement; and (4) describing incidents' frequency and nature.

The success of critical incident registries in other fields suggests that a properly designed public health emergency preparedness critical incident registry could support broader analysis of the response to critical public health incidents, facilitate deeper analysis of particular incidents, and encourage a culture of systems improvement; this could be a valuable approach for local and national emergency preparedness systems improvement. A registry may also help to steer scarce resources to the most effective approaches.

While systems for sharing lessons from emergency responses exist, none currently has all of the characteristics of successful critical incident registries. For instance, in addition to the DHS LLIS system mentioned above, the National Association of County and City Health Officials (NACCHO) maintains a collection of local health departments' reports of "successful practices" from the H1N1 pandemic and other events. This collection is designed to provide quick suggestions and hypothesis generation about best practices, but analysis of root causes is neither required nor typically included. Thus, there remains a need for an approach that, in addition to capturing events and responses in a more usable way, also provides an analytical method for extracting lessons from reports about how the public health emergency preparedness system responded to specific incidents.

This article aims to lay the foundation for the creation of a public health emergency preparedness critical incident registry by describing the scope of a proposed reporting system, presenting an approach for structuring incident reports to facilitate individual and cross-case analysis through a database, and discussing measures to increase incentives for and reduce barriers to reporting. We conclude by discussing the use of a critical incident registry in public health practice, suggesting how a public health emergency preparedness critical incident registry could be institutionalized by the Centers for Disease Control and Prevention (CDC) or another organization charged with improving public health preparedness.

Methods

We used a 3-pronged approach to address existing challenges in critical incident analyses. First, we reviewed the literature on case study research methods to better understand how information from rare and singular events might be analyzed and interpreted to generate learning. We reviewed both empirical literature, which was relatively scarce, and literature presenting expert opinion on improving complex systems and evaluating rare events. We also reviewed literature on realist evaluation and quality improvement methods, which provide the theoretical framework for the development and use of a critical incident registry. Because the relevant literature comes from many disciplines and much of it is not peer-reviewed, we did not think that a formal bibliographic search would be useful.

Second, through web searches and contact with experts (some of whom participated in the meeting described below), we identified and analyzed critical incident registries in fields other than public health to identify the key response issues and the range of approaches taken to create and maintain them. We systematically assessed these registries by identifying and characterizing their major design features, including the processes by which the registries are accessed and reports are solicited, the nature of the incidents being analyzed, the target audience of the systems, and any long-term lessons learned.

Finally, we convened a 1-day meeting of researchers and practitioners to explore approaches that would be effective in public health practice (participants are listed in Appendix 1). The meeting sought to identify the major design and practical issues that must be addressed to create a functional public health emergency preparedness critical incident registry. To add substance and context to the discussions, participants reviewed 6 sample critical incident registry reports, which we prepared in advance based on incidents that the authors and our colleagues were familiar with. We also used the meeting to engage practitioners in discussions about learning from singular events and to determine the long-term goals for the development of a critical incident registry focused on public health emergency preparedness. This research was reviewed by the Georgetown University Institutional Review Board and considered exempt.

Rather than report the results of each of these strategies separately, we summarize them in terms of their implications for the creation of an effective public health emergency preparedness critical incident registry.

Elements of a Registry

Events that are rare and unique in one system may be common when aggregated across institutions or systems. And even events that are truly unique require the same basic capabilities or mechanisms to be employed in response. Thus, the goal should be to identify and address underlying factors that could limit the system's ability to respond effectively to future events, which will likely require the same basic response capabilities, rather than to place blame for what has occurred. For instance, Donahue and Tuohy12 were able to examine particular response mechanisms in varying contexts in their analysis of responses to a broad range of natural and man-made disasters, and they found common response failures across incidents. In reviewing the federal government's response to the 2009 H1N1 influenza pandemic, Lurie noted that all-hazards public health preparedness planning paid off, allowing an effective response.13 Similarly, Larson and colleagues14 identified commonalities across incidents ranging from plane crashes to hurricanes to terrorist attacks, suggesting that even broadly dissimilar emergencies require similar response mechanisms. This suggests that the data stored in a critical incident report for public health emergencies should be structured to facilitate cross-incident investigation, and that the registry should adopt a system-wide approach that focuses on basic public health emergency preparedness capabilities.

Our search identified 9 existing critical incident registries serving different industries; they are summarized in Table 1. Each registry consists of a database of reports about individual incidents. To inform our thinking for a public health emergency preparedness critical incident registry, we identified and focused on 3 elements of the registries: reporting mechanisms, incentives for reporting, and data-sharing approaches. Reporting mechanisms ranged from voluntary, based on experience or concern, to mandatory with legal ramifications for not reporting. These mechanisms in turn influence the incentives and disincentives to report and the nature of what is reported. In terms of data collection, reports are filed either at standard time intervals or within a certain number of days after the incident has occurred. In some registries, a follow-up report with a more detailed analysis is filed later. Data-sharing arrangements also vary: some registries remove the identity of the origin of the reports and make their contents available to the general public, while others keep the contents confidential.

Our review of existing registries, the related literature, and stakeholder views suggests that, at its core, a critical incident registry is a catalogue of case reports organized to facilitate critical analysis.


Table 1. Summary of Registries in Other Industries

Biological Incidence Database
- Owner/Operator: United Nations
- Who Reports: Representatives from UN countries affected by a biological incident
- To Whom Data Are Available: UN General Assembly members
- How to Report: UN representatives enter information into a UN-sponsored database regarding incidents that have been acknowledged in country.
- Events Reported: Biological incidents are sorted first by the cause of the incident: natural, accidental, or deliberate. Within each cause is a framework for how the incident occurred and the steps taken to deal with it. Each type of event triggers a predetermined response from the UN community.
- Incentives and Disincentives to Report: Required by UN convention
- Database Organization and Tools: Web-based database with predefined entry fields, including type of incident, organism delivery method, methods of detection, impact, geography, historical context of agent, and consequence management
- Data Analysis Mechanisms: No information
- Timeline: No information
- Changes Over Time: No information

Federal Railroad Administration Safety Database
- Owner/Operator: FRA Office of Safety Analysis
- Who Reports: Railroad employees and highway safety personnel
- To Whom Data Are Available: De-identified data available to the general public
- How to Report: Four different forms are used, depending on the type of accident or incident, reported via monthly department submissions.
- Events Reported: Three groups of incidents: highway-rail grade crossing incidents/accidents; rail equipment accidents/incidents; and deaths, injuries, or occupational illnesses
- Incentives and Disincentives to Report: Legally required by the Department of Transportation via mandates updated over the last century
- Database Organization and Tools: Database includes specifics of incidents captured by the relevant form's entry fields, including size of train, employees on duty, highways involved, location, rail type, casualties, distance, whether the incident has previously been reported, and how the operators handled the situation
- Data Analysis Mechanisms: Organized by type of report and demographics; internal root-cause analysis performed depending on the relevant category
- Timeline: Reports are submitted on a monthly basis.
- Changes Over Time: Reporting conceptualized in 1970; segregation into the current categories began in 2003.

Federal Aviation Administration (FAA) Accident/Incident Data System (AIDS)
- Owner/Operator: National Transportation Safety Board (NTSB)/FAA
- Who Reports: Aviation employees submit reports that are reviewed by the external NTSB.
- To Whom Data Are Available: De-identified data available to the general public
- How to Report: Data are submitted and maintained by the Flight Standards Service, Regulatory Support Division, Aviation Data System Branch (AFS-620), with local representatives.
- Events Reported: Contains incident data records for all categories of civil aviation. Incidents are events that do not meet the aircraft damage or personal injury thresholds contained in the NTSB definition of an accident.
- Incentives and Disincentives to Report: Nonpunitive
- Database Organization and Tools: Data are presented in a report format divided into the following categories: location, aircraft, operator, narrative, findings, environmental information, and pilot information. If more than 1 aircraft is involved in an incident, 2 separate reports appear in the database.
- Data Analysis Mechanisms: Investigation of reports yields an outside perspective on contributing factors rather than internal reflection on what happened.
- Timeline: Database is updated monthly.
- Changes Over Time: Reports have been compiled since 1978 and expanded to include narrative components to improve analysis.

Aviation Safety Reporting System (ASRS)
- Owner/Operator: FAA/NASA
- Who Reports: Pilots, air traffic controllers, flight attendants, ground personnel
- To Whom Data Are Available: De-identified data available to the general public
- How to Report: Any event in which the reporter believes aviation safety was compromised can result in an ASRS report.
- Events Reported: The reporter's experience, visibility conditions, duration of the event, trauma experienced by the reporter, and other factors can influence a report.
- Incentives and Disincentives to Report: Submission of a report can prevent legal liability for those involved.
- Database Organization and Tools: Information collected by the ASRS is used to identify deficiencies and discrepancies in the National Aviation System and to enhance the basis for human factors research and recommendations for future operations
- Data Analysis Mechanisms: Root-cause and human error analysis of discrepancies to limit future problems and improve related research
- Timeline: One stipulation of legal protection is that a reporter must file a report within 10 days of the incident.
- Changes Over Time: Data entry was previously limited to 115 characters; expanded in 1996.

Near Midair Collision System (NMACS)
- Owner/Operator: FAA
- Who Reports: Pilots and flight attendants, if applicable
- To Whom Data Are Available: De-identified data available to the general public
- How to Report: A near midair collision (NMAC) is reported in terms of distance and direction from the nearest air navigation facility, airport, or airway fix; those that occur in oceanic airspace are reported by latitude and longitude.
- Events Reported: An incident associated with the operation of an aircraft in which a possibility of collision arises from proximity of less than 500 feet to another aircraft, or in which a report is received from a pilot or flight crew member stating that a collision hazard existed between 2 or more aircraft
- Incentives and Disincentives to Report: Biased toward the individual pilot, and expected to be significantly underreported
- Database Organization and Tools: Fields of submission include report number, start/end dates, state code, aircraft make/model, operator/airline, type of flight operation, and airport identifier. The accuracy of the reporting individual's perception of an NMAC can vary considerably among flight crew members.
- Data Analysis Mechanisms: Analysis is more limited because reports are perception based; repeat issues are flagged for system change.
- Timeline: No information
- Changes Over Time: A separate database was created for this topic because of the brevity of near-collision events.

NTSB Accident and Incident Data System
- Owner/Operator: FAA
- Who Reports: NTSB investigates and files all reports.
- To Whom Data Are Available: De-identified data available to the general public
- How to Report: NTSB issues a separate report for each aircraft involved in an aviation accident or incident. Data are obtained using NTSB Form 6120.19A and NTSB Form 6120.4.
- Events Reported: A preliminary, a factual, and a final report are prepared; preliminary reports contain only a few data elements (date, location, aircraft operator, type of aircraft, etc).
- Incentives and Disincentives to Report: Outside investigation
- Database Organization and Tools: Data are presented in a report format divided into the following categories: location, aircraft, operator, narrative, sequence of events, findings, injury summary, environmental information, and pilot information
- Data Analysis Mechanisms: A final report includes a statement of probable cause (which may not be completed for months, or until after the investigation has concluded).
- Timeline: A preliminary report is to be completed within 5 working days of the event; a factual report with additional information is available within a few months.
- Changes Over Time: The 3-part report system evolved over 30 years; adapted in 1993.

MEDMARX
- Owner/Operator: Quantros (adverse drug event reporting)
- Who Reports: Facilities must register for Quantros reporting systems. Pharmacists and other hospital employees use MEDMARX because their employer has paid for the MEDMARX service.
- To Whom Data Are Available: De-identified data available to members of Quantros
- How to Report: Data are de-identified when a report is entered.
- Events Reported: Adverse drug events and adverse drug reactions, including prescription errors, dosage errors, and patient response errors. Currently the largest existing database of such information, with more than 2 million reported incidents.
- Incentives and Disincentives to Report: The database must be paid for by the medical system, resulting in nonpunitive and encouraged reporting.
- Database Organization and Tools: From the de-identified data, comparative reports across demographics, performance and outcome measurement services, and patient safety goals are monitored in preparation for accreditation, FDA, and state-level purposes
- Data Analysis Mechanisms: Reports are prepared as statistics are compiled for legal purposes; analyses compare a facility to other facilities.
- Timeline: No information
- Changes Over Time: No information

NY Physician Near Miss Reporting System
- Owner/Operator: New York Chapter, American College of Physicians
- Who Reports: Internal medicine residents as part of their educational training
- To Whom Data Are Available: Not public
- How to Report: Residents who choose to participate use a web-based database application.
- Events Reported: Near-miss events: incidents in which a patient's wellbeing is jeopardized, protocol is not followed, procedures are not in place, or relationships with other parties could adversely affect an outcome, but no permanent damage persists
- Incentives and Disincentives to Report: A certificate that can be used toward practice-based learning and improvement system requirements for the state of NY
- Database Organization and Tools: Reporters are encouraged to remove identifying information. Independent reviewers examine event narratives for adverse outcomes, responsible parties, preventability, and process problems.
- Data Analysis Mechanisms: Process improvement and change are the end goals of reporting as new residents are trained.
- Timeline: From 2007 to 2009, residents from 46 training programs filed 3,300 reports.
- Changes Over Time: The system has been used as a model in MD and PA since 2008 to encourage voluntary reporting.

National Incident-Based Crime Reporting System (NIBRS)
- Owner/Operator: Federal Bureau of Investigation (FBI)
- Who Reports: The FBI certifies states and their local affiliates.
- To Whom Data Are Available: All information at the state level
- How to Report: Local and state law enforcement agencies maintain a database of the details of criminal incidents reported to them and report these details to their state programs.
- Events Reported: Incident-based reporting (IBR) systems are defined at the local and state levels, but all involve comprehensive data collection at the incident level on the various aspects of reported criminal incidents; not all incidents have been investigated to the point of arrest. Currently 38 states participate.
- Incentives and Disincentives to Report: Reports must be compiled for the UCR system, but this format makes reports more fluid and incident focused rather than only arrest or outcome focused.
- Database Organization and Tools: Data specifications differ per type of arrest, sorted into "group" categories. Because NIBRS core elements are standardized across states and localities, large data sets can be obtained for analysis; data collection is not restricted to a limited number of offense categories.
- Data Analysis Mechanisms: Linkages can be established between variables to examine interrelationships among offenses, offenders, victims, property, and arrestees.
- Timeline: Monthly reports are submitted by each system, and the FBI releases aggregate reports annually.
- Changes Over Time: Still expanding; subjective at the state and local levels


Thus, the public health emergency preparedness critical incident registry should consist of a database of reports about the response to individual incidents, submitted by the public health agencies responding to the emergency using standard protocols for probing analysis. In what follows, we begin with a description of the scope of the registry, an overarching analytical framework and approach, and the structure of individual reports. We then explore ways to establish a registry through policies regarding who can and must submit reports, and we consider incentives and barriers to reporting. The conclusions discuss how a public health emergency preparedness critical incident registry could be institutionalized in the United States.

Scope

The foundation of a successful critical incident registry is the scope of incidents selected for inclusion. For public health emergencies, the scope is broad by nature because the boundaries of public health emergency preparedness systems are unclearly defined. Because public health emergencies are the focus of the critical incident registry, incidents in which public health organizations played a significant role should be featured. This does not mean the registry should be strictly limited to incidents in which public health organizations led the response. Rather, a public health organization must have been sufficiently involved that meaningful analysis can be conducted on at least 1 public health preparedness capability, such as communication with the public.

Selecting meaningful events for inclusion is rooted in a rich literature about learning from single, rare events. Some incidents will be meaningful because they cause high morbidity or mortality, require that public health agencies engage in nonroutine practices with different partners, occur on a larger or new scale, have substantial community-wide nonhealth impacts, significantly alter the system's behaviors or beliefs, or help identify best practices to address a common problem. Finally, some incidents may be meaningful simply because they capture the public health community's attention. Such events may highlight a common issue or successful approach and, because of the scrutiny given the event, provide a more vivid platform for learning and change.

A number of strategies, generally not mutually exclusive, have been proposed to identify events and expand the potential sample of meaningful events, which increases the utility of a critical incident registry for cross-case analysis. The insight of March and colleagues15 is that the line is often fine between incidents and nonincidents that had the potential for serious consequences, so both may be examined to develop causal theories about what led to critical incidents. Examining near misses as well as actual incidents creates a fuller picture of potential issues and provides additional content in a system where events are rare.16 It is for this reason that the aviation industry maintains registries for crashes, near midair collisions, and other serious safety deficiencies.

A different approach to expanding the sample of rare incidents is to identify smaller-scale incidents that require the same response capabilities as more severe ones, which allows the isolation of specific capabilities. These could include situations that are relatively well integrated into routine activities but bear similarity to public health emergencies, such as foodborne outbreaks.17 These events test many of the same capabilities (public communication, epidemiology and surveillance, and the like) as large-scale emergencies, so information about a health department's preparedness can be gleaned from them.

Finally, the fact that some public health emergencies unfold over weeks or months means that they can usefully be treated as several distinct events. Even when analysis of a public health emergency response is limited to a single functional capability, how that capability is used may change markedly over a response that begins with trying to acquire an initial understanding of the incident, progresses through an early, high-intensity response, and continues through subsequent, lower-intensity phases and recovery. This may require interim analyses during an incident, or separate incident reports that can be linked in the critical incident registry.11 A registry that is inclusive of events of varied nature, size, and duration will allow a more rapid and thorough case development phase and give researchers a broader base from which to consider future improvements to the registry.

Analytical Framework and Approach

Since the goal of a critical incident registry is to facilitate learning from individual incidents and apply the findings in other contexts, a public health emergency preparedness critical incident registry requires a framework and set of tools that can produce meaningful and actionable insights about performance drivers in addition to outcomes. This requires, first of all, a method for performing rigorous analyses of individual incidents.18 Although researchers often use quantitative methods to ensure objectivity, robust qualitative methods that are widely used in the social sciences and other fields can be equally rigorous and are more relevant for assessing the performance of complex public health emergency preparedness systems, rather than the individuals who work in, or are served by, those systems.19 These methods are summarized by Stoto et al20 in a recent issue brief and are founded in the work of Pawson and Tilley21 and Gilson and colleagues.22 Working with and drawing on the experience of frontline practitioners with "insider" knowledge of how a specific public health emergency preparedness system functions helps to ensure responsiveness to context, maintains focus on implementation and sustainability, and provides insights about mechanisms.


Moreover, in order to promote learning across incidents, the analytical approach should address specific public health preparedness capabilities. But because undue focus on a single component of the response may lead to improving that component without recognizing harm to the overall response, reports should also encourage consideration of how changing one capability will alter interrelated capabilities and the overall response through a "what-if" type of analysis.

Structure

A critical incident registry must have a structure that facilitates analysis of individual incidents and supports cross-case analysis. Thus, to share lessons learned from particular incidents and to enable comparisons, the registry should be searchable by type of event, contextual factors, and emergency response capability. Critical incident registry reports should consist of 4 parts: a brief summary, a background/context section, an incident description, and an analysis of the emergency preparedness system's response to the incident.

The abstract-length summary should provide a condensed overview of the incident, the emergency preparedness capabilities that were tested, significant contextual factors, and the key findings derived from the incident. The summary should be searchable by researchers and practitioners seeking to identify trends across similar incidents or lessons that might be applicable to a current or anticipated incident. Specific CDC public health preparedness capabilities tested during the incident should be mentioned. Searching reports by capability is important because these capabilities are increasingly becoming the public health emergency preparedness community's standard language for describing the response to incidents.23

The context in which responses occur is a crucial component of a critical incident report. Just enough contextual detail should be included to understand why particular response mechanisms did or did not work. Providing too much background information may erroneously make it appear that responses succeed or fail solely on the basis of contextual factors largely outside the control of the responder, yet incomplete contextual information may make it appear that employing the "correct" action would have achieved the desired results regardless of the factors that enable the success of that action. The isolation of specific issues that occurred, and of the response to those "bite size" elements, is a critical component of the analysis.

The incident description should be a concise narrative, including a timeline of how the incident unfolded and its relationship to a larger event if relevant. The description should include sufficient detail on both the incident's key events and the health system context to support a critical analysis of the system's response.24 For example, the report might include the extent to which prior training helped an organization prepare; this training may depend in part on the organization's history of critical incidents and, as a result, its perception that training for emergencies is valuable. Too much detail in case studies, however, can lead readers to treat the incident as a technical problem to be solved (asking, for instance, how they would have responded) rather than as a mechanism to understand root causes and potential system improvements (E. Rogers, personal communication, May 28, 2013).

Finally, the analysis section should use the sort of root-cause or "what-if" analysis described above to analyze whether and why particular public health emergency preparedness capabilities were successfully employed in response to the incident. An in-depth analysis will investigate interactions between contextual factors and the mechanisms employed to respond to an incident and include enough information to reveal the roles these interactions played. This allows users of the registry to draw on the expertise of those most familiar with the context of the emergency response system and its capabilities while probing for areas of improvement.
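To make the proposed 4-part structure concrete, the sketch below shows one way a registry database might represent a searchable report record. It is purely illustrative: the field names, the Python representation, and the search helper are our assumptions, not part of any existing system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IncidentReport:
    """Hypothetical registry record mirroring the 4-part report structure."""
    # Searchable index fields
    event_type: str                  # eg, "foodborne outbreak"
    capabilities_tested: List[str]   # CDC preparedness capabilities, eg,
                                     # "Emergency Public Information and Warning"
    contextual_factors: List[str]    # eg, "rural jurisdiction", "concurrent flooding"
    # The 4 parts of the report
    summary: str               # abstract-length overview: capabilities, context, key findings
    background: str            # just enough context to explain why mechanisms did or did not work
    incident_description: str  # concise narrative with a timeline
    analysis: str              # root-cause / "what-if" analysis of capability performance

def reports_by_capability(reports: List[IncidentReport],
                          capability: str) -> List[IncidentReport]:
    """Support cross-case analysis: find every report that tested a given capability."""
    return [r for r in reports if capability in r.capabilities_tested]
```

Indexing reports by capability in this way is what would let a practitioner pull, for example, every report that tested public communication, regardless of whether the triggering event was an outbreak, a hurricane, or a chemical spill.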

Reporting Requirements and Incentives

The basis of a critical incident registry is the reports submitted to it, but acquiring credible reports can be challenging. First, reporting requires the time and effort of staff with the appropriate analytical skills, which may be in short supply during an incident. In some cases, after-action reports are drafted not by staff involved in the response, who have gone back to their "regular" jobs, but by preparedness planners who are less familiar with the actual response issues.11 Second, the organizations with the most capacity to respond to events will also be the most likely to report, creating a bias. Third, especially when discussing what might be perceived as suboptimal performance, health departments and staff are concerned about ramifications for reputation, liability, future funding, and even job security.

Our examination of existing registries suggests a number of ways to address these reporting barriers. Beginning with incentives, one approach is to emphasize that the critical incident registry's purpose is to facilitate organizational learning and systems improvement rather than accountability; this emphasis seems to be a key reason that airline personnel submit reports about safety deficiencies that would not otherwise be detected.12 Designing the reporting instrument in a way that guides organizations through a rigorous analysis to support improvement efforts in the reporting organization itself can help to create this understanding. It can also be accomplished by developing the registry, preferably in partnership with end users, so that the collection of reports is perceived as a valuable source of lessons for organizations seeking guidance for their own responses. Once an organization uses the critical incident registry for its own benefit, its incentive to share with others and to "pay it forward" may increase.


As is the practice in current after-action reports, calling attention to system responses that worked well, in addition to those that were problematic, could help to emphasize the registry's goal of driving improvement and learning. It can also temper reporting agencies' concerns about appearing to have performed poorly. Aviation reports, for example, often analyze conduct that mitigated incidents and should be emulated, as well as errors and their root causes. In some settings, focusing on responses that exceed expectations (so-called "positive deviants") can facilitate significant learning while overcoming reluctance to disclose negative outcomes.18

In addition, there might be ways to provide tangible incentives for organizations to submit high-quality reports. For instance, exemplary reports might be recognized by professional societies such as NACCHO or the Association of State and Territorial Health Officials (ASTHO) and published in a widely disseminated journal. This would encourage health departments that see themselves as leaders to submit high-quality reports and would give academic partners, who are rewarded for peer-reviewed publications, an incentive to assist health departments in preparing these reports. If the CDC were an active partner in the critical incident registry, it might be able to use the cooperative agreements that support state and local preparedness efforts to reward organizations that develop particularly strong analyses and improvement plans, similar to the reward-based approach taken by the New York Physician Near Miss Reporting System.25 Ultimately, a reporting system should be designed to foster a culture of systems improvement that rewards those who conduct serious analyses, draw lessons relevant to improving their performance, and report their findings so that others can benefit from them.

Mandatory reporting should also be considered. Critical incident submission is required by regulatory agencies in the aviation and railroad industries. Under this mandatory reporting model, incidents that meet certain criteria must be reported, and failure to do so results in penalties for those involved in the incident.

Even in the presence of a mandate, however, other barriers to reporting will have to be reduced, especially if reports are to contain candid, high-quality analyses. One such barrier is the limited time health department personnel have to conduct detailed analyses after the response to an event concludes. Developing partnerships with academic institutions that have faculty and students capable of doing such analysis, and who relish the opportunity to publish case studies, would give them an incentive to work with health departments to prepare the reports. Another option is the development of a peer assessment model that engages public health practitioners as reviewers and report developers. Peer practitioners conducting the analysis would benefit by learning valuable lessons for their own jurisdictions, leading to potential reciprocal relationships between departments.26

If supported by adequate funding, the registry itself could also have a staff that assists public health organizations with the development of incident reports, similar to the Joint Commission's model. Another option is for the critical incident registry staff to seek opportunities to assist responding organizations in preparing registry reports, especially when an issue is identified as important. Private companies such as QUANTROS27 take this approach in managing their pharmaceutical error database, actively sending staff to participating healthcare facilities to see if and when pharmaceutical errors occur. While significantly more costly, this approach might help to ensure that the most critical incidents are well analyzed.

The major barrier to reporting is the embarrassment or liability concern associated with disclosing that parts of a response did not work well. Learning is seriously impeded when participants do not feel psychologically safe in acknowledging failures, a situation that reduces both incident reporting and rigorous root cause analysis. This could be addressed by removing identification from reports included in the registry, as is done in the railroad and certain aviation systems. An alternative is to allow access to the registry only to practitioners and researchers with legitimate systems improvement purposes. Because it will sometimes be difficult to remove identification from material in the critical incident registry without losing essential context (eg, "a municipal health department serving a population of more than 8 million in the northeastern United States" would be easily identified as New York City), this would be only a partial solution. Another way to deal with embarrassment or liability concerns is to host the registry at an entity perceived to be neutral (eg, an academic institution or research institute) or one with a mission of supporting health departments (eg, ASTHO or NACCHO). It is for this reason, for instance, that NASA, rather than the regulatory Federal Aviation Administration, operates the Aviation Safety Reporting System.28
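The New York City example illustrates why stripping names alone is only a partial fix: a combination of coarse attributes can still single out one jurisdiction. The toy sketch below, using hypothetical records and a made-up bucketing rule, shows how generalizing a quasi-identifier reduces, but does not eliminate, uniqueness.

```python
from collections import Counter

# Hypothetical de-identified records: names removed, but quasi-identifiers remain.
reports = [
    {"region": "northeast", "population_served": 8_400_000},
    {"region": "northeast", "population_served": 610_000},
    {"region": "northeast", "population_served": 640_000},
]

def generalize(pop: int) -> str:
    """Made-up bucketing rule: coarsen population to reduce uniqueness."""
    return "under 1M" if pop < 1_000_000 else "over 1M"

# Count how many records share each generalized quasi-identifier combination.
combos = Counter((r["region"], generalize(r["population_served"])) for r in reports)

for r in reports:
    key = (r["region"], generalize(r["population_served"]))
    print(key, "re-identifiable" if combos[key] == 1 else "blends in")
# The two smaller departments now blend in with each other, but the >8M
# "northeastern" department remains unique, and hence recognizable as New York City.
```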

Conclusions

The infrequency of serious public health emergencies has made efforts to learn from real-world incidents difficult and has impeded the improvement of public health emergency preparedness systems. Health departments currently use after-action reports to collect data on their experiences in actual public health emergency responses, but the 2009 H1N1 pandemic, among other events, exposed significant weaknesses in this approach, notably the absence of reflective root cause analyses and of a framework to describe performance in terms of public health preparedness capabilities. Similarly, despite efforts at standardizing formats, the structures of these reports are almost as varied as the individuals who produce them. There is a need for an investment to develop standardized data elements that can support comparisons across settings and over time.


Public health emergency preparedness is not unique in facing this challenge, and other fields have found ways to learn from singular events. Critical incident registries, especially when coupled with qualitative systems analysis methods including root cause analysis, can be an effective way to identify and critically analyze rare events, and responses to them, and to drive learning and quality improvement. This experience suggests that a critical incident registry could be a valuable approach for systems improvement in the public health emergency preparedness system. In particular, a public health preparedness critical incident registry could address 3 important goals: organizational learning through analysis of individual incidents, sharing reliable lessons between like health departments, and providing a database for cross-event analysis of contextual relationships and best practices.

Drawing on this, to advance health security and improve organizational learning, CDC (or another national organization charged with improving public health emergency preparedness) could undertake the following 4 steps.

First, clarify that the primary purpose of after-action reporting, critical incident analysis, and sharing the results of these efforts through a registry is organizational learning: identifying best practices as well as improving the system that experienced the incident. In the spirit of quality improvement called for in the National Health Security Strategy, approaches should be found to recognize and reward thoughtful analyses that reveal lessons about critical emergency preparedness capabilities rather than to evaluate the performance of reporting entities and influence future funding.

Second, explore alternatives to current methods for learning from and reporting on critical incidents. Although there is value in using approaches that are consistent with other areas of emergency preparedness, methods that encourage root-cause analysis focused on public health preparedness capabilities may be more effective in the long run. Such methods could be seen as supplements to HSEEP-formatted after-action reports, or as tools used to develop them. Alternatively, a section of the LLIS for public health emergency preparedness reports could be established as a step toward the development of a full-fledged critical incident registry.

Third, provide financial and other support for rigorous learning from public health emergencies. CDC or national organizations concerned with emergency preparedness could offer certification or training in methods for critical incident analysis and develop the professional guidance, training materials, and exemplary emergency preparedness incident reports needed to back up this training. Programs to certify individuals or accredit institutions that demonstrate the appropriate analytic skills might also be developed. CDC could also encourage probing critical incident analyses by fostering partnerships between public health agencies and schools of public health that have faculty and students capable of doing the analyses. Another option is to develop and provide support for a peer assessment model that engages public health practitioners as reviewers and report developers. These approaches would help to address the limited time that health department staff have to conduct detailed post-incident analyses.

Finally, establish and fund an organizational entity to administer a public health emergency preparedness critical incident registry and to develop standard protocols for the analysis and reporting of critical events, fostering organizational learning across the national public health emergency preparedness system. Such an entity would probably be based at an academic institution, research institute, or organization that supports health departments but is not involved in funding or evaluating public health agencies. It is unlikely that a regulatory requirement for emergency preparedness incident reporting could be implemented in the short term, but CDC might, through its cooperative agreements with states, require or strongly encourage that the narrative component of after-action reports follow the critical incident registry report format described above and be submitted for inclusion in the registry. Funding should be sufficient for the registry to maintain a small staff to enter reports, keep the registry up to date, train and support organizations as they prepare reports, and ensure the functionality of the registry. Critical incident registry funding could also support the costs of peer assessment and academic participation in incident report preparation.

References

1. Institute of Medicine. The Future of the Public's Health in the 21st Century. Washington, DC: National Academies Press; 2003.
2. Institute of Medicine. Research Priorities in Emergency Preparedness and Response for Public Health Systems: A Letter Report. Washington, DC: National Academies Press; 2008.
3. US Department of Health and Human Services. National Health Security Strategy. 2009. http://www.phe.gov/Preparedness/planning/authority/nhss/Pages/default.aspx. Accessed April 9, 2014.
4. Stoto MA, Cox H, Higdon M, Dunnell K, Goldmann D. Using learning collaboratives to improve public health emergency preparedness systems. Front Public Health Serv Syst Res 2013;2(2):3.
5. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care 2003;41(1 Suppl):I30-38.
6. Nelson C, Chan EW, Fan CE, et al. New Tools for Assessing State and Local Capabilities for Countermeasure Delivery (TR-665-DHHS). Santa Monica, CA: RAND Corporation; 2009. http://www.rand.org/pubs/technical_reports/TR665.html. Accessed April 9, 2014.
7. Savoia E, Agboola F, Biddinger PD. Use of after action reports (AARs) to promote organizational and systems learning in emergency preparedness. Int J Environ Res Public Health 2012;9(8):2949-2963.
8. Stoto MA, Nelson C, Higdon MA, Kraemer J, Singleton CM. Learning about after action reporting from the 2009 H1N1 pandemic: a workshop summary. J Public Health Manag Pract 2013;19(5):420-427.
9. Singleton CM, Debastiani S, Rose D, Kahn EB. An analysis of root cause identification and continuous quality improvement in public health H1N1 after-action reports. J Public Health Manag Pract 2014;20(2):197-204.
10. Boeing Corporation. Statistical Summary of Commercial Jet Airplane Accidents, Worldwide Operations 1959-2012. 2013. http://www.boeing.com/news/techissues/pdf/statsum.pdf. Accessed April 9, 2014.
11. Wald M. Fatal crashes of airplanes decline 65% over 10 years. New York Times. October 1, 2007.
12. Donahue A, Tuohy R. Lessons we don't learn: a study of the lessons of disasters, why we repeat them, and how we can learn them. Homeland Security Affairs 2006;2(2):1-28.
13. Lurie N. Feverish Activity: Global, National, and Local Lessons Learned from the 2009 H1N1 Pandemic. National Health Policy Conference. 2011. http://www.nhpf.org/uploads/Handouts/Lurie-slides_11-18-11.pdf. Accessed April 9, 2014.
14. Larson R, Metzger M, Cahn M. Responding to emergencies: lessons learned and the need for analysis. Interfaces 2006;36:486-501.
15. March JG, Sproull LS, Tamuz M. Learning from samples of one or fewer. Qual Saf Health Care 2003;12(6):465-471.
16. Reason JT. Managing the Risks of Organizational Accidents. Burlington, VT: Ashgate Press; 1997.
17. Biddinger PD, Savoia E, Massin-Short SB, Preston J, Stoto MA. Public health emergency preparedness exercises: lessons learned. Public Health Rep 2010;125(Suppl 5):100-106.
18. Klaiman T, O'Connell K, Stoto M. Local health department public vaccination clinic success during 2009 pH1N1. J Public Health Manag Pract 2013;19(4):E20-E26.
19. Klaiman T, Kraemer JD, Stoto MA. Variability in school closure decisions in response to 2009 H1N1: a qualitative systems improvement analysis. BMC Public Health 2011;11:73.
20. Stoto MA, Nelson CD, Klaiman T. Getting from what to why: using qualitative methods in public health systems research. AcademyHealth Issue Brief. November 2013. http://www.academyhealth.org/files/publications/QMforPH.pdf. Accessed April 9, 2014.
21. Pawson R, Tilley N. Realistic Evaluation. London: Sage Publications; 1997.
22. Gilson L, Hanson K, Sheikh K, Agyepong IA, Ssengooba F, Bennett S. Building the field of health policy and systems research: social science matters. PLoS Med 2011;8(8):e1001079.
23. Stoto MA. Measuring and assessing public health emergency preparedness. J Public Health Manag Pract 2013;19(Suppl 2):S16-S21.
24. Stebbins S, Vukotich CJ Jr. Preserving lessons learned in disease outbreaks and other emergency responses. J Public Health (Oxf) 2010;32(4):467-471.
25. New York Chapter, American College of Physicians. Near miss registry. http://www.nyacp.org/i4a/pages/index.cfm?pageid=3558. Accessed April 9, 2014.
26. Piltch-Loeb R, Kraemer JD, Stoto M. Synopsis of a public health emergency preparedness critical incident registry. J Public Health Manag Pract 2013;19(Suppl 2):S93-S94.
27. QUANTROS. Medmarx ADE Data Repository. 2012. http://quantros.com/our-products/safety-and-risk-management-srm/medmarx-medication-database. Accessed April 9, 2014.
28. Walker LO, Sterling BS, Hoke MM, Dearden KA. Applying the concept of positive deviance to public health data: a tool for reducing health disparities. Public Health Nurs 2007;24(6):571-576.

Manuscript received January 29, 2014; accepted for publication March 27, 2014.

Address correspondence to:
Rachael Piltch-Loeb
Research Assistant
Department of Health Systems Administration
Georgetown University School of Nursing and Health Studies
3700 O St., NW
Washington, DC 20057
E-mail: [email protected]


Appendix 1: Meeting Participants

On September 2, 2011, researchers funded by the Centers for Disease Control and Prevention's Preparedness and Emergency Response Research Center (PERRC) at Harvard and Georgetown Universities convened a meeting of experts on Georgetown's campus.

Jesse Bump, Georgetown University
Melissa Higdon, Harvard School of Public Health
John Kraemer, Georgetown University
Tamar Klaiman, Jefferson School of Population Health
Christopher Nelson, RAND
Alonzo Plough, Los Angeles County Department of Public Health
Debra Robinson, NACCHO
Karen Smith, Napa County Public Health Division
Michael Stoto, Georgetown University
Italo Subbarao, Disaster Medicine and Public Health Preparedness
Michal Tamuz, School of Public Health, SUNY Downstate Medical Center
Reuben Varghese, Arlington County Health Department
