HEALTH SERVICE RESEARCH CSIRO PUBLISHING

Australian Health Review, 2014, 38, 575–579 http://dx.doi.org/10.1071/AH13244

Admission time to hospital: a varying standard for a critical definition for admissions to an intensive care unit from the emergency department

Shane Nanayakkara (1,5), MBBS, Medical Registrar
Heike Weiss (1), MD, DTMH, MRCS, MFAEM, Emergency Registrar
Michael Bailey (2), PhD, MSc(Statistics), BSc(Hons), Associate Professor, Senior Statistical Consultant
Allison van Lint (3), PhD, ANZICS Data Quality, Audit and Education Officer
Peter Cameron (4), MBBS, FACEM, MD, Emergency Consultant
David Pilcher (1,2,3), MBBS, MRCP (UK), FRACP, FJFICM, Intensivist

1 Department of Intensive Care, The Alfred Hospital, PO Box 315, Prahran, Vic. 3181, Australia. Email: [email protected]; [email protected]
2 Australian and New Zealand Intensive Care Research Centre, Department of Epidemiology and Preventive Medicine, Monash University, Commercial Road, Prahran, Vic. 3004, Australia. Email: [email protected]
3 Australian and New Zealand Intensive Care Society, Centre for Outcome and Resource Evaluation, PO Box 164, Carlton South, Vic. 3053, Australia. Email: [email protected]
4 Department of Emergency Medicine, The Alfred Hospital, PO Box 315, Prahran, Vic. 3181, Australia. Email: [email protected]
5 Corresponding author. Email: [email protected]

Abstract

Objective. Time spent in the emergency department (ED) before admission to hospital is often considered an important key performance indicator (KPI). Throughout Australia and New Zealand, there is no standard definition of 'time of admission' for patients admitted through the ED. By using data submitted to the Australian and New Zealand Intensive Care Society Adult Patient Database, the aim was to determine the differing methods used to define hospital admission time and assess how these impact on the calculation of time spent in the ED before admission to an intensive care unit (ICU).

Methods. Between March and December of 2010, 61 hospitals were contacted directly. Decision methods for determining time of admission to the ED were matched to 67 787 patient records. Univariate and multivariate analyses were conducted to assess the relationship between decision method and the reported time spent in the ED.

Results. Four mechanisms of recording time of admission were identified, with time of triage being the most common (28/61 hospitals). Reported median time spent in the ED varied from 2.5 (IQR 0.83–5.35) to 5.1 h (IQR 2.82–8.68), depending on the decision method. After adjusting for illness severity, hospital type and location, decision method remained a significant factor in determining measurement of ED length of stay.

Conclusions. Different methods are used in Australia and New Zealand to define admission time to hospital. Professional bodies, hospitals and jurisdictions should ensure standardisation of definitions for appropriate interpretation of KPIs, as well as for the interpretation of studies assessing the impact of admission time to ICU from the ED.

What is known about the topic? There are standards for the maximum time spent in the ED internationally, but these standards vary greatly across Australia. The definition of such a standard is critically important not only to patient care, but also in the assessment of hospital outcomes. Key performance indicators rely on quality data to improve decision-making.

What does this paper add? This paper quantifies the variability of times measured and analyses why the variability exists. It also discusses the impact of this variability on assessment of outcomes and provides suggestions to improve standardisation.

What are the implications for practitioners? This paper provides a clearer view on standards regarding length of stay in the ICU, highlighting the importance of key performance indicators, as well as the quality of data that underlies them. This will lead to significant changes in the way we standardise and interpret data regarding length of stay.

Received 5 February 2014, accepted 28 July 2014, published online 5 November 2014

Journal compilation © AHHA 2014

www.publish.csiro.au/journals/ahr


Introduction

Time spent in the emergency department (ED) before admission to a hospital bed is often considered an important key performance indicator (KPI).1 Delayed admission to the intensive care unit (ICU) from the ED may result in increased hospital mortality.2 Carter et al. have shown that times spent in the ED before admission to an ICU have been increasing in Australia over recent years.3 However, the same authors noted that it was possible that criteria for recording time of ED admission differed between hospitals. Because such KPIs are critical for system-wide decision-making, the data and standards used to define them should be consistent across all hospitals, to allow for appropriate comparison. Although acceptable standards for maximum time spent in the ED vary across the states in Australia, standards have been set internationally. The limit of 4 h was first introduced in the UK,4 and has been trialled in Western Australia since April 2009. As part of the Australian Health Reform and National Access Targets, this is currently being rolled out across the remaining states between 2011 and 2015, with financial incentives for meeting this target.5 The Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database (APD) is a high-quality database that records demographic, severity of illness and outcome data from adult ICU admissions6 from ~85% of Australian intensive care units. The ANZICS APD7 defines the time of admission to hospital as the 'Time at which the patient was admitted to the hospital for the episode of care which included the current episode of ICU care'. It is unknown whether there is consistency in recording of the time of admission to hospital for data submitted to ANZICS, and to what degree variation in defining this affects the subsequent time listed as spent waiting in the ED before admission to the ICU.
The aims of the present study were to determine: (1) the method by which 'time of admission' to hospital is defined for patients who are admitted to the ICU from the ED, at hospitals that submit data to the ANZICS Adult Patient Database; and (2) whether different methods for defining 'time of admission to hospital' affect the reported time spent in the ED before admission to ICU (for patients reported to the ANZICS Adult Patient Database).

Methods

The study was approved by the Alfred Hospital Ethics Committee (project 248/11). Hospitals that listed admissions to the ICU from the ED between 1 January 2005 and 31 December 2009 were identified from the ANZICS APD. A minimum of 50 admissions to the ICU from the ED over the 5-year period was selected as an arbitrary cut-off for inclusion because of a concern that hospitals that had submitted only a low number of patients might have incomplete data, inconsistent data collection methods or poor data quality. The majority of hospitals collect data for submission using AORTIC,8 a free software program provided by ANZICS. Data is entered manually into AORTIC (by paid data collectors, medical or nursing staff) with no link to other hospital- or


state-based data systems and is de-identified before submission. ANZICS does not specify the source from which admission times may be obtained. Between May 2010 and October 2010, staff at each hospital were contacted to determine the method by which time of admission to hospital was estimated for patients admitted from the ED. Initially, the person responsible for submitting data from the ICU to ANZICS was contacted by email. If email responses were received where the primary contact was not confident of the admission time decision method, structured telephone interviews were conducted initially with the ICU staff responsible for data submission to ANZICS, then with staff in the ED or medical records department. The person ultimately responsible for data entry was contacted in each case. All information was provided in confidence and in the belief that it was correct to the best knowledge of the individuals at the hospitals in question. Decision methods were categorised and matched to patient data from the ANZICS APD. The following criteria were used to select patients for inclusion in the study: admission to the ICU directly from the ED, known times of admission to hospital and the ICU, and known severity of illness, as measured by the predicted risk of death from the Acute Physiology and Chronic Health Evaluation (APACHE) III scoring system. Times spent waiting in the ED were initially compared using a Kruskal–Wallis test and were presented as medians with an inter-quartile range. To ensure that the observed differences in times were not due to confounding variables, generalised linear modelling was performed on all patients, adjusting for confounding effects of severity of illness, hospital type and geographical variation. Time spent in the ED was found to be well approximated by a log-normal distribution and was log-transformed before analysis, with results presented as geometric means (95% CI) or as a parameter estimate (standard error). 
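The log-transformation and post-hoc power calculation described above can be sketched in Python. This is a minimal illustration with made-up length-of-stay values (the study itself used SAS 9.2); the geometric mean and the normal-approximation power check mirror the stated methods:

```python
import math
import statistics

# Hypothetical ED lengths of stay in hours (illustrative only, not study data).
los = [0.9, 2.1, 3.4, 4.8, 7.5, 12.0]

# Log-transform, average, back-transform: the geometric mean,
# the summary used for the log-normally distributed ED times.
log_mean = statistics.fmean(math.log(x) for x in los)
geo_mean = math.exp(log_mean)
assert abs(geo_mean - statistics.geometric_mean(los)) < 1e-9

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Power check mirroring the Methods: two groups of n = 2000, an effect
# equal to 20% of one standard deviation, two-sided alpha = 0.01.
n_per_group, effect_sd = 2000, 0.20
z_crit = 2.5758  # two-sided 1% critical value of the standard normal
z_effect = effect_sd / math.sqrt(2.0 / n_per_group)
power = normal_cdf(z_effect - z_crit)
print(round(power, 4))  # >0.99, consistent with the ">99% power" reported
```

The back-transformed mean of the logs is exactly the geometric mean, which is why results on the log scale are reported that way rather than as arithmetic means.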
A two-sided P value of 0.05 was considered to be statistically significant. All analysis was performed using SAS version 9.2 (SAS Institute Inc., Cary, NC, USA). Sample size calculations were based on post-hoc pairwise comparisons between groups. With a minimum of 2000 patients per group, this study had >99% power to detect a difference between any two groups equal to 20% of one standard deviation with a two-sided P-value of 0.01. Differences of this magnitude were perceived to be of clinical importance.

Results

Of 144 hospitals that submit data to the ANZICS APD, 64 were identified as fitting the inclusion criteria of having both an ED and an ICU to which at least 50 admissions had been reported to ANZICS during the study period. Twenty-one hospitals were located in New South Wales, 18 in Queensland, 13 in Victoria and the remainder in 'Other' areas (New Zealand - 4 hospitals, South Australia - 3 hospitals, Australian Capital Territory - 2 hospitals, Northern Territory - 2 hospitals and Western Australia - 1 hospital). Twenty-two email responses were received where the primary contact was confident of the method for determining admission time to hospital. At 41 hospitals, structured telephone interviews were conducted. One hospital used both time of


triage and time of decision to admit without any clear basis for choosing one method over another. At another, two different data collectors used different mechanisms, the first using presentation time to the ED, the second entering the time of hospital admission as 30 min before the time of admission to the ICU. In one case, the designated contact person and their delegates could not be contacted by email or by phone. After exclusion of these three hospitals, there were 61 hospitals where the mechanism by which time of admission to hospital was determined could be identified and linked to individual patient records. Four mechanisms for recording time of admission to hospital for patients admitted to the ICU from the ED were identified: (1) time of presentation to the ED; (2) time of triage within the ED; (3) time of first review by a doctor; and (4) time of decision to admit the patient by a doctor. No other mechanisms were noted. Table 1 shows the distribution of hospitals by decision method and region, with Table 2 showing the same data by ANZICS hospital classification. Decision methods for recording time of admission to hospital were matched to 67 787 patients at the 61 hospitals over this 5-year period (2005–09). Time spent in the ED before admission to the ICU for each decision method is shown in Table 3. Hospitals that listed 'Time of presentation to ED' as the time of admission to hospital had the longest calculated times spent in the ED before admission to the ICU. Conversely, hospitals that listed the time of hospital admission as the 'Time of decision to admit' had the shortest calculated times spent in the ED. After adjusting for hospital type, location and patient illness severity, the decision method for determining hospital admission remained a significant factor in determining the calculated time spent waiting in the ED (Table 4).

As in the univariate analysis, hospitals that listed 'Time of presentation to ED' as the hospital admission time still had the longest times spent in the ED. Metropolitan hospitals were associated with the longest ED waiting times. Victorian and New South Wales hospitals had longer times compared with other regions. A higher APACHE III predicted risk of death was associated with a shorter time spent in the ED (parameter estimate –0.29 (0.02); P < 0.0001).
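The effect of the four decision methods can be illustrated with a short sketch: the same patient journey produces four different ED lengths of stay depending on which timestamp is treated as 'time of admission to hospital'. The timestamps below are hypothetical, chosen only to show the mechanics:

```python
from datetime import datetime

# Hypothetical timestamps for a single patient journey (illustrative
# values, not drawn from the study data).
events = {
    "presentation": datetime(2009, 6, 1, 14, 0),
    "triage": datetime(2009, 6, 1, 14, 20),
    "first_review": datetime(2009, 6, 1, 15, 5),
    "decision_to_admit": datetime(2009, 6, 1, 17, 30),
}
icu_admission = datetime(2009, 6, 1, 19, 0)

# One ICU admission, four different 'ED lengths of stay',
# one per definition of hospital admission time.
for method, t in events.items():
    hours = (icu_admission - t).total_seconds() / 3600
    print(f"{method}: {hours:.2f} h")
```

For this single patient the calculated ED stay ranges from 5.00 h (presentation) down to 1.50 h (decision to admit), which is the same direction and a similar magnitude of spread as the medians reported in Table 3.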

Discussion

We have demonstrated that there is considerable variation in the way in which hospital admission is recorded around the country for data submitted to the ANZICS APD, and that this variation affects the time recorded as waiting in the ED before admission to the ICU, even after accounting for other confounding factors. The ANZICS definition of 'Time at which the patient was admitted to the hospital for the episode of care which included the current episode of ICU care'9 is interpreted differently across hospitals throughout Australia and New Zealand, and as a result greatly affects apparent time spent in the ED before ICU admission. Our study highlights the need for clear definitions for variables such as time of admission to hospital, the importance of auditing compliance with these definitions, and the value of assessing the impact of variability in their interpretation. Across jurisdictions, the official standpoint on admission time varies. A definition is provided by the Australasian College for Emergency Medicine:7 '. . .The decision to admit a patient to hospital from the emergency department should be made by an emergency physician or delegate. The time admission is requested should be recorded to the nearest minute.' This also forms the basis of the definition in New South Wales.10 For Victoria, if a patient is admitted from the emergency department, admission time is defined as '. . .the time treatment was started in the ED, that is, when the patient was first treated

Table 1. Decision methods at hospitals surveyed for determining time of admission to hospital for patients presenting to the emergency department who are subsequently admitted to the intensive care unit, in the different regions of Australia and New Zealand
ED, emergency department

                           New South Wales  Queensland  Victoria  Other     Total
Presentation to the ED     6                2           4         2         14 (23%)
Triage in the ED           11               5           6         6         28 (46%)
First review by a doctor   0                1           1         1         3 (5%)
Decision to admit          3                9           2         2         16 (26%)
Total                      20 (33%)         17 (28%)    13 (21%)  11 (18%)  61

Table 2. Decision methods at hospitals surveyed for determining time of admission to hospital for patients presenting to the emergency department who are subsequently admitted to the intensive care unit, for different hospital types in Australia and New Zealand
ED, emergency department

                           Rural     Metropolitan  Tertiary  Private  Total
Presentation to the ED     0         5             8         1        14 (23%)
Triage in the ED           11        3             10        4        28 (46%)
First review by a doctor   0         2             1         0        3 (5%)
Decision to admit          2         5             6         3        16 (26%)
Total                      13 (21%)  15 (25%)      25 (41%)  8 (13%)  61


Table 3. Time spent in the emergency department before admission to the intensive care unit with different methods of determining time of admission to hospital
Unless indicated otherwise, data are given as the median with interquartile range in parentheses. There was a significant overall difference between groups (P < 0.0001). ED, emergency department

                           No. admissions  Time spent in the ED (h)
Presentation to the ED     17 130          5.07 (2.82–8.68)
Triage in the ED           32 720          4.62 (2.17–7.87)
First review by a doctor   2666            4.04 (1.57–7.67)
Decision to admit          15 271          2.53 (0.83–5.35)
All hospitals              67 787          4.25 (1.87–7.60)

Table 4. Generalised linear modelling, after log-transformation of 'Hours in ED prior to admission to ICU' and adjusting for severity of illness, hospital type and location
Unless indicated otherwise, data show the geometric mean with 95% confidence intervals in parentheses. APACHE III, Acute Physiology and Chronic Health Evaluation III; ED, emergency department

Effect                                                 Length of stay (h)
Method for determining time of admission to hospital
  Presentation to the ED                               4.24 (4.15–4.33)
  Triage in the ED                                     3.75 (3.69–3.81)
  First review by a doctor                             3.53 (3.37–3.71)
  Decision to admit                                    3.27 (3.20–3.35)
Type of hospital
  Rural                                                4.02 (3.92–4.12)
  Metropolitan                                         4.47 (4.39–4.55)
  Tertiary                                             3.06 (3.01–3.11)
  Private                                              3.35 (3.19–3.51)
Region of Australia and New Zealand
  New South Wales                                      5.55 (5.43–5.68)
  Queensland                                           2.32 (2.27–2.37)
  Victoria                                             6.21 (6.06–6.35)
  Other (A)                                            2.30 (2.24–2.36)
APACHE III predicted risk of death (B)
  Parameter estimate (standard error)                  –0.29 (0.02)

(A) 'Other' includes Australian Capital Territory, Northern Territory, New Zealand, South Australia and Western Australia.
(B) All variables in the model were significant (P < 0.0001).
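Because the model was fitted on log-transformed times, a coefficient like the APACHE III estimate in Table 4 has a multiplicative interpretation on the original time scale. A rough sketch of that back-transformation follows; note the scale of the predictor (a 0-to-1 predicted risk of death, so the coefficient spans the move from 0% to 100% risk) is our assumption, since the paper reports only the estimate and its standard error:

```python
import math

# Table 4: parameter estimate for APACHE III predicted risk of death,
# fitted on log(hours in ED). Back-transforming gives a ratio of
# geometric means per unit of the predictor (unit scale assumed).
beta = -0.29
ratio = math.exp(beta)
print(round(ratio, 2))  # ~0.75: sicker patients spend less time in the ED
```

The negative sign is what matters for the paper's point: higher illness severity is associated with shorter ED stays, in whichever units the predictor was entered.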

by a nurse or doctor, whichever comes first.'1 There is no clear definition of admission time in Queensland. The variation in methods used by data collectors submitting data to the ANZICS APD within each of these regions suggests either that these definitions are not known or that different definitions are being used for submission of data to ANZICS compared with data being submitted to local Departments of Health. The consistent association between decision method and time spent in the ED before admission to the ICU does, however, suggest that the reported methods of defining admission time are internally consistent at the hospitals surveyed. Previous studies recording 'time of ED admission' do not specify how this was defined. However, from the data presented, we can see that using a 'decision to admit' definition results in a vastly shorter ED length of stay (LOS) compared with

‘presentation to ED’ (median time 2.53 vs 5.07 h). The same degree of variation in data submitted to jurisdictional health departments will severely limit comparison between hospitals where waiting times in the ED are measured as performance indicators. Several studies have used data regarding ED admission time as an important marker for a variety of outcomes, particularly when assessing length of stay. Liew et al. assessed ED LOS as a predictor of inpatient LOS, and found that a strong correlation exists.11 Emergency LOS and time to disposition have been used in various scoring systems as a predictor of mortality, both in Australia,12 and abroad.13 Around the world, ED and hospital LOS have been used as performance indicators. The introduction of the ‘four hour rule’ in the UK raised controversy, and the clinical utility was questioned.14–17 Although the 4 h standard remains a performance indicator in the UK, it has been repositioned within other quality outcome measurements. It was felt that despite increasing patient flow, it did not ensure the highest quality of care for patients.15 A similar concept, The National Emergency Access Target (NEAT), was brought to Australia as part of the National Health reform. This concept guarantees access to a hospital bed within 4 h of arrival. It has been suggested that the NEAT target may be too simplistic a measure, and more carefully defined measurement systems are required,16 with several new frameworks proposed.17 Strict definitions regarding ED admission exist for NEAT, and the importance of having a standard definition for LOS in the ED has now been recognised by the Expert Panel Review.18 The utility of NEAT as a clinical performance indicator and interpretation of time spent in the ED will depend on compliance with these definitions. 
Although delay in admission is not presently a performance indicator for Australian and New Zealand ICUs, our study highlights how inconsistencies in interpretation of a definition for ‘time of admission’ confound interpretation of calculated time spent in the ED. This highlights the value of auditing compliance with definitions and is relevant for all clinical registries and performance monitoring agencies. The present study has several limitations. First, data submitted to ANZICS may not be representative of data collected and presented to the Departments of Health, and the ability to generalise our findings to other registries is limited. Additionally, hospitals that do not submit data to ANZICS or who could not be contacted were not included. Accuracy and impact of variation in the actual recording of time of admission to the ICU were not investigated. Submitted patient level data was available only until the end of 2009, whereas hospital admission definitions were obtained in 2010, and practice may have improved since this data was collected, particularly in view of the more strictly defined NEAT. It is also important to note that variable interpretation of the ANZICS definition of ‘time of admission to hospital’ does not mean there is variable interpretation of the definition of NEAT. Furthermore, although sites are audited for general data quality, and there are internal software-based data validation rules, the absolute reliability of the data is unknown. This study is strengthened by consistent results, even after multivariate analysis, and the use of a large, high-quality database. The data analysed were recent and likely to be relevant to current practice in Australia and New Zealand, and all hospital


types were included, across varying regions. All sites were contacted by the same researcher who was not involved in the linkage to ANZICS patient level data or in the analysis of the data.

Conclusion

Time of admission to hospital is defined in different ways at hospitals around Australia and New Zealand that submit data to ANZICS, with no single region demonstrating consistency in its approach. A clearer standard definition of time of admission to hospital is needed by ANZICS to ensure data quality and consistency and to generate accurate information. We would recommend that a standard definition, 'time of triage', be used and that compliance with this definition be audited. Efforts should be taken to ensure the definition is applied consistently throughout Australia and New Zealand. Without such standardisation, it is difficult to interpret important indicators such as time spent in the ED.

Competing interests

None declared.

References

1. Department of Health and Ageing. Victorian emergency minimum dataset user manual, Victoria 2010–11. Melbourne: Victorian Government; 2011.
2. Chalfin DB, Trzeciak S, Likourezos A, Baumann BM, Dellinger RP. Impact of delayed transfer of critically ill patients from the emergency department to the intensive care unit. Crit Care Med 2007; 35: 1477–83. doi:10.1097/01.CCM.0000266585.74905.5A
3. Carter AW, Pilcher D, Bailey M, Cameron P, Duke GJ, Cooper J. Is ED length of stay before ICU admission related to patient mortality? Emerg Med Australas 2010; 22: 145–50. doi:10.1111/j.1742-6723.2010.01272.x
4. Secretary of State for Health. The NHS plan. Norwich: Department of Health; 2000.
5. Department of Health and Ageing. Four hour national target. 2011 [cited 2011 June].
6. Stow PJ, Hart GK, Higlett T, George C, Herkes R, McWilliam D, Bellomo R. Development and implementation of a high-quality clinical database: the Australian and New Zealand Intensive Care Society Adult Patient Database. J Crit Care 2006; 21: 133–41. doi:10.1016/j.jcrc.2005.11.010
7. Australasian College for Emergency Medicine (ACEM). Policy on the definition of an admission. West Melbourne: ACEM; 2006. Available at: http://www.acem.org.au/media/policies_and_guidelines/P46_Definition_of_an_admission.pdf [verified June 2013].
8. ANZICS Centre for Outcome and Resource Evaluation. ANZICS CORE 2010 annual report. Melbourne: ANZICS Centre for Outcome and Resource Evaluation; 2010.
9. Adult patient database data dictionary. Melbourne: ANZICS; 2010.
10. Bureau of Health Information. Technical supplement: measures of emergency department performance and activity. Sydney: Australian Government; 2010.
11. Liew D, Liew D, Kennedy M. Emergency department length of stay independently predicts excess inpatient length of stay. Med J Aust 2003; 179: 524–6.
12. Mitra B, Cameron PA, Archer P, Bailey M, Pielage P, Mele G, Smit D, Newnham H. The association between time to disposition plan in the emergency department and in-hospital mortality of general medical patients. Intern Med J 2012; 42: 444–50. doi:10.1111/j.1445-5994.2011.02502.x
13. Asadollahi K, Hastings IM, Gill GV, Beeching NJ. Prediction of hospital mortality from admission laboratory data and patient age: a simple model. Emerg Med Australas 2011; 23: 354–63. doi:10.1111/j.1742-6723.2011.01410.x
14. Guly H, Higginson I. Obituary for the four-hour target. Emerg Med J 2011; 28: 179–80. doi:10.1136/emj.2010.105825
15. Lansley A. Abolition of the four-hour waiting standard in Accident and Emergency. 2010.
16. Forero R, McDonnell G. Lessons from the 4-hour standard in England for Australia. Med J Aust 2011; 194: 268.
17. Cameron PA, Schull MJ, Cooke MW. A framework for measuring quality in the emergency department. Emerg Med J 2011; 28: 735–40. doi:10.1136/emj.2011.112250
18. Australian Government. Expert panel: review of elective surgery and emergency access targets under the national partnership agreement on improving public hospital services - section 3. National Health Reform. Canberra: Australian Government; 2011.
