Journal of Electrocardiology

Vol. 25 Supplement

Robust Adaptive Parameter Estimators in Arrhythmia Detection

Peter M. Clarkson, PhD,* Qi Fan, MS,* Geoffrey A. Williamson, PhD,* and Robert Arzbaecher, PhD†

Abstract: The authors consider the statistical analysis of threshold crossing intervals, as applied to the estimation of tachycardia rates from intracavitary electrograms. The authors developed a class of robust algorithms designed to produce minimum variance estimates of tachycardia rate. The authors formulated the algorithms using order statistic filters and obtained the minimum variance unbiased order statistic estimator. The potential gain in efficiency achieved by this approach is demonstrated via a representative example. The results indicate that the order statistic operator can produce dramatic reductions in error variance for typical errors, as compared to linear estimators. Key words: tachycardia, intracavitary, robust algorithms, order statistic filters.

Implantable devices for the termination of tachycardia and fibrillation rely on accurate and timely estimates of basic electrogram parameters in order to successfully distinguish between, on the one hand, arrhythmias requiring a response and, on the other, sinus tachycardia due to normal causes and other benign arrhythmias. However, there is considerable evidence that the detection algorithms currently employed in such devices may be flawed. False events, missed events, and, in patients with multiple arrhythmias, misclassification have all been observed.1-4 The limitations of existing detection algorithms stem at least in part from poor statistical efficiency and a lack of robustness in the parameter estimators. Currently popular parameter estimation schemes are based on simple ad hoc procedures such as the local (running) mean of the raw parameter values. Such estimators are optimally efficient only for very limited classes of observation error, and they degrade badly in the presence of even a few severe errors. These failings can produce inappropriate device responses with potentially lethal consequences for the patient. We demonstrate in this paper that order statistic (OS) based parameter estimators have improved robustness and efficiency properties and thus ameliorate some of these difficulties.

Many of the basic parameters used in arrhythmia detection, including rate and regularity, can be obtained from measurements of threshold crossing intervals.5 The ease and simplicity with which these measurements may be implemented are key properties for this application. In this paper we consider the statistical analysis of threshold crossing intervals as applied to rate estimation. We generate minimum variance OS estimators and use these to illustrate issues of robustness and efficiency in the estimation process. We note that many of the results derived for this particular case apply equally to the estimation of regularity and other parameters that can be obtained from threshold crossing analysis, and also to the estimation of parameters obtained from morphology measures such as correlation coefficient and "probability density function."

*From the Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, Illinois. †From the Pritzker Institute of Medical Engineering, Illinois Institute of Technology, Chicago, Illinois.

Supported in part by the National Science Foundation under grants MIP-8911676 and MIP-9102620, and by the National Institutes of Health under grant HL35554. Reprint requests: Peter M. Clarkson, PhD, Illinois Institute of Technology, ECE Department, 3301 South Dearborn, Chicago, IL 60616.



Rate Estimation Using Threshold Crossing Intervals

Rate estimates are commonly obtained by combining a number of consecutive threshold crossing interval measurements.5 One may view the observed threshold crossing intervals {x_k} in a neighborhood or "window" of N consecutive observations as samples drawn from an unknown probability distribution f_X(x). In this view, rate corresponds to the mean of the distribution. The observations may be expressed as

(1)  $x_n = s + v_n ; \quad n = k, k-1, \ldots, k-N+1.$

Here, for a sampling rate f_s, the "true" interval corresponding to the rate r is given by s = f_s/r. v_k is the error associated with the measurement x_k and has density f_X(x - s). The sample average of the observed intervals is

(2)  $\bar{s}_k = \frac{1}{N} \sum_{n=k-N+1}^{k} x_n,$

with r̂ = f_s/$\bar{s}_k$ as the rate estimate. This estimator implicitly assumes that E{v_k} = 0 over the window; otherwise the estimate will exhibit bias. One may interpret this moving average as a linear filtering operation in which the raw observations are input to a finite impulse response filter with coefficients

(3)  $a(i) = 1/N ; \quad i = 1, 2, \ldots, N,$

and with output y_k = $\bar{s}_k$ (Fig. 1).

Order Statistic Filters

Viewing the rate estimation as a filtering operation allows one to contemplate alternative estimators derived from nonlinear and data-adaptive filters. A particularly attractive class of nonlinear filters is that based on OS operations.6 An N point nonrecursive OS filter with input {x_k} produces output {z_k} according to the relation

(4)  $z_k = a^t x_{(k)} = \mathrm{OS}_a\{x_k\}.$

Here the components x_(k)(i) of the vector x_(k) are the N observations x_k, x_{k-1}, ..., x_{k-N+1} ranked as x_(k)(1) ≤ x_(k)(2) ≤ ... ≤ x_(k)(N), and a = [a(1), a(2), ..., a(N)]^t is a vector weighting these ranked values (Fig. 2). Various OS filters may be defined. Examples include the median, for which

(5)  $a\left(\frac{N+1}{2}\right) = 1, \qquad a(i) = 0 ; \quad i \neq \frac{N+1}{2},$

the outer mean, for which

(6)  $a(1) = a(N) = \frac{1}{2}, \qquad a(i) = 0 ; \quad i \neq 1, N,$

and the trimmed mean, for which

(7)  $a(i) = \frac{1}{N - 2M} ; \quad i = M+1, \ldots, N-M, \qquad a(i) = 0 ; \quad \text{otherwise}.$

The sample average can also be viewed as an OS filter, and in fact is the only linear member of the class. Its coefficients satisfy equation (3). OS filters can be defined that meet optimality criteria for a much broader class of inputs than is possible for linear filters. Additionally, many OS filters are robust to occasional large deviations from an assumed noise density.7 If the statistical distribution of the observations is known, then an OS operator may be formulated that is optimal in a minimum variance sense.

Fig. 1. Threshold interval averaging as linear filtering. For the sample average, a(i) = 1/N for i = 1, 2, . . . , N.
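
To make the moving-average rate estimator of equations (1) through (3) concrete, the following minimal sketch (Python; not part of the original paper) computes r̂ = f_s/s̄_k from the last N threshold crossing intervals. The sampling rate, window length, and jitter values are illustrative assumptions only.

```python
import numpy as np

def rate_from_intervals(intervals, fs, N=8):
    """Estimate rate from the last N threshold crossing intervals (in samples):
    the sample average of equation (2), then r_hat = fs / s_bar."""
    window = np.asarray(intervals[-N:], dtype=float)
    s_bar = window.mean()          # equation (2): moving average of the intervals
    return fs / s_bar              # rate estimate r_hat

# Hypothetical example: true rate 3 Hz at fs = 1000 Hz, so s = fs / r = 333.3 samples,
# observed with small jitter on each threshold crossing interval.
rng = np.random.default_rng(0)
fs, r_true = 1000.0, 3.0
intervals = fs / r_true + rng.normal(0.0, 5.0, size=20)
print(rate_from_intervals(intervals, fs))   # close to 3.0
```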

Fig. 2. Order statistic (OS) filtering operation.
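
As an illustration of equations (4) through (7), the sketch below (Python; not from the original paper) applies the sample average, median, outer mean, and trimmed mean as weight vectors on the ranked window. The interval values and window length are hypothetical.

```python
import numpy as np

def os_filter(window, a):
    """Equation (4): rank the window and weight the ordered values by a."""
    return float(np.dot(a, np.sort(np.asarray(window, dtype=float))))

def median_weights(N):                  # equation (5)
    a = np.zeros(N)
    a[(N - 1) // 2] = 1.0
    return a

def outer_mean_weights(N):              # equation (6)
    a = np.zeros(N)
    a[0] = a[-1] = 0.5
    return a

def trimmed_mean_weights(N, M):         # equation (7): drop the M smallest and M largest
    a = np.zeros(N)
    a[M:N - M] = 1.0 / (N - 2 * M)
    return a

window = [330.0, 335.0, 700.0, 332.0, 328.0, 331.0, 334.0]   # one grossly false interval
N = len(window)
print(os_filter(window, np.full(N, 1.0 / N)))         # sample average, pulled up by the outlier
print(os_filter(window, median_weights(N)))           # median
print(os_filter(window, outer_mean_weights(N)))       # outer mean
print(os_filter(window, trimmed_mean_weights(N, 2)))  # trimmed mean, M = 2
```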

The minimum variance unbiased OS estimate of the interval s is obtained by minimization over a of

(8)  $J = E\{(z_k - s)^2\}$, subject to $E\{z_k\} = s$,

where z_k is defined by equation (4). Generally, the unbiasedness constraint is satisfied independent of the value of s if and only if

(9)  $C^t a = F,$

where $C = [\,1 \;\; m_{(v)}\,]$ and $F = [\,1 \;\; 0\,]^t$, with $m_{(v)} = E\{v_{(k)}\}$ and $1 = [1, 1, \ldots, 1]^t$.8 The unbiasedness condition is thus equivalent to two constraints: a^t 1 = 1 (smoothing constraint) and a^t m_(v) = 0 (orthogonality constraint). The minimum variance unbiased OS filter is then equivalently obtained from the minimization of

(10)  $J = E\{(a^t v_{(k)})^2\}$, subject to $C^t a = F.$

The optimum filter a_MV minimizing (10) may be obtained using Lagrange multipliers as

(11)  $a_{MV} = R_{(v)}^{-1} C (C^t R_{(v)}^{-1} C)^{-1} F,$

where $R_{(v)} = E\{v_{(k)} v_{(k)}^t\}$ is the correlation matrix of the ordered errors. If the inputs are independent and identically distributed, then the corresponding minimum variance linear estimator is the sample average defined by equation (3). a_MV coincides with the linear estimator only for very restricted conditions, the primary example being when v_k is gaussian. In other cases a_MV has a different form and achieves a lower variance than the sample average. For example, if v_k is Laplacian, the optimal estimator is the sample median in equation (5), while for uniform estimation noise, the optimum is the outer mean (6).9 Practically, while prior knowledge may give some indication of the error statistics, the precise distribution is usually unknown. For such conditions, the operator a_MV in equation (11) must be estimated. For the problem of arrhythmia detection, due to the need for real-time estimation procedures, an iterative, continuously adaptive solution is appropriate. This involves updating the estimate of a_MV as each new threshold crossing is detected. This also facilitates tracking of variations in rate and error distribution. Such an algorithm, dubbed an adaptive order statistic filter, is proposed in Williamson and Clarkson8 as an estimation procedure for signals of the form of equation (1).
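
A minimal numerical sketch of equation (11) follows (Python; an assumption of this edit rather than code from the paper). Given the correlation matrix R_(v) and mean vector m_(v) of the ordered errors, it forms C = [1 m_(v)] and F = [1, 0]^t and solves for a_MV. The gaussian check at the end estimates R_(v) and m_(v) by Monte Carlo and should return weights close to the sample average, as the text predicts.

```python
import numpy as np

def mv_os_weights(R, m):
    """Minimum variance unbiased OS weights, equation (11):
    a_MV = R^{-1} C (C^t R^{-1} C)^{-1} F, with C = [1  m] and F = [1, 0]^t."""
    N = R.shape[0]
    C = np.column_stack([np.ones(N), m])   # unbiasedness: a^t 1 = 1 and a^t m = 0
    F = np.array([1.0, 0.0])
    Rinv_C = np.linalg.solve(R, C)         # R^{-1} C without forming the inverse
    return Rinv_C @ np.linalg.solve(C.T @ Rinv_C, F)

# Check against the gaussian case: the minimum variance unbiased OS filter
# should essentially reduce to the sample average a(i) = 1/N.
rng = np.random.default_rng(1)
V = np.sort(rng.normal(size=(200_000, 7)), axis=1)    # ordered gaussian errors v_(k)
R_hat, m_hat = (V.T @ V) / len(V), V.mean(axis=0)
print(np.round(mv_os_weights(R_hat, m_hat), 3))       # approximately 1/7 in each slot
```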

Efficiency in Rate Estimation

One may examine the relative efficiency of OS and linear estimates using idealized models for the error densities. Rate is extremely sensitive to threshold level, so that given typical amplitude characteristics, undersensing is likely in both ventricular fibrillation and in atrial fibrillation due to amplitude decreases,10 while oversensing may be a problem in sinus rhythm due to inappropriate threshold selection. Significant amplitude variations, which may cause undersensing or oversensing, have also been observed in monomorphic ventricular tachycardia.11 One may hypothesize that the type of error observed in the raw estimates is likely to fall into one of two categories: (I) small amplitude errors caused by local fluctuations in threshold crossing position, and (II) large amplitude errors caused by false triggering due to secondary peaks. Neither source of error produces the gaussian noise distribution typically assumed in even the most sophisticated detectors.12 This lack of gaussianity can result in severely degraded estimates, especially with regard to high-amplitude errors. As a simple example, assume that for each observation x_k from equation (1), an error of amplitude d (positive or negative) occurs with probability α. We then have an error distribution of the form

(12)  $p(v_k = -d) = \alpha, \qquad p(v_k = 0) = 1 - 2\alpha, \qquad p(v_k = +d) = \alpha,$

where 0 < α < 1/2. Given N independent observations, the optimum linear estimator of the rate corresponds to the sample average, and the error variance is $\sigma_v^2/N$. Given R_(v), the minimum variance unbiased OS operator is calculated from equation (11). R_(v) may be calculated from consideration of the probability model (12). Consider N independent observations. The probability P that these N samples have l_1 points with values v_k = -d, l_2 points with v_k = 0, and l_3 = N - l_1 - l_2 points with v_k = +d is given by

(13)  $P(l_1, l_2) = \binom{N}{l_1} \binom{N - l_1}{l_2} \alpha^{N - l_2} (1 - 2\alpha)^{l_2}.$

The corresponding sampled autocorrelation matrix is

(14)  $R_{(v)} = \sum_{l_1=0}^{N} \sum_{l_2=0}^{N-l_1} P(l_1, l_2) \begin{bmatrix} R_{11}(l_1, l_2) & R_{12}(l_1, l_2) & R_{13}(l_1, l_2) \\ R_{21}(l_1, l_2) & R_{22}(l_1, l_2) & R_{23}(l_1, l_2) \\ R_{31}(l_1, l_2) & R_{32}(l_1, l_2) & R_{33}(l_1, l_2) \end{bmatrix},$

where

$R_{11}(l_1, l_2) = d^2 C_{[l_1, l_1]},$
$R_{12}(l_1, l_2) = R_{21}^t(l_1, l_2) = O_{[l_1, l_2]},$
$R_{13}(l_1, l_2) = R_{31}^t(l_1, l_2) = -d^2 C_{[l_1, l_3]},$
$R_{22}(l_1, l_2) = O_{[l_2, l_2]},$
$R_{23}(l_1, l_2) = R_{32}^t(l_1, l_2) = O_{[l_2, l_3]},$
$R_{33}(l_1, l_2) = d^2 C_{[N-(l_1+l_2), N-(l_1+l_2)]},$

and where C_[n,m] and O_[n,m] are n × m matrices of ones and zeroes, respectively.

As a numerical example, consider α = 0.05. Figure 3 shows the error variance for the averaging estimator versus that for the minimum variance unbiased OS operator, as a function of operator length N. Comparing the two curves, we see that as the window length increases, the OS filter's variance decreases exponentially. By contrast, as we have indicated, the linear filter's variance decreases by a factor proportional to the window length. (This linear decrease in variance is hardly perceptible on the logarithmic scale of Fig. 3.) Hence, we observe a great improvement in efficiency for the OS filter when compared to the averaging operator used by existing devices. In Figure 4, the operator length is held constant (N = 7), while the error probability α is varied. As the error probability increases, the minimum variance unbiased OS filter changes from median towards average, and finally to outer mean. At all stages, the variance associated with the OS operator is significantly lower than that for the linear filter. It cannot be claimed that this simple example accurately depicts a typical error distribution. However, the gains obtained are consistent with those that would occur with any significant outlier contaminated distribution, including those observed clinically.

Fig. 3. Error variances for linear and minimum variance unbiased OS estimators applied to observations from the probability law in equation (12). Error probability α = 0.05, d = 10, operator length N = 2M + 1, M = 1, 2, . . . , 31.

Fig. 4. Error variances for linear and minimum variance unbiased OS estimators applied to observations from the probability law in equation (12). Operator length N = 7.
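
The comparison summarized in Figures 3 and 4 can be reproduced approximately with the sketch below (Python; an assumption of this edit, not the authors' code). It estimates R_(v) and m_(v) for the three-point error law of equation (12) by Monte Carlo rather than by the closed-form sums of equations (13) and (14), then compares the error variance a^t R_(v) a of the sample average with that of the minimum variance unbiased OS weights of equation (11).

```python
import numpy as np

def mv_os_weights(R, m):
    """Equation (11): a_MV = R^{-1} C (C^t R^{-1} C)^{-1} F with C = [1  m], F = [1, 0]^t."""
    C = np.column_stack([np.ones(R.shape[0]), m])
    Rinv_C = np.linalg.solve(R, C)
    return Rinv_C @ np.linalg.solve(C.T @ Rinv_C, np.array([1.0, 0.0]))

def ordered_error_stats(N, alpha, d, trials=200_000, seed=0):
    """Monte Carlo estimate of R_(v) = E{v_(k) v_(k)^t} and m_(v) = E{v_(k)}
    under the three-point error law of equation (12)."""
    rng = np.random.default_rng(seed)
    v = rng.choice([-d, 0.0, d], p=[alpha, 1.0 - 2.0 * alpha, alpha], size=(trials, N))
    V = np.sort(v, axis=1)                          # ordered errors
    return (V.T @ V) / trials, V.mean(axis=0)

N, alpha, d = 7, 0.05, 10.0                         # the setting of Figure 4
R, m = ordered_error_stats(N, alpha, d)
a_avg = np.full(N, 1.0 / N)                         # sample average (linear) weights
a_mv = mv_os_weights(R, m)                          # minimum variance unbiased OS weights
print("linear estimator variance:", a_avg @ R @ a_avg)   # about 2*alpha*d**2 / N
print("OS estimator variance:    ", a_mv @ R @ a_mv)     # markedly smaller
```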

Robustness

In addition to efficiency, we seek estimators that will exhibit robustness to occasional gross errors in the data. Some OS filters are generically robust to such outliers. Not all OS filters are inherently robust, however. For example, while the outer mean operator is maximally efficient for independent, identically distributed uniform observations, it is not robust to outliers from that distribution. Here, in addition to the generic robustness of many OS filters, the use of adaptive operators produces robust behavior. In the example of uniform errors, if the data are corrupted by occasional outliers, the coefficients of an adaptive operator iterate to a new form, one that is efficient with respect to the new, outlier-corrupted data. This connection between robustness and adaptation has been noted previously.13 However, the issue is controversial because adaptation emphasizes efficiency while robustness is primarily motivated by safety.7 Perhaps for still greater robustness in our adaptive operators we might consider constraining the adaptive updates to lie among those OS filters known to be outlier resistant, although clearly this would reduce efficiency for some error distributions.
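
The outer mean versus median behavior described above can be illustrated with the short sketch below (Python; the window length of 7, uniform errors on ±10, and 2% positive outliers of amplitude 200 are illustrative assumptions). On clean uniform data the outer mean has the lower error variance, while after outlier contamination the ranking reverses, which is the effect an adaptive operator would respond to by moving its coefficients away from the outer mean.

```python
import numpy as np

rng = np.random.default_rng(2)
N, s, trials = 7, 333.0, 100_000

def mse(windows, a):
    """Error variance of the OS estimate z = a^t x_(k) about the true interval s."""
    z = np.sort(windows, axis=1) @ a
    return float(np.mean((z - s) ** 2))

a_outer = np.zeros(N)
a_outer[[0, -1]] = 0.5                   # outer mean, equation (6)
a_median = np.zeros(N)
a_median[(N - 1) // 2] = 1.0             # median, equation (5)

clean = s + rng.uniform(-10.0, 10.0, size=(trials, N))   # i.i.d. uniform errors
outliers = (rng.random((trials, N)) < 0.02) * 200.0      # occasional gross errors
dirty = clean + outliers

for label, data in (("clean", clean), ("outlier-corrupted", dirty)):
    print(label, "outer mean:", mse(data, a_outer), "median:", mse(data, a_median))
```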


Discussion

We have seen that it is possible to obtain estimates for rate that are more efficient and generally more robust than those obtained by simple averaging. We must keep in mind that, as with averaging, the estimation process assumes that rate is slowly varying. This is not a reasonable assumption in general, since many rhythms exhibit rapid beat-to-beat fluctuations. Moreover, even regular rhythms may experience step-like transitions. Analysis of the interval between successive threshold crossings produces some information about the signal, but it is not generally the constant "rate" as viewed in the model (1). This is not a failing of the estimation scheme proposed here; simple averaging is equally or more limited. The problem is that the model is not valid. We are estimating the mean value of the distribution, but in such cases the mean value conveys limited information. However, other moments such as the variance or normalized variance ("regularity") may have greater utility.5 As with mean value estimation, one can formulate estimators of this quantity that have significant advantages in terms of efficiency and robustness, as compared to, for example, the sample variance. More generally, for rhythms with fluctuating zero-crossing intervals, a study of the statistics of the threshold-crossing intervals remains a useful exercise. However, the information content of the threshold-crossing distribution will reside not just in the first and second, but in all moments of the distribution. Clearly there is much that can be done to improve the estimation process by extension to the higher moments.
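
As one example of the extension suggested here, a regularity ("normalized variance") estimate can itself be built from order statistics of the window. The sketch below is an assumption of this edit rather than the authors' formulation: it uses the median for location and a trimmed second moment for spread, then normalizes by the squared location.

```python
import numpy as np

def regularity(intervals, M=1):
    """Illustrative regularity estimate: normalized variance of the window,
    computed from order statistic (trimmed) estimates of location and spread.
    This is a hedged sketch, not the estimator proposed in the paper."""
    x = np.sort(np.asarray(intervals, dtype=float))
    core = x[M:len(x) - M]                   # drop the M smallest and M largest values
    loc = np.median(x)                       # robust location estimate
    spread = np.mean((core - loc) ** 2)      # trimmed second moment about the median
    return spread / loc ** 2                 # normalized variance ("regularity")

print(regularity([330.0, 335.0, 332.0, 328.0, 700.0, 331.0, 334.0]))
```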

References

1. D Echt, K Armstrong, P Schmidt et al: Clinical experience, complications and survival in 70 patients with the automatic implantable cardioverter/defibrillator. Circulation 71:289, 1985
2. PD Chapman, P Troup: The automatic implantable cardioverter defibrillator: evaluating suspected inappropriate shocks. J Am Coll Cardiol 7:1075, 1986
3. M Masterson, JD Maloney, B Wilkoff et al: Clinical performance of automatic implantable cardioverter defibrillator: electrocardiographic documentation of 82 spontaneous discharges. J Am Coll Cardiol 11:18, 1988
4. JE Poole, CL Troutman, J Anderson et al: Inappropriate and appropriate discharges of the automatic implantable cardioverter defibrillator. J Am Coll Cardiol 11:210, 1988
5. KL Ripley, TE Bump, RC Arzbaecher: Evaluation of techniques for recognition of ventricular arrhythmias by implanted devices. IEEE Trans Biomed Eng BME-36:618, 1989
6. AC Bovik, TS Huang, DC Munson: A generalization of median filtering using linear combinations of order statistics. IEEE Trans Acoust Speech and Signal Processing ASSP-31:1342, 1983
7. PJ Huber: Robust Statistics. Wiley, New York, 1981
8. GA Williamson, PM Clarkson: On signal recovery with adaptive order statistic filters. IEEE Trans Signal Processing SP-40:2622, 1992
9. HA David: Order Statistics. Wiley, New York, 1981
10. J Jenkins, KH Noh, A Guezemec et al: Diagnosis of atrial fibrillation using electrograms from chronic leads: evaluation of computer algorithms. PACE 11:622, 1988
11. JJ Langberg, WJ Gibb, DM Auslander, JC Griffin: Identification of ventricular tachycardia with use of morphology of the endocardial electrogram. Circulation 77:1363, 1988
12. NV Thakor, Y Zhu, K Pan: Ventricular tachycardia and fibrillation detection by a sequential hypothesis testing algorithm. IEEE Trans Biomed Eng BME-37:837, 1990
13. RV Hogg: Adaptive robust procedures. J Amer Statist Ass 69:909, 1974
