
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, VOL. 18, NO. 1, JANUARY 2014

Automatic Motion Analysis System for Pyloric Flow in Ultrasonic Videos

Chaojie Chen, Yuanyuan Wang, Senior Member, IEEE, Jinhua Yu, Zhuyu Zhou, Li Shen, and Ya-Qing Chen

Abstract—Ultrasonography has been widely used to evaluate duodenogastric reflux (DGR), but to the best of our knowledge, no automatic analysis system has been developed to realize quantitative computer-aided analysis. In this paper, we propose a system that performs automatic detection of DGR in ultrasonic image sequences by applying automatic motion analysis. The motion field is estimated based on image velocimetry, and an intelligent motion analysis is then applied. For the DGR detection, the motion and structural information are combined to analyze the transpyloric motion of the fluid. To test the performance of the proposed system, we designed experiments with real and synthetic ultrasonic data. The proposed system achieved good performance in the DGR detection, and the automatic results were in accordance with the gold standard in analyzing the fluid motion. The proposed system promises to be a useful tool for the study and evaluation of DGR.

Index Terms—Computer-aided analysis, duodenogastric reflux (DGR) evaluation, ultrasonic videos.

I. INTRODUCTION

DUODENOGASTRIC reflux (DGR) is a pathophysiological phenomenon in which pyloric fluid incorrectly flows from the duodenum back into the stomach due to defective valve function at the pylorus. Pathological DGR has been implicated in the pathogenesis of many gastric diseases such as gastric ulcer, gastric cancer, and gastritis [1]. Clinically, pathological DGR is characterized by a large reflux volume and frequent occurrence. Thus, assessing the severity of DGR is crucial and can generate substantial insight into the pathophysiology of gastric diseases.

Manuscript received December 6, 2012; revised April 13, 2013; accepted June 25, 2013. Date of publication July 3, 2013; date of current version December 31, 2013. This work was supported by the National Natural Science Foundation of China under Grant 61271071 and Grant 11228411, the National Key Technology R&D Program of China under Grant 2012BAI13B02, and the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant 20110071110017. C. Chen, Y. Wang, and J. Yu are with the Department of Electronic Engineering, Fudan University, Shanghai 200433, China (e-mail: [email protected]; [email protected]; [email protected]). Z. Zhou, L. Shen, and Y.-Q. Chen are with the Department of Ultrasound Diagnosis, Xinhua Hospital, Shanghai Jiaotong University School of Medicine, Shanghai 200092, China (e-mail: [email protected]; [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/JBHI.2013.2272090

Ultrasonography has been widely used in clinical gastrointestinal examination, with the benefits of good temporal resolution, noninvasiveness, absence of radiation, and low cost. It also provides access to observe the movement of pyloric fluid under physiologic conditions. The quantitation of DGR can be conducted with ultrasonography [2], [3]. With the ultrasonic probe positioned at the level of the transpyloric plane, the antrum, pylorus, and proximal duodenum can be observed simultaneously, which provides an intuitive way to see whether DGR occurs. However, clinicians have to check the ultrasonic video manually to identify DGR. The assessment process is tedious, time consuming, and subject to inter-observer variability. It is therefore meaningful to develop a computer-aided analysis system that simplifies the clinician's work and performs a more quantitative and objective DGR evaluation.

Our study aims at building an automatic analysis system for DGR detection and evaluation. This manuscript reports the primary achievements of this system. Up to now, the system can perform the automatic detection of DGR in a given sequence, indicating when DGR occurs and its duration. Technically, this task can be described as the motion analysis of the pyloric fluid. It can be roughly divided into four stages: region of interest (ROI) extraction, motion estimation, motion classification, and further DGR analysis.

Locating the ROI is the first step in this application. Here, the ROI is the pyloric region; its range is determined by an edge tracking strategy. Current ROI localization methods can be roughly categorized into two types: image-based methods and tracking-based methods. Image-based methods find the ROI according to the image itself, usually with segmentation methods that search for similar image-based features such as intensity or texture. Segmentation methods vary from simple thresholding [4], [5] to more complicated methods such as region growing [6], [7] and level sets [8], [9]. They are widely used in many applications such as left ventricular function assessment [10], [11] and vascular assessment [12], [13]. However, these kinds of methods usually require a good initialization.
Though human interference is a common and effective choice, it decreases the processing efficiency, and to achieve a high segmentation accuracy the processing time tends to be large. Thus, when dealing with image sequences or videos, tracking-based methods are preferred. In a tracking framework, the result in the previous image is used to provide an initial estimation. For example, Riha et al. [14] proposed using the optical flow technique to monitor the artery border tissue movement automatically, and then fitting an ellipse model to these monitoring points to obtain the most suitable artery shape in each frame. Mansouri et al. [15] estimated the motion with a local search step for the best visual correspondences; the result of this step was used to determine the speed of the flow that performed tracking within a level set formulation. Compared to image-based methods, the processing time of tracking-based

2168-2194 © 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


methods decreases greatly given a good initialization. Taking context information into consideration helps improve the detection accuracy of tracking-based methods; however, it may also result in error accumulation. In the proposed system, an edge-based optical flow method [16] is used. It improves the tracking accuracy so that the influence of error accumulation can be effectively reduced. By tracking the gastroduodenal walls around the pylorus and analyzing their shapes, the location of the pylorus can be obtained, as well as the fluid region. This information plays an important role in the further motion analysis. This strategy avoids identifying the pylorus directly and is thus considerably fast.

Here, the analysis target is the fluid motion. Since no prior motion information is available, it is essential to extract the motion features of the pyloric fluid from the raw B-mode ultrasonic videos. This is realizable with image velocimetry techniques. In the area of image velocimetry, optical flow methods are widely used. The optical flow theory arises from the mechanism of motion perception: the optical flow can be considered the apparent motion observed in a sequence of images, and the motion estimation mimics visual cognition. Various optical flow methods utilize the intensity consistency assumption to estimate the optical flow [17]–[19]. However, in many practical cases, the intensity consistency assumption is violated, so the estimation is easily influenced by illumination changes. One line of improvement is to extend the brightness constancy assumption so that it is more physically plausible. Uras et al. [20] assumed the constancy of the gradient to suppress the influence of global illumination change. Molnár et al. demonstrated that the cross correlation is a robust feature against brightness and color illumination changes and proposed a linearized iterative scheme for cross-correlation-based variational optical flow [21].
Considering the influence of tissue motion and the special characteristics of ultrasonic images, a weighted cross-correlation-based variational optical flow is used to estimate the motion of the pyloric fluid [22]. The DGR detection and evaluation are based on the automatic analysis of the estimated motion field.

Motion analysis is widely used in physiological analysis and is application dependent. Some applications need the motion information alone. For example, Wong et al. [23] analyzed the motion of turbulent blood flow by calculating the vorticity directly from the estimated motion field and established a relationship between atrial vorticity and septal defect. Recently, fluid-structure interaction analysis has been widely used in hemodynamics research [24], where structural information is taken into account in the motion analysis. The DGR analysis can also be considered a fluid-structure interaction analysis since the target of the motion analysis is the transpyloric fluid: both the motion and the structural information of the pyloric region need to be considered. Because the movement of the pylorus is very complex and no motion model is available, the motion analysis is much more complicated in this case. We propose an adaptive guideline for the flow direction judgment based on instant structural recognition. A statistical strategy is adopted to make a more robust decision, and, considering the uncertain cases, a state machine is used to generate the final certain decisions.


The goal of this paper is to give a detailed description of the whole system from a system viewpoint. It shows how the results of different steps are incorporated in the proposed system to complete the motion analysis. Previous studies on the algorithms used in the first two stages have already been reported in [16] and [22], which discuss the choice of the tracking algorithm and the image velocimetry algorithm, respectively. This paper mainly reports the remaining processing (the last two stages) in the DGR detection, as well as the validation of the system's performance. To keep the introduction of the proposed system consistent, we briefly introduce the first two stages and focus on their processing procedures. Readers may refer to [16] and [22] for more algorithmic details and for the experiments demonstrating the superiority of these algorithms. The contributions of this paper are: 1) an adaptive guideline for the flow direction judgment is proposed based on instant structural recognition, which benefits the automatic motion analysis; 2) the complete framework of the automatic system is proposed. Based on the current output (the motion and direction information), further quantitative evaluation of DGR can be conducted in this framework.

The rest of this paper is organized as follows: Section II provides a detailed description of the proposed system. Experiments and discussion are presented in Section III. A brief conclusion is drawn in Section IV.

II. METHODOLOGY

The primary objective of the proposed system is to provide automated motion analysis of the pyloric fluid and detect DGR. The flowchart is given in Fig. 1. First, the locations of the two gastroduodenal walls in the initial frame are inputted. Applying shape analysis, the narrowest region is regarded as the center of the pylorus and is used to provide the structural information for the further motion analysis.
The region surrounded by the two gastroduodenal walls is regarded as the fluid region; this information will later be used in the regional statistics. The ROI is also roughly determined by the walls. Then, the motion field is estimated. For each individual pixel in the fluid region, its motion direction is classified as positive, negative, or uncertain. The global flow direction is determined by the regional statistics and is saved for the further analysis. The procedures in the dotted block are repeated until the end of the image sequence. For frames other than the initial one, the locations of the gastroduodenal walls are determined by tracking. Fig. 2 provides an insight into the processing of a frame. After the whole sequence is processed, the flow direction results are smoothed and a state machine is used to generate the final flow direction sequence.

A. Step 1: Tracking the Gastroduodenal Walls

Since the research subject is the pyloric fluid, it should be identified in the image. The location of the pylorus is also needed for the DGR evaluation. This information is essential to the further motion analysis. As the fluid region is an open region and there is no distinct feature of the pylorus, directly segmenting the fluid region and detecting the pylorus through regional structure analysis are



Fig. 2. Illustration of frame processing. After tracking the gastroduodenal walls, the motion field is estimated and automatically analyzed. The flow direction F(t) is then determined by the statistical results and saved.

Fig. 1. Scheme of the proposed system.

difficult and time consuming. An indirect way is adopted to locate the pylorus. It is noticed that the pylorus can be seen as a valve consisting of two fragments of protuberant gastroduodenal walls. Their closest place is supposed to be in the pyloric region and is considered the center of the pyloric region. Thus, by tracking the gastroduodenal walls and detecting the closest region, the position of the pylorus can be located. In this strategy, the fluid region, which is supposed to be surrounded by the two walls, is determined as well.

The joint prediction and segmentation (JPS) method is used to perform the tracking [16]. This method takes the features of the target into consideration during the motion estimation and can thus achieve a high tracking accuracy. At first, the gastroduodenal walls are manually traced in one still frame and then automatically tracked by the JPS method during the following processing. Assume that V1^(t−1) and V2^(t−1) are the upper and lower gastroduodenal walls, respectively, in the previous frame. The corresponding displacements (U1 and U2) between the previous frame and the current frame are estimated by the JPS method. Then, the locations of the gastroduodenal walls in the current frame are determined as V1^t = V1^(t−1) + U1 and V2^t = V2^(t−1) + U2. The gastroduodenal walls are resampled

to accommodate the nonrigid deformation of the gastroduodenal walls. The distances between all the vertices are calculated. An adaptive resampling is adopted here, which deletes vertices that are too close to each other (distance less than 0.5δ) and interpolates new vertices where they are too far from each other (distance larger than 2δ). δ is a preset threshold that controls the distance between vertices; in our application, δ = 3.

It is noticed that estimating the motion over the whole image is not necessary. Motion estimates for other tissues are redundant since only the motion in the pyloric region contributes to the DGR analysis. Thus, to improve efficiency, the initial images are clipped according to the location range of the tracked gastroduodenal walls to form the input images for the further motion estimation. A small extension is applied to avoid the possible occlusion caused by the target moving out of view. Let I_t denote the current frame at time t and I_(t+1) denote the next frame. V1^t and V2^t satisfy Vi^t ∈ [r1:r2, c1:c2], i = 1, 2. The maximum motion magnitude is assumed to be Mmax. The images are clipped as follows:

    I0 = I_t(r1 − Mmax : r2 + Mmax, c1 − Mmax : c2 + Mmax)
    I1 = I_(t+1)(r1 − Mmax : r2 + Mmax, c1 − Mmax : c2 + Mmax)    (1)

The corresponding gastroduodenal walls in the clipped images are V1 and V2:

    V1(x, y) = V1^t(x − r1 + Mmax + 1, y − c1 + Mmax + 1)
    V2(x, y) = V2^t(x − r1 + Mmax + 1, y − c1 + Mmax + 1)    (2)
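As an illustration, the adaptive resampling rule and the clipping of (1) can be sketched as follows. This is a minimal sketch, not the authors' implementation: the function names, the single-midpoint interpolation, and the clamping of the clipping window to the image boundary are our own assumptions.

```python
import numpy as np

def resample_contour(V, delta=3.0):
    """Adaptively resample a wall contour V (N x 2 array of vertices):
    drop a vertex closer than 0.5*delta to the previously kept one, and
    insert a midpoint when consecutive vertices are farther than 2*delta."""
    out = [V[0]]
    for p in V[1:]:
        d = np.linalg.norm(p - out[-1])
        if d < 0.5 * delta:
            continue                        # too close: delete this vertex
        if d > 2.0 * delta:
            out.append((out[-1] + p) / 2.0) # too far: interpolate a midpoint
        out.append(p)
    return np.array(out)

def clip_roi(I, r1, r2, c1, c2, m_max):
    """Clip an image to the tracked-wall bounding box extended by the
    maximum assumed motion magnitude m_max, as in (1)."""
    h, w = I.shape[:2]
    return I[max(r1 - m_max, 0):min(r2 + m_max, h),
             max(c1 - m_max, 0):min(c2 + m_max, w)]
```

Both clipped images of (1) are obtained by calling `clip_roi` on consecutive frames with the same bounding box.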

B. Step 2: Structure Recognition

When the positions of the gastroduodenal walls in the current frame are available, the position of the pylorus and the fluid region can be determined by applying structural analysis. To detect the pylorus, minimum distance detection is applied. A distance


Fig. 3. Illustration of the structural analysis: (a) the pyloric centers Vu, Vl are determined by applying the minimum distance detection; (b) the binary mask indicating the fluid region; (c) the structural map. Region I indicates the duodenal fluid region. Region II indicates the gastric fluid region.

matrix D is first calculated:

        | d_{1,1}  ···  d_{1,n} |
    D = |    ⋮             ⋮    |                                (3)
        | d_{m,1}  ···  d_{m,n} |

where d_{i,j} = ||V1_i − V2_j||, V1_i is the ith point on V1, and V2_j is the jth point on V2. Assuming that the minimum value in D is d_min, its corresponding point pair is supposed to be the center of the pylorus. In practice, the positions of the point pairs that satisfy d < d_min + ε (ε is a preset range) are averaged to increase the robustness. Finally, we get Vu(xu, yu) and Vl(xl, yl) as the pyloric center points on V1 and V2, respectively. An example is shown in Fig. 3(a).

According to the prior structural knowledge, the area surrounded by V1 and V2 is the fluid area. A binary mask indicating the fluid region can be determined from the locations of V1 and V2; Fig. 3(b) gives an example of the binary mask. The fluid region can be further divided into two parts by the pylorus, as shown in Fig. 3(c). Region I, the region on the left side of the pylorus, is the duodenal fluid region; Region II represents the gastric fluid region.

C. Step 3: Motion Estimation

The motion information is the basis of the motion analysis: the more accurate the motion estimation is, the more consistent the analysis is with visual cognition. Ultrasonic images have notoriously high noise levels, shadowing, etc., and multiple movements may coexist in one image. All of these may degrade the estimation accuracy of optical flow methods. Therefore, a robust and accurate optical flow method is required to estimate the motion of the pyloric fluid. In the proposed system, after inputting the image pair I0 and I1, a weighted cross-correlation-based variational optical flow [22] is applied to estimate the motion field of I0. The weighted cross correlation is constructed according to the visual similarity matrix W = [w_Sj], where w_Sj = w_structural,j · w_motion,j · w_spatial,j. w_structural,j is the structural similarity; it gives more weight to a connection whose two endpoints are likely to belong to the same object. Let N denote a zero-mean Gaussian with standard deviation σi, and let S be the structural component of I0. The definition of w_structural,j is given below:

    w_structural,j(x, y, xj, yj) = N(S(x, y) − S(xj, yj); σi)    (4)

where w_motion,j and w_spatial,j are the motion similarity and spatial proximity weights, respectively; they are defined in a similar way.

To accommodate possible long-range motion, a coarse-to-fine strategy is used. The motion field is first estimated using the weighted cross-correlation-based optical flow method through an image pyramid, whose energy function is defined as follows:

    E = − Σ_W I0(x, y) I1(x + u, y + v) / sqrt( Σ_W I0(x, y)² · Σ_W I1(x + u, y + v)² )
        + λ W_structural ((∇u)² + (∇v)²)                          (5)

where u and v represent the motion along the lateral and vertical directions, respectively. The first term is the weighted cross correlation; the weighted summation Σ_W means that the components in the window are multiplied by their corresponding weights before the addition. The second term is the motion smoothness constraint. W_structural is a structural similarity map, which adaptively weights the influences from the neighborhood points with (4).

A further pixel-wise refinement is then applied. The energy function of the refining optical flow method is given by

    E = ρ(Ix·u + Iy·v + It) + λ W_structural ρ(sqrt((∇u)² + (∇v)²))    (6)

where ρ is the Charbonnier penalty ρ(x) = (x² + ε²)^a with a = 0.4 and ε = 0.001. The first term is the pixel-wise data term and the second term is the motion smoothness constraint.

After the motion estimation, the motion of each point in space is available. To better represent the unitary fluid motion, we propose the flow amplitude A_fluid. It is assumed that though the motions in the fluid area vary in angle, their amplitudes are similar, and the mean motion amplitude in this area to some degree reflects the fluid volume passing through the pylorus. Based on this assumption, an amplitude image A(x, y) is first calculated from the estimated motion field. According to the structural information from Step 2, the mean amplitude in the fluid region is then calculated as A_fluid.

D. Step 4: Motion Classification

In assessing DGR, the motion direction judgment is crucial. The motion direction can be classified as positive, negative, or uncertain. For each individual pixel in the fluid region, its motion direction relative to the pylorus should be taken into consideration, as well as its location. For example, if a pixel in the gastric fluid region moves toward the pylorus, its motion direction is classified as positive; if it moves away from the pylorus, its motion direction is classified as negative.
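The flow amplitude A_fluid introduced above can be sketched as follows (a minimal sketch under the stated assumptions; the function name and array layout are illustrative, not the authors' code):

```python
import numpy as np

def flow_amplitude(u, v, fluid_mask):
    """Mean motion magnitude over the fluid region (the A_fluid measure).

    u, v: estimated motion components, H x W arrays.
    fluid_mask: binary fluid-region mask from Step 2.
    """
    A = np.sqrt(u ** 2 + v ** 2)             # amplitude image A(x, y)
    return float(A[fluid_mask.astype(bool)].mean())
```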
By converting this prior knowledge into computer intelligence, we propose the following procedure. For each point P(x, y) in the fluid region whose motion vector is U = (u, v), do:

1) If P is in the duodenal fluid region [Region I in Fig. 3(c)], the positive reference vectors are defined as V1 = Vu − P and V2 = Vl − P.



Fig. 4. Illustration of the motion classification. The variables used in the motion classification are marked. The motion direction is classified according to the relationships of the intersection angles.

Fig. 5. Output of the motion classification. Red, blue, and yellow denote that the motion direction is negative, positive, and uncertain, respectively.

2) Else, P is in the gastric fluid region [Region II in Fig. 3(c)], and the positive reference vectors are defined as V1 = P − Vu and V2 = P − Vl.
3) Calculate the intersection angles between U and V1, V2, denoted θ1 and θ2, respectively.
4) If θ1 + θ2 ≤ 180° − Δ (Δ is a preset angle threshold that allows a certain angle deviation), the direction of U is classified as positive. If θ1 + θ2 ≥ 180° + Δ, the direction of U is classified as negative. Otherwise, the direction of U is uncertain.
5) If the direction is certain, then
   a) Search for the point Wu on the upper gastroduodenal wall that has the minimum distance to P; similarly, search for Wl.
   b) The possible wall motions are defined as V3 = Wu − P and V4 = Wl − P. Calculate θ3 and θ4, the intersection angles between U and V3, V4, respectively.
   c) If any θ in the set {θ3, 180° − θ3, θ4, 180° − θ4} satisfies θ ≤ Δ, the motion U is considered to have a large possibility of being influenced by the motion of the gastroduodenal walls, and its direction information should be ignored in the further DGR analysis. The motion direction is classified as uncertain.
   d) Save the motion direction result according to the position of P.

Fig. 4 shows the above variables. The motion direction of a point is determined by its spatial location and its estimated motion; thus, this motion direction judgment strategy is considered adaptive. Δ controls the number of uncertainties. Experimental results showed that the flow direction was not sensitive to the choice of Δ, because the system adopts the statistical strategy. It was observed that when Δ = 20°, the number of uncertainties was acceptable and the motion classification was well in accordance with visual observation. Thus, Δ is preset to 20° here.
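Steps 1)–4) above can be sketched as follows. This is a minimal sketch, not the authors' implementation: the wall-influence check of step 5) is omitted, and the function and variable names are illustrative.

```python
import numpy as np

def classify_direction(P, U, Vu, Vl, region, delta_deg=20.0):
    """Classify the motion direction of a fluid point (Step 4, sketch).

    P, Vu, Vl: (x, y) coordinates of the point and the pyloric centers.
    U: motion vector (u, v); region: 'duodenal' (Region I) or 'gastric'
    (Region II). Returns +1 (positive), -1 (negative) or 0 (uncertain).
    """
    def angle(a, b):
        # intersection angle between vectors a and b, in degrees
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na == 0 or nb == 0:
            return 90.0
        c = np.clip(np.dot(a, b) / (na * nb), -1.0, 1.0)
        return np.degrees(np.arccos(c))

    P, U, Vu, Vl = map(np.asarray, (P, U, Vu, Vl))
    if region == 'duodenal':   # Region I: references point toward the pyloric centers
        V1, V2 = Vu - P, Vl - P
    else:                      # Region II: references point away from the pyloric centers
        V1, V2 = P - Vu, P - Vl
    t1, t2 = angle(U, V1), angle(U, V2)
    if t1 + t2 <= 180.0 - delta_deg:
        return +1
    if t1 + t2 >= 180.0 + delta_deg:
        return -1
    return 0
```

A point between the two reference vectors yields a small angle sum (positive); motion opposite to both references yields a sum near 360° (negative); a sum near 180° is left uncertain.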

E. Step 5: Regional Statistics and Decision-Making

Fig. 5 gives a typical example output of the motion classification. It is observed that the inevitable fluid region extraction error and motion estimation error, as well as the imperfectness of the direction judging rules, result in disturbances in the pyloric flow direction judgment. Apart from these interferences, the fluid motion itself is complicated, and turbulence may occur in the fluid region; when a fluid turbulence happens, both positive and negative flows exist. Learning from visual recognition, a statistical strategy is adopted to capture the main motion direction and suppress the influence of the interferences. By calculating the directional histogram in a region, the main motion direction of the region is judged positive if points with the positive motion direction outnumber those with the negative motion direction, and vice versa. Considering that the gastric fluid region and the duodenal fluid region are not necessarily equal in area, to avoid results dominated by the larger region, the statistics are applied in the two regions separately. The two separate decisions are then combined to generate the flow direction F(t) in the current frame. If the two decisions agree, the flow direction is certain; we mark positive direction cases with 1 and negative direction cases with −1. If the two decisions conflict, the flow direction is uncertain, F(t) ← 0. This case usually happens when there is no distinct flow passing through the pylorus and the motions of the gastroduodenal walls dominate the fluid motion.

To better mimic visual recognition, a median filter is applied to the flow direction sequence. It can eliminate abrupt flow direction changes to some degree by incorporating context information. A filter size study was conducted using the real ultrasonic data; its result showed that the proposed system achieved the best performance when the filter size was seven.

F. Step 6: Further Processing

In some cases, the flow direction remains uncertain. This uncertainty brings difficulty to the frame-by-frame analysis for DGR detection. To eliminate the uncertain cases, a simple state machine is used. If the flow direction in the current frame is uncertain (F(t) = 0), it is supposed to be the same as that in the previous frame, Fcertain(t) ← F(t−1), and the motion state of



Fig. 6. Simulation of ultrasonic images: (a) the echogenicity model; (b) the point scatterer image; (c) the synthetic ultrasonic image.

the pyloric fluid keeps unchanged. Otherwise, the motion state is updated, i.e., Fcertain(t) ← F(t). In this way, we get the final flow direction sequence Fcertain, which can be used to automatically detect DGR. By searching Fcertain, any frame with a negative flow is considered to contain a refluxed flow, so it is determined that DGR occurs. The duration of DGR is defined as the maximum number of continuous frames with negative flow.

After processing an input sequence I(t), t ∈ [1, N], the proposed system outputs the final flow direction sequence Fcertain, as well as the number of complete DGR events (N_DGR) in the sequence and the mean duration of DGR (T_DGR). If no DGR exists (i.e., N_DGR = 0), T_DGR ← 0; T_DGR = 0 indicates that the analyzed sequence has no DGR detected. If DGR occurs, the subsequence in which reflux occurs is extracted for further analysis, together with the corresponding motion fields, which can help medical experts assess the severity of DGR. The extra measure A_fluid is also available, which provides an intuitive insight into the severity of DGR. Further study will address extracting reflux-volume-related measures from the estimated motion field to assess the severity of DGR.

III. EXPERIMENTS AND DISCUSSION

A. Performance in Synthetic Ultrasonic Data

In this section, the evaluation experiments were conducted using synthetic ultrasonic data. The use of synthetic data allows us to quantify the accuracy of the estimated motion field, which would not be possible with real data. The synthetic ultrasonic images were generated using the method in [25]. An echogenicity model was created as shown in Fig. 6(a). The point scatterer image T and the simulated ultrasonic image I0 are shown in Fig. 6(b) and (c), respectively. It was assumed that the imaging system had a linear and space-invariant point spread function (PSF). The simulated ultrasonic system had a transducer frequency of 4 MHz.
The PSF dimensions in the axial and lateral directions were 0.378 mm, and the sample size was 0.27 mm/pixel. To get the corresponding image under a certain motion field M(x, y), T was first warped according to M(x, y). The warped image was then used to generate the simulated ultrasonic image I1.

1) Evaluating the Processing in a Single Frame: The intermediate results of single-frame processing were tested in this section. The accuracy of motion estimation and classification was evaluated here. To evaluate the motion estimation

Fig. 7. Evaluating in a single frame (the first experiment): (a) the underlying motion field; (b) the angle error map; (c) the output of motion classification; (d) the motion vectors in the rectangle marked in (c).

quantitatively, the amplitude error (AE) and the angle error (θE) were calculated. Denoting a 3-D motion U = (u, v, 1), the definitions of AE and θE are given below:

    AE = ||Û − U|| / ||U|| × 100                          (7)

    θE = arccos( (U · Û) / (||U|| ||Û||) )                (8)

where U and Û denote the real and the estimated motion, respectively.

The first experiment adopted the motion field shown in Fig. 7(a). It mimicked the simplest situation, where the fluid moved smoothly from the stomach to the duodenum. In this experiment, AE = 0.9 ± 5.8 (%) and θE = 1.7 ± 6.9 (°), which are quite small. This is because the underlying motion field satisfies the assumptions of the motion estimation quite well. Fig. 7(b) shows the corresponding angle error map. The main errors lie along the edges, where the motion estimates in the tissue and fluid regions are prone to influence each other; the estimation of the motion angle is accurate for most of the spatial points. Fig. 7(c) is the output of the motion classification. In the statistical fluid region, the majority of motion directions were correctly classified. However, in some places in the fluid, there existed "uncertain" directions. Fig. 7(d) shows the motion vectors in the rectangle marked in Fig. 7(c). It is observed that the uncertainties in the middle of Fig. 7(d) are mainly caused by the inaccuracy of the motion estimation. Though in this case the inaccurate estimates were judged uncertain and excluded from determining the flow direction, this simple strategy cannot always perform effectively, which may bring disturbance into the decision making.
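The error metrics (7) and (8) can be computed as in the following sketch (our reading of the formulas; the array shapes and function name are assumptions):

```python
import numpy as np

def motion_errors(U_true, U_est):
    """Per-pixel amplitude error AE (%) and angle error thetaE (degrees)
    of (7)-(8), computed on 3-D motions (u, v, 1).

    U_true, U_est: arrays of shape (H, W, 2) holding (u, v) per pixel.
    Returns two (H, W) arrays.
    """
    ones = np.ones(U_true.shape[:2] + (1,))
    Ut = np.concatenate([U_true, ones], axis=-1)   # (u, v, 1)
    Ue = np.concatenate([U_est, ones], axis=-1)
    nt = np.linalg.norm(Ut, axis=-1)
    ne = np.linalg.norm(Ue, axis=-1)
    ae = np.linalg.norm(Ue - Ut, axis=-1) / nt * 100.0
    cos = np.clip(np.sum(Ut * Ue, axis=-1) / (nt * ne), -1.0, 1.0)
    te = np.degrees(np.arccos(cos))
    return ae, te
```

Reporting AE and θE as mean ± standard deviation over all pixels reproduces the form of the figures quoted in the experiments.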



Fig. 8. Evaluating in a single frame (the second experiment): (a) the underlying motion field; (b) the angle error map; (c) the output of motion classification; (d) the motion vectors in the rectangle marked in (c).

The second experiment mimicked a more complex situation. As shown in Fig. 8(a), a vortex exists in the motion field. In this experiment, AE = 1.4 ± 8.3 (%) and θE = 3.4 ± 10.8 (°). The accuracy of the motion estimation decreases when the fluid motion is complex. Obvious angle errors lie where the motion smoothness assumption is violated, as shown in Fig. 8(b). Because of the vortex, the motions of some points contradict the global fluid motion. In Fig. 8(d), which shows the motion field in the rectangle marked in Fig. 8(c), it is observed that most of the inconsistent motion directions are correctly detected. These conflicts cannot be eliminated by improving the motion estimation. The current motion analysis strategy is effective in obtaining the global fluid motion direction by applying statistics and incorporating context information. It is believed that considering these interferential cases in the motion classification can further improve the robustness of the motion analysis.

2) Evaluating Performance in a Synthetic Ultrasonic Sequence: As the true fluid motion was not available, we followed the design idea of Ledesma-Carbayo et al. [26] and propose a simplified motion model:

    M(x, y, t) = h(x, y) · g(t)                          (9)

where h(x, y) is the spatial term and g(t) is the temporal term. We used the simple spatial term of Fig. 7(a): the amplitudes of the fluid motion and the tissue motion are one and zero, respectively. The temporal term used in the experiment is shown in Fig. 9(a) and satisfies g(t) = 1.2 sin(2πt/25), t ∈ [1, 36]; reflux occurs and lasts for a certain time in the middle of the generated sequence. The error metrics over the whole sequence are AE = 1.5 ± 2.6 (%) and θE = 2.3 ± 4.5 (°). Fig. 9(b) is the amplitude error image averaged over the sequence; the error distribution is consistent with the previous observations in a single frame. Fcertain is shown in Fig. 9(c) together with the gold standard, and it matches the gold standard perfectly.
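As a sketch, the synthetic sequence of (9) can be generated as follows; the frame size and the rectangular fluid mask standing in for the spatial term of Fig. 7(a) are illustrative assumptions.

```python
import numpy as np

# Separable motion model of (9): M(x, y, t) = h(x, y) * g(t).
H, W, T = 64, 64, 36
h = np.zeros((H, W))
h[24:40, :] = 1.0                      # assumed fluid channel; tissue stays 0

t = np.arange(1, T + 1)
g = 1.2 * np.sin(2 * np.pi * t / 25)   # temporal term from the experiment

# M[k] is the motion amplitude map of frame t = k + 1; the sign of g(t)
# encodes the flow direction, so the negative half-cycle mimics the reflux
# in the middle of the sequence.
M = h[None, :, :] * g[:, None, None]
```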

Fig. 9. Evaluation in the synthetic ultrasonic sequence: (a) the temporal term g(t) of the fluid motion model; (b) the averaged amplitude error image; (c) the comparison between Fcertain (blue circle) and Mcertain (red cross); (d) the comparison between Afluid(t) (blue circle) and g(t) (red cross).
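The comparison between Afluid(t) and g(t) in Fig. 9(d) amounts to a Pearson correlation; a minimal sketch with a mock measured curve follows, where the scale and noise level of the mock measurement are assumptions.

```python
import numpy as np

t = np.arange(1, 37)
g = 1.2 * np.sin(2 * np.pi * t / 25)   # known temporal term of the model

# Stand-in for the measured mean fluid-motion amplitude per frame.
rng = np.random.default_rng(0)
a_fluid = 0.95 * g + rng.normal(0.0, 0.02, g.size)

r = np.corrcoef(a_fluid, g)[0, 1]      # Pearson correlation coefficient
```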

These results demonstrate that the proposed system can successfully detect the fluid motion and classify its direction, and is thus competent to detect DGR. Fig. 9(d) compares Afluid(t) with g(t). Afluid(t) is linearly correlated with g(t), with a correlation coefficient of 0.99, which indicates that in this case Afluid(t) reflects the amplitude changes of the fluid motion well and may contribute to a further estimation of the reflux volume.

B. Performance in Real Data

The real ultrasonic dataset consisted of ten ultrasonic sequences from Xinhua Hospital, affiliated with Shanghai Jiaotong University School of Medicine. The clinical gastrointestinal examination used a Siemens SEQUOIA 512 with the ultrasonic probe operated at 4.0 MHz at a frame rate of 27 frames per second; the pixel size was 0.27 mm/pixel. The flow directions from manual frame-by-frame checking were used as the reference. Note that in some complex cases, even the manual checking cannot tell the exact flow direction; as a result, "uncertain" decisions also appear in the manual results. We input the results of the manual checking to the state machine and used the output as the gold standard, Mcertain. Table I lists the comparisons between the manual and automatic results in DGR detection. In eight cases, the results of the automatic analysis and the manual checking are almost identical. However, in sequences IV and X, the automatic analysis detected fake DGRs. It is more intuitive to compare Fcertain and Mcertain directly. Fig. 10 shows four typical cases: (a) and (c) represent good DGR detections, in which the automatic system accurately detects the frames where reflux exists and produces no fake detection in the positive-flow frames. Some problems occur in


TABLE I
COMPARISON IN DGR DETECTION

It is observed that the fluid always stops flowing through the pylorus for a while before the flow direction changes, and the fluid motion in this short pause is hard to describe. Together with the application of the state machine, these uncertainties contribute to the obvious mismatch between Fcertain and Mcertain in the beginning and ending periods of DGR, as shown in Fig. 10(b). Another kind of obvious mismatch is shown in Fig. 10(d): the automatic analysis failed to make the correct flow-direction decisions in some frames, so a fake DGR was detected. It is observed that the fake DGRs are always caused by the influence of tissue motion. Since many fake DGRs are short, whereas the pathologic DGR targeted by medical experts lasts long, one way to eliminate fake detections is to apply a threshold to the DGR durations. Currently, only the direction information is used in the classification of the motion direction; extra information such as the motion amplitude and the motion of nearby tissues should be considered to further improve the automatic analysis.

The main computational load of the proposed system lies in Step 2. For a pyloric region with n pixels, the computational complexity is O(n^3). The automatic analysis was implemented in Matlab and executed on a 1.79-GHz Intel Core 2 Duo CPU; it takes 35 s on average to process a frame. Although the current system hardly satisfies real-time demands, the automatic process can be accelerated by, for example, implementing it in C or porting it to a GPU. The study of accurate but faster algorithms is ongoing as well.

The experimental results are quite promising and demonstrate the satisfactory ability of the proposed system in the DGR detection. However, there are always many uncertain decisions in F, as many of which as possible should be eliminated to increase the robustness of the automatic motion analysis. The detection efficiency and accuracy of the current system are not yet sufficiently high: in some sequences, the proposed system detects fake DGRs.
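The duration-threshold filter suggested above can be sketched as follows; the label encoding and the 10-frame minimum are illustrative assumptions, not values from the paper.

```python
from itertools import groupby

def detect_dgr(labels, min_frames=10):
    """Return (start, end) frame ranges of reflux episodes that last long
    enough. labels holds one per-frame flow decision: +1 for antegrade flow,
    -1 for reflux, 0 for uncertain. Short reflux runs are treated as fake
    detections and dropped."""
    episodes, start = [], 0
    for value, run in groupby(labels):
        n = len(list(run))
        if value == -1 and n >= min_frames:
            episodes.append((start, start + n - 1))  # inclusive frame indices
        start += n
    return episodes

# A 3-frame reflux run is discarded as fake; a 12-frame run is kept.
labels = [1] * 5 + [-1] * 3 + [1] * 2 + [-1] * 12 + [1] * 4
print(detect_dgr(labels))  # -> [(10, 21)]
```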
To avoid fake DGR detections, the system should achieve zero false positives, i.e., a specificity of 1. Improvements can be made by (a) increasing the motion estimation accuracy and (b) refining the motion analysis strategy.
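The accuracy, sensitivity, and specificity used to evaluate the frame-level detection follow the standard confusion-matrix definitions; a minimal sketch follows, where the TP/TN split is inferred from the reported rates and the 1010-frame total rather than quoted directly from the paper.

```python
def detection_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity as defined in the text."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Counts consistent with accuracy 0.959, sensitivity 0.938, and
# specificity 0.966 on 1010 frames (the TP/TN split is inferred).
acc, sens, spec = detection_metrics(tp=212, tn=757, fp=27, fn=14)
print(round(acc, 3), round(sens, 3), round(spec, 3))  # -> 0.959 0.938 0.966
```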

Fig. 10. Evaluation in real ultrasonic sequences: (a)–(d) four representative comparison results between Fcertain (blue cross) and Mcertain (red circle).

(b) and (d). According to the gold standard, we can obtain the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) in the detection of negative-flow frames. Out of 1010 cases, 969 frames were correctly classified (TP + TN), while FP = 27 and FN = 14. The accuracy, defined as (TP + TN)/(TP + TN + FP + FN), is 0.959; the sensitivity, defined as TP/(TP + FN), is 0.938; and the specificity, defined as TN/(TN + FP), is 0.966. From these results we can conclude that the automatic analysis coheres well with the manual checking, with an accuracy of 0.959: in the majority of frames, the final flow-direction decisions made by the automatic analysis and the manual checking agree. Comparing Fcertain and Mcertain, the main difference lies in the beginning and ending periods of DGR.

IV. CONCLUSION

We present an automatic analysis system that contributes to realizing computer-aided DGR analysis. The proposed system can detect the occurrence of DGR and measure its duration, and its results accord well with the manual checking. It is a promising tool for analyzing DGR, as it provides a more objective and intuitive quantitative analysis. It also benefits the study of DGR; for example, analyzing the relationship between DGR occurrence and pyloric motion helps to reveal when DGR is prone to occur.

REFERENCES

[1] M. Fein, J. Maroske, and K. H. Fuchs, "Importance of duodenogastric reflux in gastro-oesophageal reflux disease," Brit. J. Surg., vol. 93, no. 12, pp. 1475–1482, Dec. 2006.
[2] J. Fujimura, K. Haruma, J. Hata, H. Yamanaka, K. Sumii, and G. Kajiyama, "Quantitation of duodenogastric reflux and antral motility by color Doppler ultrasonography: study in healthy volunteers and patients with gastric ulcer," Scandinavian J. Gastroenterol., vol. 29, no. 10, pp. 897–902, Oct. 1994.


[3] P. M. King, R. D. Adam, A. Pryde, W. N. McDicken, and R. C. Heading, "Relationships of human antroduodenal motility and transpyloric fluid movement: non-invasive observations with real-time ultrasound," Gut, vol. 25, pp. 1384–1391, 1984.
[4] J. Zhang and J. Hu, "Image segmentation based on 2D Otsu method with histogram analysis," Int. Conf. Comput. Sci. Softw. Eng., vol. 6, pp. 105–108, Dec. 2008.
[5] S. Wang, F. Chung, and F. Xiong, "A novel image thresholding method based on Parzen window estimate," Pattern Recog., vol. 41, no. 1, pp. 117–129, Jan. 2008.
[6] R. Du and H. J. Lee, "An improved region growing method for scalp and skull extraction based on mathematical morphology operations," in Proc. IEEE Int. Congr. Image Signal Process., Oct. 2011, vol. 3, pp. 1201–1204.
[7] J. Mohanalin, P. K. Kalra, and N. Kumar, "A simple region growing technique for extraction of ROI from mammograms," IUP J. Syst. Manag., vol. VIII, no. 4, pp. 56–62, Nov. 2010.
[8] A. Belaid, D. Boukerroui, Y. Maingourd, and J. F. Lerallut, "Implicit active contours for ultrasound images segmentation driven by phase information and local maximum likelihood," in Proc. IEEE Int. Symp. Biomed. Imag. (From Nano to Macro), 2011, pp. 630–635.
[9] M. Rousson and N. Paragios, "Prior knowledge, level set representations & visual grouping," Int. J. Comput. Vis., vol. 76, no. 3, pp. 231–243, 2008.
[10] K. U. Juergens, H. Seifarth, F. Range, S. Wienbeck, M. Wenker, W. Heindel, and R. Fischbach, "Automated threshold-based 3D segmentation versus short-axis planimetry for assessment of global left ventricular function with dual-source MDCT," Amer. J. Roentgenol., vol. 199, no. 2, pp. 308–314, Aug. 2012.
[11] Y. Lu, P. Radau, K. Connelly, A. Dick, and G. Wright, "Segmentation of left ventricle in cardiac cine MRI: an automatic image-driven method," Functional Imag. Modeling Heart, vol. 5528, pp. 339–347, 2009.
[12] F. Molinari, G. Zeng, and J. S. Suri, "A state of the art review on intima–media thickness (IMT) measurement and wall segmentation techniques for carotid ultrasound," Comput. Methods Programs Biomed., vol. 100, no. 3, pp. 201–221, Dec. 2010.
[13] D. Vukadinovic, R. Manniesing, S. Rozie, R. Hameeteman, T. T. de Weert, A. van der Lugt, and W. J. Niessen, "Segmentation of the outer vessel wall of the common carotid artery in CTA," IEEE Trans. Med. Imag., vol. 29, no. 1, pp. 65–76, Jan. 2010.
[14] K. Riha and I. Potucek, "The sequential detection of artery sectional area using optical flow technique," in Proc. 8th WSEAS Int. Conf. Circuits, Systems, Electronics, Control Signal Process., 2009, pp. 222–226.
[15] A. R. Mansouri, "Region tracking via level set PDEs without motion computation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 947–961, Jul. 2002.
[16] C. Chen, Y. Wang, J. Yu, Z. Zhou, L. Shen, and Y. Chen, "Tracking pylorus in ultrasonic image sequences with edge-based optical flow," IEEE Trans. Med. Imag., vol. 31, no. 3, pp. 843–855, Mar. 2012.
[17] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artif. Intell., vol. 17, no. 1–3, pp. 185–203, Aug. 1981.
[18] M. Black and P. Anandan, "The robust estimation of multiple motions: parametric and piecewise-smooth flow fields," Comput. Vis. Image Understanding, vol. 63, no. 1, pp. 75–104, Jan. 1996.
[19] D. Sun, S. Roth, and M. J. Black, "Secrets of optical flow estimation and their principles," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 2010, pp. 2432–2439.
[20] S. Uras, F. Girosi, A. Verri, and V. Torre, "A computational approach to motion perception," Biol. Cybern., vol. 60, no. 2, pp. 79–87, 1988.
[21] J. Molnár, D. Chetverikov, and S. Fazekas, "Illumination-robust variational optical flow using cross-correlation," Comput. Vis. Image Understanding, vol. 114, no. 10, pp. 1104–1114, Oct. 2010.
[22] C. Chen, Y. Wang, J. Yu, Z. Zhou, L. Shen, and Y. Chen, "Weighted correlation based variational optical flow for flow detection in ultrasonic video," Med. Phys., vol. 40, no. 5, pp. 052901-1–052901-12, Apr. 2013.
[23] K. K. L. Wong, R. M. Kelso, S. G. Worthley, P. Sanders, J. N. Mazumdar, and D. Abbott, "Noninvasive cardiac flow assessment using high speed magnetic resonance fluid motion tracking," PLoS ONE, vol. 4, no. 5, p. e5688, May 2009.
[24] R. Torii, N. B. Wood, N. Hadjiloizou, A. W. Dowsey, A. R. Wright, A. D. Hughes, J. Davies, D. P. Francis, J. Mayet, G. Yang, S. A. M. Thom, and X. Y. Xu, "Fluid-structure interaction analysis of a patient-specific right coronary artery with physiological velocity and pressure," Commun. Numerical Method Eng., vol. 25, no. 5, pp. 565–580, Feb. 2009.
[25] Y. Yu and S. T. Acton, "Speckle reducing anisotropic diffusion," IEEE Trans. Image Process., vol. 11, no. 11, pp. 1260–1270, Nov. 2002.

[26] M. J. Ledesma-Carbayo, J. Kybic, A. Santos, M. Sühling, P. Hunziker, and M. Unser, "Spatio-temporal nonrigid registration for ultrasound cardiac motion estimation," IEEE Trans. Med. Imag., vol. 24, no. 9, pp. 1113–1125, Sep. 2005.

Chaojie Chen received the B.S. degree from Zhejiang University, Hangzhou, China, in 2010, and the M.S. degree from Fudan University, Shanghai, China, in 2013. Her research interests include medical signal processing.

Yuanyuan Wang (SM'03) received the B.Sc., M.Sc., and Ph.D. degrees in electronic engineering from Fudan University, Shanghai, China, in 1990, 1992, and 1994, respectively. From 1994 to 1996, he was a Postdoctoral Research Fellow with the School of Electronic Engineering and Computer Science, University of Wales, Bangor, U.K. In May 1996, he joined the Department of Electronic Engineering, Fudan University, as an Associate Professor, and was promoted to Full Professor in May 1998. He is currently the Director of the Biomedical Engineering Center and Vice Dean of the School of Information Science and Engineering, Fudan University. He is the author or coauthor of 6 books and 400 research papers. His research interests include medical ultrasound techniques and medical signal processing.

Jinhua Yu received the Ph.D. degree from Fudan University, Shanghai, China, in 2008. She was a Postdoctoral Fellow at the University of Missouri, Columbia, from 2008 to 2010. She is currently an Associate Researcher at Fudan University. Her current research interests include medical image analysis and ultrasound imaging.

Zhuyu Zhou received the Master's degree from Shanghai Jiaotong University School of Medicine, Shanghai, China. She is an ultrasonographer in the Department of Ultrasonography, Chongming Branch of Xinhua Hospital, affiliated with Shanghai Jiaotong University School of Medicine. Her interests include ultrasonographic diagnosis of the gastrointestinal tract.

Li Shen graduated from the Medical School of Chongming, Shanghai, China, in 1976. Since 1987, he has been the Director of the Department of Ultrasound, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Chongming Branch, Shanghai, where he is involved in the interventional diagnosis and treatment of liver cancer. He became a Visiting Scholar of Peking University School of Oncology in 2002. His current research interests include gastrointestinal ultrasound and contrast-enhanced ultrasonography.

Ya-Qing Chen received the Medical degree in medical imaging from Shanghai Jiaotong University School of Medicine, Shanghai, China, in 2005. She is currently the Chief Physician of the Department of Ultrasound in Medicine, Xinhua Hospital, Shanghai Jiaotong University School of Medicine. Her research interests include ultrasound diagnosis of tumors.
