Physics in Medicine & Biology


Institute of Physics and Engineering in Medicine
Phys. Med. Biol. 59 (2014) 5545–5558
doi:10.1088/0031-9155/59/18/5545

A navigation system for flexible endoscopes using abdominal 3D ultrasound

R Hoffmann, M Kaar, Amon Bathia, Amar Bathia, A Lampret, W Birkfellner, J Hummel and M Figl

Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, 1090 Wien, Austria

E-mail: [email protected]

Received 8 May 2014, revised 17 July 2014
Accepted for publication 28 July 2014
Published 29 August 2014

Abstract

A navigation system for flexible endoscopes equipped with ultrasound (US) scan heads is presented. In contrast to similar systems, abdominal 3D-US is used for image fusion of the pre-interventional computed tomography (CT) to the endoscopic US. A 3D-US scan, tracked with an optical tracking system (OTS), is taken pre-operatively together with the CT scan. The CT is calibrated using the OTS, providing the transformation from CT to 3D-US. Immediately before the intervention, a 3D-US tracked with an electromagnetic tracking system (EMTS) is acquired and registered intra-modally to the preoperative 3D-US. The endoscopic US is calibrated using the EMTS and registered to the preoperative CT by an intra-modal 3D-US/3D-US registration. Phantom studies showed a registration error for the US to CT registration of 5.1 mm ± 2.8 mm. 3D-US/3D-US registration of patient data gave an error of 4.1 mm, compared to 2.8 mm with the phantom. From this we estimate an error of 5.6 mm for patient experiments.

Keywords: image guided surgery, endoscopic intervention, image registration

(Some figures may appear in colour only in the online journal)

1. Introduction

Advances in imaging technology and instrumentation allow minimally-invasive surgery to be performed through very small incisions. Gastrointestinal endoscopy has evolved from a diagnostic imaging modality to one that is well-positioned for minimally-invasive diagnostic


and therapeutic procedures, including the administration of compounds for the treatment or palliation of gastrointestinal tumours and fine-needle aspiration of pancreatic masses. By biopsying the lesion through a segment of the intestinal wall, the risk of needle-track seeding is minimized. Navigation systems for endoscopes mostly register the available endoscopic video with preoperative CT images (Gergel et al 2011, Brouwer et al 2012). An interesting approach was presented by Kukuk (2003), who used a real-time deformable model for flexible instruments inserted into tubular structures. Another solution is a three-dimensional endoscopic ultrasound system with a convex scanning echo endoscope used to diagnose and to navigate endoscopic punctures (Koizumi et al 2003); there, a fibre-optic tracking system was used to detect the position of the probe and to monitor the shape of the scope inside the body. A further promising approach consists of image registration of the video image using a preoperative virtual endoscopy system (Mori et al 2001); unfortunately, it appears to be too unstable for use at an operative site. Yet another approach uses integrated ultrasound scan heads (Hummel et al 2008). These US images are usually used to look beyond the boundaries of body cavities to enable endoscopic needle biopsies. Because of the poor image quality of the miniaturized US scan heads, image fusion with pre-operative CT volume data sets is desirable. The challenge of such navigation systems is the registration between the US image and the preoperative CT images (Sun et al 2007, Wein et al 2008, Nam et al 2012). 3D-US guidance systems have recently found a broad field of applications (Sato and Nakamura 2012).
Clinical implementations range from needle placement (Bluvol et al 2008), 3D ultrasound guidance of surgical robotics (Long et al 2012) and laparoscopic ultrasonography (Lirici et al 1994) to breast biopsy systems (Nelson et al 2012). An additional application of 3D-US is the possibility of calculating an intra-subject elastic registration between two US images taken at different therapeutic stages (Khallaghi et al 2012). This method allows for correction of the tissue deformation of the relevant patient regions which can be observed between the CT imaging and the intervention. Our method applies two additional abdominal 3D-US images, one at the CT site and the other immediately before the intervention in the intervention room. This allows us to replace a direct 2D/3D registration from the endoscopic US to the pre-operative CT by an intra-modal 3D-US/3D-US registration and tracker calibrations. The applied US/US registration therefore establishes the 'link' between the intervention room and the CT room. As the first US image data set has to be taken before or after the CT image acquisition, we refer to this US image as the 'preoperative' 3D-US image. To link this preoperative US image with the CT image, a CT calibration procedure is necessary, which can be accomplished using a simple sphere phantom. Once the CT is calibrated, the preoperative US image can immediately be fused to the preoperative CT by means of the optical tracking system (OTS). Given the transformation from the optical tracking coordinate system to the CT coordinate system, we are able to determine the complete transformation from the preoperative 3D-US image to the preoperative CT image data set without image registration. Before the intervention takes place, another 3D-US image, the 'interventional' US image, has to be acquired. In this case, the 3D-US scan head is calibrated and tracked using an EMTS.
A 3D/3D registration between the preoperative and the interventional 3D-US image data sets then allows us to transform from the CT image space to the EMTS coordinate system. The last registration step consists of the calibration of the endoscopic US image plane with the EMTS. Finally, we can transform points from the CT to the 2D endoscopic US and vice versa in real time. Figure 1 shows the setup described above.


Figure 1.  The full transformation chain to transform from the US coordinate system to the coordinate system of the CT. The transformation of at least three US points defines the required plane in the CT volume. Here, the registration between the 3D-US images and the 3D CT images is accomplished via the tracking system and an intra-modal 3D-US registration.

2.  Materials and methods

The materials and methods section is divided into seven parts: first, we describe the transformation chain from a point in the endoscopic US space into the pre-operative CT space. In the second part the technical equipment is presented. The third part contains the necessary ultrasound calibrations. In parts four and five, the centre pieces of this work are described, namely the CT-US registration and the intra-modal 3D-US/3D-US registration. Part six presents an overview of the clinical workflow, where we show how the system would be integrated into the clinical routine. In the last subsection, we show how the error analysis of this complex system can be performed.

2.1.  Transformation chain

During the intervention, one sensor of the EMTS, called s1EMTS, is mounted on the endoscopic US system; another EMTS sensor, called s2EMTS, is mounted on the 3D-US system's scanhead. To display the reformatted oblique slice from the CT volume which corresponds to the endoscopic US image plane, one has to transform at least three points from the US image plane to the coordinate system of the CT volume data set. Mathematically, any point PUSendo in the endoscopic US image can be transformed to a point PCT in the CT space as described by Hummel et al (2008):

PCT = TUSint→CT × Ts2EMTS→USint × Ts1EMTS→s2EMTS × TUSendo→s1EMTS × PUSendo    (1)

or

PCT = TUSendo→CT × PUSendo    (2)

TUSendo→s1EMTS represents the transformation from the pixels of the US image plane to the sensor space of the EMTS and is provided by a freehand 2D-US calibration. Ts1EMTS→s2EMTS is provided by the EMTS and establishes the relation between the two sensor coordinate systems. Ts2EMTS→USint is the transformation from the EMTS sensor on the 3D-US scanhead to the 3D-US image and is given by a 3D-US calibration.
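The chain in equation (1) is simply a product of 4 × 4 homogeneous matrices applied right to left. The following sketch illustrates this composition; the transforms below are hypothetical, purely translational placeholders, not the calibrated values from this work:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder transforms standing in for the calibrated / tracked matrices.
T_USendo_to_s1 = make_transform(np.eye(3), [1.0, 0.0, 0.0])  # 2D-US calibration
T_s1_to_s2     = make_transform(np.eye(3), [0.0, 2.0, 0.0])  # EMTS tracking
T_s2_to_USint  = make_transform(np.eye(3), [0.0, 0.0, 3.0])  # 3D-US calibration
T_USint_to_CT  = make_transform(np.eye(3), [4.0, 0.0, 0.0])  # US/US reg. + CT calib.

# Equation (1): chain the transforms right to left.
T_USendo_to_CT = T_USint_to_CT @ T_s2_to_USint @ T_s1_to_s2 @ T_USendo_to_s1

p_us = np.array([0.0, 0.0, 0.0, 1.0])  # a point in the endoscopic US plane
p_ct = T_USendo_to_CT @ p_us
# with the pure translations above, p_ct[:3] is [5., 2., 3.]
```

Applying this product to (at least) three image-plane points yields the oblique CT slice corresponding to the endoscopic US plane.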


Figure 2.  The phantom used for the experimental measurements. The plastic spheres (marked by 'S') are used for the calculation of the target registration error (TRE). The flexible endoscope (marked with 'E'), tracked with an electromagnetic probe (marked with 'P'), is inserted into the plastic tube. To ensure good coupling, a water-filled balloon (marked with 'B') is put over the tip of the endoscope.

TUSint→CT finally represents the transformation from the 3D-US space to the CT coordinate system, which is determined in the following two steps. First, the link between the intervention room and the CT room is established by the 3D-US/3D-US registration of the 3D-US image USint, taken immediately before the endoscopic intervention, to the preoperative 3D-US image USpre, taken in the CT room (TUSint→USpre). The transformation TUSint→CT can then be written as

TUSint→CT = TUSpre→CT × TUSint→USpre    (3)

The remaining transformation TUSpre→CT is given by the CT calibration and a 3D-US calibration. A sketch of the whole transformation chain is given in figure 1.

2.2.  Technical equipment

For electromagnetic tracking, we used the Aurora (NDI, Waterloo, Canada) EMTS, which was applied for the flexible instrument as well as for the US probe at the intervention site. The optical tracking system (OTS) Polaris (NDI, Waterloo, Canada) was used for calibration and tracking of the 3D-US probe in the CT room and for the calibration of the CT. 3D-US images were acquired with a GE Voluson E8 (General Electric, Fairfield, USA) equipped with a GE RAB6-D 4D scanhead. The flexible endoscope was a GF-UCT140-AL5 (Olympus, Tokyo, Japan), which has an electronic curved linear array scanhead with a 180° scanning range. The CT volume data sets were acquired on a Somatom Sensation Open (Siemens, Erlangen, Germany) with a reformatted slice thickness of 1 mm. For evaluation we used a phantom (see figure 2) built by SHELLEY Medical Imaging Technologies (London, Canada). It consists of a central tube and several bifurcations made of soft silicone placed inside a box which can be filled with water both inside and outside of the tubes. Additionally, small plastic spheres were fixed rigidly on the tube and the bottom of the container, which enable the determination of target registration errors (TRE). For the experiments, the phantom was filled with a 7% ethanol-water mixture to ensure the correct speed of sound as expected by the ultrasound device (Duck 1990, NIST 2014). Then the 3D abdominal US can be applied for registration. After this procedure, the tracked flexible endoscope can be inserted into the tube and the TRE can be evaluated.


Figure 3.  The left image shows the phantom and the ultrasound scan head mounted on a mechanical arm. The right image shows the phantom with some geometric figures arranged.

2.3.  US calibrations

2.3.1.  2D endoscopic US calibration.  For calibration of the 2D endoscopic US, we applied a method from Hummel et al (2008), who proposed the use of a wire phantom and a 2D/3D registration of a segmented 2D-US scan to a volume of the wire phantom. This procedure yields the transformation TUSendo→s1EMTS in equation (1).

2.3.2.  3D-US calibration.  The phantom applied for 3D-US calibration consisted of a 5 mm thick base plate with pyramids and cones arranged on it, see figure 3. A sensor of the tracking system (OTS or EMTS) was mounted on the scan head and the phantom was scanned from ten different directions. As shown in figure 3 on the left, the scan head was fixed using a mechanical arm to avoid errors caused by motion. The US images were then registered to each other applying the 3D-US/3D-US registration from section 2.5. Figure 5 shows a checkerboard image before (left) and after (right) successful registration. The resulting equations are of the form

TUSi→USj = Ts→US × TOTS→sPosj × TsPosi→OTS × TUS→s    (4)

Equation (4) can be deduced by following the arrows of figure 4. TUSi→USj represents the transformation from US image i to US image j and is given by the 3D-US/3D-US registration. Ts→US, with TUS→s = Ts→US^−1, is the calibration matrix we are looking for, while TsPosi→OTS and TOTS→sPosj are given by the tracking system and denote the transformations between the sensor at positions Posi and Posj and the OTS camera. Multiplying equation (4) with TUSi→USj^−1 gives

Serror = TUSi→USj^−1 × Ts→US × TOTS→sPosj × TsPosi→OTS × TUS→s    (5)

where Serror would be the unit matrix if there were no inaccuracies from the tracker data and the US/US registration and the correct calibration matrix were known. To find the optimal solution for Ts→US, we minimized the Frobenius norm of the difference between Serror and the unit matrix by means of MATLAB routines (The MathWorks Inc., MA, USA).
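This minimization can be sketched in a few lines. In the sketch below, scipy's `least_squares` stands in for the MATLAB routines, and everything is synthetic: the ground-truth calibration, the pose ranges and the starting guess are invented for the demonstration only.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def params_to_matrix(p):
    """6 parameters (tx, ty, tz, rx, ry, rz in rad) -> 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler('xyz', p[3:]).as_matrix()
    T[:3, 3] = p[:3]
    return T

rng = np.random.default_rng(42)
T_true = params_to_matrix([10.0, -5.0, 20.0, 0.1, 0.2, -0.3])  # unknown Ts->US

# Simulated tracker poses, with the corresponding US_i -> US_j registration
# results generated from equation (4).
pairs = []
for _ in range(5):
    T_si_ots = params_to_matrix(rng.uniform(-1, 1, 6) * [50, 50, 50, .5, .5, .5])
    T_ots_sj = params_to_matrix(rng.uniform(-1, 1, 6) * [50, 50, 50, .5, .5, .5])
    T_ij = T_true @ T_ots_sj @ T_si_ots @ np.linalg.inv(T_true)
    pairs.append((T_ij, T_ots_sj, T_si_ots))

def residuals(p):
    """Stacked entries of S_error - I (equation (5)) over all pose pairs."""
    C = params_to_matrix(p)
    res = []
    for T_ij, T_ots_sj, T_si_ots in pairs:
        S = np.linalg.inv(T_ij) @ C @ T_ots_sj @ T_si_ots @ np.linalg.inv(C)
        res.append((S - np.eye(4)).ravel())
    return np.concatenate(res)

# Start from a rough initial guess of the calibration; the least-squares fit
# should recover T_true for this noise-free synthetic data.
fit = least_squares(residuals, x0=[8.0, -4.0, 18.0, 0.0, 0.0, 0.0])
```

Minimizing the stacked entries of Serror − I in the least-squares sense is equivalent to minimizing the summed squared Frobenius norms over all pose pairs.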


Figure 5.  Typical registration result used for the calibration process with the phantom. The left image shows a slice of the unregistered volumes in a checkerboard grid (fixed and moving image alternating); the right image shows the same slice after registration.
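Checkerboard displays like this one interleave square tiles of the fixed and moving images, so that edge continuity across tile borders reveals registration quality. A minimal sketch for 2D slices (the function name and default tile size are our own choices):

```python
import numpy as np

def checkerboard(fixed, moving, tile=32):
    """Alternate square tiles taken from two equally sized 2D slices."""
    assert fixed.shape == moving.shape
    ys, xs = np.indices(fixed.shape)
    mask = ((ys // tile) + (xs // tile)) % 2 == 0
    return np.where(mask, fixed, moving)

# Tiny demonstration: 2x2 tiles alternate between the two inputs.
demo = checkerboard(np.full((4, 4), 255.0), np.zeros((4, 4)), tile=2)
```

After a successful registration, anatomical edges should run smoothly across the tile boundaries.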

This procedure was done with the sensors of both tracking systems and resulted in the transformations Ts2EMTS→USint in equation (1) and TUSpre→sOTS in equation (6).

2.4.  Preoperative 3D-US to 3D-CT registration

Registration of the preoperative CT and 3D-US volumes was done indirectly, using the OTS. We calibrated the CT relative to an optical reference frame (ORF), see the left image in figure 6. For this purpose, we acquired a 3D-CT scan of our SHELLEY phantom and of a simple box phantom with five small holes drilled into the top of its frame. The coordinates of the holes of the box phantom can easily be determined with respect to the ORF using a calibrated stylus. Since these holes can also be localized in the CT volume data set, a point-to-point registration can be applied to determine the transformation from the CT image space to the ORF, TCT→ORF. Then the SHELLEY phantom was scanned with the optically tracked 3D-US device (= preoperative US image) and the corresponding tracker data TORF→sOTS were stored (see the right image in figure 6). The full transformation from the 3D-USpre to the CT can therefore be calculated as

TUSpre→CT = TORF→CT × TsOTS→ORF × TUSpre→sOTS    (6)

where TUSpre→sOTS is given by the above-mentioned 3D-US calibration. Note that TCT→USpre is fixed and represents a transformation between the two image modalities. To use the CT volume information in the intervention room, an additional US image has to be taken before the intervention starts (= interventional US image).

2.5.  Registration of pre-operative US to interventional US

The Insight Segmentation and Registration Toolkit (ITK) (Schroeder et al 2005) was used for the registration task. We applied ITK's multi-resolution registration with four levels and 35 steps each to speed up registration and increase robustness. According to Kaar et al (2013), the mutual information metric defined in Mattes et al (2001) proved to be the most accurate


Figure 4.  Concept of 3D-US calibration with the figurative phantom.

Figure 6.  CT calibration is done using an optical reference frame (ORF) attached to the CT and a box with several drilled holes, visible in the CT and measured by a stylus. In a second step, the SHELLEY phantom was scanned with both the CT and a calibrated 3D-US scanhead. In combination, this allowed us to transform from 3D-US to CT.

and reliable one for 3D-US registration. Before the registration process was started, we calculated Laplacian-of-Gaussian (LoG) and gradient images. These were merged with the original grey-level images into so-called importance images (Foroughi et al 2006) by summing up weighted voxel values. The weights for each component image were chosen with respect to the optimal 3D/3D registration results. In a next step, low-quality image areas close to the scan head were masked out. The transformation between the volumes was assumed to be rigid.
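The importance-image construction (a weighted sum of grey-level, LoG and gradient-magnitude images) might look as follows; the weights and the per-component normalization are illustrative assumptions, not the values tuned in this work:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, gaussian_gradient_magnitude

def importance_image(vol, w_gray=0.5, w_log=0.25, w_grad=0.25, sigma=1.0):
    """Weighted sum of normalized grey-level, |LoG| and gradient-magnitude volumes."""
    def norm(a):
        # scale each component image to [0, 1] before weighting (assumed detail)
        a = a - a.min()
        m = a.max()
        return a / m if m > 0 else a
    vol = vol.astype(float)
    log = np.abs(gaussian_laplace(vol, sigma))       # edge / blob response
    grad = gaussian_gradient_magnitude(vol, sigma)   # boundary strength
    return w_gray * norm(vol) + w_log * norm(log) + w_grad * norm(grad)

# Demonstration on a synthetic volume containing one bright cube.
vol = np.zeros((16, 16, 16))
vol[4:12, 4:12, 4:12] = 100.0
imp = importance_image(vol)
```

With weights summing to one and normalized components, the result stays in [0, 1] and emphasizes voxels that are bright, edge-like, or both.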


2.6.  Clinical workflow

The preoperative 3D-US volume will be acquired immediately before or after the CT scan. As the 3D-US and CT calibrations are accomplished in advance and independently of the particular patient (see sections 2.3 and 2.4), the additional machinery in the CT room is confined to the tracker camera and the 3D-US scanner. The 3D-US scanhead and the endoscopic US probe are calibrated with respect to a magnetic tracking system before the intervention takes place. After a successful registration of the preoperative 3D-US volume to the intraoperative 3D-US volume, all necessary transformations are given. In endoscopic procedures, the patient is typically positioned laterally, while the CT scan is taken in a supine position, which results in a shift of the abdominal organs. Small regions as scanned by the 3D-US are assumed to move rigidly, and this movement can therefore be recovered by the 3D-US/3D-US registration.

2.7.  Evaluation errors and experiments

To evaluate the particular calibration and registration steps, a TRE was calculated by

TRE = (1/N) Σ_{k=1}^{N} ‖PB,k − TA→B × PA,k‖2    (7)

where a point in the coordinate system A is transformed into the coordinate system B by the transformation TA→B. The small plastic spheres mounted rigidly relative to the plastic tube inside the water tank were used as targets (see figure 2).

2.7.1.  3D/3D registration error.  We acquired three 3D-US images of the SHELLEY phantom from different tracked probe positions, each time tracked by the OTS and the EMTS simultaneously. Then the error matrix Serror as defined in formula (5) was calculated. The norm of the translational part of this matrix is referred to as TRE3D3Dphantom. In this context, two errors were evaluated: TREphantom,ots, where only tracking data from the OTS was used, and TREphantom,emts, where one data set from the OTS and one from the EMTS were applied. The latter reflects the actual tracking situation, while the former was calculated to evaluate the 3D-US/3D-US registration. To prove the feasibility of our system on real patient data, we acquired abdominal images of three volunteers, each scanned three times from various poses. The error analysis for the 3D/3D registration was then repeated as described above with both tracking systems applied. The resulting error was labelled, consistently, TREpatient,emts.

2.7.2.  CT-US registration.  For evaluation of the preoperative 3D-US to 3D-CT calibration TUS→CT, a TRE was calculated applying equation (7) (TREUS→CT). The whole procedure was repeated three times and 10 fiducials were applied with each calibration.

2.7.3.  System evaluation.  To evaluate the whole transformation chain, targets (= spheres on

the SHELLEY phantom) from the endoscopic US plane were transformed to the coordinate system of the pre-operative CT (= TREsystem,phantom), i.e. the TRE of the transformation TUSendo→CT as defined in equation (2). For this assessment procedure, we chose a typical CT-US calibration result from section 2.7.2 and applied the 3D/3D registration. Five


Figure 7.  A checkerboard image before and after US/US registration of the SHELLEY phantom.

Figure 8.  Sagittal and coronal checkerboard images of abdominal US.

fiducials which were identifiable on the endoscopic US image were applied to calculate the corresponding TREs.

3. Results

3D/3D registration is highly accurate between two 3D-US phantom images: TREphantom,ots was 1.83 mm ± 0.23 mm and TREphantom,emts was 2.77 mm ± 0.45 mm. Both TREs are aggregates of three 3D-US/3D-US registrations and 11 identified spheres in total. Figure 7 shows a checkerboard of such a registration procedure before (left) and after (right) successful registration. For patient data, TREpatient,emts was 4.1 mm ± 1.2 mm. Figure 8 shows a checkerboard of a successful registration along the transversal and sagittal axes of an abdominal US image. Figure 9 displays the registration progress of the six degrees of freedom for the


Figure 9.  The propagation of the translation (x, y, z) on the left and of the rotation parameters (vector component of the quaternion) on the right during the registration process of a successful 3D/3D patient registration. The black, grey and white graphs show the projections onto the (x, y), (x, z) and (z, y) principal planes (for the quaternion: (a, b), (a, c), (b, c)).

Figure 10.  Fiducials shown in all modalities (from left to right: 3D US image, 2D endoscopic US and CT).

transformation matrix. Starting from the initial conditions, the approach of each coordinate to its final value can be tracked. The TRE for the CT-US registration was 1.8 mm ± 0.4 mm. The error reflecting the complete transformation chain, TREsystem,phantom, was found to be 5.1 mm ± 2.8 mm. Figure 10 shows applied fiducials in the CT, the 3D-US and, for further evaluations, also in the endoscopic US. Figure 11 gives an example of image fusion. A half sphere (20 mm diameter) was mounted in the phantom, representing a structure that can easily be seen in both modalities. Figure 12 shows a cross-section of an air-filled vessel. In the CT, this vessel can be properly delineated, whereas in the endoscopic ultrasound image only the part facing the endoscope can be seen. Moreover, ultrasound artefacts can be observed: diffraction of the sound wave at both sides of the half sphere.

4. Discussion

The evaluation of our navigation system was done by use of a phantom and resulted in an acceptable error (Zaaroor et al 2001, Appelbaum et al 2013). To estimate the expected system error for a patient study (TREsystem,patient), where abdominal images are used instead of the


Figure 11.  View of a sphere with an endoscopic 2D ultrasound image merged with the corresponding oblique slice of the CT. The image region in the white frame is zoomed on the right.

Figure 12.  View of a sphere with an endoscopic 2D ultrasound image merged with the corresponding oblique slice of the CT. The image region in the white frame is zoomed on the right.

SHELLEY phantom, we assumed an error propagation as described in Figl et al (2013). For the whole transformation chain, error propagation gives

TRE²prop = TRE²2Dus + 2·TRE²EMTS + TRE²3Dus + TRE²3D3D + TRE²US→CT    (8)

where the different TREs can be found in table 1. TRE²EMTS appears twice: first because of the additional error of the 3D-US calibration with the EMTS, and second because of the EMTS tracking itself. Because the tracking error of the OTS is an order of magnitude smaller (≈0.3 mm), it was omitted in this analysis. For TRE2Dus, we found 2.7 mm (Hummel et al 2008), TRE3Dus was 1.2 mm (Kaar et al 2013), and TREtracker,emts 1.1 mm. Taking these numbers, the calculated TREprop amounts to 4.7 mm for the phantom evaluation, in good accordance with the measured error TREsystem,phantom of 5.1 mm. As we have measured both TRE3D3Dphantom and TRE3D3Dpatient, we can substitute TRE3D3Dphantom with TRE3D3Dpatient in equation (8) to get an estimate of the system error in the case of patient data, TREsystem,patient = 5.6 mm.
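Equation (8) can be checked directly against the numbers in table 1; the short sketch below reproduces the 4.7 mm phantom estimate and the 5.6 mm patient estimate:

```python
import numpy as np

# Component TREs in mm (table 1)
tre_2dus, tre_emts, tre_3dus, tre_usct = 2.7, 1.1, 1.2, 1.8
tre_3d3d_phantom, tre_3d3d_patient = 2.8, 4.1

def tre_prop(tre_3d3d):
    """Equation (8): quadratic error propagation; the EMTS term enters twice."""
    return np.sqrt(tre_2dus**2 + 2 * tre_emts**2 + tre_3dus**2
                   + tre_3d3d**2 + tre_usct**2)

print(round(tre_prop(tre_3d3d_phantom), 1))  # 4.7 mm, vs measured 5.1 mm
print(round(tre_prop(tre_3d3d_patient), 1))  # 5.6 mm, estimated patient error
```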


Table 1.  The TREs used in equation (8). Additionally, the source and the part of the transformation are indicated.

Name            | Transformation       | Value (mm) | Taken from
TRE2Dus         | TsensorEMTS→USendo   | 2.7        | Hummel et al (2008)
TREtracker,emts | TEMTS→sensorEMTS     | 1.1        | NDI (2013)
TRE3Dus         | TUSpre→sensorOTS     | 1.2        | Kaar et al (2013)
TRE3D3Dphantom  | TUSpre→USint         | 2.8        | Results section
TRE3D3Dpatient  | TUSpre→USint         | 4.1        | Results section
TREUS→CT        | TUS→CT               | 1.8        | Results section

Although the whole navigation system would work with an EMTS alone, we employed the OTS where it was possible (i.e. where a free line of sight is available). The reason lies in the fact that the OTS tracking error is an order of magnitude lower than that of the EMTS (see above) and that EMTS measurements are also distorted by the presence of metallic objects (e.g. the CT scanner (Hummel et al 2006)). At the interventional site, we use only the EMTS, to avoid an additional registration between the OTS and the EMTS. This also avoids limitations for the interventionists, as they do not have to pay attention to the free line of sight needed for an OTS. The endoscopic ultrasound calibration error TRE2Dus crucially depends on the distance between the endoscopic scan head and the fiducials. As shown in Hummel et al (2008), the error varies from 2.7 mm at a distance of about 15 mm to up to 5 mm at a distance of 6–7 cm. This error is a major part of the entire system error TREsystem. In this study, we applied fiducials at distances between 10 and 25 mm, which reflects the task of taking biopsies behind body cavities. Another problem arising from the use of fiducials is the additional localization error when the fiducial has to be identified in the endoscopic US image. In the case of a TRE2Dus greater than 5 mm, the resulting TREsystem was omitted because of the ambiguity in distinguishing between artefacts and fiducials. An important issue concerning navigation systems in soft tissue is deformation. The use of abdominal 3D-US images provides a promising approach to deal with this challenge. The computed importance images can immediately be applied for intra-subject, intra-modality elastic registration of 3D ultrasound images as described in Foroughi et al (2006). The GE US scanner used in this work compounds 3D images by sweeping an array of sensors across a predefined angle, applying a pyramidal 3D scan.
The resulting 2D images are then merged into a volume in polar coordinates, which is converted to a Cartesian volume by use of an interpolation algorithm. As a consequence, the further the pixels are away from the scan head, the more artefacts are generated. These artefacts negatively impact the registration success. A 3D technique scanning equidistant 2D image planes would decrease errors due to interpolation in deeper body regions; consequently, errors resulting from this 3D scanning aspect should decrease with the use of such a rectilinear scan head.

5. Conclusions

Our method overcomes the problem of direct 2D/3D registration from the endoscopic US to the pre-operative CT by an intra-modal US/US registration and tracker calibrations. The phantom study and its extrapolation to the patient application resulted in an error of about 5 mm, which is suitable for clinical usage.


Acknowledgments

The research was funded by the Austrian Science Fund (FWF): L625-N15. The local ethics committee approved this study under application number 128/2011. All patients contributing their data to this work gave their permission.

References

Appelbaum L, Solbiati L, Sosna J, Nissenbaum Y, Greenbaum N and Goldberg S 2013 Evaluation of an electromagnetic image-fusion navigation system for biopsy of small lesions: assessment of accuracy in an in vivo swine model Acad. Radiol. 20 209–17
Bluvol N, Sheikh A, Kornecki A, Ddel R F, Downey D and Fenster A 2008 A needle guidance system for biopsy and therapy using 2D ultrasound Med. Phys. 35 617–28
Brouwer O, Buckle T, Bunschoten A, Kuil J, Vahrmeijer A, Wendler T, Valdés-Olmos R, van der Poel H and van Leeuwen F 2012 Image navigation as a means to expand the boundaries of fluorescence-guided surgery Phys. Med. Biol. 57 3123–36
Duck F 1990 Physical Properties of Tissue (San Diego: Academic)
Figl M, Kaar M, Hoffman R, Kratochwil A and Hummel J 2013 An error analysis perspective for patient alignment systems Int. J. Comput. Assist. Radiol. Surg. 6 849–56
Foroughi P, Abolmaesumi P and Hashtrudi-Zaad K 2006 Intra-subject elastic registration of 3D ultrasound images Med. Image Anal. 10 713–25
Gergel I, Hering J, Tetzlaff R, Meinzer H and Wegner I 2011 An electromagnetic navigation system for transbronchial interventions with a novel approach to respiratory motion compensation Med. Phys. 38 6742–53
Hummel J, Figl M, Bax M, Bergmann H and Birkfellner W 2008 2D/3D registration of endoscopic ultrasound to CT volume data Phys. Med. Biol. 53 4303–16
Hummel J, Figl M, Birkfellner W, Bax M and Shahidi R 2006 Evaluation of a new electromagnetic tracking system using a standardized assessment protocol Phys. Med. Biol. 51 N205–10
Kaar M, Figl M, Hoffmann R, Birkfellner W, Stock M, Georg D, Goldner G and Hummel J 2013 Automatic patient alignment system using 3D ultrasound Med. Phys. 40 041714
Khallaghi S, Leung C, Hashtrudi-Zaad K, Foroughi P, Nguan C and Abolmaesumi P 2012 Experimental validation of an intrasubject elastic registration algorithm for dynamic-3D ultrasound images Med. Phys. 39 5488–97
Koizumi N, Sumiyama K, Suzuki N, Hattori A, Tajiri H and Uchiyama A 2003 Development of a new three-dimensional endoscopic ultrasound system through endoscope shape monitoring Stud. Health Technol. Inform. 94 168–70
Kukuk M 2003 An 'optimal' k-needle placement strategy given an approximate initial needle position Medical Image Computing and Computer-Assisted Intervention vol 2878 (Berlin: Springer) pp 116–23
Lirici M, Caratozzolo M, Urbano V and Angelini L 1994 Laparoscopic ultrasonography: limits and potential of present technologies Endosc. Surg. Allied Technol. 2 127–33
Long J, Lee B, Guillotreau J, Autorino R, Laydner H, Yakoubi R, Rizkala E, Stein R, Kaouk J and Haber G 2012 Real-time robotic transrectal ultrasound navigation during robotic radical prostatectomy: initial clinical experience Urology 80 608–13
Mattes D, Haynor D, Vesselle H, Lewellen T and Eubank W 2001 Non-rigid multimodal image registration Proc. SPIE 4322 Medical Imaging 2001: Image Processing pp 1609–20
Mori K, Deguchi D, Hasegawa J, Suenaga Y, Toriwaki J, Takabatake H and Natori H 2001 A method for tracking the camera motion of real endoscope by epipolar geometry analysis and virtual endoscopy system Medical Image Computing and Computer-Assisted Intervention vol 2208 (Berlin: Springer) pp 1–8
Nam W, Kang D, Lee D, Lee J and Ra J 2012 Automatic registration between 3D intra-operative ultrasound and pre-operative CT images of the liver based on robust edge matching Phys. Med. Biol. 57 69–91
NDI 2013 www.ndigital.com/medical/aurora-techspecs.php
Nelson T, Tran A, Fakourfar H and Nebeker J 2012 Positional calibration of an ultrasound image-guided robotic breast biopsy system J. Ultrasound Med. 31 351–9


NIST 2014 National Institute of Standards and Technology www.nist.gov
Sato I and Nakamura R 2012 Positioning error evaluation of GPU-based 3D ultrasound surgical navigation system for moving targets by using optical tracking system Int. J. Comput. Assist. Radiol. Surg. 8 379–93
Schroeder W, Ibanez L, Ng L and Cates J 2005 ITK Software Guide: The Insight Segmentation and Registration Toolkit (New York: Kitware)
Sun Y, Kadoury S, Li Y, John M, Resnick J, Plambeck G, Liao R, Sauer F and Xu C 2007 Image guidance of intracardiac ultrasound with fusion of pre-operative images MICCAI 10 60–70
Wein W, Brunke S, Khamene A, Callstrom M and Navab N 2008 Automatic CT-ultrasound registration for diagnostic imaging and image-guided intervention Med. Image Anal. 12 577–85
Zaaroor M, Bejerano Y, Weinfeld Z and Ben-Haim S 2001 Novel magnetic technology for intraoperative intracranial frameless navigation: in vivo and in vitro results Neurosurgery 48 1107–8

