Australas Phys Eng Sci Med DOI 10.1007/s13246-014-0275-8

TECHNICAL PAPER

Multi-mode navigation in image-guided neurosurgery using a wireless tablet PC

Weiwei Deng · Fang Li · Manning Wang · Zhijian Song



Received: 12 October 2013 / Accepted: 4 May 2014
© Australasian College of Physical Scientists and Engineers in Medicine 2014

Abstract The authors present a tablet-based image-guided neurosurgery system that transmits navigation information from a host to a movable tablet PC via a wireless local area network and displays this information on the tablet screen. With this new system, surgeons can obtain standard navigation information on the tablet screen, avoiding large view switches between the surgical field and the navigation screen of the computer monitor. In addition, the system provides further navigation information by displaying arbitrary sectional images and maximum intensity projection images on the tablet screen. These images are generated according to the position and orientation of the tablet screen and can be used to locate intracranial tumors during preoperative planning and intraoperative procedures in neurosurgery. The average tracking error was approximately 1.25 mm. A dry skull specimen study verified the feasibility of the proposed system, and surgery on an actual patient with this system showed its clinical applicability.

Keywords Image-guided neurosurgery · Display · Mobile · Maximum intensity projection

W. Deng · F. Li · M. Wang (✉) · Z. Song (✉)
Digital Medical Research Center, Shanghai Medical College, Fudan University, PO Box 251, 138 Yixueyuan Road, Shanghai 200032, China
e-mail: [email protected]

Z. Song
e-mail: [email protected]

W. Deng · F. Li · M. Wang · Z. Song
Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China

Introduction

An image-guided neurosurgery system (IGNS) helps neurosurgeons make surgical plans, locate instruments and reduce damage to normal tissue, and has become a routine device in neurosurgical operating rooms [1–4]. The basic principle of an IGNS is to guide an operation using preoperative images of the patient, primarily by tracking the actual surgical tools and displaying their positions relative to the patient as corresponding virtual tools overlaid on the images [5]. In a conventional IGNS, the images with the overlaid virtual tools are shown on a navigation screen. However, because the relative position between the surgical tools and the images appears only on that screen, it is usually difficult to interpret in the context of the actual surgical field, especially after the patient's head has been draped. For example, surgeons cannot directly see the relative position between the tip of the actual surgical tool and the patient's head from the navigation screen alone; they must mentally map the relative position between the virtual tools and the images back into the real world. This type of guidance distracts the surgeon's attention from the surgical field, and the mental conversion may also introduce a certain degree of guidance error. In addition, the standard navigation mode of a conventional IGNS can only display standard sectional images (axial, sagittal and coronal) passing through the tip of the tracked surgical tool; it cannot localize an arbitrary plane. Although the system can display three planes in trajectory navigation mode, it only shows a sectional image perpendicular to the trajectory, together with two other mutually perpendicular sectional images that are both perpendicular to the first, so the surgeon cannot perceive the actual positions and orientations of these three



sectional images in the real world. Augmented reality (AR) technology has been introduced into IGNS to integrate navigation information into the surgical field and so facilitate its interpretation [6–8]. The first medical AR system, based on an operating microscope, was presented in 1986 [9]. Other display devices are also commonly used for AR applications, such as head-mounted displays (HMD) and AR windows. Sielhorst et al. [10] reviewed these display devices and discussed their advantages and disadvantages. In short, these devices either limit the surgical area or inconvenience the surgeon, so they are not extensively used in practical clinical applications. In this paper, we present a new IGNS that uses a mobile tablet PC and a wireless router to avoid the two problems mentioned above; we use the term Tablet-IGNS to describe this mobile and wireless form of IGNS. With the new system, the surgeon can directly observe arbitrary sectional images on the tablet screen according to its position and orientation. Furthermore, the system can also display maximum intensity projection (MIP) images on the tablet screen. MIP is a volume rendering method for three-dimensional (3D) data that projects onto the visualization plane the voxel with maximum intensity along each of the parallel rays traced from the viewpoint to the plane of projection. This type of projection is widely used with computed tomography angiography (CTA) and magnetic resonance angiography (MRA) images to visualize the anatomy of blood vessels [11–13]. When the tablet moves, the sectional image or MIP image is updated accordingly in real time. In this study, a dry skull specimen was used to verify the feasibility of the Tablet-IGNS, and on this basis, an actual glioma resection case was conducted using the Tablet-IGNS to evaluate its clinical applicability.

Materials and methods

System components and framework

In this study, we used a conventional IGNS developed by our own group. The system consists of an HP workstation (HP Z620, USA), a 19-inch touch screen, an infrared optical tracking system (Polaris® Spectra®, NDI, Canada), several digitizing probes, reference frames, and other accessories. Before surgery, the navigation system constructs an image space from the preoperative imaging data of the patient, and then registers it to a patient space using a point-based registration method. The patient space is defined by a reference frame that is fixed beside the head of the patient and can be tracked by the Polaris system. During surgery, the Polaris system tracks the actual surgical tools in the patient space, transforms their positions into the image space and generates corresponding virtual surgical tools on the images. The surgeon can therefore locate the actual surgical tools in the patient space by observing the virtual tools on the images on the navigation screen.

In addition to all of the components of a conventional IGNS, our Tablet-IGNS includes a wireless router, a tablet PC, and a tracking adapter affixed to the tablet. The entire framework of the Tablet-IGNS is shown in Fig. 1. The left box is a conventional IGNS; the middle box demonstrates the working principle of a conventional navigation system; and the right box is a movable tablet with a fixed adapter. The adapter defines a tablet space and can be tracked by the Polaris system. The Tablet-IGNS implements all of the functions of a conventional IGNS; in addition, it offers two new features by using the tablet as both a locating device and a display device. In the Tablet-IGNS, the Polaris system tracks the patient's head and the tablet simultaneously; the system transforms the positions of points in the tablet space to the patient space, and finally to the image space. Two types of navigation images are generated according to the position of the tablet in the image space, and are then transmitted to the tablet and displayed on its screen. We use a wireless router to construct an end-to-end wireless local area network (WLAN), in which the host is treated as a server that computes all of the coordinate transformations and transforms the position of any point in the tablet space to the image space. The host generates the navigation images and sends them to the client. At the other end, the tablet is treated as a client and display device that receives images from the host and displays them on its screen. Since the tablet lies beside the surgeon, it is very convenient for the surgeon to observe the navigation images on the tablet screen.
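The point-based registration and the chain of coordinate transformations described above can be sketched compactly. The following is a minimal illustration in Python, assuming NumPy and the standard SVD-based (Arun/Kabsch) rigid registration; the function names and data layout are our assumptions, not the authors' published implementation.

import numpy as np

def register_points(src, dst):
    # Rigid registration: find R, t such that dst ≈ R @ src + t.
    # src, dst: (N, 3) arrays of corresponding fiducial coordinates,
    # e.g. marker positions in image space and in patient space.
    src_c = src - src.mean(axis=0)           # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation (det = +1)
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def to_homogeneous(R, t):
    # Pack a rotation and translation into a 4x4 homogeneous matrix.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical use: a point in tablet space reaches image space through
# tablet -> patient (optical tracking) and patient -> image (registration):
# T_image_patient = to_homogeneous(*register_points(markers_pat, markers_img))
# p_image = (T_image_patient @ T_patient_tablet @ np.append(p_tablet, 1))[:3]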


Modes of the Tablet-IGNS

The Tablet-IGNS can display canonical sectional images on the screen of the monitor, as a conventional IGNS does. In addition, the new system can show two new types of navigation images on the tablet screen, both generated according to the position and orientation of the tablet: arbitrary sectional images and MIP images. These two types of navigation images correspond to two new navigation modes in the Tablet-IGNS.

Navigation with arbitrary sectional images

In this mode, an adapter is first fixed on the tablet to define a tablet space.


Fig. 1 Framework of the entire Tablet-IGNS. The left box is a conventional IGNS; the middle box demonstrates the framework of a conventional IGNS; the right box is a tablet with a fixed tracking adapter

Fig. 2 Diagram of generating a sectional image according to the position and orientation of the tablet

We measured the coordinates of the four corner points A, B, C and D of the tablet screen in the tablet space and defined a screen rectangle by these four corner points. We then extended the screen rectangle along one edge of the tablet to obtain a virtual screen rectangle of the same size, whose corresponding corner points were labeled A′, B′, C′ and D′. Figure 2 illustrates how a sectional image is generated according to the position of the tablet and displayed in real time on the tablet screen. We transformed the virtual screen

rectangle into the image space and generated a sectional image by resampling the volume data on the virtual screen rectangle. Finally, we transmitted the sectional image to the tablet and displayed it on the tablet screen. The surgeon can move the tablet so that the virtual screen rectangle sweeps the entire head of the patient and the sectional image updated accordingly in real time. In this
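A minimal sketch of this resampling step (our illustration, not the authors' code): assuming the volume is a 3D NumPy array and the virtual screen rectangle is given by three of its corner coordinates in voxel space, the oblique slice can be sampled with trilinear interpolation.

import numpy as np
from scipy.ndimage import map_coordinates

def resample_plane(volume, a, b, d, width=512, height=512):
    # Resample `volume` on the rectangle spanned by corners A, B, D
    # (hypothetical voxel-space coordinates); C = B + D - A.
    u = np.linspace(0.0, 1.0, width)         # parameter along edge A->B
    v = np.linspace(0.0, 1.0, height)        # parameter along edge A->D
    vv, uu = np.meshgrid(v, u, indexing="ij")
    # Bilinear parameterisation of the plane: P(u, v) = A + u(B-A) + v(D-A)
    pts = (a[None, None, :]
           + uu[..., None] * (b - a)
           + vv[..., None] * (d - a))        # (H, W, 3) sample positions
    coords = pts.transpose(2, 0, 1)          # axis order for map_coordinates
    # Trilinear interpolation (order=1); points outside the volume become 0
    return map_coordinates(volume, coords, order=1, cval=0.0)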



Fig. 3 Illustration of the procedure of obtaining a MIP image by projecting the volume data onto the tablet screen. The red points indicate the voxels with maximum intensity along the projecting rays

Navigation with MIP images

This mode provides the second new feature of the system: presenting MIP images on the tablet screen. In this mode, the tablet screen is treated as the visualization plane, and MIP images are generated by projecting the volume data of the patient onto the tablet screen (see Fig. 3). As in the previous mode, we defined a screen rectangle by the four corner points A, B, C, and D of the tablet screen. The Tablet-IGNS tracks the position of the screen rectangle, transforms it into the image space, and generates a corresponding virtual screen rectangle of the same size there. From each pixel on the virtual screen rectangle, a ray is cast through the volume data of the patient's head. The voxel with maximum intensity along each ray (indicated by the red points in Fig. 3) is projected onto the virtual screen rectangle to form the MIP image. Finally, the MIP image is transmitted to the tablet and displayed on the screen. When the surgeon moves the tablet, the MIP image changes accordingly; therefore, a MIP image with any orientation can be obtained.
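A simplified sketch of this parallel-ray projection, again our own illustration rather than the authors' implementation; the sampling depth, step size and resolution are arbitrary assumptions.

import numpy as np
from scipy.ndimage import map_coordinates

def mip_on_plane(volume, a, b, d, depth=300, width=256, height=256):
    # Parallel-ray MIP onto the rectangle spanned by corners A, B, D:
    # rays run along the rectangle's normal, as in Fig. 3.
    e1, e2 = b - a, d - a                    # screen edge vectors
    n = np.cross(e1, e2)
    n = n / np.linalg.norm(n)                # unit normal = ray direction
    u = np.linspace(0.0, 1.0, width)
    v = np.linspace(0.0, 1.0, height)
    vv, uu = np.meshgrid(v, u, indexing="ij")
    origins = a + uu[..., None] * e1 + vv[..., None] * e2   # (H, W, 3)
    image = np.zeros((height, width), dtype=volume.dtype)
    for s in range(depth):                   # march one voxel per step
        pts = origins + s * n                # one sample plane per step
        sample = map_coordinates(volume, pts.transpose(2, 0, 1),
                                 order=1, cval=0.0)
        np.maximum(image, sample, out=image) # keep the brightest voxel so far
    return image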


Evaluation of the Tablet-IGNS

In our study, the Polaris system tracked the position of the tip of the surgical tool to generate the standard sectional images passing through the tip. Similarly, it tracked the position of the tablet screen to generate the sectional image and MIP image with arbitrary orientation. To evaluate the new system, we first needed to confirm that the tracked positions of the navigation images are correct, so that the navigation images are rendered exactly; this depends on the overall tracking accuracy of the Tablet-IGNS. We therefore used a coordinate measuring machine (CMM; FARO Gage, USA) to evaluate the overall tracking accuracy of the Tablet-IGNS. The FARO Gage offers a high measurement accuracy of 0.018 mm, so its measurements can serve as the ground truth for evaluating the overall tracking accuracy of the Tablet-IGNS.

First, we fixed the reference frame and the tablet on a table near the CMM; the tablet was equipped with an adapter for tracking. We used the CMM to measure the coordinates of the three reflective marker spheres on the reference frame and thereby construct a plane with a defined origin O, x-axis and y-axis, from which a 3D reference space was established by the CMM through the right-hand rule.
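The frame construction from the three marker spheres can be expressed as follows (a sketch under our own naming assumptions): the first marker serves as the origin, the direction to the second defines the x-axis, and the z-axis follows from the right-hand rule.

import numpy as np

def frame_from_markers(p0, p1, p2):
    # p0: origin O; p0->p1 defines the x-axis; p2 fixes the marker plane.
    x = p1 - p0
    x = x / np.linalg.norm(x)                 # unit x-axis
    z = np.cross(x, p2 - p0)
    z = z / np.linalg.norm(z)                 # plane normal, right-hand rule
    y = np.cross(z, x)                        # completes the orthonormal frame
    T = np.eye(4)                             # reference space -> CMM transform
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
    return T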


Fig. 4 Demonstration of the components that are used for computing the overall tracking error of the Tablet-IGNS

Table 1 Tracking errors of the corner points of the tablet screen

Points    Mean error (mm)
No. 1     1.2343
No. 2     1.5802
No. 3     1.2930
No. 4     0.8894

Then we recorded the coordinates of the four corner points A, B, C, and D of the tablet screen in this reference space defined by the CMM. Figure 4 shows all of the components used for the system evaluation. On the other hand, we used the Polaris system to track the adapter on the tablet and the reference frame. The Polaris system transformed the coordinates of the four corner points from the adapter space to the reference space. Thus, we could record another set of coordinates of the four corner points A, B, C, and D in the reference space, this time defined by the Polaris system. The tracking accuracy was determined by the following formula:

$$\mathrm{error} = \frac{1}{n}\sum_{i=1}^{n} d_i \quad (\mathrm{mm})$$

where n indicates the number of corner points of the tablet screen and d_i is the distance in mm between the i-th pair of corresponding corner points measured in the two different ways. In our study, n equals 4, and d_i indicates the tracking error of each corner point.

Fig. 5 Displaying sectional images according to the position and orientation of the tablet. These two images are extracted from a video which dynamically shows how the images on the tablet change when the tablet is moved. a The tablet was placed in a vertical position; b the tablet was placed in a horizontal position
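In code, the error formula above amounts to a mean of four point-to-point distances. A short sketch (our illustration; the array names are hypothetical):

import numpy as np

def mean_corner_error(corners_cmm, corners_polaris):
    # corners_cmm, corners_polaris: (n, 3) coordinates of the screen
    # corners measured by the CMM and by the Polaris system in the same
    # reference space; the return value is `error` in the formula above.
    d = np.linalg.norm(corners_cmm - corners_polaris, axis=1)  # d_i per corner
    return d.mean()

# With the per-corner means reported in Table 1:
# np.mean([1.2343, 1.5802, 1.2930, 0.8894]) -> ~1.249 mm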

Results

Fig. 6 Displaying the MIP image generated by projecting the volume data onto the tablet screen. This image is extracted from a video which shows that the projection image on the tablet changes dynamically with the movement of the tablet

Measurement of tracking accuracy

To reduce artificial measurement error, we performed the measurement 12 times for each corner point in each of the two ways. Table 1 presents the mean error of each corner point; the average error over all four corner points is approximately 1.25 mm.



Skull specimen study

Before conducting a clinical experiment, we used a dry skull specimen to verify the feasibility of the proposed system. CT scanning was performed on the specimen, yielding 263 axial slices with a 512 × 512-pixel matrix in DICOM format. The in-plane and inter-plane resolutions were 0.44 and 0.63 mm, respectively. We then performed neuronavigation on the skull specimen and generated sectional images and MIP images according to the position and orientation of the tablet. Figure 5 demonstrates the display of two different sectional images of the skull specimen on the tablet screen at different tablet positions. Figure 6 illustrates the display of a MIP image generated by projecting the volume data of the skull specimen onto the tablet screen.

Clinical experience

The Tablet-IGNS was used in an actual glioma resection surgery to demonstrate the feasibility of the technology in a clinical environment. The day before surgery, the patient underwent MR scanning with artificial markers affixed to the head, which were used for point-based image-to-patient registration. After registration and before making the incision, the surgeon placed the tablet at a position where the extension plane of the tablet screen might cross the intracranial tumor, and the sectional image was displayed on the tablet screen. The surgeon moved the tablet in arbitrary directions and angles to sweep the entire head of the patient, and directly observed the position and size of the tumor on the tablet. During surgery, the tablet was wrapped in a sterile bag and supported by a stand or held by an assistant in the sterile field. When necessary, the surgeon could also move the tablet around the head of the patient to locate the tumor. Figure 7 shows one sectional image with the intracranial glioma (colored in red), which is the intersection plane of the extension of the tablet screen with the patient's head. Moving the tablet up and down, the surgeon could obtain the upper and lower boundaries of the glioma. When the tablet was held in a vertical position, the surgeon could obtain the left and right boundaries by moving it back and forth.

Fig. 7 Displaying the specific information of the intracranial glioma (shaded in red) at the position where the extension of the tablet and the patient's head intersect

Discussion


In this paper, we presented a multi-mode IGNS, termed Tablet-IGNS. The tablet used in the system serves as both a locating device and a display device. The Tablet-IGNS can generate arbitrary sectional images or MIP images according to the position and orientation of the tablet and display them on the tablet screen. By moving the tablet around the head of the patient, the surgeon can directly associate the navigation images displayed on the tablet screen with the actual surgical field. We used the FARO Gage CMM to measure the average tracking error, which was approximately 1.25 mm. The skull specimen study verified the feasibility of the Tablet-IGNS, and the clinical experience with one actual patient indicated that the system is applicable in actual surgery.

One issue in the Tablet-IGNS is network delay. We connected the tablet to the host through a WLAN constructed with a wireless router. The network delay is defined as the time difference between the moment when the tablet moves to a new place and the moment when the corresponding sectional image or MIP image is displayed on the tablet screen. Before practical use, we performed a data transmission test for the WLAN by connecting the tablet to a desktop over the WLAN. The average data transmission rate was 6.55 Mbps, which meets the refresh requirement of the navigation system. When the tablet moves very fast, however, the images displayed on the tablet screen may not update in time. Therefore, in practical use, the surgeon should move the tablet at a low speed, or place the tablet on a fixed holder, to avoid errors caused by network delay. Furthermore, a high-performance wireless router can be used to transfer the data faster and more stably.
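As a rough, back-of-envelope check of the refresh requirement (our own estimate; the per-frame payload below is an assumed value, since the paper does not report it):

rate_bps = 6.55e6                        # measured average WLAN throughput
frame_kb = 60                            # assumed compressed 512x512 slice, ~60 kB
frame_bits = frame_kb * 1024 * 8
fps = rate_bps / frame_bits              # ideal upper bound, ignoring latency
print(f"~{fps:.1f} frames per second")   # ~13.3 fps under these assumptions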


An additional consideration is the sterilization of the tablet in practical clinical applications. Several other studies have focused on using a mobile display device in the operating room. For example, a sterile-draped Apple iPod touch was used as the monitor of the BrainLAB DASH system for total knee replacement (TKR) and total hip replacement (THR) [14]. An iPad was used as the video-output source and, simultaneously, as the computing and display device to show enhanced virtual reality of the patient's anatomy for percutaneous nephrolithotomy (PCNL) [15, 16]. Their study goals might differ from ours, but the way the mobile device is sterilized can be the same. In our study, the tablet does not need to be sterilized before surgery; the surgeon can hold the tablet and move it to obtain arbitrary sectional images or MIP images for surgical planning. During surgery, the tablet can then be wrapped in a sterile bag for intraoperative use.

The platform of the tablet is not limited; it can be an Android tablet, a Windows 7 tablet or an iPad. The only requirement is that it must be able to connect to the wireless network. Of course, the software for different platforms will be quite different. On the basis of our skull specimen study and one clinical experience, we believe that using a mobile, wireless tablet to enhance performance in image-guided neurosurgery is feasible. In our future work, we will focus on demonstrating the feasibility of the Tablet-IGNS in more actual neurosurgeries, such as intracranial hematoma puncture.

Acknowledgments This study is partly supported by Projects 81101128 and 81271670 sponsored by the National Natural Science Foundation of China, the National High Technology Research and Development Program (No. 2012AA02A606), the Key Technologies R&D Program of China (No. 2012BAI14B05), Project 12441901600 supported by the Science and Technology Committee of Shanghai Municipality, and the Mingdao Project for medical graduate students supported by Fudan University. The authors do not have any personal or institutional financial interest in the devices described in this article.

References

1. Gumprecht HK, Widenka DC, Lumenta CB (1999) BrainLab VectorVision Neuronavigation System: technology and clinical experiences in 131 cases. Neurosurgery 44(1):97–104 (discussion 104–105)
2. Grunert P, Darabi K, Espinosa J, Filippi R (2003) Computer-aided navigation in neurosurgery. Neurosurg Rev 26(2):73–99. doi:10.1007/s10143-003-0262-0
3. Peters TM (2006) Image-guidance for surgical procedures. Phys Med Biol 51(14):R505–R540. doi:10.1088/0031-9155/51/14/R01
4. Cleary K, Peters TM (2010) Image-guided interventions: technology review and clinical applications. Annu Rev Biomed Eng 12(12):119–142. doi:10.1146/annurev-bioeng-070909-105249
5. Wang MN, Song ZJ (2011) Classification and analysis of the errors in neuronavigation. Neurosurgery 68(4):1131–1143
6. Blackwell M, Nikou C, DiGioia AM, Kanade T (2000) An image overlay system for medical data visualization. Med Image Anal 4(1):67–72
7. Kockro RA, Tsai YT, Ng I, Hwang P, Zhu CG, Agusanto K, Hong LX, Serra L (2009) Dex-Ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery 65(4):795–807. doi:10.1227/01.Neu.0000349918.36700.1c
8. Agusanto K, Zhu CG, Kockro RA (2005) Augmented reality-enhanced operation microscope with multi-modal volume visualization for neurosurgery. Int Congr Ser 1281:1347. doi:10.1016/j.ics.2005.03.103
9. Roberts DW, Strohbehn JW, Hatch JF, Murray W, Kettenberger H (1986) A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope. J Neurosurg 65(4):545–549
10. Sielhorst T, Feuerstein M, Navab N (2008) Advanced medical displays: a literature review of augmented reality. J Disp Technol 4(4):451–467. doi:10.1109/Jdt.2008.2001575
11. Schaffler GJ, Sorantin E, Groell R, Gamillscheg A, Maier E, Schoellnast H, Fotter R (2000) Helical CT angiography with maximum intensity projection in the assessment of aortic coarctation after surgery. AJR Am J Roentgenol 175(4):1041–1045
12. Uchiyama Y, Yamauchi M, Ando H, Yokoyama R, Hara T, Fujita H, Iwama T, Hoshi H (2006) Automated classification of cerebral arteries in MRA images and its application to maximum intensity projection. Conf Proc IEEE Eng Med Biol Soc 1:4865–4868. doi:10.1109/IEMBS.2006.260438
13. Sakai O, Shen Y, Nakashima N, Takata Y, Ogawa C, Azemoto S (1994) Maximum-intensity-projection CT angiography for evaluating head and neck tumors: usefulness of helical CT and auto bone masking method. Nihon Igaku Hoshasen Gakkai Zasshi 54(14):1421–1423
14. Bäthis H, Shafizadeh S, Banerjee M, Bracke B (2011) iPod-based navigation in TKR and THR: first experience and results of the pilot study. In: 11th annual meeting of the International Society for Computer Assisted Orthopaedic Surgery, London, UK, 15–19 June 2011
15. Rassweiler JJ, Muller M, Fangerau M, Klein J, Goezen AS, Pereira P, Meinzer HP, Teber D (2012) iPad-assisted percutaneous access to the kidney using marker-based navigation: initial clinical experience. Eur Urol 61(3):628–631. doi:10.1016/j.eururo.2011.12.024
16. Muller M, Rassweiler MC, Klein J, Seitel A, Gondan M, Baumhauer M, Teber D, Rassweiler JJ, Meinzer HP, Maier-Hein L (2013) Mobile augmented reality for computer-assisted percutaneous nephrolithotomy. Int J Comput Assist Radiol Surg 8(4):663–675

