JOURNAL OF LAPAROENDOSCOPIC & ADVANCED SURGICAL TECHNIQUES
Volume 23, Number 12, 2013
© Mary Ann Liebert, Inc.
DOI: 10.1089/lap.2013.0304

Technical Report

Towards an Autonomous Robot for Camera Control During Laparoscopic Surgery

Brady W. King, PhD,1 Luke A. Reisner, PhD,2 Abhilash K. Pandya, PhD,2 Anthony M. Composto, BS,2 R. Darin Ellis, PhD,3 and Michael D. Klein, MD1

1Department of Pediatric Surgery, Children’s Hospital of Michigan, Detroit, Michigan.
Departments of 2Electrical and Computer Engineering and 3Industrial and Systems Engineering, Wayne State University, Detroit, Michigan.

A brief abstract summarizing this work was presented at the 2013 meeting of the Pacific Association of Pediatric Surgeons in Lovedale, NSW, Australia.

Abstract

Introduction: During laparoscopic surgery, the surgeon currently must instruct a human camera operator or a robotic arm to move the camera. This process is distracting, and the camera is not always placed in an ideal location. To mitigate these problems, we have developed a test platform that tracks laparoscopic instruments and automatically moves a camera with no explicit human direction.

Materials and Methods: The test platform is designed to mimic a typical laparoscopic working environment, where two hand-operated tools are manipulated through small ports. A pan-tilt-zoom camera, positioned over the tools, emulates the positioning capabilities of a straight (0°) scope placed through a trocar. A camera control algorithm automatically keeps the tools in the view. In addition, two test tasks that require camera movement have been developed to aid in future evaluation of the system.

Results: The system was found to successfully track the laparoscopic instruments in the camera view as intended. The camera is moved and zoomed to follow the instruments in a smooth and consistent fashion.

Conclusions: This technology shows that it is possible to create an autonomous camera system that cooperates with a surgeon without requiring any explicit user input. However, the currently implemented camera control behaviors are not ideal or sufficient for many surgical tasks. Future work will develop, test, and refine more complex behaviors that are optimized for different kinds of surgical tasks. In addition, portions of the test platform will be redesigned to enable its use in actual laparoscopic procedures.

Introduction

During traditional laparoscopic surgery, the surgeon currently must instruct a camera operator to move the camera. During robotic surgery, the surgeon has to shift his or her attention from the instruments and manually move the camera arm of the robot. Either process can result in nonoptimal views of the surgery (e.g., having the tools outside the field of view). These camera control processes distract the surgeon when he or she needs to be focused on performing an operation, potentially leading to inefficiencies during laparoscopic surgery.

We have developed a test platform that tracks the tips of laparoscopic instruments and automatically moves a camera with no explicit human direction. This robot has the potential to function as well as an expert laparoscopic camera operator, enabling laparoscopic surgery without a human camera holder. Additionally, we have developed tasks that enable objective evaluation of the system. They will allow us to analyze camera control schemes and devise improvements to the platform.

Materials and Methods

The test platform is designed to mimic a typical laparoscopic working environment, where two hand-operated tools access a surgical site through small ports (Fig. 1). A wall prevents the user from having a direct view of the site. Instead, a pan-tilt-zoom camera is positioned over the tools, and its output is viewable on a monitor. The camera system emulates the positioning capabilities of a straight (0°) scope placed through a trocar. To enable tool tracking, colored markers are applied to the tips of the laparoscopic instruments.
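To make the scope emulation concrete, the brief sketch below shows how a pixel offset in the camera image can be converted into pan and tilt angles under a simple pinhole model. This is our own illustrative assumption, not the platform's published code; the field-of-view values and the function name are hypothetical.

```python
import math

def pixel_offset_to_pan_tilt(dx_px, dy_px, image_w, image_h,
                             hfov_deg=60.0, vfov_deg=34.0):
    """Convert a pixel offset from the image center into pan/tilt angles.

    Assumes a pinhole camera model; the hypothetical hfov_deg/vfov_deg
    values describe the camera's field of view at its current zoom level.
    """
    # Focal lengths in pixels, derived from the fields of view.
    fx = (image_w / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    fy = (image_h / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    pan_deg = math.degrees(math.atan2(dx_px, fx))
    tilt_deg = math.degrees(math.atan2(dy_px, fy))
    return pan_deg, tilt_deg
```

Because a straight 0° scope pivots about the trocar, pan and tilt of the overhead camera approximate the angular repositioning of such a scope about its pivot point.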



FIG. 1. Test platform for autonomous camera movement. The system’s current camera control behavior is to automatically track the color-coded tools of the subject and center the camera’s view on the two tools by panning and tilting. The system also zooms in or out (based on the distance between the tool tips) to keep both tips in view.

The video stream from the camera is captured and analyzed by a computer algorithm, which locates the colored markers in the image (Fig. 2). The camera is then instructed to move based on the located markers and a set of rules that forms a "movement scheme." Currently, one basic movement scheme has been implemented. It consists of the following simple rules (sketched in code after this list):

- When both tool tips are in the view, the system first calculates the centroid of the tools and then pans and tilts to keep the centroid near the center of the camera’s view. To minimize extraneous camera motion, no movement is performed until the centroid is sufficiently far from the center of the camera view.
- If both tool tips are near the center of the view, the system zooms in. Conversely, if both tool tips are near opposite edges, the system zooms out.
- If any tool leaves the view, the system stops all movement.
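The paper does not publish source code, so the following Python sketch is only a minimal illustration of the three rules above. The HSV marker colors, the pixel thresholds, and the `camera` object (with `pan_tilt`, `zoom`, and `stop` methods) are all hypothetical stand-ins for the platform's actual tracking and pan-tilt-zoom interfaces.

```python
import cv2
import numpy as np

# Hypothetical HSV ranges for the two colored tool-tip markers.
MARKER_RANGES = [
    ((35, 80, 80), (85, 255, 255)),    # e.g., a green marker
    ((100, 80, 80), (130, 255, 255)),  # e.g., a blue marker
]

CENTER_DEADZONE = 60  # px; ignore centroid drift smaller than this
ZOOM_IN_DIST = 120    # px; tips closer together than this -> zoom in
EDGE_MARGIN = 40      # px; both tips this close to an edge -> zoom out

def find_marker(hsv, lo, hi):
    """Return the (x, y) centroid of the largest blob in an HSV range, or None."""
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    # OpenCV >= 4 return signature (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def control_step(frame, camera):
    """Apply the movement scheme's three rules to one video frame."""
    h, w = frame.shape[:2]
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    tips = [find_marker(hsv, lo, hi) for lo, hi in MARKER_RANGES]

    # Rule 3: if any tool leaves the view, stop all movement.
    if any(t is None for t in tips):
        camera.stop()
        return

    (x1, y1), (x2, y2) = tips
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # centroid of the two tips

    # Rule 1: pan/tilt toward the centroid, with a dead zone so small
    # drift near the center causes no extraneous camera motion.
    dx, dy = cx - w / 2.0, cy - h / 2.0
    if np.hypot(dx, dy) > CENTER_DEADZONE:
        camera.pan_tilt(dx, dy)  # hypothetical proportional command

    # Rule 2: zoom in when the tips come together; zoom out when both
    # tips are near an edge (the paper specifies opposite edges; nearness
    # to any edge is a simplification here).
    if np.hypot(x2 - x1, y2 - y1) < ZOOM_IN_DIST:
        camera.zoom(+1)
    elif all(min(x, w - x, y, h - y) < EDGE_MARGIN for x, y in tips):
        camera.zoom(-1)
```

In practice, some hysteresis and rate limiting on these commands would keep the camera from oscillating when a measurement hovers near a threshold, which is consistent with the smooth, consistent motion reported in Results.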

Fundamentals of Laparoscopic Surgery (FLS; Society of American Gastrointestinal and Endoscopic Surgeons, Los Angeles, CA) is a program used to train and evaluate surgeons in terms of their laparoscopic skills. It contains a manual skills component that uses a simulator to represent a laparoscopic environment. The manual skills portion of FLS consists of five tasks: peg transfer, precision cutting, intracorporeal knot tying, extracorporeal knot tying, and placement of a ligating loop. These FLS tasks require no camera movement, but they do provide a good benchmark for evaluating performance on surgical tasks.1–4 Therefore, two FLS tasks (peg transfer and precision cutting) were modified to require camera movement. These tasks were used to validate the operation of the system, and they will be used in the future to further develop and evaluate the camera movement system.

The modified peg transfer task simply places the pegs further apart, necessitating camera movement to complete the transfer (Fig. 3). The modified precision cutting task uses a much larger circle, requiring camera movement to see the entire circle (Fig. 4). In addition, instead of cutting a piece of gauze, modified laparoscopic instruments with pencils on their ends are used to trace within segments of the circle. The participant must alternate between the instrument in the left hand and the instrument in the right hand to trace the segments. Tracing was chosen over cutting because gauze of the needed size is not readily available; moreover, time constraints during execution of the cutting task would limit the number of trials that could be completed during future testing.

For both modified tasks, the time taken for task completion and the number of errors can be recorded, providing objective measures of performance (a minimal logging sketch follows).
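As an aside on how those objective measures might be captured, here is a minimal, hypothetical trial logger. The class name, field layout, and CSV format are our assumptions, not part of the published platform.

```python
import csv
import time

class TrialLogger:
    """Records completion time and error count for one modified FLS trial."""

    def __init__(self):
        self.start_time = None
        self.errors = 0

    def start(self):
        self.start_time = time.monotonic()
        self.errors = 0

    def record_error(self):
        # e.g., a dropped peg or a pencil mark outside a segment
        self.errors += 1

    def finish(self, path, participant, task):
        elapsed = time.monotonic() - self.start_time
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([participant, task, f"{elapsed:.1f}", self.errors])
        return elapsed, self.errors
```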

FIG. 2. Screenshot from the camera control software. Here, the tips of the two instruments have been located (boxes), and the centroid of the tools has been calculated (cross).

Results

The system was found to successfully perform the intended camera control behaviors during the execution of the modified FLS tasks. The results are best demonstrated by an associated video.5 In the video, a novice performs a peg transfer task with the system to illustrate the rapid response of the camera to movements of both instruments (in any direction).


FIG. 3. The modified peg transfer task with a 6-inch (15.24-cm) ruler shown for scale. When inside the test platform, the camera cannot capture both sets of pegs at the same time, necessitating camera movement to complete a transfer.

In addition, the video shows how the camera zooms in when the tool tips come together and zooms out when they are sufficiently far apart. The camera’s movements are smooth and consistent throughout all parts of the demonstration.

Discussion

Robotic technology and task modeling have advanced to the point where seamless, intelligent, and automatic laparoscope movement may be achieved with minimal human intervention. An automated robotic camera control system has the potential to eliminate the shortcomings of existing camera control mechanisms. If the automated system can intelligently infer the surgeon’s desired camera movements, it can move the camera accurately without increasing the workload of the surgeon.

Current machines that enhance laparoscopic surgical skills, such as the da Vinci Surgical System (Intuitive Surgical, Sunnyvale, CA), do not operate autonomously or intelligently. Instead, they are master–slave devices that require the surgeon to initiate every action. They simply use a computer and specialized mechanics to provide certain enhancements, such as tremor filtration, motion scaling, or increased dexterity. In contrast, the developed camera system has intelligent behaviors that enable it to cooperate with the surgeon based only on inputs from the camera view.

FIG. 4. The modified precision ‘‘cutting’’ task. Participants draw within the black and white segments of the circle using modified laparoscopic tools with pencils on their ends. When inside the test platform, the camera cannot capture the entire circle, necessitating camera movement to complete the task.

There are some machines used in medicine that can be classified as autonomous robots. For instance, there are several devices, such as the ROBODOC (Curexo Technology Corp., Fremont, CA), that work entirely independently to prepare bony surfaces for artificial joints based on preoperative imaging. Another example is the CyberKnife (Accuray, Sunnyvale, CA), which focuses radiation on specific lesions found in imaging data from the patient. This device even performs real-time imaging to alter its position in response to the patient’s respiratory movements.

The AESOP (Intuitive Surgical) was one of the first robots used in clinical surgery. This robot holds an endoscope and is guided in its movement by voice, hand, or foot controls. When it was used in surgery, the AESOP functioned effectively, but its setup and dismantlement were cumbersome and time-consuming. It has only one speed when operating under voice control, and that speed is considerably slower than the pace of most surgeons. With a human camera holder, the camera is moved quickly to the area of interest and then slowed down to focus more closely; in a sense, a human provides a continuously adjusting motion scale in real time. The AESOP’s voice control interface is a further source of delay: the surgeon must hail the robot with a voice command, provide a movement instruction, wait for the field of view to change, and then tell the robot to stop when the correct view is achieved.

Initially, the camera system presented in this work utilized the AESOP. However, this design was abandoned because the robot did not provide sufficiently fine control over the camera’s movement. The pan-tilt-zoom camera used in the current design offers adequate camera control for this preliminary work, but it cannot be used in actual laparoscopic procedures. Therefore, we are currently constructing a new robotic positioner for the camera that can be used in laparoscopic procedures and will be compatible with angled scopes. We are also investigating techniques to improve the flexibility of the instrument tracking. Note that it may be possible to integrate the camera system with an existing surgical robot, such as the da Vinci Surgical System.

Other research groups have created automated camera control systems. Similar to our system, these groups used various forms of image-based tracking to identify the tips of the surgical tools.6–13 In general, the testing of these systems was limited to subjective feedback about their performance, and the rationales for their chosen camera movement schemes were not fully explained. Omote et al.14 created a system similar to ours, and it was tested in 20 cases of laparoscopic cholecystectomy. Their evaluation was thorough and favorable, but it focused on the overall procedure instead of individual surgical tasks and corresponding camera movements. We believe that specific, objective information about the behavior of the automated camera system is needed to accurately evaluate its performance and devise improvements to its design.

The presented camera system currently has a very basic movement scheme that essentially keeps both instruments in the camera’s view at all times. This corresponds to the simplest instruction that could be given to a novice camera holder. Such a movement scheme may be adequate in some situations, but it is not ideal or sufficient for many tasks. For example, in many dissections (such as cholecystectomies and appendectomies), one instrument is kept out of the field of view (providing traction) while the other instrument dissects tissue.

We recognize that there are some limitations in both the work of the cited research groups and our presented preliminary work. First, the movement schemes are very simple and thus insufficient to handle all of the various tasks that must be completed during a surgery. In addition, the design and evaluation of the schemes should be performed in a more rigorous manner. Our future work will focus on addressing these limitations. To develop more complex movement schemes that accommodate other types of surgical activities, we will interview surgeons and analyze their responses using techniques from the field of human factors engineering. The results will define new camera control behaviors that will be added to the software of our automated system. In addition, we will perform objective testing (using the modified FLS tasks described in Materials and Methods) to evaluate the system’s movement schemes. By capturing the nuances of camera operation from experts and carefully testing and refining our work, we hope to realize the goal of an autonomous laparoscopic camera control system.

Disclosure Statement

B.W.K., L.A.R., A.K.P., and M.D.K. have applied for a patent (through Wayne State University) that may cover some aspects of the presented system. It is entitled "Intelligent Autonomous Camera Control for Robotics with Medical, Military, and Space Applications" and can be found as WIPO publication number WO 2012/078989.

References

1. Fraser SA, Klassen DR, Feldman LS, Ghitulescu GA, Stanbridge D, Fried GM. Evaluating laparoscopic skills: Setting the pass/fail score for the MISTELS system. Surg Endosc 2003;17:964–967.
2. Fried GM, Feldman LS, Vassiliou MC, et al. Proving the value of simulation in laparoscopic surgery. Ann Surg 2004;240:518–528.
3. McCluney AL, Vassiliou MC, Kaneva PA, et al. FLS simulator performance predicts intraoperative laparoscopic skill. Surg Endosc 2007;21:1991–1995.
4. Vassiliou MC, Ghitulescu GA, Feldman LS, et al. The MISTELS program to measure technical skill in laparoscopic surgery: Evidence for reliability. Surg Endosc 2006;20:744–747.
5. King BW, Reisner LA, Pandya AK, Composto AM, Ellis RD, Klein MD. Demonstration of work towards an autonomous robot for camera control during laparoscopic surgery. J Laparoendosc Adv Surg Tech B Videosc 2013 (submitted for publication).
6. Casals A, Amat J, Laporte E. Automatic guidance of an assistant robot in laparoscopic surgery. Paper presented at the IEEE International Conference on Robotics and Automation, April 22–28, 1996, Minneapolis, MN.
7. Fortney DR. Real-Time Color Image Guidance System. Research Report. Santa Barbara, CA: Department of Electrical and Computer Engineering, University of California, 2000.
8. Ko S-Y, Kim J, Lee W-J, Kwon D-S. Compact laparoscopic assistant robot using a bending mechanism. Adv Robot 2007;21:689–709.
9. Ko S-Y, Kwon D-S. A surgical knowledge based interaction method for a laparoscopic assistant robot. Paper presented at the 13th IEEE International Workshop on Robot and Human Interactive Communication, September 22, 2004, Kurashiki, Okayama, Japan.
10. Lee C, Wang YF, Uecker DR, Wang Y. Image analysis for automated tracking in robot-assisted endoscopic surgery. Paper presented at the 12th IAPR International Conference on Pattern Recognition, October 9–13, 1994, Jerusalem, Israel.
11. Uecker DR, Lee C, Wang YF, Wang Y. Automated instrument tracking in robotically assisted laparoscopic surgery. J Image Guid Surg 1995;1:308–325.
12. Wei G-Q, Arbter K, Hirzinger G. Automatic tracking of laparoscopic instruments by color coding. Paper presented at the Computer Vision, Virtual Reality, and Robotics in Medicine and Medical Robotics and Computer-Assisted Surgery conference, March 19–22, 1997, Grenoble, France.
13. Wei G-Q, Arbter K, Hirzinger G. Real-time visual servoing for laparoscopic surgery: Controlling robot motion with color image segmentation. IEEE Eng Med Biol Mag 1997;16:40–45.
14. Omote K, Feussner H, Ungeheuer A, et al. Self-guided robotic camera control for laparoscopic surgery compared with human camera control. Am J Surg 1999;177:321–324.

Address correspondence to:
Brady W. King, PhD
Department of Electrical and Computer Engineering
Wayne State University
5050 Anthony Wayne Drive, Room 3100
Detroit, MI 48202

E-mail: [email protected]
