Augmented Reality for Robotic Surgical Dissection - Final Report

Augmented Reality for Robotic Surgical Dissection

Milind Soman

Research Assistant

Dr. Prasad Shastry

Principal Investigator

Department of Electrical and Computer Engineering

Bradley University

Peoria, IL 61625

2014

CardioVox

Abstract

This project studies and develops a new technology for surgical dissection with the help of robotics. Current technologies are based on 3D reconstruction of specific organs from CT scan images and perform surgeries with a robotic machine equipped with end-effectors.

The proposed technology is based on 3D reconstruction of organs from CT scan images using open-source software. This technique is clean and patient-friendly and could open new possibilities in medical applications. The interaction of robots with the human body is highly innovative and may become a future trend in medicine.

In this project, several methods for 3D reconstruction of human organs such as the heart, lungs, or brain from CT scan or X-ray images were proposed and implemented. This report presents and discusses the results obtained after simulation. The project was sponsored by CardioVox, Inc.

Acknowledgments

I am extremely thankful to Dr. Prasad Shastry for guiding and mentoring me throughout the project.

I would like to express my deepest gratitude to Dr. Anthony Nunez, President and CEO, CardioVox, Inc., Illinois, for giving me such an excellent opportunity to work for him in the field of biomedical engineering and robotics.

I would also like to thank Dr. Elton Mustafaraj from OSF Saint Francis Medical Center, Peoria, IL, for helping me in this project.

I would like to thank God and my parents who always encouraged me and appreciated my work.

TABLE OF CONTENTS

Abstract

Acknowledgments

Chapter 1: Introduction

1.1 Objective

1.2 Augmented Reality

1.3 Use of Augmented Reality in the Medical Field

1.4 Proposed Idea

1.5 Concluding Remarks

Chapter 2: Methodology

2.1 Introduction

2.2 Using Robotics

2.2.1 The da Vinci Surgical System

2.2.2 Difficulties

2.3 Google HATH

2.3.1 Google Glass

2.3.2 Magic Mirror

2.3.3 Difficulties

2.4 Using Matlab

2.5 Conclusions

Chapter 3: Simulations and Results

3.1 Introduction

3.2 3D Slicer

3.3 InVesalius 3.0

3.4 Simulations

3.5 Conclusions

Chapter 4: Summary and Conclusions

References

Appendix

A. Selected Patents

Chapter 1

Introduction

1.1 Objective

The goal of this project is to design a technique for robotic surgical dissection. A new technique must be developed to reduce the human effort involved in surgical dissection. The involvement of robotics in surgical dissection is a current technological advancement in the field of medicine. Robotic surgery is considered minimally invasive surgery, meaning that miniaturized surgical instruments are used for incisions instead of large surgical equipment. The approach also includes head-mounted displays, handheld displays, projection displays, and image-overlay systems. The proposed ideas are discussed further in this report.

1.2 Augmented Reality

Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented), by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real-time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulable. Artificial information about the environment and its objects can be overlaid on the real world. [2]

Fig.1 Augmented Reality in Social Networking [1]

1.3 Use of Augmented Reality in the Medical Field

Augmented reality is gradually being used for more practical purposes beyond providing entertaining digital content to the users of smart devices. One such important use of AR is in the field of medicine, where this technology is expected to play an important role in the future. In September 2013, a shoulder replacement surgery was performed by a team of surgeons from the University of Alabama using Google Glass and the virtual AR platform VIPAAR (Fig.2). It is hoped that similar augmented reality products will be developed and used by doctors in the future.

Fig.2 VIPAAR Technology

This technology is rapidly gaining importance among medical professionals. It can be used to make complex surgical procedures easier, and doctors and other healthcare professionals have applauded it for this ability. With a proper platform, doctors can not only perform surgeries more easily but also provide postoperative care. There are essentially three types of augmented reality displays:

The head mounted display (HMD) is worn on the head or attached to a helmet. This display can resemble goggles or glasses. In some instances, there is a screen that covers a single eye.

The handheld device is a portable computer or mobile smartphone such as the iPhone.

Spatial display makes use of projected graphical displays onto fixed surfaces.

1.4 Proposed Idea

From the previous section it is clear how AR can be helpful and advantageous for medical science. It can be implemented in several ways for robotic surgical dissection. In this section, an idea is proposed to achieve the objective. Basically, the first objective is to obtain a three-dimensional, volume-specific rendering model from CT scan images of a patient's body, and then to perform surgical dissection with its help, either by projecting it or by viewing it on a display. To achieve this, three different ideas are suggested:

Use a robotic surgical dissection machine (the da Vinci Surgical System) with two workstations, one operated by a human and the other by the robot itself, with two end-effectors used for the operation.

Implement a head-mounted display such as Google Glass or other AR glasses to view the 3D reconstructed model, with proper registration points, while the surgeon performs the operation.

Use Matlab to construct a 3D model from an array of CT scan images and project it onto a dummy body with registration points.

Fig.3 Three slices of CT scan image and reconstructed 3D model (InVesalius 3.0)

1.5 Concluding Remarks

In this chapter, the concept of augmented reality and its use in the medical field for robotic surgical dissection have been presented, along with the plans proposed to achieve the objective. These plans are discussed later in this report with their advantages and disadvantages.

Chapter 2

Methodology

2.1 Introduction

In this chapter, as stated earlier, the three ideas suggested to achieve the objective, namely using robotics, using a head-mounted display such as Google Glass, and using Matlab, are discussed with their advantages and disadvantages.

2.2 Using Robotics

Robotic surgery, computer-assisted surgery, and robotically-assisted surgery are terms for technological developments that use robotic systems to aid in surgical procedures. Robotically-assisted surgery was developed to overcome the limitations of minimally-invasive surgery and to enhance the capabilities of surgeons performing open surgery.

In robot-assisted surgery, instead of manipulating the surgical instruments directly, the surgeon controls them in one of two ways: with a telemanipulator or through computer control. A telemanipulator is a remote manipulator that allows the surgeon to perform the normal movements associated with the surgery while the robotic arms carry out those movements, using end-effectors and manipulators to perform the actual surgery on the patient. In computer-controlled systems, the surgeon uses a computer to control the robotic arms and their end-effectors, although these systems can still use a telemanipulator for input. One advantage of the computerized method is that the surgeon does not have to be present in the operating room and can be anywhere in the world, opening the possibility of remote surgery.

Fig.4 Robotic Surgical System Prototype

The following steps are proposed in this project:

Select a programming language (C, C++, or Java) for 3D reconstruction, or any open-source software (3D Slicer, InVesalius 3.0, etc.).

Assign registration points (ribs were selected for this).

Take a probe and touch the ribs.

Record the X, Y, and Z coordinates.

Calculate the spatial area.

Calculate the circumcenter or median.

Use CT scan images as input.

Link the CT scan images to the registration points.

The user-operated robotic machine has a display on which the operator views a 3D model of the organ obtained from the CT scan images.

The user operates the machine with either a joystick or a robotic hand (with actuators) that drives the end-effector of the machine.

The display must be of high resolution to give a clear view of the human anatomy.

The 3D display includes the registration points, which give the operator knowledge of all dimensions and the surroundings of the specific organ.
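The geometric quantities in the steps above (the spatial area and the circumcenter or median of the three probed rib points) can be computed directly from the recorded X, Y, and Z coordinates. The following Python sketch is illustrative only; the report leaves the implementation language open (C, C++, or Java):

```python
import numpy as np

def registration_geometry(A, B, C):
    """Given three probe-touch points on the ribs (x, y, z), return the
    triangle's area, centroid (median point), and circumcenter."""
    A, B, C = (np.asarray(p, dtype=float) for p in (A, B, C))
    u, v = B - A, C - A
    w = np.cross(u, v)                      # triangle normal; |w| = 2 * area
    area = np.linalg.norm(w) / 2.0
    centroid = (A + B + C) / 3.0
    # Standard 3D circumcenter formula (the point equidistant from A, B, C)
    circumcenter = A + (np.dot(v, v) * np.cross(w, u)
                        + np.dot(u, u) * np.cross(v, w)) / (2.0 * np.dot(w, w))
    return area, centroid, circumcenter

area, centroid, cc = registration_geometry((0, 0, 0), (2, 0, 0), (0, 2, 0))
print(area)        # 2.0
print(centroid)    # the median point (2/3, 2/3, 0)
print(cc)          # (1, 1, 0) -- equidistant from all three probe points
```

For a right triangle, as here, the circumcenter falls on the midpoint of the hypotenuse, which is a quick sanity check on the formula.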

A human user operating the robotic system with a 3D (virtual) view of a specific organ in real time, with the registration points and other specifications, is presented in Fig.4. The da Vinci system is one of the best examples of this type of system.

2.2.1 The da Vinci Surgical System

The da Vinci Surgical System is a robotic surgical system made by the American company Intuitive Surgical. Approved by the Food and Drug Administration (FDA) in 2000, it is designed to facilitate complex surgery using a minimally invasive approach and is controlled by a surgeon from a console. The system is commonly used for prostatectomies, and increasingly for cardiac valve repair and gynecologic surgical procedures. According to the manufacturer, the system is called "da Vinci" in part "because Leonardo da Vinci invented the first robot"; da Vinci also used anatomical accuracy and three-dimensional detail in his works.

The da Vinci System consists of a surgeon's console that is typically in the same room as the patient, and a patient-side cart with four interactive robotic arms controlled from the console. Three of the arms are for tools that hold objects and can also act as scalpels, scissors, bovies, or other surgical instruments. To operate, the surgeon uses the console's master controls to maneuver the patient-side cart's three or four robotic arms (depending on the model). The instruments' jointed-wrist design exceeds the natural range of motion of the human hand; motion scaling and tremor reduction further interpret and refine the surgeon's hand movements. The da Vinci System always requires a human operator and incorporates multiple redundant safety features designed to minimize opportunities for human error compared with traditional approaches.

The da Vinci System has been designed to improve upon conventional laparoscopy, in which the surgeon operates while standing, using hand-held, long-shafted instruments, which have no wrists. With conventional laparoscopy, the surgeon must look up and away from the instruments, to a nearby 2D video monitor to see an image of the target anatomy. The surgeon must also rely on his/her patient-side assistant to position the camera correctly. In contrast, the da Vinci System’s ergonomic design allows the surgeon to operate from a seated position at the console, with eyes and hands positioned in line with the instruments. To move the instruments or to reposition the camera, the surgeon simply moves his/her hands. [5]

Fig.5 The da Vinci Surgical system [6]

By providing surgeons with superior visualization, enhanced dexterity, greater precision and ergonomic comfort, the da Vinci Surgical System makes it possible for more surgeons to perform minimally invasive procedures involving complex dissection or reconstruction. For the patient, a da Vinci procedure can offer all the potential benefits of a minimally invasive procedure, including less pain, less blood loss and less need for blood transfusions. Moreover, the da Vinci System can enable a shorter hospital stay, a quicker recovery and faster return to normal daily activities.

2.2.2 Difficulties

Difficult to implement.

High cost.

Difficult to obtain perfect registration points (the most important requirement).

Writing the computer program needed for 3D reconstruction from CT scan images is a difficult task.

2.3 Google HATH

2.3.1 Google Glass

The second plan proposed to achieve the objective is named Google HATH, which uses the latest technology in the augmented reality field, i.e., Google Glass. Google Glass can be considered a head-mounted display with a built-in camera to capture the 3D view. The following points are proposed:

The goal with Google Glass is to develop an Android-based application that overlays the CT-scan-generated 3D image while operating with the hands.

To use the Android Development Kit as the basic tool kit.

To integrate the 3D view of the CT scan images onto the Google Glass screen.

To establish the hands and fingers as the registration points.

To develop the projection of the 3D CT scan image on the Google Glass screen.

Registration points can be assigned using motion sensors mounted on the overhead light in the operating room (OR).

Integration of the registration points with 3D images.

Fig.6 Google Glass with specifications [8]

2.3.2 Magic Mirror

Using the metaphor of a magic mirror, the mirracle system shows an augmented reality overlay of a CT dataset on a user standing in front of a large screen. The person's movement is tracked using a Microsoft Kinect, which provides a color image and a depth image. Using OpenNI and PrimeSense NITE, the skeleton of a person standing in front of the Kinect can be obtained in real time. A CT dataset is registered to this skeleton, and an augmented reality overlay of the CT and the person is produced. Using focus-and-context visualization, the CT is shown only through a virtual window while the rest of the person remains visible. The Kinect is positioned next to a large screen; when a user stands in front of it, the system acts like a magic mirror that allows seeing inside the body. Such a system could be used for educational purposes, such as teaching anatomy, or for patient consulting. The system is currently a prototype: it does not show the CT of the person in front of the monitor, but the CT of another person.

Fig.7 Magic mirror technology [3]
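The focus-and-context idea behind the magic mirror, showing the CT only inside a virtual window while the rest of the camera image stays untouched, reduces to a masked alpha blend. A toy Python/numpy sketch of that blending step (it omits the Kinect depth data and skeleton tracking that the real system relies on) is:

```python
import numpy as np

def magic_mirror_overlay(camera, ct, window, alpha=0.6):
    """Blend a registered CT slice into a camera frame, but only inside a
    rectangular 'virtual window' (the focus region); the context outside
    the window is left untouched.
    camera, ct: HxW grayscale arrays; window: (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = window
    out = camera.astype(float).copy()
    out[r0:r1, c0:c1] = ((1 - alpha) * out[r0:r1, c0:c1]
                         + alpha * ct[r0:r1, c0:c1].astype(float))
    return out

camera = np.full((4, 4), 100.0)   # stand-in for the Kinect color image
ct     = np.full((4, 4), 200.0)   # stand-in for the registered CT overlay
out = magic_mirror_overlay(camera, ct, window=(1, 3, 1, 3), alpha=0.5)
print(out[0, 0])   # 100.0 -> context pixel, unchanged
print(out[2, 2])   # 150.0 -> focus pixel, blended 50/50
```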

2.3.3 Difficulties

The difficulties faced while following the proposed plan are listed in the following:

Selecting registration points on the hands with the help of a motion sensor is a difficult task.

Some difficulties were faced in implementing the programming for the Android software.

One has to be familiar with the Android platform and the JAVA programming language.

It is difficult to adapt Google Glass for this specific application.

2.4 Using Matlab

2.4.1 Matlab R2008b

MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and FORTRAN.

The steps to attain the objective with the help of Matlab are listed below:

Use a volume-specific 3D rendering model.

Import the model into Matlab.

Take a dummy body (such as a T-shirt) as the patient's body and mark registration points on it.

Take a video of the model with the registration points and match them to the registration points on the CT scan images.

Overlay the 3D volume model on the dummy so that the result is a body with registration points and a 3D reconstructed model.
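Matching the registration points marked on the dummy body to those on the CT scan images is a rigid point-set registration problem. As an illustration only (not the project's actual Matlab code), the standard Kabsch algorithm can recover the rotation and translation from matched marker coordinates:

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch algorithm: find rotation R and translation t minimizing
    ||(src @ R.T + t) - dst||. src, dst: Nx3 matched registration points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)           # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Synthetic check: rotate known marker points 90 degrees about z, translate,
# then recover the transform from the point correspondences alone.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src @ Rz.T + np.array([5.0, -2.0, 3.0])
R, t = rigid_register(src, dst)
print(np.allclose(src @ R.T + t, dst))   # True
```

With noise-free correspondences and non-coplanar markers, the recovered rotation equals the one used to generate the data; with real probe measurements, it gives the least-squares best fit.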

There is also an alternative approach to achieving the objective. The steps are as follows:

Import an array of images into Matlab.

Using the Image Processing Toolbox and a graphical user interface, obtain the 3D reconstructed model from the CT scan images.

Overlay the 3D model on the dummy body as discussed earlier.
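The core of the alternative approach, turning an array of imported slice images into a 3D dataset, can be sketched in a few lines of Python/numpy. The slice data below are random stand-ins, and a real pipeline would pass the thresholded volume to a surface-extraction step such as marching cubes:

```python
import numpy as np

# Hypothetical stand-in for an imported array of CT slices: each slice is a
# 2D intensity image; stacking them along a new axis gives a 3D volume.
slices = [np.random.default_rng(i).integers(0, 1000, size=(8, 8))
          for i in range(5)]
volume = np.stack(slices, axis=0)          # shape: (slices, rows, cols)

# A simple isosurface precursor: mask the voxels above a bone-like threshold.
bone = volume > 700
print(volume.shape)        # (5, 8, 8)
print(bone.dtype)          # bool
```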

2.5 Conclusions

In this chapter, the three proposed plans have been discussed with thorough descriptions of their implementation. The advantages of the proposed ideas and the difficulties faced while implementing them were also discussed.

Chapter 3

Simulations and Results

3.1 Introduction

In this chapter, the simulations performed according to the plans proposed in Chapter 2, and their results, are presented and discussed. The software used for the simulations are 3D Slicer, InVesalius 3.0, and Matlab R2008b.

3.2 3D Slicer

3D Slicer (Slicer) is a free, open-source software package for image analysis and scientific visualization. Slicer is used in a variety of medical applications, including studies of autism, multiple sclerosis, systemic lupus erythematosus, prostate cancer, schizophrenia, orthopedic biomechanics, COPD, cardiovascular disease, and neurosurgery. [7]

3.3 InVesalius 3.0

InVesalius is free medical software used to reconstruct structures of the human body. Based on two-dimensional images acquired using computed tomography or magnetic resonance imaging equipment, the software generates virtual three-dimensional models corresponding to anatomical parts of the human body. After reconstructing the DICOM images three-dimensionally, the software allows the generation of STL (stereolithography) files, which can be used for rapid prototyping.
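As a small illustration of the STL output format mentioned above, the following stdlib-only Python sketch (a hypothetical helper, not part of InVesalius) serializes a list of triangles into an ASCII STL string:

```python
def triangles_to_ascii_stl(name, triangles):
    """Serialize triangles (each a 3-tuple of (x, y, z) vertices) into an
    ASCII STL string, the mesh format exported for rapid prototyping.
    Normals are written as (0, 0, 0); most STL consumers recompute them."""
    lines = [f"solid {name}"]
    for v0, v1, v2 in triangles:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, z in (v0, v1, v2):
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl = triangles_to_ascii_stl("demo", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
print(stl.splitlines()[0])        # solid demo
print(stl.count("facet normal"))  # 1 facet in this single-triangle mesh
```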

3.4 Simulations

The first simulation was done in 3D Slicer, which is open-source software for 3D reconstruction of DICOM images. Fig.8 shows a 3D model of a human face with three slices in the axial, sagittal, and coronal planes. After obtaining the 3D model, its opacity can also be adjusted; the result is shown in Fig.9.

Fig.8 3D model of human face (3D Slicer)

Fig.9 Opacity adjustment of the 3D model

Fig.10 The white matter surface as well as left and right optic nerve

The next simulation was done in InVesalius 3.0, which is also open-source software for 3D reconstruction of DICOM images. The simulation was performed on CT scan images of a patient. The result was a clear 3D reconstructed model, which is one of the objectives of this project.

Fig.11 Simulation and result from InVesalius 3.0

Fig.12 3D model of Human chest with ribs and other parts.

Later, simulations were also performed in Matlab according to the proposed idea: a single image and an array of CT scan images were imported into Matlab, to be converted into a 3D model with the help of a graphical user interface (GUI). Fig.13 shows a single DICOM image in Matlab.

Fig.13 Single DICOM image in Matlab R2008b

Fig.14 An array of DICOM images in Matlab R2008b
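Before a raw CT slice such as those above can be shown in a GUI, its Hounsfield-unit values must usually be mapped to display gray levels with a window/level transform. A hedged Python/numpy sketch of that mapping (the project's own work used Matlab; the soft-tissue defaults here are a common convention, not taken from the report) is:

```python
import numpy as np

def window_level(hu, level=40, width=400):
    """Map CT Hounsfield units to 0-255 display values using a
    window/level transform (soft-tissue defaults: level 40, width 400)."""
    lo, hi = level - width / 2, level + width / 2
    scaled = (np.asarray(hu, float) - lo) / (hi - lo)   # 0..1 inside window
    return (np.clip(scaled, 0, 1) * 255).astype(np.uint8)

slice_hu = np.array([[-1000, 0], [40, 1000]])   # air, water, tissue, bone
print(window_level(slice_hu))   # air->0, water->102, tissue->127, bone->255
```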

3.5 Conclusions

In this chapter, the simulations on DICOM images have been presented. The results from 3D Slicer and InVesalius 3.0 provide the 3D model required by the objective, while the results from Matlab are still incomplete. Work is ongoing to obtain the 3D model from DICOM images through the GUI.

Chapter 4

Summary and Conclusions

One can conclude from the simulations and results that the proposed ideas were only partially successful. A 3D reconstructed model was obtained from open-source software, but work is still ongoing to obtain it through Matlab and a GUI.

Simulations were performed on different DICOM images.

The 3D reconstructed model was obtained from 3D Slicer and InVesalius 3.0.

The results show that DICOM images were successfully imported into Matlab.

The 3D model was not obtained from the GUI, which is an essential step in the proposed idea.

Finally, to achieve the objective discussed above for augmented reality in surgical dissection, it is recommended that practical experiments be continued to obtain a 3D model from Matlab and the GUI.

References

1. http://memeburn.com/2009/10/the-future-of-social-networking-a-concept-investigation/

2. http://www.mathworks.com/company/newsletters/articles/accessing-data-in-dicom-files.html

3. http://en.wikipedia.org/wiki/Augmented_reality

4. http://campar.in.tum.de/Chair/ProjectKinectMagicMirror

5. http://www.augmentedrealitytrends.com/augmented-reality/medical-augmented-reality.html

6. http://www.davincisurgery.com/da-vinci-surgery/da-vinci-surgical-system/

7. http://en.wikipedia.org/wiki/Da_Vinci_Surgical_System

8. http://www.slicer.org/slicerWiki/images/a/ae/Slicer4minute-tutorial_SoniaPujol-mj.pdf

9. http://mhadegree.org/will-google-glass-revolutionize-the-medical-industry/

10. http://newtech.about.com/od/softwaredevelopment/a/Applications-Of-Augmented-Reality.htm

11. United States Patent #6,310,619 B1, "Virtual reality, tissue-specific body model having user-variable tissue-specific attributes and a system implementing the same," by Robert W. Rice.

12. United States Patent #8,515,576 B2, "Surgical robot and robotic controller," invented by Kenneth L. Lipow and Dennis Gregoris.

Appendix - A

Selected Patents
