Interactive Training and Operation Ecosystem for Surgical Tasks in Mixed Reality

Ehsan Azimi¹, Camilo Molina², Alexander Chang¹, Judy Huang², Chien-Ming Huang¹, and Peter Kazanzides¹

¹ Dept. of Computer Science, Johns Hopkins University, Baltimore MD 21218, USA
² Dept. of Neurosurgery, Johns Hopkins Hospital, Baltimore MD 21287, USA

{eazimi1,pkaz}@jhu.edu

Abstract. Inadequate skill in performing surgical tasks can lead to medical errors and cause avoidable injury or death to patients. On the other hand, there are situations where a novice surgeon or resident does not have access to an expert while performing a task. We therefore propose an interactive ecosystem for both training and practice of surgical tasks in mixed reality, which consists of authoring of the desired surgical task, immersive training and practice, assessment of the trainee, and remote coaching and analysis. This information-based ecosystem will also provide the data to train machine learning algorithms. Our interactive ecosystem involves a head-mounted display (HMD) application that can provide feedback as well as audiovisual assistance for training and live clinical performance of the task. In addition, the remote monitoring station provides the expert with a real-time view of the scene from the user's perspective and enables guidance by providing annotation directly on the user's scene. We use bedside ventriculostomy, a neurosurgical procedure, as our illustrative use case; however, the modular design of the system makes it expandable to other procedures.

Keywords: Surgical Training and Assessment · Medical Augmented Reality · Surgical Simulation and Modeling · Artificial Intelligence

1 Introduction

The complexity of medical interventions is continuously increasing. However, due to working-hour restrictions, increasing costs, and ethical concerns regarding patient safety, clinical training opportunities are continuously decreasing [18]. Similarly, although modern tertiary care hospitals are built upon a hierarchy of novice-to-expert training levels, a novice resident may not always have a senior resident or attending immediately available for help in the setting of an emergency. This is more likely in rural areas, where there are often not enough skilled surgeons to provide expert supervision or assistance.

The above factors present an opportunity to develop a practical technological solution that can support training and operation, including the ability to instantaneously connect a novice to a remote expert. Computer-based applications and augmented reality (AR) systems are increasingly popular to support the training of medical professionals, as they can result in new educational opportunities [5]. Due to recent technical advances in commercial optical see-through head-mounted display (OST-HMD) devices, there has been a considerable increase in their use for augmented reality applications, and their specifications have become suitable for medical applications [12,15].

HMDs have been used in the medical domain for treatment, education, rehabilitation, and surgery [6,7,9]. An early HMD research effort focused on giving the surgeon an unobstructed view of the anatomy, rendered inside the patient's body [17]. With the advent of Google Glass around 2013, many research groups started to explore using an HMD as a replacement for traditional radiology monitors [2,19]. The use of an HMD to visualize volumetric medical data for neurosurgery planning was presented in [8]; together with a haptic device, the system allows the user to scroll through the image slices more intuitively. More recently, a generalized real-time streaming system based on OST-HMDs was proposed for image-guided surgeries, including percutaneous screw fixation of pelvic fractures [13].

Our prior work includes picture-in-picture visualization for neurosurgery navigation on a custom HMD [3,16] and the use of OST-HMDs for training two emergency medical tasks: needle chest decompression and initiating an intravenous line [4]. Our experience with the latter effort led to the conceptualization of the interactive surgical training and operation system described herein. We extend our prior approach by adding modules for content generation, remote monitoring, smart assessment, and procedure analysis. The HMD-based training application is designed to be independent of the training procedure; thus, although originally developed for emergency medical procedures, in this paper we consider ventriculostomy, which is a neurosurgical procedure that involves insertion of a drain within a cerebral ventricle for cerebrospinal fluid diversion for a variety of urgent indications. We then discuss how such technology can change the future of medical training, including by providing training data for machine learning algorithms in an artificial intelligence module.

2 System Architecture

The overall schematic of the training and practice ecosystem is shown in Fig. 1. This structure allows the expert surgeon to intuitively create a training module and the trainees to then use it for practice. It also provides real-time corrective assistance and access to the expert. We describe each element of this system and the way it operates with the other components.

2.1 Training Module Generation

As depicted in Fig. 1A, this authoring tool allows a skilled surgeon to intuitively create his/her desired training module step by step using voice commands, with the resulting workflow and visual elements serialized into a data asset.


Fig. 1: Schematic overview of the interactive surgical training and operation ecosystem and its interconnections: (A) the application for generating tutorials, (B) the HMD-based immersive environment for practice and training, (C) the automatic assessment module, (D) the analysis of the procedure in the OR setting, (E) the remote monitoring station for the expert, and (F) the offline analysis (AI) based on data from multiple trainees.

The dashed line in the figure separating this module from the rest of the system indicates that it does not need to be in real-time communication with its neighboring components.

2.2 HMD-Based Training and Practice

We previously developed a software framework that can provide augmented reality guidance agnostic to the procedure [4]. The workflow of each procedure is abstracted and represented as a sequence of steps, with associated text and visual elements. As shown in Fig. 1B, during training or actual operation of the procedure, the workflow is dynamically loaded and parsed on the OST-HMD. The trainee or practitioner can then use voice commands to go through the training steps and hide or show their desired visualizations. This mode of interaction does not involve a mouse or keyboard and allows the user to focus and perform bi-manual tasks without compromising sterility. In each step, the user can see the corresponding instruction, the correct position and orientation of the tool (registered to the anatomy) if required, as well as additional information such as medical imaging (CT, MRI, etc.) if needed. The training/operation can support an individual user as well as multiple users in a collaborative setting.
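To make the abstraction concrete, the following minimal sketch (in Python for brevity; the HMD application itself is written in C# for Unity) shows how a step-wise workflow of this kind could be loaded and traversed by voice commands. The `steps`/`visuals` asset layout is a hypothetical illustration, not the framework's actual schema.

```python
import json

class WorkflowPlayer:
    """Loads a serialized procedure workflow and steps through it.

    Hypothetical sketch: the real asset schema and voice-command
    plumbing live in the Unity/C# HMD application.
    """

    def __init__(self, asset_path):
        with open(asset_path) as f:
            self.steps = json.load(f)["steps"]  # assumed top-level key
        self.index = 0

    def current_step(self):
        return self.steps[self.index]

    def on_voice_command(self, command):
        # "next"/"previous" advance the workflow; "show"/"hide"
        # toggle the visual overlays attached to the current step.
        if command == "next" and self.index < len(self.steps) - 1:
            self.index += 1
        elif command == "previous" and self.index > 0:
            self.index -= 1
        elif command in ("show", "hide"):
            for visual in self.current_step().get("visuals", []):
                visual["visible"] = (command == "show")
        return self.current_step()
```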


2.3 Smart Assessment

The smart assessment component (Fig. 1C) uses a number of metrics to determine the performance of the trainee and provide real-time feedback during the task. These metrics can include the correct positioning of an instrument or its insertion angle, or even its very presence or motion pattern during a phase of the surgery. This requires software to track the motion of the instrument, which can utilize the HMD cameras, including a depth camera if available, and may be facilitated by the placement of AR tags or other markers on the instrument. In some cases, it may be necessary to use an external tracking system that would then be registered to the HMD. The feedback can be turned on and off based on the skill level and/or the instructor's decision in the training or evaluation phase. It is capable of both warning the user with audiovisual cues and providing guidance to the user for correction.
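In the simplest case, such a metric reduces to a tolerance check on a tracked quantity. The sketch below illustrates this for an insertion-angle metric; the threshold values are assumptions for illustration only.

```python
def assess_insertion_angle(measured_deg, target_deg=90.0,
                           warn_tol=10.0, guide_tol=5.0):
    """Return real-time feedback for one example metric.

    Hypothetical tolerances; the actual thresholds would be set per
    procedure step, skill level, or instructor decision.
    """
    error = abs(measured_deg - target_deg)
    if error > warn_tol:
        return {"cue": "warning", "error_deg": error}   # audiovisual alarm
    if error > guide_tol:
        return {"cue": "guidance", "error_deg": error}  # e.g., arrow toward pose
    return {"cue": "ok", "error_deg": error}
```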

2.4 Procedure Analysis

There are many instances where an operation fails despite a highly skilled surgeon, because the logistics in the OR also contribute to its success. This module, which is depicted in Fig. 1D, collects data from different sources and sensors in the simulation or operative field, and provides post-training or post-surgery analysis. It differs from Smart Assessment in that it examines the 'surgery' in the context of the OR setting rather than merely the 'surgeon' as an individual. The feedback is not provided during the task; instead, it focuses on overall performance and the events in the OR, comparing them to a database of similar procedures or in collaboration with others. Collected data can include eye gaze, the surgeon's location and head motion, and other pertinent sensor data during the surgery, in addition to the data record from the surgery. This data can also be provided to the Artificial Intelligence component (Fig. 1F) for longer-term improvements, including training of machine learning algorithms using data from multiple users. This component also processes the data provided by the expert in the remote station, as well as the smart assessment of the trainee's performance.

2.5 Remote Monitoring Station

As illustrated in Fig. 1E, an expert, who is typically a skilled surgeon, monitors the trainee's actions and is able to provide real-time feedback in the form of direct annotation on the trainee's screen, as well as other types of audiovisual cues. This differs from a video call because the expert has a first-person view of the field, in 3D, and can provide feedback or annotation in the HMD's immersive 3D environment. Furthermore, communication is bidirectional and the trainee can also ask questions of the expert. The remote monitoring station can be advantageous in situations where additional expert assistance is critical; for example, when a particular phase of the surgery is too complex for automated assessment (e.g., via the Smart Assessment module) or when there is a scarcity of experts in a particular field, which is especially common in rural areas. The expert is able to evaluate and score the user's performance for each step of the procedure. The platform subsequently saves the record and sends it to the analyzer (Section 2.4) and artificial intelligence (Section 2.6) modules for further processing.

2.6 Artificial Intelligence

Recorded data from multiple users is provided to the Artificial Intelligence (AI) module for the purpose of process improvement. This data can be used to further train machine learning algorithms within the Smart Assessment and Procedure Analysis modules, thereby increasing their ability to provide feedback during or immediately after the procedure. For example, the gaze tracking data can indicate where on the HMD screen and in the OR the surgeon is paying more attention; AI methods may be able to use this data to identify a novice surgeon and provide additional guidance.

3 Implementation

To verify our proposed surgical training and practice system, we selected ventriculostomy, or external ventricular drainage, which is a surgical procedure to alleviate raised intracranial pressure by inserting a tube through the skull into the ventricles to divert cerebrospinal fluid. It is done by surgically penetrating the skull, dura mater, and brain such that the ventricle of the brain is accessed. This is one of the most frequent and standardized procedures in neurosurgery. However, many first and subsequent punctures miss the target, and suboptimal placement or misplacement of the catheter is common [14]. The trajectory of the catheter must be perpendicular to the skull at the entry point. Such 3D geometrical constraints, along with the described complexities, make this procedure an ideal candidate for augmented-reality-mediated 3D visualization to enable more accurate targeting and higher success rates. Additionally, ventriculostomy, like most neurosurgical procedures, can be conceptualized and segmented into critical task components, which can be simulated independently or in conjunction with other modules to recreate the experience of a complex neurosurgical procedure [10]. Our proposed system can be used both for training on a mannequin and during the real operation.

3.1 System Setup

The system setup is shown in Fig. 2b, which is also a snapshot of the training environment taken from the remote station. To simulate this neurosurgical procedure, a skull model (A) was fixed by a Mayfield clamp (B). Two ARTags (C) were attached to the skull so that the camera can localize the designated landmarks. Two markers were used so that if one is occluded or out of the field of view, the camera can still see the other one.


Fig. 2: (a) Representation of display-anchored images and text (top), and feature-anchored objects (bottom); (b) remote monitoring station and system setup.

The HMD software was developed using the cross-platform game engine Unity (https://unity3d.com/), along with C#. It was then deployed to the Microsoft HoloLens (https://www.microsoft.com/en-us/hololens). The software on the remote monitoring station was developed using Python. It is cross-platform and can run on a PC, tablet, or other device.

The visual elements displayed on the HMD can be categorized based on their property (text, image, 3D object) and on their display space (display-anchored, feature-anchored), as shown in Fig. 2a. The location of a display-anchored object is defined with respect to 2D screen coordinates; for the HMD wearer, it remains fixed despite the user's head movement. For feature-anchored overlays, display calibration is performed so that the trainee can visualize the overlay in the correct pose with respect to the real world [11].
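A minimal sketch of this distinction, written in Python for brevity (the actual rendering is done in Unity/C#) and assuming a hypothetical 4x4 head-to-world transform from the tracker:

```python
import numpy as np

def resolve_overlay_pose(overlay, head_pose_world):
    """Return the world pose of a visual element.

    Sketch only: `overlay` is a hypothetical dict and
    `head_pose_world` a 4x4 head-to-world transform from tracking.
    """
    if overlay["anchor"] == "display":
        # Display-anchored: fixed offset in front of the eyes, so it
        # follows the head and stays at the same screen position.
        return head_pose_world @ overlay["pose_in_display"]
    else:
        # Feature-anchored: fixed in the world (e.g., on the skull),
        # placed via the display calibration described above [11].
        return overlay["pose_in_world"]
```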

Fig. 3: Three components of the proposed ecosystem: (a) tutorial generation by an expert (top) and angular calibration of the needle (bottom); (b) HMD-based AR assistance for needle pose correction.


3.2 Tutorial Generation

The training/operation module relies on a serialized data asset (JSON) that encodes the workflow. This module helps an expert surgeon create this data asset. The expert wears the HMD and, using voice commands, generates the instructions step by step, adds images (display-anchored) using the front camera of the HoloLens, and uses a marker to create a landmark (feature-anchored) for the desired step, as shown in Fig. 3a.
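The concrete schema of the data asset is internal to the system; purely as an illustration, a serialized step might plausibly take a form such as the following, where all field names are assumptions:

```python
import json

# Hypothetical workflow asset for one ventriculostomy step; the
# real schema produced by the authoring tool is not specified here.
workflow = {
    "procedure": "ventriculostomy",
    "steps": [
        {
            "instruction": "Insert the catheter perpendicular to the skull",
            "visuals": [
                {"type": "image", "anchor": "display", "file": "step1.png"},
                {"type": "3d_object", "anchor": "feature",
                 "landmark": "entry_point", "model": "catheter_target.obj"},
            ],
        },
    ],
}

with open("ventriculostomy.json", "w") as f:
    json.dump(workflow, f, indent=2)
```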

3.3 HMD-Based Training and Operation

Once the tutorial is generated, the trainee can use it as shown in Fig. 3b. Here, the HMD view of the trainee for one sample step is depicted, where the blue text is the instruction for this step, green indicates the correct orientation of the needle, and the yellow arrow guides the user toward the correct pose for the needle. The top right of the screen shows the orientation error and time on task, as well as a red warning bar. Medical imaging data (CT, MRI, etc.) are also available and can be loaded by the user's voice command, eliminating the need to look at a separate monitor.
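The displayed orientation error can be interpreted as the angle between the tracked needle axis and the target axis. A small sketch of that computation, assuming both axes are available as 3D direction vectors (not the system's actual code):

```python
import numpy as np

def orientation_error_deg(needle_dir, target_dir):
    """Angle (degrees) between the tracked needle axis and the
    target axis; both are 3D direction vectors."""
    a = needle_dir / np.linalg.norm(needle_dir)
    b = target_dir / np.linalg.norm(target_dir)
    # Clip guards against round-off pushing the dot product past +/-1.
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

# Example: roughly a 10-degree tilt away from the skull normal.
print(orientation_error_deg(np.array([0.17, 0.0, 0.985]),
                            np.array([0.0, 0.0, 1.0])))
```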

3.4 Remote Monitoring Station

A snapshot of the remote monitoring station is shown in Fig. 2b. The GUI enables the expert to stream the trainee's or operator's view and send audiovisual messages to the HMD user. It communicates with the HMD wirelessly through the Internet, using TCP/IP to receive HMD camera frames and UDP to provide feedback or annotate the trainee's view on the HMD. The delay is approximately 50–100 ms (mostly for streaming images) in a local network setup, which is sufficient for this communication.
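A condensed sketch of this split transport on the remote-station side, with hypothetical port numbers and a simple length-prefixed frame format (the actual wire protocol is not detailed here):

```python
import socket
import struct

HMD_IP = "192.168.1.50"   # hypothetical address of the HoloLens

# TCP: reliable, ordered stream for the (larger) camera frames.
tcp = socket.create_connection((HMD_IP, 9000))

def receive_frame():
    """Read one length-prefixed JPEG frame from the HMD."""
    header = tcp.recv(4, socket.MSG_WAITALL)
    (length,) = struct.unpack("!I", header)
    return tcp.recv(length, socket.MSG_WAITALL)

# UDP: low-latency, loss-tolerant channel for expert annotations.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_annotation(x, y, text):
    """Send a screen-space annotation to be drawn on the HMD view."""
    udp.sendto(f"{x},{y},{text}".encode(), (HMD_IP, 9001))
```

Using TCP for frames and UDP for annotations matches the stated design: a dropped annotation packet can simply be resent by the expert, whereas frames benefit from ordered delivery.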

3.5 Smart Assessment

In the current implementation, Smart Assessment is integrated with the remote station and handles the tool tracking and networking. In ventriculostomy, it measures the error in the pose of the catheter at the entry point and assists the trainee to correct the pose using audiovisual cues. Tool tracking is done by computer vision, which calculates the sagittal and coronal angles of the tool to determine if the needle is perpendicular to the skull. The needle is detected using a Hough line detector and segmentation in HSV color space. Measurements are calibrated using a goniometer as ground truth and interpolation (Fig. 3a).
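A minimal OpenCV sketch of this detection pipeline is given below; the HSV bounds and Hough parameters are placeholder assumptions rather than the calibrated values used in the system:

```python
import cv2
import numpy as np

def needle_angle_deg(frame_bgr):
    """Detect the needle and return its in-plane angle in degrees.

    Sketch of the described pipeline: HSV segmentation to isolate the
    needle, then a probabilistic Hough transform to find its axis.
    Color bounds and Hough parameters below are placeholders.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (90, 80, 80), (130, 255, 255))  # assumed bounds
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    # Take the longest detected segment as the needle axis.
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))
```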

3.6 Procedure Analysis

Eye-gaze trackers (Pupil Labs, https://pupil-labs.com/) are mounted on the HMD. They depict the real-time gaze of the user to the remote station, aggregate both 2D and 3D gaze data in the form of a heat map, and save it for analysis. The location of the operators/trainees in the OR is also tracked and saved. Other relevant data from the surgery are also fed to this module. Another method in the application logs each user's performance and its corresponding score, based on the smart assessment metrics or expert feedback, and saves it in a separate JSON file.
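As an illustration, 2D gaze samples can be aggregated into a heat map with a simple histogram; the screen size and grid resolution below are assumed values:

```python
import numpy as np

def gaze_heatmap(gaze_xy, width=1280, height=720, bins=(64, 36)):
    """Aggregate 2D gaze samples into a heat-map grid.

    `gaze_xy` is an (N, 2) array of gaze points in pixels; screen
    size and bin counts are assumed values for illustration.
    """
    heat, _, _ = np.histogram2d(
        gaze_xy[:, 0], gaze_xy[:, 1],
        bins=bins, range=[[0, width], [0, height]])
    return heat / max(heat.sum(), 1)  # normalize to a distribution

# Example: 1000 synthetic fixations clustered near screen center.
samples = np.random.normal([640, 360], [100, 80], size=(1000, 2))
hm = gaze_heatmap(samples)
```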

3.7 Artificial Intelligence

The Artificial Intelligence component will rely on data collected during the training/operation procedure and is an item for future work. For example, we can attempt to assess surgical skill from a heat map of the gaze-tracking data, as other studies have suggested that skill level and eye-gaze can be related [1].
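Since this component is future work, the following is purely a speculative sketch of one possible approach: flattening gaze heat maps into feature vectors and fitting an off-the-shelf classifier to expert/novice labels, shown here on synthetic stand-in data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_skill_classifier(heatmaps, labels):
    """Fit a simple expert-vs-novice classifier on gaze heat maps.

    `heatmaps` is a list of 2D arrays (one per trial) and `labels`
    the corresponding 0 = novice / 1 = expert annotations. Purely
    illustrative; the paper leaves the AI module to future work.
    """
    X = np.array([h.ravel() for h in heatmaps])  # flatten grids
    return LogisticRegression(max_iter=1000).fit(X, labels)

# Usage sketch with synthetic stand-in data:
fake_maps = [np.random.rand(64, 36) for _ in range(20)]
fake_labels = np.random.randint(0, 2, size=20)
clf = fit_skill_classifier(fake_maps, fake_labels)
print(clf.predict(fake_maps[0].ravel()[None, :]))
```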

4 Discussion

Modern surgical training has recognized the value of limiting weekly residency training hours. However, the limit in training hours, combined with the ever-increasing amount of medical knowledge an operator must master, creates a significant challenge: how does one train residents to perform complex surgical procedures safely and independently if the time allotted to train them is decreased? In order to bridge the gap of decreased clinical encounters, and to promote the safe delivery of care and the trainees' overall well-being, a solution that is capable of both training and supervising surgeons in training is necessary. The proposed system, presented in the context of a bedside ventriculostomy, has the potential not only to train residents how to properly execute this procedure but also to serve as a real-time platform connecting the training resident to an expert during a live clinical scenario.

If successful, the present platform can be applied to a large variety of bedside sterile clinical procedures that residents are expected to perform independently early in their training. Examples of other applicable procedures that could be simulated for training or supervised during live clinical execution include lumbar punctures, lumbar drains, chest tube insertion, central line insertion, intraosseous line insertion, arterial line placement, intubation, pleurocentesis, etc. All of the latter procedures, when taught conventionally, can be broken down into discrete steps, which makes them well suited to be adapted and presented in our training simulation platform. Similarly, all of the latter procedures can benefit from a remote clinical expert who can help a novice troubleshoot a difficult procedure by assisting with the small nuances that can only come from experience.

5 Conclusion

In this work, a comprehensive system for training and performing surgical tasks in mixed reality was introduced. The system includes an authoring module, a training and practice setup, smart assessment, and a remote station for the expert. These have been implemented for bedside ventriculostomy, and our next step is to conduct a user study with neurosurgery residents. We also plan to extend this platform to other types of procedures that can contribute to resident training and education. Moreover, the capture of data from the HMD, sensors, and remote expert further enables process improvements via data mining and training of artificial intelligence.

6 Acknowledgements

Patrick Myers, Benjamin Pikus, Prateek Bhatnagar, and Allan Wang assisted with the software development.

References

1. Ahmidi, N., Hager, G.D., Ishii, L., Fichtinger, G., Gallia, G.L., Ishii, M.: Surgical task and skill classification from eye tracking and tool motion in minimally invasive surgery. In: Intl. Conf. on Medical Image Computing and Computer-Assisted Intervention (MICCAI). pp. 295–302. Springer (2010)

2. Armstrong, D.G., Rankin, T.M., Giovinco, N.A., Mills, J.L., Matsuoka, Y.: A heads-up display for diabetic limb salvage surgery: a view through the Google looking glass. Journal of Diabetes Science and Technology 8(5), 951–956 (2014)

3. Azimi, E., Doswell, J., Kazanzides, P.: Augmented reality goggles with an integrated tracking system for navigation in neurosurgery. In: Virtual Reality Short Papers and Posters (VRW). pp. 123–124. IEEE (2012)

4. Azimi, E., Winkler, A., Tucker, E., Qian, L., Doswell, J., Navab, N., Kazanzides, P.: Can mixed-reality improve the training of medical procedures? In: IEEE Engin. in Medicine and Biology Conf. (EMBC). pp. 112–116 (July 2018)

5. Barsom, E.Z., Graafland, M., Schijven, M.P.: Systematic review on the effectiveness of augmented reality applications in medical training. Surgical Endoscopy 30(10), 4174–4183 (Oct 2016)

6. Chen, L., Day, T., Tang, W., John, N.W.: Recent developments and future challenges in medical mixed reality. In: IEEE Intl. Symp. on Mixed and Augmented Reality (ISMAR). pp. 123–135 (2017)

7. Cutolo, F., Meola, A., Carbone, M., Sinceri, S., Cagnazzo, F., Denaro, E., Esposito, N., Ferrari, M., Ferrari, V.: A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom. Computer Assisted Surgery 22(1), 39–53 (2017)

8. Eck, U., Stefan, P., Laga, H., Sandor, C., Fallavollita, P., Navab, N.: Exploring visuo-haptic augmented reality user interfaces for stereo-tactic neurosurgery planning. In: Intl. Conf. on Medical Imaging and Augmented Reality (MIAR). pp. 208–220. Springer (2016)

9. Kersten-Oertel, M., Jannin, P., Collins, D.L.: DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Transactions on Visualization and Computer Graphics 18(2), 332–352 (2012)

10. Lemole Jr, G.M., Banerjee, P.P., Luciano, C., Neckrysh, S., Charbel, F.T.: Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback. Neurosurgery 61(1), 142–149 (2007)


11. Qian, L., Azimi, E., Kazanzides, P., Navab, N.: Comprehensive tracker based display calibration for holographic optical see-through head-mounted display. arXiv preprint arXiv:1703.05834 (2017)

12. Qian, L., Barthel, A., Johnson, A., Osgood, G., Kazanzides, P., Navab, N., Fuerst, B.: Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display. Intl. J. of Computer Assisted Radiology and Surgery (IJCARS) 12(6), 901–910 (2017)

13. Qian, L., Unberath, M., Yu, K., Fuerst, B., Johnson, A., Navab, N., Osgood, G.: Towards virtual monitors for image guided interventions: real-time streaming to optical see-through head-mounted displays. arXiv preprint arXiv:1710.00808 (2017)

14. Raabe, C., Fichtner, J., Beck, J., Gralla, J., Raabe, A.: Revisiting the rules for freehand ventriculostomy: a virtual reality analysis. Journal of Neurosurgery pp. 1–8 (2017)

15. Rolland, J.P., Fuchs, H.: Optical versus video see-through head-mounted displays in medical visualization. Presence: Teleoperators and Virtual Environments 9(3), 287–309 (2000)

16. Sadda, P., Azimi, E., Jallo, G., Doswell, J., Kazanzides, P.: Surgical navigation with a head-mounted tracking system and display. Studies in Health Technology and Informatics 184, 363–369 (2012)

17. Sauer, F., Khamene, A., Bascle, B., Rubino, G.J.: A head-mounted display system for augmented reality image guidance: towards clinical evaluation for iMRI-guided neurosurgery. In: Intl. Conf. on Medical Image Computing and Computer-Assisted Intervention (MICCAI). pp. 707–716. Springer-Verlag (2001)

18. Stefan, P., Habert, S., Winkler, A., Lazarovici, M., Furmetz, J., Eck, U., Navab, N.: A mixed-reality approach to radiation-free training of C-arm based surgery. In: Intl. Conf. on Medical Image Computing and Computer-Assisted Intervention (MICCAI). pp. 540–547. Springer (2017)

19. Yoon, J.W., Chen, R.E., Han, P.K., Si, P., Freeman, W.D., Pirris, S.M.: Technical feasibility and safety of an intraoperative head-up display device during spine instrumentation. Intl. J. of Med. Robotics and Comp. Assisted Surg. 13(3) (2017)