Abstract
Purpose
The increased adoption of robotic surgery has been accompanied by increased reports of complications, largely due to the limited operative view and lack of tactile feedback. These obstacles, which seldom arise in open surgery, are challenging for beginner surgeons. To enhance the safety of robotic surgery, we created an augmented reality (AR) model of the organs surrounding the thyroid gland and tested the AR model's applicability in robotic thyroidectomy.
Methods
We created AR images of the thyroid gland, common carotid arteries, trachea, and esophagus using preoperative CT images of a thyroid carcinoma patient. For a preliminary test, we overlaid the AR images on a 3-dimensionally printed model at 5 different angles and evaluated the accuracy of registration using the Dice similarity coefficient. We then overlaid the AR images on the real-time operative view during robotic thyroidectomy.
Introduction
In 2000, the da Vinci Surgical System (Intuitive Surgical Inc., Sunnyvale, CA, USA) was approved by the U.S. Food and Drug Administration (FDA) for general laparoscopic surgery. Compared to endoscopic remote-access surgery, robotic surgery offers 3-dimensional (3D) imaging, sophisticated EndoWrist movement, tremor reduction, and excellent ergonomics. Following its adoption in 2007, robotic thyroidectomy began to replace endoscopic remote-access approaches [1,2,3,4,5].
Despite these advantages, robotic thyroidectomy is challenging for inexperienced surgeons for several reasons, including high cost, limited training opportunities, and, most importantly, safety issues. During robotic thyroidectomy, surgeons cannot use palpation. Moreover, identifying neighboring structures covered by tissue is often difficult, potentially leading to severe injury. Serious complications associated with robotic thyroidectomy have been reported, including brachial plexus injury, tracheal perforation, and flap perforation [6,7,8,9]. Owing to these safety concerns, the FDA revoked the use of the robot in thyroid surgery in 2011 [10].
Within the medical field, augmented reality (AR) is an image-guided technology in which computer-generated images are superimposed onto the live video feed of the surgical view to provide a composite view in real time. Incorporating AR into laparoscopic or robotic surgery helps the surgeon perceive anatomical structures intuitively and identify structures covered or hidden by surrounding tissue. Although there have been attempts to introduce AR into laparoscopic surgery, manufacturers of robotic systems have not yet released an AR-integrated robotic system; thus, AR is not yet readily available.
In this study, we created AR images using preoperative CT images and developed a program to control the AR images. We then superimposed the generated AR images onto the real-time operative view during robotic thyroidectomy.
Methods
The patient was a 45-year-old female diagnosed with papillary thyroid carcinoma. We performed a preoperative CT scan (IQon Spectral CT, Philips Healthcare, Best, the Netherlands), and the patient underwent left thyroid lobectomy using a bilateral axillo-breast approach with the da Vinci Si system (Intuitive Surgical Inc., Sunnyvale, CA, USA) on August 7, 2017. This study was approved by the Institutional Review Board of Seoul National University (approval number: H-1710-022-889).
We created a 3D computer-aided design (CAD) model of the target organs (thyroid glands, common carotid arteries, trachea, and esophagus) from the patient's CT DICOM files using the open-source software Seg3D (v.2.2.1, NIH Center for Integrative Biomedical Computing, University of Utah Scientific Computing and Imaging Institute, Salt Lake City, UT, USA). The functions of this software include visualization, segmentation, 3D reconstruction, and quantification of DICOM data. For 3D volume segmentation, we used thresholding on the Hounsfield units (HUs) of the organs: 105–452 HU for the thyroid glands and common carotid arteries, −2,676 to −242 HU for the trachea, and −590 to 405 HU for the esophagus. The software segmented the target organs from each CT section to create 3D reconstructed images. Fig. 1 shows the overall procedure for constructing the 3D models, including segmentation, 3D reconstruction, and creation of the 3D-printed and 3D CAD models.
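The study performed the thresholding in Seg3D; purely as an illustrative sketch (not the authors' implementation), HU-window segmentation can be expressed in a few lines of Python/NumPy. The toy `ct_volume` values below are invented; a real pipeline would load the DICOM series with a library such as pydicom or SimpleITK.

```python
import numpy as np

# Hypothetical toy CT volume in Hounsfield units (invented voxel values).
ct_volume = np.array([[[-700, -300, 120],
                       [200, 450, -100],
                       [-2600, 300, 110]]], dtype=np.int16)

def segment_by_hu(volume, hu_min, hu_max):
    """Return a binary mask of voxels whose HU value lies in [hu_min, hu_max]."""
    return (volume >= hu_min) & (volume <= hu_max)

# Threshold ranges reported in the study (in HU).
HU_RANGES = {
    "thyroid/carotid": (105, 452),
    "trachea": (-2676, -242),
    "esophagus": (-590, 405),
}

masks = {organ: segment_by_hu(ct_volume, lo, hi)
         for organ, (lo, hi) in HU_RANGES.items()}
```

Each mask can then be passed to a marching-cubes step to produce the triangulated surface for 3D reconstruction.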
We converted the 3D reconstruction into the STL (Standard Tessellation Language) file format, the standard for triangulated representation of a 3D CAD model. We then developed image registration software in MATLAB (MATLAB R2017a, MathWorks Inc., Natick, MA, USA) to overlay the 3D CAD model on the intraoperative image. The software's functions included zoom-in, zoom-out, translation, roll rotation, pitch rotation, yaw rotation, and color/transparency adjustment.
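The registration software itself was written in MATLAB; as an illustration only (function names are ours, not the authors'), the rigid manipulations it exposes, zoom, roll/pitch/yaw rotation, and translation, can be sketched in Python/NumPy as one composed affine transform applied to the STL vertex array:

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Combined rotation R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def transform_vertices(vertices, scale=1.0, rotation=(0.0, 0.0, 0.0),
                       translation=(0.0, 0.0, 0.0)):
    """Apply zoom, roll/pitch/yaw rotation, and translation to an N x 3 vertex array."""
    r = rotation_matrix(*rotation)
    return scale * (vertices @ r.T) + np.asarray(translation)
```

In a manual-registration workflow, an assistant would adjust these parameters interactively until the projected model lines up with the operative video frame.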
To validate the image registration software, we made a 3D-printed model of the target organs using a 3D printer (FDM 3D, Cubicon, Seongnam, Korea) with different-colored filaments. The image registration applied to the 3D-printed model is shown in Fig. 2. Registration performance, that is, the similarity between the 3D CAD model and the 3D-printed model, was quantitatively assessed with the Dice similarity coefficient (DSC) on the 2-dimensional camera view. The DSC is calculated as 2 × |A ∩ B| / (|A| + |B|), where A and B are the 2 objects being compared. The DSC was evaluated at 5 different viewing angles (30°, 45°, 60°, 75°, and 90°).
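On binary 2D masks, the DSC formula above amounts to a few array operations. A minimal sketch (the masks here are toy data, not the study's images):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks of the same shape."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy example: two 4 x 4 masks overlapping in 3 of the foreground pixels.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True              # |A| = 4
b = a.copy(); b[1, 1] = False                                      # |B| = 3
dsc = dice_coefficient(a, b)  # 2*3 / (4+3) ≈ 0.857
```

A DSC of 1.0 indicates perfect overlap between the rendered CAD model and the printed model's silhouette; values near the study's 0.987 indicate near-complete agreement.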
To enable the operative field to be visualized simultaneously on the laptop monitor and the da Vinci monitor, we connected the monitor cable from the robotic system to a laptop computer. During surgery, we used the image registration software to overlay the 3D CAD model on the surgical image displayed on the laptop monitor. The model was controlled and overlaid by an assistant.
Results
Table 1 shows that the mean DSC value was 0.987 ± 0.003, ranging from 0.984 to 0.991 over the 5 repetitions of the test.
Fig. 3 shows the 3D CAD model overlaid on the surgical images during surgery, along with the model's location and degree of translation and rotation. During the initial stage of surgery, the model was overlaid on the muscles to show the approximate location of the organs (Fig. 3B). After the trachea was exposed, the model was overlaid more accurately on the surgical field (Fig. 3D). During surgery, the model was displayed intermittently on the laptop monitor at the surgeon's request, and the surgeon performed the procedures while referencing it. After left thyroid lobectomy, we evaluated the integrity of the adjacent structures using the display on/off function (Fig. 3F).
Discussion
Because AR integrates computer graphics and real images into a single, unified view, it has the potential to overcome the obstacles caused by a limited operative view. A number of trials have applied AR in laparoscopic surgery [11,12,13,14]. However, several technical barriers to AR remain in laparoscopic and robotic surgery.
First, recognition of anatomical landmarks, which is essential for overlaying AR images on the target organs, is often difficult. This difficulty is more pronounced during surgery on intraperitoneal organs, because the target organs move continuously owing to respiration or changes in patient position. Furthermore, applying AR to deformable organs, such as the liver or pancreas, is challenging because the overlaid AR image is likely to separate from the operative image when deformation occurs. To avoid such difficulties, nondeformable organs such as the kidney or adrenal gland have been used as target organs in AR registration [15]. Alternatively, navigation aids have been used to guide manual registration [16]. For deformable organs, AR is of limited applicability unless the registration can respond in real time to changes in the organs' shape. Several studies have suggested adaptable registration methods for deformable organs; one used the liver edges as landmarks in an automated delineation method to overlay an AR image on the liver during laparoscopic liver surgery [17]. Moreover, the application of AR is still limited in clinical settings because it significantly increases surgical time and registration remains inaccurate.
To our knowledge, this is the first study to apply AR in robotic thyroidectomy. We selected thyroidectomy to test AR in robotic surgery because of its popularity in Korea and its technical applicability. Robotic thyroidectomy is one of the most commonly performed robotic procedures owing to its cosmetic advantages over conventional open thyroidectomy [5]. It was technically straightforward to apply AR to this procedure because, although the thyroid gland is deformable, the procedure required total resection of a thyroid lobe without partial preservation, so there was no need to trace the thyroid gland itself. Furthermore, the neighboring organs (common carotid arteries, trachea, and esophagus), which often lie beyond the operative view and must be protected during surgery, are nondeformable and easy to visualize in AR.
Before applying the AR images to the real-time surgical image, we superimposed them on the 3D-printed model and quantitatively evaluated the accuracy of manual registration using the DSC. The DSC value was satisfactory compared to a previous study [18]. We then overlaid the AR images on the real-time surgical view and recorded the AR image locations that yielded accurate registration. This location information may be useful for developing automatic or semiautomatic registration algorithms.
This study has several limitations. First, the AR images were overlaid manually. Manual registration is advantageous because, compared to automatic registration, it is easy to apply and can be certified as a clinical product. However, automatic registration is more attractive because it is convenient and reduces surgical time. In laparoscopic surgery, automatic registration methods that recognize landmarks using point-based, surface-based, or volume-based approaches are evolving rapidly. By recognizing easily identifiable landmarks such as the trachea or thyroid cartilage, automatic registration may therefore become possible in robotic thyroidectomy.
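The point-based approach mentioned above typically reduces to a least-squares rigid fit between paired landmark coordinates. As a sketch only (this is the classic Kabsch/SVD solution, not part of this study's software), it could look like:

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch/SVD least-squares rigid fit: find R, t so that dst ≈ src @ R.T + t.

    src, dst: N x 3 arrays of paired landmark points (N >= 3, not collinear).
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_mean).T @ (dst - dst_mean)   # 3 x 3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_mean - src_mean @ r.T
    return r, t
```

Given a few landmarks such as points on the trachea or thyroid cartilage identified in both the CT model and the camera frame, the recovered rotation and translation would place the CAD model in the operative view without manual adjustment.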
Second, we used a separate monitor to operate the AR, and the surgeon had to pause the surgery to view the AR image. This was unavoidable because integrating AR into the robotic monitor is currently impossible owing to a licensing issue. Because several companies are developing robotic surgical systems, such integration will most likely become possible in the future.
Lastly, the current technology lacks a tracking system. We could not update the AR image with camera motion and had to reset the image registration whenever the operative angle changed. AR images that follow the camera could be achieved with an optical tracking system or by tracking anatomical landmarks [19]. At present, however, tracking technologies are impractical because the equipment is costly and requires extensive calibration, a main cause of inaccurate registration [20].
In conclusion, the organs that must be protected during robotic thyroidectomy were clearly visualized using AR based on preoperative CT images. Although AR in robotic thyroidectomy currently has limited clinical applicability owing to technical and commercial constraints, AR in robotic surgery has several advantages that may enhance surgical safety.
ACKNOWLEDGEMENTS
This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (Ministry of Science, ICT & Future Planning, NRF-2016R1E1A1A01942072).
References
1. Kang SW, Jeong JJ, Nam KH, Chang HS, Chung WY, Park CS. Robot-assisted endoscopic thyroidectomy for thyroid malignancies using a gasless transaxillary approach. J Am Coll Surg. 2009; 209:e1–e7.
2. Terris DJ, Singer MC, Seybt MW. Robotic facelift thyroidectomy: II. Clinical feasibility and safety. Laryngoscope. 2011; 121:1636–1641.
3. Kim HY, Chai YJ, Dionigi G, Anuwong A, Richmon JD. Transoral robotic thyroidectomy: lessons learned from an initial consecutive series of 24 patients. Surg Endosc. 2018; 32:688–694.
4. Chai YJ, Suh H, Woo JW, Yu HW, Song RY, Kwon H, et al. Surgical safety and oncological completeness of robotic thyroidectomy for thyroid carcinoma larger than 2 cm. Surg Endosc. 2017; 31:1235–1240.
5. Chai YJ, Song J, Kang J, Woo JW, Song RY, Kwon H, et al. A comparative study of postoperative pain for open thyroidectomy versus bilateral axillo-breast approach robotic thyroidectomy using a self-reporting application for iPad. Ann Surg Treat Res. 2016; 90:239–245.
6. Landry CS, Grubbs EG, Warneke CL, Ormond M, Chua C, Lee JE, et al. Robot-assisted transaxillary thyroid surgery in the United States: is it comparable to open thyroid lobectomy? Ann Surg Oncol. 2012; 19:1269–1274.
7. Kandil EH, Noureldine SI, Yao L, Slakey DP. Robotic transaxillary thyroidectomy: an examination of the first one hundred cases. J Am Coll Surg. 2012; 214:558–564.
8. Ban EJ, Yoo JY, Kim WW, Son HY, Park S, Lee SH, et al. Surgical complications after robotic thyroidectomy for thyroid carcinoma: a single center experience with 3,000 patients. Surg Endosc. 2014; 28:2555–2563.
9. Lee S. Robotic thyroidectomy: pros and cons of various surgical approaches. Korean J Endocr Surg. 2015; 15:73–78.
10. Inabnet WB 3rd. Robotic thyroidectomy: must we drive a luxury sedan to arrive at our destination safely? Thyroid. 2012; 22:988–990.
11. Hansen C, Wieferich J, Ritter F, Rieder C, Peitgen HO. Illustrative visualization of 3D planning models for augmented reality in liver surgery. Int J Comput Assist Radiol Surg. 2010; 5:133–141.
12. Simpfendorfer T, Baumhauer M, Müller M, Gutt CN, Meinzer HP, Rassweiler JJ, et al. Augmented reality visualization during laparoscopic radical prostatectomy. J Endourol. 2011; 25:1841–1845.
13. Haouchine N, Dequidt J, Berger MO, Cotin S. Deformation-based augmented reality for hepatic surgery. Stud Health Technol Inform. 2013; 184:182–188.
14. Okamoto T, Onda S, Yanaga K, Suzuki N, Hattori A. Clinical application of navigation surgery using augmented reality in the abdominal field. Surg Today. 2015; 45:397–406.
15. Baumhauer M, Neuhaus J, Fritzsche K, Meinzer HP. The MITK image guided therapy toolkit and its exemplary application for augmented reality guided prostate surgery. In: Dössel O, Schlegel WC, editors. Proceedings of World Congress on Medical Physics and Biomedical Engineering; Sep 7–12, 2009; Munich, Germany. IFMBE Proceedings, vol 25/6.
16. Vemuri AS, Wu JC, Liu KC, Wu HS. Deformable three-dimensional model architecture for interactive augmented reality in minimally invasive surgery. Surg Endosc. 2012; 26:3655–3662.
17. Soler L, Nicolau S, Pessaux P, Mutter D, Marescaux J. Real-time 3D image reconstruction guidance in liver resection surgery. Hepatobiliary Surg Nutr. 2014; 3:73–81.
18. Bodenstedt S, Goertler J, Wagner M, Kenngott H, Müller-Stich B, Dillmann R, et al. Superpixel-based structure classification for laparoscopic surgery. In: Medical Imaging 2016: Image-Guided Procedures, Robotic Interventions, and Modeling; 2016 Feb 28–Mar 1; San Diego, CA.