
Kim, Lee, Sohn, and Mun: In-House Developed Surface-Guided Repositioning and Monitoring System to Complement In-Room Patient Positioning System for Spine Radiosurgery

Abstract

Purpose

This study aimed to develop a surface-guided radiosurgery system customized for a neurosurgery clinic that could be used as an auxiliary system for improving setup accuracy, monitoring patient movement during hypofractionated radiosurgery, and minimizing geometric misses.

Methods

RGB-D cameras were installed in the treatment room and a monitoring system was constructed to perform a three-dimensional (3D) scan of the body surface of the patient and to express it as a point cloud. This could be used to confirm the exact position of the body of the patient and monitor their movements during radiosurgery. The image from the system was matched with the computed tomography (CT) image, and the positional accuracy was compared and analyzed in relation to the existing system to evaluate the accuracy of the setup.

Results

The user interface was configured to register the patient and display the setup image, positioning the setup location by matching the 3D points on the body of the patient with the CT image. The positional differences were within 1 mm (min, −0.21 mm; max, 0.63 mm). Compared with the existing system, the differences were x=0.08 mm, y=0.13 mm, and z=0.26 mm.

Conclusions

We developed a surface-guided repositioning and monitoring system that can be customized and applied in a radiation surgery environment with an existing linear accelerator. It was confirmed that this system could be easily applied for accurate patient repositioning and inter-treatment motion monitoring.

Introduction

Since stereotactic radiosurgery (SRS) delivers high prescription doses to the treatment site, accurate target positioning and patient setup are essential. Linear accelerator (LINAC)-based SRS requires a frame-based stereotactic approach to achieve the required accuracy; however, auxiliary systems for radiation exposure control have been developed for a frameless stereotactic approach [1]. Patient positioning systems based on infrared (IR) reflective markers demonstrate accuracy comparable to setups using X-ray imaging and are currently used in the field of spine radiosurgery [2-4]. A frameless LINAC-based radiosurgery system captures and tracks the movement of a patient during treatment and checks whether the position of the patient is within the acceptable range. Although the X-ray dose used to verify setup accuracy may appear negligible compared with the dose scattered throughout the body during radiosurgery or treatment, surface-guided radiation therapy (SGRT) based on three-dimensional (3D) surface imaging techniques is being developed and utilized without the disadvantage of ionizing radiation, as reported by American Association of Physicists in Medicine Task Group 75 (AAPM TG 75) [5-8].
While it is challenging to directly use surface imaging systems for monitoring spine lesions inside the body, such systems can be widely used for monitoring patient positional changes. The SGRT system records the changes in the motion of the 3D surface of the patient [9]; if a difference in location occurs, the location of the lesion is considered to have changed. An SGRT system using non-ionizing radiation integrates a projector with two or three cameras to obtain a real-time 3D surface of the patient [10]. During radiation therapy and radiosurgery, a stereo vision system or depth camera is used to optically scan the surface, thereby identifying the location of the patient and monitoring it with high spatial resolution [11].
Since SGRT provides real-time information about the body surface and the surgery/treatment site of the patient in the treatment room, it yields more accurate positioning for radiosurgery than laser-based positioning. The system also has the advantage of a reduced imaging dose for fractionated/hypofractionated treatment, owing to the reduced amount of X-ray imaging required per day [5,6].
The SGRT system monitors the position of the patient in real time during radiosurgery, which contributes to the standardization of the radiosurgery workflow with high precision and reproducibility while prioritizing patient safety [10,11].
The clinical use of SGRT involves optical surface scanning for patient positioning, motion monitoring within the treatment area, and respiratory gating techniques and has proven to be highly beneficial. The AlignRT system (Vision RT, London, UK) that is constructed with this technology uses SGRT in gated radiation therapy for tumor locations close to the skin surface, such as accelerated partial breast irradiation, whole brain radiation therapy, and SRS using deep inspiration breath hold (DIBH) and voluntary DIBH [12,13].
The SGRT workflow carries an inherent risk of degraded positional precision in the patient repositioning sub-process, a risk ranked high in risk priority number scoring using failure modes and effects analysis [14]. In the case of our neurosurgery clinic, the coordinates reconfirmed by repositioning the patient in the treatment position inevitably involve an intrinsic error of submillimeters when IR reflective markers are attached using the ExacTrac® system (BrainLab, Munich, Germany) at the computed tomography (CT) simulation stage.
In other words, establishing and confirming the patient’s treatment location is an important step in surface-guided radiosurgery (SGRS) to ensure accurate execution of the treatment plan. This means that a treatment setup check and treatment monitoring are pivotal steps in the treatment process. Therefore, a more accurate and intuitive system is required to compensate for the geometric misses on existing systems that only use a few IR markers to set up the patient’s position. These errors can be reduced by using point cloud surface imaging as an additional method to model the body of the patient since it provides a representation of the target scene in the Cartesian coordinate system of x, y, and z. Various radiation treatment applications that have incorporated this method have been studied and used for treatment [12,13,15].
Another method to reduce such errors involves the use of a surgical guide system. Computer-based surgical assistance systems (NAV3i; Stryker, Kalamazoo, MI, USA, and StealthStation; Medtronic, Minneapolis, MN, USA) are being widely used in the field of neurosurgery. These systems provide a guide to the exact location of the identified lesion by checking the guide from the incision site to the surgical site based on CT and magnetic resonance images during surgery [16-20].
Therefore, this study aims to develop an SGRS system customized for a neurosurgery clinic that could be used as an auxiliary system for improving accuracy, monitoring patient movement during hypofractionated treatment, and minimizing geometric misses.

Materials and Methods

1. Surface-guided repositioning and monitoring system

A point cloud is a set of data points with x, y, and z coordinates in 3D space. A point cloud is generally obtained by measuring the outer surface of an object with a 3D scanner and is then processed to yield a 3D image (Fig. 1a, b).
To create a 3D image of the patient, a cloud of 200,000 points was obtained for the region of interest (ROI) recognized by the depth camera and then reduced to a 35,000-point cloud for shaping the object, which lowered the amount of computation. An RGB-D camera was used to recognize the depth, the body surface, and the position of the patient to create 3D images [21]. To obtain a point on the surface of an object from a depth camera, the camera-to-surface distance (z) and the lateral offset from the illuminator (x) must be obtained using the triangulation principle. These images were updated by tracking the changes in the location, surface, and depth information caused by the movement of the patient. Eqs. (1) and (2) were used to recognize the ROI and the depth of the patient. Here, z is the distance between the patient body and the camera; b is the distance from the IR illuminator to the center of the camera lens; f is the distance from the lens to the camera sensor; α and β are the angles of the IR illuminator and lens toward the body surface spot, respectively; x is the distance from the body surface spot to the body point orthogonal to the illuminator; P is the distance between the center and outer edge of the sensor; and y is the body depth from the surface [21].
(1)
$$z = \frac{b}{\tan(\alpha) + \tan(\beta)}$$
(2)
$$x = z \tan(\alpha)$$
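To make the triangulation concrete, the following is a minimal C++ sketch of Eqs. (1) and (2). The function name and the baseline/angle values are illustrative assumptions, not calibration data from the system described here.

```cpp
#include <cmath>
#include <cstdio>

// Active triangulation per Eqs. (1) and (2): the IR illuminator and the camera,
// separated by baseline b, both observe the same spot on the body surface.
// alpha and beta are the illuminator and lens angles toward that spot (radians).
struct SurfacePoint {
    double x;  // lateral distance from the spot to the illuminator axis, Eq. (2)
    double z;  // distance between the body surface and the camera, Eq. (1)
};

SurfacePoint triangulate(double b, double alpha, double beta) {
    SurfacePoint p;
    p.z = b / (std::tan(alpha) + std::tan(beta));  // Eq. (1)
    p.x = p.z * std::tan(alpha);                   // Eq. (2)
    return p;
}

int main() {
    constexpr double kDegToRad = 3.14159265358979323846 / 180.0;
    // Hypothetical values: a 75-mm baseline and 30-degree illuminator/lens angles.
    const SurfacePoint p = triangulate(0.075, 30.0 * kDegToRad, 30.0 * kDegToRad);
    std::printf("z = %.4f m, x = %.4f m\n", p.z, p.x);
    return 0;
}
```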
Three depth cameras (ASTRA S; ORBBEC, Troy, MI, USA) were installed on the ceiling above the gantry head for imaging the entire body surface, with the specifications of the SGRS-integrated computer as follows: i7-8770 CPU, 16 GB RAM, 512 GB SSD, GTX 1080Ti, and Windows 10 (Fig. 1c). Each point cloud was obtained from the cameras installed on the left, center, and right, and a 3D point cloud model was created by overlapping the obtained point clouds (Fig. 2).

2. Point cloud spatial transformation

The points obtained from the camera positions on the left, center, and right were combined into one scene by performing the spatial transformation in Eq. (3), which maps the spatial coordinates (x, y, z) to the newly transformed coordinates (x′, y′, z′). The rotation matrix (R) and the translation matrix (T) for homogeneous coordinates are represented in Eqs. (3) and (4) [22,23]. In terms of the geometric transformation, the translation is (Tx, Ty, Tz), moving the point (0, 0, 0) to (Tx, Ty, Tz). In the case of rotation about the X axis, the X coordinate does not change because it is multiplied by 1, while the Y- and Z-axis coordinates are rotated in the same way as in the Cartesian coordinate system; the same principle applies to the Y and Z axes described in Eq. (4). Each point is defined in 3D space with x, y, z Cartesian coordinates; therefore, a 4×4 matrix is required to perform the rotation transformation for image composition, and an element (1) is appended to each x, y, z 3D vector.
(3)
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} R_{xx} & R_{xy} & R_{xz} & T_x \\ R_{yx} & R_{yy} & R_{yz} & T_y \\ R_{zx} & R_{zy} & R_{zz} & T_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
(4)
$$R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
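As an illustration of Eqs. (3) and (4) in code, the sketch below uses PCL together with Eigen (bundled with PCL) to build the 4×4 homogeneous transform and merge the left/center/right clouds into the center camera's frame. The function names and extrinsic parameters are hypothetical; in practice, the rotation angles and offsets come from camera calibration.

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/transforms.h>
#include <Eigen/Geometry>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Build the 4x4 homogeneous transform of Eq. (3): a rotation composed from the
// axis rotations of Eq. (4), followed by the translation (Tx, Ty, Tz).
Eigen::Affine3f makeCameraToCenterTransform(float rx, float ry, float rz,
                                            float tx, float ty, float tz) {
    Eigen::Affine3f t = Eigen::Affine3f::Identity();
    t.translation() << tx, ty, tz;                              // (Tx, Ty, Tz)
    t.rotate(Eigen::AngleAxisf(rz, Eigen::Vector3f::UnitZ())    // Rz(theta)
           * Eigen::AngleAxisf(ry, Eigen::Vector3f::UnitY())    // Ry(theta)
           * Eigen::AngleAxisf(rx, Eigen::Vector3f::UnitX()));  // Rx(theta)
    return t;
}

// Combine the three camera views into one scene in the center camera's frame.
Cloud::Ptr mergeViews(const Cloud::Ptr& left, const Cloud::Ptr& center,
                      const Cloud::Ptr& right,
                      const Eigen::Affine3f& leftToCenter,
                      const Eigen::Affine3f& rightToCenter) {
    Cloud::Ptr merged(new Cloud(*center));
    Cloud transformed;
    pcl::transformPointCloud(*left, transformed, leftToCenter);
    *merged += transformed;
    pcl::transformPointCloud(*right, transformed, rightToCenter);
    *merged += transformed;
    return merged;
}
```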
A point cloud library (PCL) was used to acquire the point clouds and to build the user interfaces [24]. Additionally, a point-cloud-to-CT-image registration algorithm was developed and installed in the integrated graphical user interface. This enabled the patient setup position to be adjusted in real time using the SGRS-integrated computer in the treatment room and the movement of the patient to be monitored in real time during treatment. The point clouds obtained from the camera scans were down-sampled using voxel grid filtering, which divides space into voxels of a given leaf size, replaces the points within each occupied voxel by their centroid, and removes the remaining surrounding points. Then, the noise was removed, and the CT images in DICOM coordinates were converted into point clouds in patient-table coordinates. These were then aligned on the same coordinates using the iterative closest point (ICP) algorithm (Fig. 3) [25-28].
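A minimal sketch of this down-sampling and registration pipeline using the PCL API is shown below. The leaf size, iteration limit, and correspondence distance are illustrative placeholders, not the parameters used in this study.

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Voxel grid down-sampling: each occupied leaf is replaced by the centroid of
// the points it contains, shrinking the cloud before registration.
Cloud::Ptr downsample(const Cloud::Ptr& input, float leafSize) {
    Cloud::Ptr output(new Cloud);
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(input);
    grid.setLeafSize(leafSize, leafSize, leafSize);
    grid.filter(*output);
    return output;
}

// ICP alignment of the camera-acquired surface cloud to the CT-derived cloud;
// returns the 4x4 rigid transform (rotation + translation) between them.
Eigen::Matrix4f registerToCT(const Cloud::Ptr& surface, const Cloud::Ptr& ctCloud) {
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(surface);
    icp.setInputTarget(ctCloud);
    icp.setMaximumIterations(50);            // placeholder convergence setting
    icp.setMaxCorrespondenceDistance(0.05);  // placeholder 5-cm matching gate
    Cloud aligned;
    icp.align(aligned);
    return icp.getFinalTransformation();
}
```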

3. Measurement accuracy

The Hausdorff distance measures the distance between two point sets in a metric space (Eq. 5). In Eq. (5), a and b represent points from sets A and B, respectively, and d(a, b) is the metric between these points, here the Euclidean distance between a and b. The Hausdorff distance can be used in computer vision applications to locate a given reference image within any target image [29]. In this study, a reference image was obtained at the CT simulation stage for radiosurgery. This image was used as the basis for comparison, and the difference between the reference image and the first or fractionated treatment was observed. The area of the actual binary target image was processed as a point cloud. The treatment position was located by calculating the Hausdorff distance between the reference image and the area of the target image at the time of treatment with the distance matching algorithm and by minimizing this distance:
(5)
$$h(A, B) = \max_{a \in A} \left\{ \min_{b \in B} d(a, b) \right\}$$
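The directed Hausdorff distance of Eq. (5) can be computed efficiently by indexing one set with a kd-tree, as in the following sketch. The function name is an illustrative assumption, and a symmetric variant would take the maximum of h(A, B) and h(B, A).

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <algorithm>
#include <cmath>
#include <vector>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Directed Hausdorff distance h(A, B) of Eq. (5): for each a in A, find the
// Euclidean distance to its nearest neighbour in B, then take the maximum.
double directedHausdorff(const Cloud::Ptr& A, const Cloud::Ptr& B) {
    pcl::KdTreeFLANN<pcl::PointXYZ> tree;
    tree.setInputCloud(B);  // index B once so each min_b d(a, b) is a fast lookup
    std::vector<int> index(1);
    std::vector<float> sqrDist(1);
    double h = 0.0;
    for (const auto& a : A->points) {
        tree.nearestKSearch(a, 1, index, sqrDist);
        h = std::max(h, static_cast<double>(std::sqrt(sqrDist[0])));
    }
    return h;
}
```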
Subsequently, ten repeated setups were performed to verify the positional accuracy of the developed system. The positional accuracy was calculated relative to the existing system (ExacTrac).

Results

A user interface was established for the SGRS system (Fig. 4). The point cloud of the patient acquired at the CT simulation stage for radiosurgery was registered through a patient information registration window, which included the patient name, department number, date of birth, gender, tumor location, and treatment information (fraction), displayed alongside a window (panel) for real-time monitoring of the point cloud on one screen (Fig. 4a). A dummy phantom, regarded as a rigid body with no respiratory movement, was used for testing (Fig. 4b). The acquired point cloud image is shown in Fig. 4a.
The point cloud dataset obtained in the CT simulation process was used for the first and subsequent fractionated treatments after establishing the radiosurgery plan. In addition, it was matched with the CT image for patient setup and for monitoring in the treatment room (Fig. 5). The accuracy of the matching (the average of ten trial images) using the ICP matching algorithm of the PCL library was as follows: x=1.44±0.5 mm, y=1.48±0.31 mm, z=0.09±0.11 mm, pitch=0.01°±0.02°, roll=0.01°±0.01°, and yaw=0.05°±0.02° (Fig. 5b, c). An error occurred in the process of matching the setup point cloud acquired in the treatment room with the CT image converted into a point cloud. It was possible to locate the target lesion by obtaining the CT point cloud (Fig. 5a).
The developed system was used to perform ten repeated patient setups. The maximum error of the Hausdorff distance was found to be within 1 mm, with a minimum error of –0.21 mm and a maximum error of 0.63 mm (Fig. 6).
Fig. 7a presents the results of the location accuracy verification for the constructed SGRS system. The x, y, and z values of the developed system all exhibited errors within 0.25 mm. In the comparison with the existing system, a maximum error of 0.26 mm was observed in the z direction, and errors within 0.15 mm were observed in the x and y directions (Fig. 7b).

Discussion

An SGRS system can continuously monitor a patient's surface motion during radiation therapy through optical surface imaging technology. We attempted to reduce the image processing burden by reducing the raw point cloud data to around one-fifth (from 200,000 to 35,000 points) and controlled the latency to within several hundred milliseconds. Wang et al. applied a machine learning method to predict external respiratory motion signals and internal liver motion; by predicting the liver motion, the matching latency with the surface image was kept within 500 ms [30]. While our system is based on a couch angle of 0°, a recent study by Covington and Popple [31] presented a simple and inexpensive procedure for evaluating the performance of a surface imaging system used in stereotactic radiosurgery treatment at non-zero couch angles. Meanwhile, Chan et al. [32] used the ICP registration method to match CT and 3D ultrasound images in a surface-guided navigation imaging system for spinal surgery, reporting an accuracy of 0.3±0.2 mm and 0.9°±0.8°.
When obtaining a 3D point cloud, geometric and calibration errors occur, which result in noise in the point cloud image [33-36]. Furthermore, the patient's ROI can be imaged according to the acquisition distance, and the angle varies depending on the specifications of the RGB-D camera [37]. Therefore, it is crucial to choose a suitable installation location, since the installation distance and angle vary depending on the range of treatment for the patient and on whether the gantry in the treatment room is rotated. This study excluded the treatment of brain patients; the distance, angle, and monitoring ROI were set for spine treatment. In other words, the imaging system setup will differ between brain and spine cases for accurate observation of the lesion site.
The ICP algorithm was used to evaluate the distance between the two point clouds. The matching was performed by repeatedly finding the closest distance between each point on the 3D body surface of the patient and the point cloud of the matching CT image. If there was a section in which the points did not coincide when selecting an ROI, the error in x, y, and z exceeded 2 mm. However, it was possible to improve the precision by weighting points and removing outliers, although this aspect requires further optimization research [38,39].
The surface point cloud was used to visualize the patient's current position and to execute a more accurate setup against the CT point cloud through image registration. Meanwhile, the movement of the patient can be detected and monitored on the user interface screen. After matching the surface point cloud to the CT surface via the ICP algorithm, the difference in the body surface position was negligible. However, the surfaces of small areas, such as the nipple of the patient, were not well sampled due to the limited spatial resolution [40], which is related to the maximum-distance resolution limit and the ROI provided by the depth camera. Since an increase in the sample size aimed at improving the spatial resolution would increase the calculation time, a loss of spatial resolution was preferred over a delay of tens of milliseconds in real-time monitoring. Relative to the IR-reflective-marker-based system, the temporal resolution was measured in real time at several milliseconds to tens of microseconds; here, a slow read-out speed can cause motion blurring effects. Further research is required to improve the spatial resolution while reducing the computation time, given the trade-off between the two factors [41].
Unless the latest SGRT system is installed as part of the initial LINAC setup, installing it on existing radiation therapy machine models entails difficulties in installation and cost. While actual testing and analysis using patient setups are still required, the advantage of the system proposed in this study is that it can determine the patient location and monitor errors, as the existing system (ExacTrac) does, using an RGB-D camera, an image display user interface, and an ICP-based image registration algorithm. Moreover, our system can be easily installed in combination with existing systems.

Conclusions

In this study, we developed a surface-guided repositioning and monitoring system that can be customized for an environment with an existing LINAC. The system assists in improving the setup accuracy in radiation surgery and can be easily applied for more accurate patient repositioning and inter-treatment motion monitoring.

Acknowledgements

This work was supported by the 2017 Inje University research grant.

Notes

Conflicts of Interest

The authors have nothing to disclose.

Availability of Data and Materials

The data that support the findings of this study are available on request from the corresponding author.

Author Contributions

Conceptualization: Kwang Hyeon Kim and Moon-Jun Sohn. Data curation and formal analysis: Kwang Hyeon Kim and Haenghwa Lee. Funding acquisition: Moon-Jun Sohn. Investigation: Kwang Hyeon Kim, Haenghwa Lee, and Moon-Jun Sohn. Methodology: Kwang Hyeon Kim, Moon-Jun Sohn, and Chi-Woong Mun. Supervision: Moon-Jun Sohn. Validation: Kwang Hyeon Kim, Haenghwa Lee, and Chi-Woong Mun. Writing–original draft: Kwang Hyeon Kim. Writing–review & editing: Moon-Jun Sohn and Chi-Woong Mun.

References

1. Li G, Ballangrud A, Kuo LC, Kang H, Kirov A, Lovelock M, et al. 2011; Motion monitoring for cranial frameless stereotactic radiosurgery using video-based three-dimensional optical surface imaging. Med Phys. 38:3981–3994. DOI: 10.1118/1.3596526. PMID: 21858995.
2. Tagaste B, Riboldi M, Spadea MF, Bellante S, Baroni G, Cambria R, et al. 2012; Comparison between infrared optical and stereoscopic X-ray technologies for patient setup in image guided stereotactic radiotherapy. Int J Radiat Oncol Biol Phys. 82:1706–1714. DOI: 10.1016/j.ijrobp.2011.04.004. PMID: 21605942.
3. Wang LT, Solberg TD, Medin PM, Boone R. 2001; Infrared patient positioning for stereotactic radiosurgery of extracranial tumors. Comput Biol Med. 31:101–111. DOI: 10.1016/s0010-4825(00)00026-3. PMID: 11165218.
4. Schipani S, Wen W, Jin JY, Kim JK, Ryu S. 2012; Spine radiosurgery: a dosimetric analysis in 124 patients who received 18 Gy. Int J Radiat Oncol Biol Phys. 84:e571–e576. DOI: 10.1016/j.ijrobp.2012.06.049. PMID: 22975607.
5. Wu VW, Ho YY, Tang YS, Lam PW, Yeung HK, Lee SW. 2019; Comparison of the verification performance and radiation dose between ExacTrac x-ray system and On-Board Imager-a phantom study. Med Dosim. 44:15–19. DOI: 10.1016/j.meddos.2017.12.008. PMID: 29395461.
6. Murphy MJ, Balter J, Balter S, BenComo JA Jr, Das IJ, Jiang SB, et al. 2007; The management of imaging dose during image-guided radiotherapy: report of the AAPM Task Group 75. Med Phys. 34:4041–4063. DOI: 10.1118/1.2775667. PMID: 17985650.
7. Cheng CS, Jong WL, Ung NM, Wong JHD. 2017; Evaluation of imaging dose from different image guided systems during head and neck radiotherapy: a phantom study. Radiat Prot Dosimetry. 175:357–362. DOI: 10.1093/rpd/ncw357. PMID: 27940494.
8. Steiner E, Stock M, Kostresevic B, Ableitinger A, Jelen U, Prokesch H, et al. 2013; Imaging dose assessment for IGRT in particle beam therapy. Radiother Oncol. 109:409–413. DOI: 10.1016/j.radonc.2013.09.007. PMID: 24128802.
9. Hoisak JDP, Pawlicki T. 2018; The role of optical surface imaging systems in radiation therapy. Semin Radiat Oncol. 28:185–193. DOI: 10.1016/j.semradonc.2018.02.003. PMID: 29933878.
10. Freislederer P, Kügele M, Öllers M, Swinnen A, Sauer TO, Bert C, et al. 2020; Recent advances in surface guided radiation therapy. Radiat Oncol. 15:187. DOI: 10.1186/s13014-020-01629-w. PMID: 32736570. PMCID: PMC7393906.
11. Li J, Shi W, Andrews D, Werner-Wasik M, Lu B, Yu Y, et al. 2017; Comparison of online 6 degree-of-freedom image registration of Varian TrueBeam cone-beam CT and BrainLab ExacTrac X-ray for intracranial radiosurgery. Technol Cancer Res Treat. 16:339–343. DOI: 10.1177/1533034616683069. PMID: 28462690. PMCID: PMC5616049.
12. Laaksomaa M, Sarudis S, Rossi M, Lehtonen T, Pehkonen J, Remes J, et al. 2019; AlignRT® and Catalyst™ in whole-breast radiotherapy with DIBH: is IGRT still needed? J Appl Clin Med Phys. 20:97–104. DOI: 10.1002/acm2.12553. PMID: 30861276. PMCID: PMC6414178.
13. Agazaryan N, Tenn S, Dieterich S, Gevaert T, Goetsch SJ, Kaprealian T. 2020. Frameless image guidance in stereotactic radiosurgery. Stereotactic and Functional Neurosurgery. Springer; Cham: p. 37–48. DOI: 10.1007/978-3-030-34906-6_4.
14. Manger RP, Paxton AB, Pawlicki T, Kim GY. 2015; Failure mode and effects analysis and fault tree analysis of surface image guided cranial radiosurgery. Med Phys. 42:2449–2461. DOI: 10.1118/1.4918319. PMID: 25979038.
15. Gilles M, Fayad H, Miglierini P, Clement JF, Scheib S, Cozzi L, et al. 2016; Patient positioning in radiotherapy based on surface imaging using time of flight cameras. Med Phys. 43:4833. DOI: 10.1118/1.4959536. PMID: 27487901.
16. Padilla L, Pearson EA, Pelizzari CA. 2015; Collision prediction software for radiotherapy treatments. Med Phys. 42:6448–6456. DOI: 10.1118/1.4932628. PMID: 26520734.
17. Hoole AC, Twyman N, Langmack KA, Hebbard M, Lowrie D. 2001; Laser scanning of patient outlines for three-dimensional radiotherapy treatment planning. Physiol Meas. 22:605–610. DOI: 10.1088/0967-3334/22/3/316. PMID: 11556678.
18. Roessler K, Ungersboeck K, Dietrich W, Aichholzer M, Hittmeir K, Matula C, et al. 1997; Frameless stereotactic guided neurosurgery: clinical experience with an infrared based pointer device navigation system. Acta Neurochir (Wien). 139:551–559. DOI: 10.1007/BF02750999. PMID: 9248590.
19. Kosugi Y, Watanabe E, Goto J, Watanabe T, Yoshimoto S, Takakura K, et al. 1988; An articulated neurosurgical navigation system using MRI and CT images. IEEE Trans Biomed Eng. 35:147–152. DOI: 10.1109/10.1353. PMID: 3350540.
20. Fan Y, Jiang D, Wang M, Song Z. 2014; A new markerless patient-to-image registration method using a portable 3D scanner. Med Phys. 41:101910. DOI: 10.1118/1.4895847. PMID: 25281962.
21. Giancola S, Valenti M, Sala R. 2018. A survey on 3D cameras: metrological comparison of time-of-flight, structured-light and active stereoscopy technologies. Springer; Cham.
22. He Y, Liang B, Yang J, Li S, He J. 2017; An iterative closest points algorithm for registration of 3D laser scanner point clouds with geometric features. Sensors (Basel). 17:1862. DOI: 10.3390/s17081862. PMID: 28800096. PMCID: PMC5580094.
23. Habib A, Detchev I, Bang K. 2010. Jun. 15-18. A comparative analysis of two approaches for multiple-surface registration of irregular point clouds. Paper presented at: The 2010 Canadian Geomatics Conference and Symposium of Commission I. Calgary, Canada. 39.
24. Rusu RB, Cousins S. 2011. May. 9-13. 3D is here: Point Cloud Library (PCL). Paper presented at: 2011 IEEE International Conference on Robotics and Automation. Shanghai, China. DOI: 10.1109/ICRA.2011.5980567. PMID: 21955422.
25. Arun KS, Huang TS, Blostein SD. 1987. Least-squares fitting of two 3-D point sets. IEEE Trans Pattern Anal Mach Intell. PAMI-9:698–700. DOI: 10.1109/TPAMI.1987.4767965. PMID: 21869429.
26. Ge Y, Maurer CR Jr, Fitzpatrick JM. 1996. Surface-based 3D image registration using the iterative closest-point algorithm with a closest-point transform. Medical Imaging 1996: Image Processing. SPIE Digital Library. 358–367. DOI: 10.1117/12.237938.
27. Wu ML, Chien JC, Wu CT, Lee JD. 2018; An augmented reality system using improved-iterative closest point algorithm for on-patient medical image visualization. Sensors (Basel). 18:2505. DOI: 10.3390/s18082505. PMID: 30071645. PMCID: PMC6111829.
28. Tehrani JN, O'Brien RT, Poulsen PR, Keall P. 2013; Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm. Phys Med Biol. 58:8517–8533. DOI: 10.1088/0031-9155/58/23/8517. PMID: 24240537.
29. Huttenlocher DP, Klanderman GA, Rucklidge WJ. 1993; Comparing images using the Hausdorff distance. IEEE Trans Pattern Anal Mach Intell. 15:850–863. DOI: 10.1109/34.232073.
30. Wang G, Li Z, Li G, Dai G, Xiao Q, Bai L, et al. 2021; Real-time liver tracking algorithm based on LSTM and SVR networks for use in surface-guided radiation therapy. Radiat Oncol. 16:13. DOI: 10.1186/s13014-020-01729-7. PMID: 33446245. PMCID: PMC7807524.
31. Covington EL, Popple RA. 2021; A low-cost method to assess the performance of surface guidance imaging systems at non-zero couch angles. Cureus. 13:e14278. DOI: 10.7759/cureus.14278. PMID: 33959456. PMCID: PMC8093097.
32. Chan A, Coutts B, Parent E, Lou E. 2021; Development and evaluation of CT-to-3D ultrasound image registration algorithm in vertebral phantoms for spine surgery. Ann Biomed Eng. 49:310–321. DOI: 10.1007/s10439-020-02546-5. PMID: 32533392.
33. Wang S, Sun HY, Guo HC, Du L, Liu TJ. 2018; Multi-view laser point cloud global registration for a single object. Sensors (Basel). 18:3729. DOI: 10.3390/s18113729. PMID: 30388874. PMCID: PMC6263679.
34. Li J, Zhou Q, Li X, Chen R, Ni K. 2019; An improved low-noise processing methodology combined with PCL for industry inspection based on laser line scanner. Sensors (Basel). 19:3398. DOI: 10.3390/s19153398. PMID: 31382454. PMCID: PMC6695628.
35. Liu W, Cheung Y, Sabouri P, Arai TJ, Sawant A, Ruan D. 2015; A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system. Med Phys. 42:6564–6571. DOI: 10.1118/1.4933196. PMID: 26520747. PMCID: PMC4617738.
36. Fan Y, Yao X, Hu T, Xu X. 2019; An automatic spatial registration method for image-guided neurosurgery system. J Craniofac Surg. 30:e344–e350. DOI: 10.1097/SCS.0000000000005330. PMID: 30817512.
37. Muralikrishnan B, Rachakonda P, Lee V, Shilling M, Sawyer D, Cheok G, et al. 2017; Relative range error evaluation of terrestrial laser scanners using a plate, a sphere, and a novel dual-sphere-plate target. Measurement. 111:60–68. DOI: 10.1016/j.measurement.2017.07.027. PMID: 28924331. PMCID: PMC5600278.
38. Maier-Hein L, Franz AM, dos Santos TR, Schmidt M, Fangerau M, Meinzer HP, et al. 2011; Convergent iterative closest-point algorithm to accommodate anisotropic and inhomogenous localization error. IEEE Trans Pattern Anal Mach Intell. 34:1520–1532. DOI: 10.1109/TPAMI.2011.248. PMID: 22184256.
39. Liu W. 2017; LiDAR-IMU time delay calibration based on iterative closest point and iterated sigma point Kalman filter. Sensors (Basel). 17:539. DOI: 10.3390/s17030539. PMID: 28282897. PMCID: PMC5375825.
40. Coroiu ADCA, Coroiu A. 2018. Sep. 6-8. Interchangeability of Kinect and Orbbec sensors for gesture recognition. Paper presented at: 2018 IEEE 14th International Conference on Intelligent Computer Communication and Processing (ICCP). Cluj-Napoca, Romania. 309–315. DOI: 10.1109/ICCP.2018.8516586.
41. Wiersma RD, Tomarken SL, Grelewicz Z, Belcher AH, Kang H. 2013; Spatial and temporal performance of 3D optical surface imaging for real-time head position tracking. Med Phys. 40:111712. DOI: 10.1118/1.4823757. PMID: 24320420.

Fig. 1
Three-dimensional surface modeling system architecture and surface-guided radiosurgery (SGRS) in the treatment room: (a) the architecture of the image acquisition using the depth camera, (b) the surface imaging profile in the sagittal plane, and (c) the installed SGRS in the treatment room. IR, infrared; LINAC, linear accelerator.
pmp-32-2-40-f1.tif
Fig. 2
Acquired three-dimensional images using RGB-D cameras: (a) left, (b) center, (c) right direction, and (d) integrated images through the cameras.
pmp-32-2-40-f2.tif
Fig. 3
Image registration process and user interface integration. 3D, three-dimensional; CT, computed tomography.
pmp-32-2-40-f3.tif
Fig. 4
Surface-guided radiosurgery user interface in the treatment room. (a) The user interface for the patient setup matching. (b) The phantom experiment using our surface-guided repositioning and monitoring system. CT, computed tomography.
pmp-32-2-40-f4.tif
Fig. 5
Image registration results of surface-guided image to computed tomography (CT) for a phantom and clinical case. (a) The 3D image registration result for a phantom CT and point cloud. (b) The image registration process in the same coordinate plane. (c) The final image in which the point cloud and CT images are registered.
pmp-32-2-40-f5.tif
Fig. 6
Multi-fractional setup trials using Hausdorff distance in difference plots.
pmp-32-2-40-f6.tif
Fig. 7
Multi-fractional setup trials involving the existing system (ExacTrac; BrainLab, Munich, Germany) and the developed surface-guided radiosurgery (SGRS) system in difference plots. (a) The location accuracy of the developed system for the multi-fractional setup trials. (b) The differences for the x, y, and z axes.
pmp-32-2-40-f7.tif