RESEARCH PRODUCT
Augmented Reality of the Middle Ear Combining Otoendoscopy and Temporal Bone Computed Tomography
Authors: Caroline Guigou; Roberto Marroquin; Alain Lalande; Raabid Hussain; Alexis Bozorg Grayeli

Subjects: Video Recording; Optical Flow; Ear, Middle; Scale-Invariant Feature Transform; Initialization; Imaging, Three-Dimensional; [INFO.INFO-IM] Computer Science [cs]/Medical Imaging; Humans; Medicine; Computer Vision; [SDV.MHEP.OS] Life Sciences [q-bio]/Human health and pathology/Sensory Organs; Otorhinolaryngology; Temporal Bone; Endoscopy; Frame Rate; Sensory Systems; Refresh Rate; Feature (Computer Vision); Oncology & Carcinogenesis; Augmented Reality; Neurology (Clinical); Artificial Intelligence; Tomography; Tomography, X-Ray Computed
HYPOTHESIS: Augmented reality (AR) may enhance otologic procedures by providing sub-millimetric accuracy and allowing the unification of information in a single screen.

BACKGROUND: Several issues related to otologic procedures can be addressed by an AR system that provides sub-millimetric precision, supplies a global view of the middle ear cleft, and unifies the information in a single screen. The AR system is obtained by combining otoendoscopy with temporal bone computed tomography (CT).

METHODS: Four human temporal bone specimens were explored by high-resolution CT scan and dynamic otoendoscopy with video recordings. Initialization of the system consisted of a semi-automatic registration between the otoendoscopic video and the 3D CT reconstruction of the middle ear. Endoscope movements were then estimated by several computer vision techniques (feature detectors/descriptors and optical flow) and used to warp the CT image so that it remained in correspondence with the otoendoscopic video.

RESULTS: The system maintained synchronization between the CT image and the otoendoscopic video in all experiments, during both slow and rapid (5-10 mm/s) endoscope movements. Among the tested algorithms, two feature-based methods, scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), provided sub-millimetric mean tracking errors (0.38 ± 0.53 mm and 0.20 ± 0.16 mm, respectively) and adequate image refresh rates (11 and 17 frames per second, respectively) after 2 minutes of procedure with continuous endoscope movements.

CONCLUSION: A precise augmented reality combining video and 3D CT data can be applied to otoendoscopy without conventional neuronavigation tracking, thanks to computer vision algorithms.
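The core of the tracking step described in METHODS is to take keypoint correspondences between consecutive endoscope frames (e.g., SIFT or SURF matches) and estimate a 2D motion that is then applied to the CT overlay. The following is a minimal pure-Python sketch of that idea, not the authors' implementation: it assumes matched point pairs are already available and fits a least-squares similarity transform (scale, rotation, translation), which can then warp overlay landmarks. The function names and the choice of a similarity model are illustrative assumptions.

```python
import math

def estimate_similarity(src, dst):
    """Least-squares 2D similarity transform (scale, rotation, translation)
    mapping src points onto dst points. src/dst are lists of (x, y) pairs,
    assumed here to come from feature matching between consecutive frames."""
    n = len(src)
    # Centroids of both point sets
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    nx = sum(p[0] for p in dst) / n
    ny = sum(p[1] for p in dst) / n
    # Accumulate cross terms of the centered coordinates
    a = b = var = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx, sy = sx - mx, sy - my
        dx, dy = dx - nx, dy - ny
        a += sx * dx + sy * dy   # cosine component
        b += sx * dy - sy * dx   # sine component
        var += sx * sx + sy * sy
    s = math.hypot(a, b) / var   # scale factor
    theta = math.atan2(b, a)     # rotation angle
    c, si = s * math.cos(theta), s * math.sin(theta)
    # Translation so that the src centroid maps onto the dst centroid
    tx = nx - (c * mx - si * my)
    ty = ny - (si * mx + c * my)
    return (c, si, tx, ty)

def warp(points, params):
    """Apply the estimated transform to overlay points (e.g., CT landmarks)."""
    c, si, tx, ty = params
    return [(c * x - si * y + tx, si * x + c * y + ty) for (x, y) in points]
```

In a real pipeline the correspondences would be filtered for outliers (e.g., with RANSAC) before fitting, and the transform would be applied to the rendered CT image each frame to keep it synchronized with the video.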
year | journal
---|---
2018-09-01 | Otology & Neurotology