泡泡一分钟 (One-Minute Paper): Context-Aware Modelling for Augmented Reality Display Behaviour

Zhang Ning (张宁)
Link: https://pan.baidu.com/s/1RpX6ktZCTGpQ7okksw5TUA&shfl=sharepset  Extraction code: xttr

Abstract—Current surgical augmented reality (AR) systems typically employ an on-demand display behaviour, where the surgeon can toggle the AR on or off using a switch. The need to be able to turn the AR off is in part due to the obstructing nature of AR overlays, potentially hiding important information from the surgeon in order to provide see-through vision. This on-demand paradigm is inefficient as the surgeon is always in one of two sub-optimal states: either they do not benefit at all from the image guidance (AR off), or the field of view is partially obstructed (AR on). Additionally, frequent toggling between the two views during the operation can be disruptive for the surgeon. This paper presents a novel approach to automatically adapt the AR display view based on the context of the surgical scene. Using gaze tracking in conjunction with information from the surgical instruments and the registered anatomy, a multi Gaussian process model can be trained to infer the desired AR display view at any point during the procedure. Furthermore, a new AR display view is introduced in this model, taking advantage of the context information to only display a partial view of the AR when relevant. To validate the presented approach, a detailed simulation of a neurosurgical tumour contour marking task is designed. A study conducted with 15 participants demonstrates the usefulness of the proposed approach, showing a statistically significant mean reduction of 48% in the average time necessary for the detection of simulated bleeding, as well as statistically significant improvements in total task time.
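To make the core idea concrete, the following is a minimal sketch of how context features (here, hypothetical gaze-to-anatomy and tool-to-anatomy distances — these feature choices and all numeric values are illustrative assumptions, not taken from the paper) could be mapped to a desired display view with Gaussian process regression. It uses a standard squared-exponential kernel and the usual GP posterior mean, with a continuous score rounded to a discrete view (0 = AR off, 1 = partial AR, 2 = full AR):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=5.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Standard GP regression posterior mean at the test inputs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# Toy training data (illustrative): each row is
# [gaze-to-tumour distance (mm), tool-to-surface distance (mm)].
# Close gaze and tool -> full AR (2); far from the anatomy -> AR off (0).
X = np.array([[2.0, 1.0], [3.0, 2.0], [10.0, 8.0], [25.0, 20.0], [30.0, 25.0]])
y = np.array([2.0, 2.0, 1.0, 0.0, 0.0])

# Infer the display view for two new context observations.
scores = gp_predict(X, y, np.array([[2.5, 1.5], [28.0, 22.0]]))
views = np.rint(np.clip(scores, 0, 2)).astype(int)
print(views)  # nearest-state decision per observation
```

The paper trains a multi Gaussian process model over richer surgical context; this sketch only shows the regression-and-threshold mechanism, and a real system would also need the gaze-tracking and instrument signals as inputs.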

 

posted @ 2019-10-27 22:10 by feifanren