The success of MIS is coupled with an increasing demand on surgeons' manual dexterity and visuomotor coordination due to the complexity of instrument manipulations. The use of master–slave surgical robots has avoided many of the drawbacks of MIS but, at the same time, has increased the physical separation between the surgeon and the patient. Tissue deformation, combined with the restricted workspace and visibility of an already cluttered environment, can raise critical issues related to surgical precision and safety. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robot-assisted MIS procedures. This paper introduces a novel gaze-contingent framework for real-time haptic feedback and virtual fixtures by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of Gaze-Contingent Motor Channelling. The method is also extended to 3D by introducing the concept of Gaze-Contingent Haptic Constraints, where eye gaze is used to dynamically prescribe and update safety boundaries during robot-assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robot-assisted phantom procedures demonstrate the potential clinical value of the technique. To assess the associated cognitive demand of the proposed concepts, functional near-infrared spectroscopy is used and preliminary results are discussed.
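The abstract does not specify the control law behind motor channelling; a minimal sketch, assuming the channelled haptic feedback acts as a clamped virtual spring that pulls the instrument tip toward the current gaze fixation point (the function name, stiffness and force limit below are hypothetical, not from the paper):

```python
import numpy as np

def channelling_force(tip_pos, gaze_pos, stiffness=0.5, max_force=3.0):
    """Virtual spring force (N) pulling the instrument tip toward the
    current gaze fixation point; magnitude clamped for safety."""
    error = np.asarray(gaze_pos, float) - np.asarray(tip_pos, float)
    force = stiffness * error
    mag = np.linalg.norm(force)
    if mag > max_force:
        force *= max_force / mag   # clamp to the haptic device limit
    return force

# Tip 4 mm to the left of the fixation point: gentle corrective pull.
f = channelling_force([0.0, 0.0, 0.0], [4.0, 0.0, 0.0])
```

In a real gaze-contingent loop, `gaze_pos` would be updated from the eye tracker at every servo cycle, so the constraint follows both the gaze and the deforming tissue it fixates.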
Objective: Using eye-tracking technology, we aim to examine whether there are common patterns of visual attention strategies employed by surgeons that are associated with a greater chance of successful reorientation when disorientated during laparoscopic cholecystectomy.
Summary Background Data: Laparoscopic cholecystectomy is one of the most commonly practiced minimally invasive procedures with a recognized morbidity relating to bile duct injuries. It has been suggested that the majority of bile duct injuries occur as a result of operator disorientation.
Methods: A total of 21 surgeons of varying experience participated in the study. Attention as represented by gaze was captured, as subjects were presented with 8 images of various stages of a laparoscopic cholecystectomy with the task of interpreting the orientation of the image. Subject fixations on relevant anatomic structures within the images were analyzed and a visual behavior profiling algorithm was applied to compare the behavior of individual surgeons.
Results: No difference in orientation performance between seniority levels or with laparoscopic experience was found. Key structures used as "anchor objects" for successful orientation at various stages of a laparoscopic cholecystectomy were identified, and a representative successful visual attention behavior for each stage of the operation was described.
Conclusion: There are discernible and quantifiable visual attention strategies used by surgeons during laparoscopic cholecystectomy that are associated with successful orientation. By quantifying the visual behavior, and by inference the attention processes, of surgeons, this study represents an initial step in attempting to decrease the morbidity associated with disorientation. This study raises some important questions. First, can these common reorientation strategies be taught to aspiring surgeons as part of a curriculum, thereby decreasing the learning curve associated with the apparent need for experience in laparoscopy? Second, can these common reorientation strategi
A challenging goal today is the use of computer networking and advanced monitoring technologies to extend human intellectual capabilities in medical decision making. Modern commercial eye trackers are used in many research fields, and improvements in eye-tracking technology, in terms of the precision with which eye movements are captured, have established the eye tracker as a tool for vision analysis, so that its application in medical research, e.g. in ophthalmology, cognitive psychology and neuroscience, has grown considerably. Improvements to the human-eye-tracker interface are increasingly important in helping medical doctors increase their diagnostic capacity, especially if the interface allows them to remotely administer the clinical tests most appropriate for the problem at hand. In this paper, we propose a client/server eye-tracking system that provides an interactive means of monitoring patients' eye movements according to the clinical test administered by the medical doctor. The system supports the retrieval of gaze information and provides statistics for both medical research and disease diagnosis.
Depth estimation is one of the most fundamental challenges in performing minimally invasive surgical (MIS) procedures. The requirement of accurate 3D instrument navigation using limited visual depth cues makes such tasks even more difficult, and with the constant expectation of improving safety there is a growing need to overcome these constraints during MIS. We present in this paper a method of improving the surgeon's perception of depth by introducing an "invisible shadow" in the operative field cast by an endoscopic instrument. Although the shadow is invisible to human perception, it can be digitally detected, enhanced and re-displayed. Initial results from our study suggest that this method improves depth perception, especially when the endoscopic instrument is in close proximity to the surface. Experimental results have shown that the method could potentially be used as an instrument navigation aid, allowing accurate maneuvering of the instruments whilst minimizing tissue trauma.
The advent and accelerated adoption of laparoscopic surgery requires an objective assessment of both operative performance and the perceptual events that lead to clinical decisions. In this paper we present a framework to extract the underlying strategy through the analysis of the saccadic eye movements that direct visual attention, and to identify intrinsic features central to the execution of basic laparoscopic tasks. Markov modeling is applied to the quantification of the saccadic eye movements, elucidating the intrinsic behaviour of the participants and the spatial-temporal evolution of visual search and hand/eye coordination characteristics. It was found that participants adopted a unified strategy, but the underlying disparity in saccadic behaviour reflects temporal and behavioural differences that could be indicative of the mental process by which the task was executed.
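The Markov modelling described above amounts to estimating transition probabilities between fixated regions of interest. A minimal sketch of that estimation step, assuming fixations have already been mapped to discrete ROI labels (the function name and the two-ROI example are illustrative, not from the paper):

```python
import numpy as np

def transition_matrix(fixation_seq, n_states):
    """First-order Markov transition matrix estimated from a sequence of
    fixated regions of interest, labelled 0..n_states-1."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(fixation_seq[:-1], fixation_seq[1:]):
        counts[a, b] += 1                  # count each observed transition
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1            # avoid division by zero
    return counts / row_sums               # rows sum to 1

# Gaze alternating between instrument (0) and target tissue (1).
P = transition_matrix([0, 1, 0, 1, 1, 0], n_states=2)
```

Comparing such matrices across participants (e.g. row by row) is one simple way to expose the behavioural disparities the abstract refers to.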
Brain-computer interfaces (BCIs) are systems that establish a direct connection between the human brain and a computer, thus providing an additional communication channel. They are nowadays used in a broad range of applications; one important area is the control of neuroprosthetic devices for the restoration of the grasp function in spinal-cord-injured people. In this communication, an asynchronous (self-paced) four-class BCI based on steady-state visual evoked potentials (SSVEPs) was used to control a two-axis electrical hand prosthesis. During training, four healthy participants reached an online classification accuracy between 44% and 88%. Controlling the prosthetic hand asynchronously, the participants took 75.5 to 217.5 s to copy a series of movements, whereas the fastest possible duration determined by the setup was 64 s. The number of false negative (FN) decisions varied from 0 to 10 (the maximum possible number of decisions was 34). It can be stated that the SSVEP-based BCI, operating in an asynchronous mode, is feasible for the control of neuroprosthetic devices with the flickering lights mounted on its surface.
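The core of an SSVEP classifier is deciding which of the flicker frequencies dominates the EEG spectrum. A deliberately crude sketch of that decision, assuming a single-channel epoch and comparing spectral power only at each stimulation fundamental (real systems also use harmonics, multiple channels and a rejection threshold for the asynchronous no-control state; all names here are illustrative):

```python
import numpy as np

def detect_ssvep(signal, fs, target_freqs):
    """Return the flicker frequency whose fundamental carries the most
    spectral power in the epoch (nearest-bin lookup on the FFT)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in target_freqs]
    return target_freqs[int(np.argmax(powers))]

# Synthetic 8 Hz response among four candidate stimulation frequencies.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 8 * t) + 0.3 * rng.standard_normal(t.size)
cmd = detect_ssvep(eeg, fs, [6, 8, 10, 13])
```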
Navigating using an endoscope in intra- or extra-lumenal surgical procedures can be difficult, and one of the main reasons is operator disorientation. Operator disorientation can result from a number of factors, including a lack of navigational cues, cognitive overload and the restricted field of view of the endoscope, which together decrease the operator's awareness of the surroundings and of the endoscope's location in space. It is important to try to prevent operator disorientation in endoscopic procedures; however, it is equally important to re-orientate efficiently and correctly when disoriented, to ensure safe surgery. As endoscopic procedures are carried out extralumenally in larger spatial environments than the gastrointestinal tract, such as in natural orifice translumenal endoscopic surgery (NOTES), disorientation is likely to become more of a problem, placing greater emphasis on the operator's ability to re-orientate efficiently.
The hypothesis is that when humans are disoriented there exist discrete patterns of psychophysical visual behaviour that are used for re-orientation in this simulated NOTES environment; that these patterns are associated with increased performance; and that they can be quantified or described. Should this be the case, it would seem possible that these strategies could be taught so that operators re-orientate more effectively during minimally invasive surgery, thereby minimising danger to the patient should the operator become disoriented.
Despite technological advances in minimally invasive surgery (MIS) in recent years, 3D visualization of the operative field still remains one of the greatest challenges. In this paper, the effect of three visualization techniques, namely conventional 2D, 2D with an enhanced shadow-based depth cue, and active 3D displays, is analyzed for novices with no prior adaptation to laparoscopic visualization techniques. A wavelet-based paradigm is proposed which offers important insights into the effect of depth perception and visual-motor compensation when performing MIS instrument maneuvers. The proposed method has been shown to be advantageous over conventional end-point methods of laparoscopic performance assessment, as important supplementary information can be derived from the same trajectories where conventional measures fail to show significant differences.
With the increasing sophistication of surgical robots, the use of motion stabilisation for enhancing the performance of micro-surgical tasks is an actively pursued research topic. Mechanical stabilisation devices have certain advantages in terms of both simplicity and consistency; the technique, however, can complicate the existing surgical workflow and interfere with an already crowded MIS-operated cavity. With the advent of reliable vision-based real-time, in situ and in vivo techniques for 3D deformation recovery, current effort is being directed towards optically based techniques for achieving adaptive motion stabilisation. The purpose of this paper is to assess the effect of virtual stabilisation on foveal/parafoveal vision during robotic-assisted MIS. Detailed psychovisual experiments have been performed. The results show that stabilisation of the whole visual field is not necessary: it is sufficient to perform accurate motion tracking and deformation compensation within a relatively small area that is directly under foveal vision. The results also confirm that, under the current motion stabilisation regime, deformation of the periphery does not affect visual acuity, and there is no indication that the deformation velocity of the periphery affects foveal sensitivity. These findings are expected to have a direct implication for the future design of visual stabilisation methods for robotic-assisted MIS.
Effective hand-eye coordination is an important aspect of training in laparoscopic surgery. This paper investigates the interdependency of hand and eye movements, along with the variability of their temporal relationships, based on Granger causality. Partial directed coherence is used to reveal the subtle effects of improvement in hand-eye coordination, where the causal relationship between instrument and eye movements gradually reverses during simple laparoscopic tasks. To assess the practical value of the proposed technique for minimally invasive surgery, two laparoscopic experiments were conducted to examine the trainees' ability to handle mental rotation tasks, as well as dissection and manipulation skills in laparoscopic surgery. Detailed experimental results highlight the value of the technique in investigating hand-eye coordination in laparoscopic training, particularly during early motor learning for complex bimanual procedures.
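Granger causality asks whether the past of one signal improves prediction of another beyond the other's own past. A bivariate sketch of that idea, comparing least-squares autoregressive fits with and without the second channel (this is a simplified illustration of the principle, not the partial-directed-coherence estimator the paper uses; all names are illustrative):

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """Relative reduction in residual sum of squares when past values of y
    are added to an order-`lag` autoregressive model of x.
    A clearly positive gain suggests y Granger-causes x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    target = x[lag:]
    own = np.column_stack([x[lag - 1 - k:n - 1 - k] for k in range(lag)])
    ylag = np.column_stack([y[lag - 1 - k:n - 1 - k] for k in range(lag)])
    both = np.hstack([own, ylag])

    def rss(A):
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        resid = target - A @ beta
        return resid @ resid

    return (rss(own) - rss(both)) / rss(own)

# Synthetic pair where y leads x by one sample: y's past predicts x,
# but not the other way round.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.roll(y, 1) + 0.1 * rng.standard_normal(500)
gain_yx = granger_gain(x, y)   # large: y Granger-causes x
gain_xy = granger_gain(y, x)   # near zero
```

In the hand-eye setting, `x` and `y` would be instrument and gaze position traces, and the asymmetry between the two gains is what reverses as coordination improves.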
In today’s climate of clinical governance, there is growing pressure on surgeons to demonstrate their competence, improve standards and reduce surgical errors. This paper presents a study on developing a novel eye-gaze-driven technique for surgical assessment and workflow recovery. The proposed technique investigates the use of a Parallel Layer Perceptor (PLP) to automate the recognition of a key surgical step in a porcine laparoscopic cholecystectomy model. The classifier is eye-gaze contingent but is combined with image-based visual feature detection for improved system performance. Experimental results show that by fusing image-based instrument-likelihood measures, an overall classification accuracy of 75% is achieved.
The use of master-slave surgical robots for Minimally Invasive Surgery (MIS) has created a physical separation between the surgeon and the patient. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robotic-assisted MIS procedures. This paper introduces a novel gaze-contingent framework with real-time haptic feedback, transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of gaze-contingent motor channelling. The method also uses 3D eye gaze to dynamically prescribe and update safety boundaries during robotic-assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robotic-assisted phantom procedures demonstrate the potential clinical value of the technique.
Aim: To determine whether experience improves the consistency of visual search behaviour in fracture identification in plain radiographs, and the effect of specialization.
Material and methods: Twenty-five observers consisting of consultant radiologists, consultant orthopaedic surgeons, orthopaedic specialist registrars, orthopaedic senior house officers, and accident and emergency senior house officers examined 33 skeletal radiographs (shoulder, hand, and knee). Eye movement data were collected using a Tobii 1750 eye tracker with levels of diagnostic confidence collected simultaneously. Kullback-Leibler (KL) divergence and Gaussian mixture model fitting of fixation distance-to-fracture were used to calculate the consistency and the relationship between discovery and reflective visual search phases among different observer groups.
Results: Total time spent studying the radiograph was not significantly different between the groups. However, the expert groups had a higher number of true positives (p<0.001), with less dwell time on the fracture site (p<0.001) and a smaller KL distance (r=0.062, p<0.001) between trials. The Gaussian mixture model revealed a smaller mean squared error in the expert groups for hand radiographs (r=0.162, p=0.07); however, the reverse was true for shoulder radiographs (r=−0.287, p<0.001). The relative duration of the reflective phase decreased as the confidence level increased (r=0.266, p=0.074).
Conclusions: Expert search behaviour exhibited higher accuracy and consistency whilst using less time fixating on fracture sites. This strategy conforms to the discovery and reflective phases of the global-focal model, where the reflective search may be implicated in the cross-referencing and conspicuity of the target, as well as the level of decision-making process involved. The effect of specialization appears to change the search strategy more than the effect of the length of training.
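The consistency measure above rests on comparing fixation distributions between trials with Kullback-Leibler divergence. A minimal sketch of that comparison, assuming fixations are normalised image coordinates binned onto a coarse grid with additive smoothing (grid size, smoothing constant and function name are illustrative choices, not from the study):

```python
import numpy as np

def fixation_kl(fix_a, fix_b, grid=(8, 8), eps=1e-6):
    """KL divergence between two fixation distributions, each given as
    (x, y) points in [0, 1)^2 and binned onto a coarse grid."""
    def hist(points):
        h, _, _ = np.histogram2d(*np.array(points, float).T, bins=grid,
                                 range=[[0, 1], [0, 1]])
        h = h + eps                       # smoothing keeps KL finite
        return h / h.sum()
    p, q = hist(fix_a), hist(fix_b)
    return float(np.sum(p * np.log(p / q)))

# Identical scan patterns give zero divergence; disjoint ones do not.
same = fixation_kl([(0.1, 0.1), (0.9, 0.9)], [(0.1, 0.1), (0.9, 0.9)])
diff = fixation_kl([(0.1, 0.1)], [(0.9, 0.9)])
```

A low divergence between an observer's repeated trials is what the study reads as a consistent search strategy.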
This paper describes a decision support system for determining salient features for CT lung nodule detection using an eye-tracking based machine learning technique. The method first analyses the scan paths of expert radiologists during normal examination. The underlying features are then used to highlight salient regions that may be of diagnostic relevance by merging visual features learned from different experts with a weighted probability function. The framework has been evaluated using data from CT lung nodule examination and the results demonstrate the potential clinical value of the proposed technique, which can also be generalized to other diagnostic applications.
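The abstract does not define the weighted probability function; a minimal sketch, assuming it is a convex combination of per-expert saliency maps that is re-normalised into a probability distribution (function name, map shapes and weights below are hypothetical):

```python
import numpy as np

def merge_saliency(maps, weights):
    """Merge per-expert saliency maps with a weighted probability
    function: a normalised convex combination of normalised maps."""
    maps = [m / m.sum() for m in np.asarray(maps, float)]   # each sums to 1
    w = np.asarray(weights, float)
    w = w / w.sum()                                         # convex weights
    merged = sum(wi * m for wi, m in zip(w, maps))
    return merged / merged.sum()                            # still sums to 1

# Two experts attend to different regions; the first is weighted 3:1.
a = np.array([[1.0, 0.0], [0.0, 0.0]])
b = np.array([[0.0, 0.0], [0.0, 1.0]])
s = merge_saliency([a, b], weights=[3, 1])
```

The weights could, for instance, reflect each expert's detection accuracy, so more reliable readers contribute more to the highlighted regions.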