
Tobii EyeTracking's List: Eye Tracking Technology

  • Oct 03, 11

    Eye-based human-computer interaction (HCI) goes back at least to the early 1990s. Controlling a computer using the eyes traditionally meant extracting information from the gaze—that is, what a person was looking at. In an early work, Robert Jacob investigated gaze as an input modality for desktop computing [1]. He discussed some of the human factors and technical aspects of performing common tasks such as pointing, moving screen objects, and menu selection. Since then, eye-based HCI has matured considerably. Today, eye tracking is used successfully as a measurement technique not only in the laboratory but also in commercial applications, such as marketing research and automotive usability studies.

  • Dec 14, 10

    ABSTRACT
    The use of standardised tests when investigating verbal and cognitive ability is an important part of clinical assessment procedures all over the world.
    The purpose of this master's thesis has been to investigate whether these tests could gain from being transferred to a digital format with the added functionality of eye tracking technology. As a first step, however, it was necessary to verify that the use of eye tracking as an input method did not alter the scores measured by the test.
    A prototype was built, using C# and the Tobii TEC SDK for the eye tracker, and it was later evaluated and compared with a standardised test in an experimental study with 21 test subjects, using a within-subjects design.
    An ANOVA of the experiment data showed no significant difference between the two test conditions, which implies that eye tracking is well suited as an input method in this context. These promising results suggest that further development of this use of the technology is warranted.

  • Nov 30, 10

    ABSTRACT
    In this thesis we present an evaluation of machine learning methods for real-time classification of reading in eye movements recorded by an eye tracker. The classification uses the relative positions of fixations in the gaze data. The methods evaluated are Hidden Markov models and Artificial Neural Networks. We conclude that real-time classification is indeed possible and that Hidden Markov models offer both more predictable and better classification performance. Hidden Markov models are also more flexible, as the number of fixations used as input can be adjusted at runtime to trade off speed against classification performance.

  • Nov 30, 10

    ABSTRACT
    A new method of evaluating eye movement classification algorithms using Precision and Recall is proposed. The method involves recording test subjects looking at known stimuli and then testing various algorithms’ ability to classify the eye movements that are anticipated. This method is then used to evaluate the performance of different off-line algorithms.
    The algorithms I-DT, I-VT and an HMM-based method were tested, as well as eye tracking company Tobii Technology's algorithms ClearView and Tobii Fixation Filter. An analysis tool, freely available to the public, was developed to facilitate the process of developing and evaluating classification algorithms.
    Precision and Recall gave a clear profile of how accurately different algorithms could identify fixations. The implementations of I-VT and ClearView are essentially the same, and so were the results. The HMM offered no improvements, but should not be dismissed completely. Tobii Fixation Filter performed well due to filtering of the data.
    Most significantly, I-DT performed better than I-VT for fixation identification, while the reverse was true for extracting accurate saccadic information.
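    As a reference point for readers unfamiliar with the algorithms compared above, a minimal velocity-threshold (I-VT) classifier can be sketched as follows. This is an illustrative toy, not the implementation evaluated in the thesis; the threshold value and the use of point-to-point distance as a velocity proxy are simplifying assumptions.

```python
def classify_ivt(points, threshold=0.5):
    """Label each inter-sample movement as fixation or saccade.

    points: list of (x, y) gaze samples at a fixed sampling rate.
    threshold: point-to-point distance above which a movement counts
               as a saccade (units and value are arbitrary here).
    Returns one 'fixation'/'saccade' label per consecutive pair.
    """
    labels = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        # with a fixed sampling rate, distance per sample ~ velocity
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        labels.append('saccade' if velocity > threshold else 'fixation')
    return labels
```

    Real implementations additionally merge consecutive fixation samples into fixation events and discard implausibly short ones, which is where the differences measured by Precision and Recall above come from.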

  • Nov 08, 10

    ABSTRACT
    An eye tracker makes it possible to record the gaze point of a person looking at, for example, a computer monitor. Modern techniques are very flexible and allow the user to behave naturally without the need for cumbersome equipment such as special contact lenses or electrical probes. This is valuable in psychological research, marketing research and Human Computer Interaction. Eye trackers also give people who are severely paralyzed and unable to type or speak a means to communicate using their eyes.
    Measurement noise makes the use of digital filters necessary. An example is an eye-controlled cursor for a desktop environment such as Windows. The cursor has to be stable enough to allow the user to select folders, icons or other items of interest. While this type of application requires a fast real-time filter, others are less sensitive to processing time but demand an even higher level of accuracy. This work explores three areas of eye tracking filtration and aims at enhancing the performance of the filters used in the eye tracking systems built by Tobii Technology, Sweden. First, a post-processing algorithm to find fixations in raw gaze data is detailed. Second, modifications to an existing reading detection algorithm are described to make it more robust to natural irregularities in reading patterns. Third, a real-time filter for an eye-controlled cursor to be used in a desktop environment is designed using a low-pass filter in parallel with a change detector.
    The fixation filter produced fewer false fixations and was also able to detect fixations lying spatially closer together than the previously used filter. The reading detection algorithm was shown to be robust to natural irregularities in reading such as revisits to previously read text or skipped paragraphs. The eye-cursor filter proved to respond quicker than the previously used moving average filter while maintaining a high level of noise attenuation.
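    The eye-cursor filter described above runs a low-pass filter in parallel with a change detector. A toy version of that idea (an exponential moving average that snaps to the new position when the gaze jumps) can be sketched as below; the smoothing factor and jump threshold are arbitrary placeholders, not the values used in the thesis.

```python
class EyeCursorFilter:
    """Exponential low-pass filter with a simple change detector."""

    def __init__(self, alpha=0.1, jump_threshold=50.0):
        self.alpha = alpha                    # low-pass smoothing factor
        self.jump_threshold = jump_threshold  # saccade detector threshold (px)
        self.state = None                     # current smoothed cursor position

    def update(self, x, y):
        if self.state is None:
            self.state = (x, y)
            return self.state
        sx, sy = self.state
        dist = ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5
        if dist > self.jump_threshold:
            # change detected: respond immediately to the saccade
            self.state = (x, y)
        else:
            # no change: blend the new sample into the smoothed state
            self.state = (sx + self.alpha * (x - sx),
                          sy + self.alpha * (y - sy))
        return self.state
```

    The parallel change detector is what lets such a filter respond faster than a plain moving average: smoothing suppresses noise during fixations, while large jumps bypass the smoothing entirely.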

  • Sep 10, 10

    ABSTRACT
    To determine the accuracy and precision of pupil measurements made with the Tobii 1750 remote video eye tracker, we performed a formal metrological study with respect to a calibrated reference instrument, a medical pupillometer. We found that the eye tracker measures mean binocular pupil diameter with precision 0.10 mm and mean binocular pupil dilations with precision 0.15 mm.

  • Aug 16, 10

    ABSTRACT
    Eye tracking can be used to measure point-of-gaze data that provide information about a subject's focus of attention. The focus of a subject's attention can be used as supportive evidence in studying cognitive processes. Despite the potential usefulness of eye tracking in psychology-of-programming research, there exist only a few instances where eye tracking has actually been used. This paper presents an experiment in which we used three eye tracking devices to record subjects' points of gaze while they studied short computer programs using a program animator. The results suggest that eye tracking can be used to collect relatively accurate data for the purposes of psychology-of-programming research. The results also revealed significant differences between the devices in the accuracy of the point-of-gaze data and in the time needed to set up the monitoring process.

  • Aug 06, 10

    ABSTRACT
    We introduce an algorithm for space-variant filtering of video based on a spatio-temporal Laplacian pyramid and use this algorithm to render videos in order to visualize prerecorded eye movements. Spatio-temporal contrast and colour saturation are reduced as a function of distance to the nearest gaze point of regard, i.e. non-fixated, distracting regions are filtered out, whereas fixated image regions remain unchanged. Results of an experiment in which the eye movements of an expert on instructional videos are visualized with this algorithm, so that the gaze of novices is guided to relevant image locations, show that this visualization technique facilitates the novices' perceptual learning.
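    The space-variant filtering idea (attenuating contrast as a function of distance from the gaze point) can be illustrated on a single grayscale frame. This toy version uses a linear falloff toward the frame mean instead of the paper's spatio-temporal Laplacian pyramid, and it ignores the temporal and colour-saturation components entirely.

```python
def gaze_contingent_frame(frame, gaze, radius):
    """Reduce contrast with distance from the gaze point.

    frame: 2-D list of grayscale values.
    gaze: (x, y) point of regard; pixels near it keep full contrast.
    radius: distance at which contrast is fully attenuated.
    Returns a new frame with each pixel blended toward the mean.
    """
    h, w = len(frame), len(frame[0])
    mean = sum(sum(row) for row in frame) / (h * w)
    gx, gy = gaze
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            d = ((x - gx) ** 2 + (y - gy) ** 2) ** 0.5
            keep = max(0.0, 1.0 - d / radius)  # fraction of contrast kept
            row.append(mean + keep * (frame[y][x] - mean))
        out.append(row)
    return out
```

    The fixated region is returned unchanged while distant, potentially distracting regions are flattened toward the mean, which is the perceptual effect the visualization above exploits to guide novices' gaze.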

  • Aug 06, 10

    Abstract
    This study is concerned with the negative effects of wearing corrective lenses while using eye trackers, and the correction of those negative effects. The eye tracker technology studied is the video based real-time Pupil Center and Corneal Reflection method. With a user study, the wearing of eyeglasses is shown to cause 20 % greater errors in the accuracy of an eye tracker than when not wearing glasses. The error is shown to depend on where on the eye tracker viewing area the user is looking.
    A model for ray refraction when wearing glasses was developed. Measurements of the distortions of the eye image caused by eyeglass lenses were carried out. The distortions were analyzed with eye tracking software to determine their impact on the image-to-world coordinate mapping.
    A typical dependence of 1 mm relative distance change on the cornea to 9 degrees of visual field was found.
    The developed mathematical/physiological model for eyeglasses focuses on artifacts not possible to compensate for with existing calibration methods, primarily varying combinations of viewing angles and head rotations. The main unknown in the presented model is the effective strength of the glasses. Automatic identification is discussed. The model presented here is general in nature and needs to be developed further in order to be a part of a specific application.

  • Aug 06, 10

    ABSTRACT
    The scanpath comparison framework based on string editing is revisited. The previous method of clustering based on k-means "preevaluation" is replaced by the mean shift algorithm followed by elliptical modeling via Principal Components Analysis. Ellipse intersection determines cluster overlap, with fast nearest-neighbor search provided by the kd-tree. Subsequent construction of Y-matrices and parsing diagrams is fully automated, obviating prior interactive steps. Empirical validation is performed via analysis of eye movements collected during a variant of the Trail Making Test, where participants were asked to visually connect alphanumeric targets (letters and numbers). The observed repetitive position similarity index matches previously published results, providing ongoing support for the scanpath theory (at least in this situation). Task dependence of eye movements may be indicated by the global position index, which differs considerably from past results based on free viewing.
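    The string-editing comparison underlying the framework above reduces each scanpath to a sequence of cluster labels and scores pairs by edit distance. A generic Levenshtein distance with a length-normalized similarity (a textbook sketch, not the paper's code) captures the core operation:

```python
def edit_distance(a, b):
    """Levenshtein distance between two scanpath label strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    """Normalized scanpath similarity in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
```

    Each letter here stands for one cluster (region) produced by the mean shift / PCA step, so the edit distance measures how differently two viewers sequenced the same regions.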

  • Aug 06, 10

    ABSTRACT
    Analyzing gaze behavior with dynamic stimulus material is of growing importance in experimental psychology; however, there is still a lack of efficient analysis tools that are able to handle dynamically changing areas of interest. In this article, we present DynAOI, an open-source tool that allows for the definition of dynamic areas of interest. It works automatically with animations that are based on virtual three-dimensional models. When one is working with videos of real-world scenes, a three-dimensional model of the relevant content needs to be created first. The recorded eye-movement data are matched with the static and dynamic objects in the model underlying the video content, thus creating static and dynamic areas of interest. A validation study asking participants to track particular objects demonstrated that DynAOI is an efficient tool for handling dynamic areas of interest.

  • Aug 06, 10

    ABSTRACT
    Eye tracking specialists often need to understand and represent aggregate scanning strategies, but methods to identify similar scanpaths and aggregate multiple scanpaths have been elusive. A new method is proposed here to identify scanning strategies by aggregating groups of matching scanpaths automatically. A dataset of scanpaths is first converted to sequences of viewed area names, which are then represented in a dotplot. Matching sequences in the dotplot are found with linear regressions, and then used to cluster the scanpaths hierarchically. Aggregate scanning strategies are generated for each cluster and presented in an interactive dendrogram. While the clustering and aggregation method works in a bottom-up fashion, based on pair-wise matches, a top-down extension is also described, in which a scanning strategy is first input by cursor gesture, then matched against the dataset. The ability to discover both bottom-up and top-down strategy matches provides a powerful tool for scanpath analysis, and for understanding group scanning strategies.

  • Aug 05, 10

    ABSTRACT
    Gaze visualizations hold the potential to facilitate usability studies of interactive systems. However, visual gaze analysis in three-dimensional virtual environments still lacks methods and techniques for aggregating attentional representations. We propose three novel gaze visualizations for the application in such environments: projected, object-based, and surface-based attentional maps. These techniques provide an overview of how visual attention is distributed across a scene, among different models, and across a model’s surface. Two user studies conducted among eye tracking and visualization experts approve the high value of these techniques for the fast evaluation of eye tracking studies in virtual environments.

  • Aug 05, 10

    ABSTRACT
    This paper presents a set of qualitative and quantitative scores designed to assess the performance of any eye movement classification algorithm. The scores are designed to provide a foundation for eye tracking researchers to communicate about the performance validity of various eye movement classification algorithms. The paper concentrates on five algorithms in particular: Velocity Threshold Identification (I-VT), Dispersion Threshold Identification (I-DT), Minimum Spanning Tree Identification (MST), Hidden Markov Model Identification (I-HMM) and Kalman Filter Identification (I-KF). The paper presents an evaluation of the classification performance of each algorithm when the values of the input parameters are varied. Advantages provided by the new scores are discussed. A discussion of which classification algorithm is "best" is provided for several applications. General recommendations for the selection of the input parameters for each algorithm are provided.

  • Aug 05, 10

    "ABSTRACT
    Although heat maps are commonly provided by eye-tracking and visualization tools, they have some disadvantages and caution must be taken when using them to draw conclusions on eye tracking results. It is motivated here that visual span is an essential component of visualizations of eye-tracking data and an algorithm is proposed to allow the analyst to set the visual span as a parameter prior to generation of a heat map.
    Although the ideas are not novel, the algorithm also indicates how transparency of the heat map can be achieved and how the color gradient can be generated to represent the probability for an object to be observed within the defined visual span. The optional addition of contour lines provides a way to visualize separate intervals in the continuous color map.
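    The idea of parameterizing the kernel by visual span can be illustrated with a minimal Gaussian heat map, where each fixation contributes a kernel whose width is derived from the assumed span. Treating the span directly as the Gaussian sigma is a simplifying assumption of this sketch, not the paper's mapping.

```python
import math

def heat_map(fixations, width, height, visual_span):
    """Accumulate one Gaussian kernel per fixation.

    fixations: list of (x, y) fixation centers in pixels.
    visual_span: kernel radius in pixels; used directly as the
                 Gaussian sigma here (a simplifying assumption).
    Returns a height x width grid normalized to [0, 1], which a
    renderer would map to colors and transparency.
    """
    sigma = visual_span
    grid = [[0.0] * width for _ in range(height)]
    for fx, fy in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    peak = max(max(row) for row in grid)
    if peak > 0:
        grid = [[v / peak for v in row] for row in grid]
    return grid
```

    Because the kernel width encodes the visual span, widening it spreads attribution over a larger area; thresholding the normalized values at fixed levels would yield the contour lines mentioned above.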

  • Aug 05, 10

    ABSTRACT
    We propose a new way of analyzing pupil measurements made in conjunction with eye tracking: fixation-aligned pupillary response averaging, in which short windows of continuous pupil measurements are selected based on patterns in eye tracking data, temporally aligned, and averaged together. Such short pupil data epochs can be selected based on fixations on a particular spot or a scan path. The windows of pupil data thus selected are aligned by temporal translation and linear warping to place corresponding parts of the gaze patterns at corresponding times and then averaged together. This approach enables the measurement of quick changes in cognitive load during visual tasks, in which task components occur at unpredictable times but are identifiable via gaze data. We illustrate the method through example analyses of visual search and map reading. We conclude with a discussion of the scope and limitations of this new method.
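    The core of the averaging step (warping each selected pupil-data window to a common length and averaging them sample-wise) can be sketched generically. The linear interpolation below stands in for the paper's temporal translation and linear warping; epoch selection from gaze patterns is assumed to have happened already.

```python
def resample(signal, n):
    """Linearly resample a 1-D signal to n points (linear warping)."""
    if len(signal) == 1:
        return [signal[0]] * n
    out = []
    for i in range(n):
        t = i * (len(signal) - 1) / (n - 1)   # fractional source index
        lo = int(t)
        hi = min(lo + 1, len(signal) - 1)
        frac = t - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out

def fixation_aligned_average(epochs, n=None):
    """Warp each pupil-diameter epoch to a common length and average.

    epochs: list of pupil-measurement windows, possibly of unequal
            length, already aligned to the same gaze event.
    """
    if n is None:
        n = max(len(e) for e in epochs)
    warped = [resample(e, n) for e in epochs]
    return [sum(col) / len(col) for col in zip(*warped)]
```

    Averaging many aligned epochs suppresses unrelated pupil fluctuations, which is what lets the method resolve quick, task-locked changes in cognitive load.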

  • Jul 22, 10

    ABSTRACT
    Eye tracking heatmaps have become very popular and easy to create over the last few years. They are very compelling and can be effective in summarizing and communicating data. However, heatmaps are often used incorrectly and for the wrong reasons. In addition, many do not include all the information that is necessary for proper interpretation. This paper describes several types of heatmaps as representations of different aspects of visual attention, and provides guidance on when to use and how to interpret heatmaps. It explains how heatmaps are created and how their appearance can be modified by manipulating different display settings. Guidelines for proper use of heatmaps are also proposed.

  • Jul 20, 10

    ABSTRACT
    The design of robust and high-performance gaze-tracking systems is one of the most important objectives of the eye-tracking community. In general, a subject calibration procedure is needed to learn system parameters and be able to estimate the gaze direction accurately. In this paper, we attempt to determine if subject calibration can be eliminated. A geometric analysis of a gaze-tracking system is conducted to determine user calibration requirements. The eye model used considers the offset between optical and visual axes, the refraction of the cornea, and Donders' law. This paper demonstrates the minimal number of cameras, light sources, and user calibration points needed to solve for gaze estimation. The underlying geometric model is based on glint positions and the pupil ellipse in the image, and the minimal hardware needed for this model is one camera and multiple light-emitting diodes. This paper proves that subject calibration is compulsory for correct gaze estimation and proposes a model based on a single point for subject calibration. The experiments carried out show that, although two glints and one calibration point are sufficient to perform gaze estimation (error approximately 1 degree), using more light sources and calibration points can result in lower average errors.

  • Apr 09, 10

    Abstract
    Characterizing the location and extent of a viewer's interest, in terms of eye movement recordings, informs a range of investigations in image and scene viewing. We present an automatic data-driven method for accomplishing this, which clusters visual point-of-regard (POR) measurements into gazes and regions-of-interest using the mean shift procedure. Clusters produced using this method form a structured representation of viewer interest, and at the same time are replicable and not heavily influenced by noise or outliers. Thus, they are useful in answering fine-grained questions about where and how a viewer examined an image.
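    Mean shift, the procedure the abstract relies on, iteratively moves each point toward the mean of its neighbors until the points converge on local density modes. A bare-bones 2-D version with a flat kernel (the bandwidth and iteration count here are arbitrary, and the real method uses kernel density estimation rather than this naive loop) looks like:

```python
def mean_shift(points, bandwidth, iterations=20):
    """Shift each point toward the mean of its in-bandwidth neighbors.

    points: list of (x, y) point-of-regard samples.
    bandwidth: flat-kernel radius; only neighbors within it contribute.
    Returns the shifted points; samples from the same gaze converge
    to a shared mode, which identifies the cluster.
    """
    shifted = list(points)
    for _ in range(iterations):
        new = []
        for px, py in shifted:
            nbrs = [(qx, qy) for qx, qy in shifted
                    if (qx - px) ** 2 + (qy - py) ** 2 <= bandwidth ** 2]
            mx = sum(q[0] for q in nbrs) / len(nbrs)
            my = sum(q[1] for q in nbrs) / len(nbrs)
            new.append((mx, my))
        shifted = new
    return shifted
```

    Because the modes are determined by the data's own density rather than a preset cluster count, the resulting gazes and regions-of-interest are replicable and relatively robust to noise, as the abstract notes.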

  • Apr 09, 10

    ABSTRACT
    The process of fixation identification—separating and labeling fixations and saccades in eye-tracking protocols—is an essential part of eye-movement data analysis and can have a dramatic impact on higher-level analyses. However, algorithms for performing fixation identification are often described informally and rarely compared in a meaningful way. In this paper we propose a taxonomy of fixation identification algorithms that classifies algorithms in terms of how they utilize spatial and temporal information in eye-tracking protocols. Using this taxonomy, we describe five algorithms that are representative of different classes in the taxonomy and are based on commonly employed techniques. We then evaluate and compare these algorithms with respect to a number of qualitative characteristics. The results of these comparisons offer interesting implications for the use of the various algorithms in future work.
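    Of the algorithm classes in such a taxonomy, the dispersion-threshold approach (I-DT) is perhaps the simplest to sketch: grow a window while the spread of points stays under a dispersion threshold, and emit a fixation when the window also meets a minimum duration. The thresholds below are illustrative, and the sample-count duration criterion assumes a fixed sampling rate.

```python
def dispersion(window):
    """Spread of a window of (x, y) points: (max-min x) + (max-min y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt(points, max_dispersion=1.0, min_samples=3):
    """Dispersion-threshold fixation identification.

    Returns fixations as (start_index, end_index_exclusive, centroid).
    """
    fixations = []
    i = 0
    while i < len(points):
        j = i + min_samples
        if j > len(points) or dispersion(points[i:j]) > max_dispersion:
            i += 1          # window too spread out: discard first point
            continue
        while j < len(points) and dispersion(points[i:j + 1]) <= max_dispersion:
            j += 1          # extend window while dispersion stays low
        window = points[i:j]
        cx = sum(p[0] for p in window) / len(window)
        cy = sum(p[1] for p in window) / len(window)
        fixations.append((i, j, (cx, cy)))
        i = j
    return fixations
```

    This makes the taxonomy's distinction concrete: I-DT uses spatial information (dispersion) plus temporal information (minimum duration), whereas a velocity-threshold algorithm uses only sample-to-sample velocity.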
