"A non-specific "top-heavy" configuration bias has been proposed to
explain neonatal face preference (F. Simion, E. Valenza, V. Macchi Cassia, C.
Turati, & C. Umiltà,
). Using an eye tracker (Tobii T60), we investigated
whether the top-heavy bias is still present in 3- to 5.5-month-old infants and
in adults as a comparison group. Each infant and adult viewed three classes of
stimuli: simple geometric patterns, face-like figures, and photographs of faces.
Using area of interest analyses on fixation duration, we computed a top-heavy
bias index (a number between −1 and 1) for each individual. Our results showed
that the indices for the geometric and face-like patterns were about zero in
infants, indicating no consistent bias for the "top-heavy" configuration. In
adults, the indices for the geometric and face-like patterns were also close to
zero, except for the T-shaped figure and the patterns rated higher on
facedness. Moreover, the indices for photographs of faces were positive in both
infants and adults, indicating significant preferences for upright natural faces
over inverted ones. Taken together, we found no evidence for the top-heavy
configuration bias in either infants or adults. The absence of a top-heavy
bias, together with a clear preference for photographed upright faces in infants,
seems to
suggest an early cognitive specialization process toward face representation."
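The abstract does not spell out how the top-heavy bias index is computed, only that it falls between −1 and 1. One plausible formulation is a normalized difference of fixation durations between the upper and lower areas of interest; the function and parameter names below are hypothetical:

```python
def top_heavy_index(upper_dwell_ms, lower_dwell_ms):
    """Normalized difference of fixation durations in the upper vs. lower
    area of interest; ranges from -1 (all looking at the lower half)
    to +1 (all looking at the upper half)."""
    total = upper_dwell_ms + lower_dwell_ms
    if total == 0:
        return 0.0  # no fixations recorded in either AOI
    return (upper_dwell_ms - lower_dwell_ms) / total

# A participant who fixated the top half for 1200 ms and the bottom
# half for 800 ms gets a mildly positive index:
print(top_heavy_index(1200, 800))  # 0.2
```

An index near zero, as reported for the geometric and face-like patterns, would mean dwell time was split roughly evenly between the two halves.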
Eye movement recordings produce large quantities of spatio-temporal data and are increasingly used to gain further insight into human thinking in usability studies, in the GIScience domain among others. After reviewing some common visualization methods for eye movement data, the limitations of these methods are discussed. This paper proposes an approach that enables the use of the Space-Time-Cube (STC) for the representation of eye movement recordings. Via interactive functions in the STC, spatio-temporal patterns in eye movement data can be analyzed. A case study illustrating the proposed approach to eye movement data analysis is presented. Finally, the advantages and limitations of using the STC to visually analyze eye movement recordings are summarized and discussed.
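The core idea of the STC is to keep the stimulus plane as the (x, y) base of a cube and map recording time onto the vertical axis, so a gaze path becomes a 3-D trajectory. A minimal sketch of that coordinate mapping, with hypothetical names and a normalized cube assumed:

```python
def to_space_time_cube(samples, screen_w, screen_h):
    """Map (timestamp_ms, x_px, y_px) gaze samples into normalized
    Space-Time-Cube coordinates: x, y in [0, 1] over the stimulus
    plane, z in [0, 1] over the recording's time span."""
    t0 = samples[0][0]
    span = (samples[-1][0] - t0) or 1  # guard against a zero time span
    return [(x / screen_w, y / screen_h, (t - t0) / span)
            for t, x, y in samples]

# Three gaze samples over one second on a 1280x720 stimulus:
samples = [(0, 640, 360), (500, 960, 180), (1000, 320, 540)]
cube = to_space_time_cube(samples, 1280, 720)
print(cube[0])  # (0.5, 0.5, 0.0)
```

The resulting triples can then be handed to any 3-D plotting or interactive visualization toolkit to render the trajectory inside the cube.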
Despite the word’s common usage by gamers and reviewers alike, it is still not clear what immersion means. This paper explores immersion further by investigating whether immersion can be defined quantitatively, describing three experiments in total. The first experiment investigated participants’ abilities to switch from an immersive to a non-immersive task. The second experiment investigated whether there were changes in participants’ eye movements during an immersive task. The third experiment investigated the effect of an externally imposed pace of interaction on immersion and affective measures (state-anxiety, positive affect, negative affect). Overall, the findings suggest that immersion can be measured subjectively (through questionnaires) as well as objectively (task completion time, eye movements). Furthermore, immersion is not only viewed as a positive experience: negative emotions and uneasiness (i.e., anxiety) also run high.
Games have arguably been the most impressive success of any computer-based application, and it would be useful to be able to extract some of the successful features of games for use in different application areas. Whilst games are clearly a multi-faceted phenomenon, when talking about games, gamers and reviewers often refer to the immersive experience of the game as being of particular importance. Moreover, the term immersion can be applied across many different genres of games, from first-person shooters to strategy games and simulations. However, whilst many people use the term immersion, it is not clear exactly what this term means or whether the experience of immersion is the same across different games. Earlier qualitative studies (Brown & Cairns, 2004) showed that immersion can be better understood as a scale of experience, with lower levels of immersion leading to higher levels. The purpose of our current work is to consider whether it is possible to quantify the experience of immersion through more objective measures of the cognition of an immersed person, such as eye movements.
Eye-movement tracking is a method that is increasingly being employed to study usability issues in HCI contexts. The objectives of the present chapter are threefold. First, we introduce the reader to the basics of eye-movement technology, and also present key aspects of practical guidance to those who might be interested in using eye tracking in HCI research, whether in usability-evaluation studies, or for capturing people’s eye movements as an input mechanism to drive system interaction. Second, we examine various ways in which eye movements can be systematically measured to examine interface usability. We illustrate the advantages of a range of different eye-movement metrics with reference to state-of-the-art usability research. Third, we discuss the various opportunities for eye-movement studies in future HCI research, and detail some of the challenges that need to be overcome to enable effective application of the technique in studying the complexities of advanced interactive-system use.
Research applying eye-tracking to usability testing is increasing in popularity. A great deal of data can be obtained with eye-tracking, but there is little guidance as to how eye-movement data can be used in software usability testing. In the current study, users' eye-movements were recorded while they completed a series of tasks on one of three e-commerce websites specializing in educational toys. Four main research questions were addressed in this study: (1) Are eye-tracking measures correlated with the more traditional measures of website usability (e.g., success, time on task, number of pages visited); (2) Are eye-tracking measures sensitive to differences in task difficulty; (3) Are eye-tracking measures sensitive to differences in site usability; and (4) How does the design of a website drive user eye-movements? Traditional usability performance measures consisted of time on task, number of pages visited, and perceived task difficulty. Eye-tracking measures included the number of fixations, total dwell time, and average fixation duration. In general, all these measures were found to be highly correlated with one another, with the exception of average fixation duration. The two groups of measures generally agreed on differences in task difficulty; tasks showing high scores on one variable (e.g., time on task) showed high results on other measures (e.g., number of fixations). Similar agreement among measures was observed in comparisons of the sites on each task. The unique contributions of eye-tracking to usability testing were best realized in qualitative examinations of eye-tracking data in relation to specific areas of interest (AOIs) on site pages, which demonstrated this to be a useful tool in understanding how aspects of design may drive users' visual exploration of a web page.
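The three eye-tracking measures named in the study (number of fixations, total dwell time, average fixation duration) all derive from the same per-task list of fixation durations, which is why the first two are so strongly coupled. A minimal sketch of the computation, with hypothetical names:

```python
def fixation_metrics(fixation_durations_ms):
    """Compute the study's three eye-tracking measures from one task's
    list of fixation durations (in milliseconds)."""
    n = len(fixation_durations_ms)
    dwell = sum(fixation_durations_ms)
    return {
        "fixation_count": n,            # number of fixations
        "total_dwell_ms": dwell,        # total dwell time
        "mean_fixation_ms": dwell / n if n else 0.0,  # avg fixation duration
    }

print(fixation_metrics([220, 180, 300, 260]))
# {'fixation_count': 4, 'total_dwell_ms': 960, 'mean_fixation_ms': 240.0}
```

Note how the mean is the only quantity not monotonically tied to task length, which is consistent with average fixation duration being the one measure that did not correlate with the rest.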
Since the mid-1800s, experimental psychologists have been using eye movements and gaze direction to make inferences about perception and cognition in adults (Müller, 1826, cited in Boring, 1942). In the past 175 years, these oculomotor measures have been refined (see Kowler, 1990) and used to address similar questions in infants (see Aslin, 1985, 1987; Bronson, 1982; Haith, 1980; Maurer, 1975). The general rationale for relying on these visual behaviors is that where one is looking is closely tied to what one is seeing. This is not to deny the fact that we can detect visual stimuli in the peripheral visual field, but rather that there is a bias to attend to and process information primarily when it is located in the central portion of the retina. Thus, although the direction of gaze is not perfectly correlated with the uptake of visual information (e.g., as in a blank stare or a covert shift of attention), there is a strong presumption that the direction of gaze can provide important information about visual stimuli even in newborn infants (Haith, 1966; Salapatek, 1968; Salapatek & Kessen, 1966).
The eyes have it! This chapter describes cutting-edge computer vision methods employed in advanced vision sensing technologies for medical, safety, and security applications, where the human eye represents the object of interest for both the imager and the computer. A camera receives light from the real eye to form a sequence of digital images of it. As the eye scans the environment, or focuses on particular objects in the scene, the computer simultaneously localizes the eye position, tracks its movement over time, and infers measures such as the attention level and the gaze direction, in real time and fully automatically. The main focus of this chapter is on computer vision and pattern recognition algorithms for eye appearance variability modeling, automatic eye detection, and robust eye position tracking. This chapter offers good readings and solid methodologies for building the two fundamental low-level building blocks of a vision-based eye-tracking technology.
The growing interest in using eye movements for human-computer interaction has also increased the need for tools to investigate and analyse the behavior of human eyes. The first such tools were developed around the same time the first eye trackers became available. However, these tools were bound to the structure of data produced by a specific eye tracker, and thus each tool supported only the eye tracker for which it was developed. This hampered the introduction of next-generation devices and required a lot of effort to transfer the functionality of the tools to new platforms. Nowadays, there are many commercial and academic products available for researchers in this field, and the quality and accuracy of eye-tracking devices are constantly increasing. Several researchers have attempted to develop tools that support analysis of the data recorded with different eye trackers [2, 3]. This way, the same software could support different data protocols and formats. Meanwhile, several manufacturers released eye trackers whose data-transfer and collection protocols can be recognized by some of the most advanced gaze-data analysis tools. However, there is still a lack of effective tools to support various eye trackers in recording eye movements and using these data in real time. Despite the numerous methods developed for analysing and visualizing gaze paths, no universal tools are available yet to accomplish this. To fill this gap, we developed iComponent - a software product with a highly flexible architecture for easy development of interchangeable plug-in modules to support various eye-tracking devices and experimental software. This paper describes the main functionality of iComponent related to gathering, analysis and visualization of eye gaze data. The presentation is organized in the following sequence. First, the login manager is introduced.
Then, the paper describes the Quick Start wizard used to help the user prepare for a data recording or analysis session. Finally..
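The abstract does not show iComponent's actual plug-in API, but the interchangeable-module idea it describes can be sketched generically: the host defines one device-neutral interface, and each tracker ships as a registered implementation. All names below (`EyeTrackerPlugin`, `register`, `SimulatedTracker`) are hypothetical:

```python
class EyeTrackerPlugin:
    """Minimal interface every eye-tracker plug-in implements, so the
    host application can treat all devices uniformly."""
    name = "abstract"

    def connect(self):
        raise NotImplementedError

    def read_sample(self):
        """Return one (timestamp_ms, x, y) gaze sample."""
        raise NotImplementedError


PLUGINS = {}

def register(plugin_cls):
    """Class decorator: make a device available to the host by name."""
    PLUGINS[plugin_cls.name] = plugin_cls
    return plugin_cls


@register
class SimulatedTracker(EyeTrackerPlugin):
    """A stand-in device that always reports the screen center."""
    name = "simulated"

    def connect(self):
        return True

    def read_sample(self):
        return (0, 0.5, 0.5)


tracker = PLUGINS["simulated"]()
print(tracker.connect(), tracker.read_sample())  # True (0, 0.5, 0.5)
```

Supporting a new eye tracker then means adding one registered class rather than porting the whole analysis tool, which is the portability problem the paper describes.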
The pupil diameter (PD) has been found to respond to cognitive and emotional processes. However, the pupillary light reflex (PLR) is known to be the dominant factor in determining pupil size. In this paper, we attempt to minimize the PLR-driven component in the measured PD signal through an Adaptive Interference Canceller (AIC) with the H∞ time-varying (HITV) adaptive algorithm, so that the output of the AIC, the Modified Pupil Diameter (MPD), can be used as an indication of the pupillary affective response (PAR) after some post-processing. The results of this study confirm that the AIC with the HITV adaptive algorithm is able to minimize the PD changes caused by the PLR to an acceptable level, facilitating the affective assessment of a computer user through the resulting MPD signal.
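An adaptive interference canceller adapts a filter on a reference signal (here, measured illumination) to predict the interference component of the primary signal (the reflex-driven part of the pupil diameter); the residual approximates the affect-driven component. The sketch below uses plain LMS adaptation as a simpler stand-in for the paper's H∞ time-varying algorithm, with hypothetical signal names:

```python
def lms_canceller(primary, reference, mu=0.01, taps=4):
    """Adaptive interference canceller (LMS stand-in for the paper's
    HITV algorithm).  Returns the residual (MPD-like) signal."""
    w = [0.0] * taps
    residual = []
    for n in range(len(primary)):
        # Most recent `taps` reference samples, zero-padded at the start
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))  # estimated PLR component
        e = primary[n] - y                        # canceller output
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]  # LMS weight update
        residual.append(e)
    return residual

light = [1.0, 0.8, 1.2, 0.9, 1.1, 1.0]       # reference: light level
pd = [3.0 - 0.5 * v for v in light]          # PD shrinks as light rises
mpd = lms_canceller(pd, light)
```

As the weights converge, the light-correlated part of `pd` is progressively subtracted, leaving whatever variation the reference cannot explain.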
Human computer interfaces (HCI) for assisting persons with disabilities may employ eye gazing as the primary computer input mechanism. These systems rely on the use of remote eye-gaze tracking (EGT) devices to compute the direction of gaze and employ it to control the mouse cursor. Regrettably, the performance of these interfaces is traditionally affected by inaccuracies inherited from the eye tracking devices and ineffective EGT to mouse-pointer data conversion mechanisms. This study addresses this problem and proposes a new optimized data conversion mechanism. It analyzes in more detail the correlation between the two data types, resulting in a considerable increase in the accuracy of the system. This improved data conversion interface integrates the following procedures: (a) map the correlation between the EGT data and the mouse cursor position, (b) apply a curve fitting method that best suits the behavior of the data, (c) interpret the direction of gaze in order to determine the appropriate mouse cursor response, and (d) use effective means to monitor and evaluate the system performance.
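Steps (a) and (b) amount to fitting a calibration curve from raw EGT readings to screen pixels. The simplest choice per axis is a linear least-squares fit; the abstract does not say which curve family the authors used, so the example below is only an illustration with hypothetical calibration data:

```python
def fit_linear(gaze, pixels):
    """Least-squares fit of pixels ~ a * gaze + b for one screen axis,
    from paired calibration samples."""
    n = len(gaze)
    mx = sum(gaze) / n
    my = sum(pixels) / n
    sxx = sum((g - mx) ** 2 for g in gaze)
    sxy = sum((g - mx) * (p - my) for g, p in zip(gaze, pixels))
    a = sxy / sxx          # slope: pixels per unit of raw gaze signal
    b = my - a * mx        # intercept
    return a, b

# Hypothetical calibration: raw horizontal EGT readings vs. the known
# x pixel of each calibration target on a 1280-px-wide screen.
a, b = fit_linear([0.1, 0.5, 0.9], [128, 640, 1152])
print(round(a * 0.5 + b))  # 640
```

A higher-order polynomial or per-region fit can replace the linear model where the gaze-to-pixel relationship is visibly nonlinear, which is the point of step (b).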
In the present article, new software is introduced that allows eye- and mouse-tracking data from slideshow-based experiments to be recorded and analyzed in parallel. The Open Gaze and Mouse Analyzer (OGAMA) is written in C#.NET and has been released as an open-source project. Its main features include slideshow design, the recording of gaze and mouse data, database-driven preprocessing and filtering of gaze and mouse data, the creation of attention maps, areas-of-interest definition, and replay. Recordings in ASCII format from eye-tracking and/or presentation software and hardware can be imported. Data output is provided that can be used directly with different statistical software packages. Because it is open source, one can easily adapt it to suit one's needs.
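Of the features listed, attention maps are the easiest to illustrate: fixations are accumulated into a spatial grid weighted by duration (full tools like OGAMA then smooth the grid, typically with a Gaussian kernel, before rendering). A crude unsmoothed sketch with hypothetical names:

```python
def attention_map(fixations, grid_w=8, grid_h=6):
    """Accumulate fixation duration into the grid cell each (x, y)
    fixation falls in; x and y are normalized screen coordinates
    in [0, 1)."""
    grid = [[0.0] * grid_w for _ in range(grid_h)]
    for x, y, duration_ms in fixations:
        col = min(int(x * grid_w), grid_w - 1)
        row = min(int(y * grid_h), grid_h - 1)
        grid[row][col] += duration_ms
    return grid

# Two nearby fixations top-left, one bottom-right:
heat = attention_map([(0.1, 0.1, 200), (0.12, 0.1, 300), (0.9, 0.9, 150)])
print(heat[0][0], heat[5][7])  # 500.0 150.0
```

Rendering the grid as a semi-transparent color overlay on the stimulus gives the familiar heat-map view.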
• To establish common eye-tracker settings for all participants in the project.
• To test these settings in a basic study with the Tobii 1750, replicating a well-known effect in psycholinguistics and eye movement research.
Multimodal conversational interfaces allow users to carry on a dialog with a graphical display, using speech to accomplish a particular task. Motivated by previous psycholinguistic findings, we examine how eye-gaze contributes to reference resolution in such a setting. Specifically, we present an integrated probabilistic framework that combines speech and eye-gaze for reference resolution. We further examine the relationship between eye-gaze and increased domain modeling with corresponding linguistic processing. Our empirical results show that the incorporation of eye-gaze significantly improves reference resolution performance. This improvement is most dramatic when a simple domain model is used. Our results also show that minimal domain modeling combined with eye-gaze significantly outperforms complex domain modeling without eye-gaze, which indicates that eye-gaze can potentially compensate for a lack of domain modeling in reference resolution.
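The paper's actual probabilistic framework is not given in the abstract, but one common way to fuse two evidence channels is a naive independence combination: multiply per-object probabilities from speech and gaze, then renormalize. The function and the toy scene below are hypothetical illustrations of that idea:

```python
def resolve_reference(speech_scores, gaze_scores):
    """Fuse per-object probabilities from the speech and eye-gaze
    channels, assuming conditional independence:
    P(obj | speech, gaze) is proportional to
    P(obj | speech) * P(obj | gaze)."""
    joint = {o: speech_scores[o] * gaze_scores.get(o, 0.0)
             for o in speech_scores}
    z = sum(joint.values()) or 1.0  # normalizer
    posterior = {o: p / z for o, p in joint.items()}
    return max(posterior, key=posterior.get), posterior

# "the red one" is ambiguous between two red objects; gaze breaks the tie.
speech = {"red_cup": 0.5, "red_plate": 0.5, "blue_cup": 0.0}
gaze = {"red_cup": 0.7, "red_plate": 0.2, "blue_cup": 0.1}
best, post = resolve_reference(speech, gaze)
print(best)  # red_cup
```

The example also shows why gaze helps most with a simple domain model: when linguistic evidence leaves several candidates tied, even a coarse gaze distribution is enough to disambiguate.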
This paper describes the practical side of eye tracker use in the field of human computer interaction. The paper relates to usability evaluations in practice, covering those topics of primary importance to practitioners, including the business case for eye tracking and the technique's benefits and limitations. The authors describe techniques, based on practical experience, to be deployed to ensure success with eye tracking, and provide some useful links and references for those contemplating adoption of the technique. Ideas on future practical areas of deployment are discussed.