
  • Jul 19, 12

    Abstract
    Since eye gaze may serve as an efficient and natural input for steering in virtual 3D scenes, we investigate the design of eye gaze steering user interfaces (UIs) in this paper. We discuss design considerations and propose design alternatives based on two selected steering approaches differing in input condition (discrete vs. continuous) and velocity selection (constant vs. gradient-based). The proposed UIs have been iteratively advanced based on two user studies with twelve participants each. In particular, the combination of continuous and gradient-based input shows a high potential, because it allows for gradually changing the moving speed and direction depending on a user’s point-of-regard. This has the advantage of reducing overshooting problems and dwell-time activations. We also investigate discrete constant input for which virtual buttons are toggled using gaze dwelling. As an alternative, we propose the Sticky Gaze Pointer as a more flexible way of discrete input.
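
    The gradient-based, continuous variant described above lends itself to a simple mapping from point-of-regard to moving speed. Below is a minimal sketch of one such mapping, assuming a central dead zone and a quadratic speed ramp; the function name, thresholds and units are illustrative assumptions, not the UIs proposed in the paper.

        # Sketch: continuous, gradient-based gaze steering (illustrative only).
        # Speed grows with the distance of the point-of-regard from the screen
        # centre, so small gaze offsets give slow, precise movement and large
        # offsets give fast movement, which helps reduce overshooting.

        def steering_velocity(gaze_x, gaze_y, screen_w, screen_h,
                              dead_zone=0.1, max_speed=2.0):
            """Map a gaze point (pixels) to a 2D steering velocity (assumed units/s)."""
            # Normalised offset from the screen centre, roughly in [-1, 1] per axis.
            dx = (gaze_x - screen_w / 2) / (screen_w / 2)
            dy = (gaze_y - screen_h / 2) / (screen_h / 2)

            magnitude = (dx ** 2 + dy ** 2) ** 0.5
            if magnitude < dead_zone:
                return 0.0, 0.0          # looking near the centre: stand still

            # Speed ramps up smoothly (quadratically here) outside the dead zone.
            ramp = min(1.0, (magnitude - dead_zone) / (1 - dead_zone))
            speed = max_speed * ramp ** 2
            return speed * dx / magnitude, speed * dy / magnitude

        if __name__ == "__main__":
            print(steering_velocity(1600, 500, 1920, 1080))  # gaze right of centre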

  • Oct 03, 11

    ABSTRACT
    Situated public displays and interactive surfaces are becoming ubiquitous in our daily lives. Issues arise with these devices when attempting to interact over a distance or with content that is physically out of reach. In this paper we outline three techniques that combine gaze with manual hand-controlled input to move objects. We demonstrate and discuss how these techniques could be applied to two scenarios involving (1) a multi-touch surface and (2) a public display and a mobile device.

  • Aug 06, 10

    ABSTRACT
    To enable people with motor impairments to use gaze control to play online games and take part in virtual communities, new interaction techniques are needed that overcome the limitations of dwell clicking on icons in the game interface. We have investigated gaze gestures as a means of achieving this. We report the results of an experiment with 24 participants that examined performance differences between different gestures. We were able to predict the effect on performance of the number of legs in a gesture and the primary direction of eye movement in a gesture. We also report the outcomes of user trials in which 12 experienced gamers used the gaze gesture interface to play World of Warcraft. All participants were able to move around and engage other characters in fighting episodes successfully. Gestures were good for issuing specific commands such as spell casting, and less good for continuous control of movement compared with other gaze interaction techniques we have developed.

  • Aug 06, 10

    ABSTRACT
    Eye gaze interaction for disabled people is often dealt with by designing ad-hoc interfaces, in which the big size of their elements compensates for both the inaccuracy of eye trackers and the instability of the human eye. Unless solutions for reliable eye cursor control are employed, gaze pointing in ordinary graphical operating environments is a very difficult task. In this paper we present an eye-driven cursor for MS Windows which behaves differently according to the "context". When the user's gaze is perceived within the desktop or a folder, the cursor can be discretely shifted from one icon to another. Within an application window or where there are no icons, on the contrary, the cursor can be continuously and precisely moved. Shifts in the four directions (up, down, left, right) occur through dedicated buttons. To increase user awareness of the currently pointed spot on the screen while continuously moving the cursor, a replica of the spot is provided within the active direction button, resulting in improved pointing performance.

  • Aug 06, 10

    ABSTRACT
    This paper introduces a Real Time Eye Movement Identification (REMI) protocol designed to address challenges related to the implementation of eye-gaze-guided computer interfaces. The REMI protocol provides the framework for 1) eye position data processing, such as noise removal, smoothing, prediction, and handling of invalid positional samples; 2) real-time eye movement identification into the basic eye movement types; and 3) mapping of the classified eye movement data to interface actions such as object selection.
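
    As a rough illustration of the first REMI stage (noise removal, smoothing and handling of invalid samples), the sketch below holds the last valid position when the tracker drops a sample and applies simple exponential smoothing. The conventions here (invalid samples reported as None, the smoothing constant) are assumptions, not the protocol's actual specification.

        # Sketch of a first cleaning stage (illustrative): repeat the last valid
        # sample when the tracker reports an invalid position, and apply
        # exponential smoothing to suppress high-frequency noise.

        def clean_gaze_stream(samples, alpha=0.3):
            """samples: iterable of (x, y) tuples, or None for invalid samples."""
            smoothed = []
            last_valid = None
            for sample in samples:
                if sample is None:            # invalid sample: reuse last valid point
                    if last_valid is None:
                        continue
                    sample = last_valid
                if last_valid is None:
                    last_valid = sample
                x = alpha * sample[0] + (1 - alpha) * last_valid[0]
                y = alpha * sample[1] + (1 - alpha) * last_valid[1]
                last_valid = (x, y)
                smoothed.append((x, y))
            return smoothed

        if __name__ == "__main__":
            raw = [(100, 100), None, (104, 98), (300, 110), (302, 112)]
            print(clean_gaze_stream(raw))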

  • Aug 05, 10

    ABSTRACT
    As prototypes of data glasses having both data augmentation and gaze tracking capabilities are becoming available, it is now possible to develop proactive gaze-controlled user interfaces to display information about objects, people, and other entities in real-world setups. In order to decide which objects the augmented information should be about, and how saliently to augment, the system needs an estimate of the importance or relevance of the objects of the scene for the user at a given time. The estimates will be used to minimize distraction of the user, and for providing efficient spatial management of the augmented items. This work is a feasibility study on inferring the relevance of objects in dynamic scenes from gaze. We collected gaze data from subjects watching a video for a pre-defined task. The results show that a simple ordinal logistic regression model gives relevance rankings of scene objects with a promising accuracy.

  • Aug 05, 10

    ABSTRACT
    Welcome to the course: Gazing at Games: Using Eye Tracking to Control Virtual Characters. I will start with a short introduction of the course which will give you an idea of its aims and structure. I will also talk a bit about my background and research interests and motivate why I think this work is important.

  • Jul 19, 10

    EXECUTIVE SUMMARY
    This report includes a description of user trials that have been carried out by the three COGAIN partners – DART/Sahlgrenska University Hospital (Gothenburg), The Politecnico di Torino (in partnership with the Torino ALS Centre) and The ACE Centre, Oxford. All of the results point to the huge potential benefits for the kinds of people who need it most. The findings of the Politecnico di Torino in partnership with the Torino ALS Centre were as follows:
    • The level of satisfaction and engagement gained from eye-control was relative to the level of the person’s disability.
    • Patients who were unable to speak or move any of their limbs were very motivated to learn a new method of communication and felt that eye-control gave them hope.
    • The team felt, following the trial, that eye-control potentially offers great satisfaction for ALS patients once other methods of control (head-mouse, switches etc.) have failed.
    • The majority of the ALS patients involved were not aware that it is possible to write a letter, play chess, send an e-mail, or communicate needs, emotions, and problems just by eye-gaze alone.

    From a technical point of view, implementing the use of an eye control system with the majority of people with ALS is a comparatively straightforward process, as most have good visual, cognitive and literacy skills. They do not have involuntary movement, so they can potentially choose from a range of eye control systems, whether they are designed to accommodate head movement or not. On the other hand, ACE and DART deliberately chose to work with people who might also benefit greatly from eye control but who find it difficult because of involuntary head movement, visual difficulties and/or learning difficulties. Their aim was to use their clinical and technical skills and experience to see how best to accommodate their needs. They found that considerations when assessing for and implementing the use of an eye control system should include the following:
    • Appropriate mounting and positioning

  • Mar 08, 10

    ABSTRACT
    Human-computer interfaces (HCI) for assisting persons with disabilities may employ eye gaze as the primary computer input mechanism. These systems rely on the use of remote eye-gaze tracking (EGT) devices to compute the direction of gaze and employ it to control the mouse cursor. Regrettably, the performance of these interfaces is traditionally affected by inaccuracies inherited from the eye tracking devices and ineffective EGT to mouse-pointer data conversion mechanisms. This study addresses this problem and proposes a new optimized data conversion mechanism. It analyzes in more detail the correlation between the two data types, resulting in a considerable increase in the accuracy of the system. This improved data conversion interface integrates the following procedures: (a) map the correlation between the EGT data and the mouse cursor position, (b) apply a curve fitting method that best suits the behavior of the data, (c) interpret the direction of gaze in order to determine the appropriate mouse cursor response, and (d) use effective means to monitor and evaluate the system performance.
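
    Step (b) can be pictured as a per-axis least-squares fit between raw EGT readings and known calibration targets. The sketch below uses a second-order polynomial via numpy; the polynomial order, the per-axis independence and the hypothetical calibration values are assumptions rather than the paper's optimized mechanism.

        # Sketch: map raw eye-tracker coordinates to mouse-cursor coordinates by
        # least-squares polynomial fitting on calibration data (illustrative only).
        import numpy as np

        def fit_axis(raw, screen, degree=2):
            """Fit a screen coordinate as a polynomial of the raw EGT coordinate."""
            return np.polyfit(raw, screen, degree)

        def egt_to_cursor(raw_x, raw_y, coeffs_x, coeffs_y):
            """Convert one raw gaze sample to an estimated cursor position."""
            return float(np.polyval(coeffs_x, raw_x)), float(np.polyval(coeffs_y, raw_y))

        if __name__ == "__main__":
            # Hypothetical calibration data: raw tracker values vs. known target pixels.
            raw_x = np.array([0.10, 0.35, 0.60, 0.85])
            scr_x = np.array([150, 640, 1180, 1750])
            raw_y = np.array([0.15, 0.40, 0.65, 0.90])
            scr_y = np.array([100, 400, 720, 1020])
            cx, cy = fit_axis(raw_x, scr_x), fit_axis(raw_y, scr_y)
            print(egt_to_cursor(0.5, 0.5, cx, cy))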

  • Mar 08, 10

    ABSTRACT
    We present an eyes-only computer game, Invisible Eni, which uses gaze, blinking and, as a novelty, pupil size to affect game state. Pupil size can be indirectly controlled by physical activation, strong emotional experiences and cognitive effort. Invisible Eni maps the pupil size variations to the game mechanics and allows players to control game objects by use of willpower. We present the design rationale behind the interaction in Invisible Eni and consider the design implications of using pupil measurements in the interface. We discuss limitations of pupil-based interaction and provide suggestions for using pupil size as an active input modality.

  • Dec 10, 09

    ABSTRACT
    Multimodal conversational interfaces allow users to carry a dialog with a graphical display using speech to accomplish a particular task. Motivated by previous psycholinguistic findings, we examine how eye-gaze contributes to reference resolution in such a setting. Specifically, we present an integrated probabilistic framework that combines speech and eye-gaze for reference resolution. We further examine the relationship between eye-gaze and increased domain modeling with corresponding linguistic processing. Our empirical results show that the incorporation of eye-gaze significantly improves reference resolution performance. This improvement is most dramatic when a simple domain model is used. Our results also show that minimal domain modeling combined with eye-gaze significantly outperforms complex domain modeling without eye-gaze, which indicates that eye-gaze can potentially compensate for a lack of domain modeling in reference resolution.

  • Dec 08, 09

    ABSTRACT
    The overwhelming amount of information on the web makes it critical for users to quickly and accurately evaluate the relevance of content. Here we tested whether pupil size can be used to discriminate the perceived relevance of web search results. Our findings revealed that measures of pupil size carry information that can be used to discriminate the relevance of text and image web search results, but the low signal-to-noise ratio poses challenges that need to be overcome when using this technique in naturalistic settings. Despite these challenges, our findings highlight the promise that pupillometry has as a technique that can be used to assess interest and relevance in web interaction in a nonintrusive and objective way.

  • Dec 02, 09

    The GUIDe (Gaze-enhanced User Interface Design) project in the HCI Group at Stanford University explores how gaze information can be effectively used as an augmented input in addition to keyboard and mouse. We present three practical applications of gaze as an augmented input for pointing and selection, application switching, and scrolling. Our gaze-based interaction techniques do not overload the visual channel and present a natural, universally-accessible and general purpose use of gaze information to facilitate interaction with everyday computing devices.

    Abstract
    The eyes are a rich source of information for gathering context in our everyday lives. A user’s gaze is postulated to be the best proxy for attention or intention. Using gaze information as a form of input can enable a computer system to gain more contextual information about the user’s task, which in turn can be leveraged to design interfaces which are more intuitive and intelligent. Eye gaze tracking as a form of input was primarily developed for users who are unable to make normal use of a keyboard and pointing device. However, with the increasing accuracy and decreasing cost of eye gaze tracking systems it will soon be practical for able-bodied users to use gaze as a form of input in addition to keyboard and mouse. This dissertation explores how gaze information can be effectively used as an augmented input in addition to traditional input devices.
    The focus of this research is to augment rather than replace existing interaction techniques. Adding gaze information provides viable alternatives to traditional interaction techniques, which users may prefer to use depending upon their abilities, tasks and preferences. This dissertation presents a series of novel prototypes that explore the use of gaze as an augmented input to perform everyday computing tasks. In particular, it explores the use of gaze-based input for pointing and selection, scrolling and document navigation, application switching, password entry, zooming and other applications. It p

  • Dec 02, 09

    For more information visit:
    http://hci.stanford.edu/research/GUIDe

    ABSTRACT
    We present a practical technique for pointing and selection using a combination of eye gaze and keyboard triggers. EyePoint uses a two-step progressive refinement process fluidly stitched together in a look-press-look-release action, which makes it possible to compensate for the accuracy limitations of the current state-of-the-art eye gaze trackers. While research in gaze-based pointing has traditionally focused on disabled users, EyePoint makes gaze-based pointing effective and simple enough for even able-bodied users to use for their everyday computing tasks. As the cost of eye gaze tracking devices decreases, it will become possible for such gaze-based techniques to be used as a viable alternative for users who choose not to use a mouse, depending on their abilities, tasks and preferences.
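
    The look-press-look-release sequence can be sketched as a small two-step refinement: pressing a trigger key captures the coarse gaze point and magnifies the surrounding region, and releasing it delivers a click at the refined gaze position inside that view. The zoom factor and method names below are illustrative assumptions, not the EyePoint implementation.

        # Sketch (illustrative): two-step progressive refinement in the spirit of a
        # look-press-look-release interaction. Event handling and magnification are
        # simplified, assumed placeholders.

        class TwoStepPointer:
            def __init__(self, zoom=4.0):
                self.zoom = zoom
                self.anchor = None                 # gaze point captured at key press

            def on_key_press(self, gaze):
                """First 'look' + 'press': remember the coarse gaze point and zoom in."""
                self.anchor = gaze

            def on_key_release(self, gaze_in_zoomed_view):
                """Second 'look' + 'release': refine the target inside the zoomed view."""
                if self.anchor is None:
                    return None
                ax, ay = self.anchor
                gx, gy = gaze_in_zoomed_view       # offset from the zoom centre, pixels
                target = (ax + gx / self.zoom, ay + gy / self.zoom)
                self.anchor = None
                return target                      # where the click would be delivered

        if __name__ == "__main__":
            pointer = TwoStepPointer()
            pointer.on_key_press((800, 600))
            print(pointer.on_key_release((40, -20)))   # refined click position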

  • Dec 01, 09

    This paper evaluates the input performance capabilities of Velocity Threshold (I-VT) and Kalman Filter (I-KF) eye movement detection models when employed for eye-gaze-guided interface control. I-VT is a common eye movement identification model employed by the eye tracking community, but it is neither robust nor capable of handling high levels of noise present in the eye position data. Previous research implies that use of a Kalman filter reduces the noise in the eye movement signal and predicts the signal during brief eye movement failures, but the actual performance of I-KF was never evaluated. We evaluated the performance of the I-VT and I-KF models using guidelines from the ISO 9241 Part 9 standard, which is designed for evaluation of non-keyboard/mouse input devices with emphasis on performance, comfort, and effort. Two applications were implemented for the experiment: 1) an accuracy test and 2) a photo-viewing application specifically designed for eye-gaze-guided control. Twenty-one subjects participated in the evaluation of both models, completing a series of tasks. The results indicate that I-KF allowed participants to complete more tasks with shorter completion time while providing higher general comfort, accuracy and operation speeds with easier target selection than the I-VT model. We feel that these results are especially important to the engineers of new assistive technologies and interfaces that employ eye-tracking technology in their design.
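
    For readers unfamiliar with I-VT, the core of the model is a single rule: samples whose point-to-point velocity exceeds a threshold are labelled saccades, the rest fixations. The sketch below shows that rule; the threshold value, sampling rate and units are assumed for illustration and are not taken from the evaluation.

        # Sketch of Velocity-Threshold Identification (I-VT), illustrative only:
        # classify each gaze sample as part of a fixation or a saccade based on
        # its instantaneous velocity.
        import math

        def ivt_classify(samples, sample_rate_hz=60.0, threshold_deg_per_s=100.0,
                         deg_per_unit=1.0):
            """samples: list of (x, y) gaze positions; returns 'fixation'/'saccade' labels."""
            labels = ["fixation"]                 # first sample has no velocity estimate
            for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
                dist = math.hypot(x1 - x0, y1 - y0) * deg_per_unit   # visual angle
                velocity = dist * sample_rate_hz                     # degrees / second
                labels.append("saccade" if velocity > threshold_deg_per_s else "fixation")
            return labels

        if __name__ == "__main__":
            trace = [(10.0, 10.0), (10.1, 10.0), (10.1, 10.1), (15.0, 12.0), (15.1, 12.0)]
            print(ivt_classify(trace))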

  • Mar 04, 10

    Introduction
    People with severe motor disabilities often cannot use a conventional keyboard and mouse. One option for these users is to enter text with their eyes using an eye-tracker and on-screen keyboard (Istance et al. 1996). Such keyboards usually require users to stare at keys long enough to trigger them in a process called 'eye-typing' (Majaranta and Räihä 2002). However, eye-typing with on-screen keyboards has many drawbacks, including the reduction of available screen real-estate, the accidental triggering of keys, the need for high eye-tracker accuracy due to small key sizes, and tedium. In contrast, we describe a new system for 'eye-writing' that uses gestures similar to hand-printed letters. Our system, called EyeWrite, uses the EdgeWrite unistroke alphabet previously developed for enabling text entry on PDAs, joysticks, trackballs, and other devices (Wobbrock et al. 2003, Wobbrock and Myers 2006). EdgeWrite's adaptation to EyeWrite has many potential advantages, such as reducing the need for eye-tracker accuracy, reducing the screen footprint devoted to text input, and reducing tedium. However, the best interaction design for EyeWrite was non-obvious. As a result, EyeWrite required extensive iteration and usability testing. In this paper, we describe EyeWrite and its development, and offer initial evidence in favour of this new technique.
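
    EdgeWrite gestures are defined by the sequence of corners a stroke visits within a square input area, which is what makes the alphabet tolerant of a noisy pointer such as gaze. The sketch below illustrates that general idea: reduce a gaze trace to a corner sequence and look it up in an alphabet. The two alphabet entries and the region layout here are made up for illustration and are not the actual EdgeWrite letter forms.

        # Sketch (illustrative): reduce a gaze trace inside a square input window to
        # the sequence of corner regions it visits, then look the sequence up in a
        # small gesture alphabet. The alphabet entries below are hypothetical.

        CORNERS = {"TL": (0.0, 0.0), "TR": (1.0, 0.0), "BL": (0.0, 1.0), "BR": (1.0, 1.0)}
        ALPHABET = {("TL", "BL", "BR"): "l", ("TL", "TR"): "-"}   # made-up codes

        def nearest_corner(x, y):
            return min(CORNERS, key=lambda c: (x - CORNERS[c][0]) ** 2 + (y - CORNERS[c][1]) ** 2)

        def recognise(trace):
            """trace: gaze points normalised to the unit square of the input window."""
            sequence = []
            for x, y in trace:
                corner = nearest_corner(x, y)
                if not sequence or sequence[-1] != corner:   # record corner changes only
                    sequence.append(corner)
            return ALPHABET.get(tuple(sequence), None)

        if __name__ == "__main__":
            print(recognise([(0.1, 0.1), (0.1, 0.9), (0.9, 0.9)]))  # -> 'l'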

  • Dec 09, 09

    ABSTRACT
    We investigate the use of two concurrent input channels to perform a pointing task. The first channel is the traditional mouse input device, whereas the second one is the gaze position. The rake cursor interaction technique combines a grid of cursors controlled by the mouse and the selection of the active cursor by the gaze. A controlled experiment shows that rake cursor pointing drastically outperforms mouse-only pointing and also significantly outperforms the state of the art of pointing techniques mixing gaze and mouse input. A theory explaining the improvement is proposed: the global difficulty of a task is split between those two channels, and the sub-tasks could partly be performed concurrently.
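
    The selection rule at the heart of the rake cursor can be sketched in a few lines: the mouse positions an entire grid of cursors, and whichever grid cursor lies closest to the measured gaze point becomes the active one. Grid size, spacing and names below are assumptions for illustration.

        # Sketch (illustrative): rake cursor. The mouse offsets a whole grid of
        # cursors; the gaze position selects which grid cursor is active.
        import math

        def rake_positions(mouse_x, mouse_y, rows=3, cols=3, spacing=120):
            """Return the screen positions of a rows x cols cursor grid centred on the mouse."""
            return [(mouse_x + (c - (cols - 1) / 2) * spacing,
                     mouse_y + (r - (rows - 1) / 2) * spacing)
                    for r in range(rows) for c in range(cols)]

        def active_cursor(mouse_x, mouse_y, gaze_x, gaze_y):
            """Pick the grid cursor nearest to the current gaze point."""
            grid = rake_positions(mouse_x, mouse_y)
            return min(grid, key=lambda p: math.hypot(p[0] - gaze_x, p[1] - gaze_y))

        if __name__ == "__main__":
            print(active_cursor(960, 540, gaze_x=1180, gaze_y=420))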

  • Dec 04, 09

    ABSTRACT
    We present LookPoint, a system that uses eye input for switching input between multiple computing devices. LookPoint uses an eye tracker to detect which screen the user is looking at, and then automatically routes mouse and keyboard input to the computer associated with that screen. We evaluated the use of eye input for switching between three computer monitors during a typing task, comparing its performance with that of three other selection techniques: multiple keyboards, function key selection, and mouse selection. Results show that the use of eye input is 111% faster than the mouse, 75% faster than function keys, and 37% faster than the use of multiple keyboards. A user satisfaction questionnaire showed that participants also preferred the use of eye input over the other three techniques. The implications of this work are discussed, as well as future calibration-free implementations.
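
    The routing behaviour described above amounts to a simple containment test: find the monitor whose bounds contain the gaze point and direct keyboard and mouse events to the machine behind it. The monitor layout and machine names in the sketch are hypothetical, not the LookPoint setup.

        # Sketch (illustrative): route input to whichever computer owns the screen
        # the user is currently looking at.

        MONITORS = [                     # hypothetical side-by-side layout
            {"machine": "left-pc",   "x0": 0,    "x1": 1920},
            {"machine": "centre-pc", "x0": 1920, "x1": 3840},
            {"machine": "right-pc",  "x0": 3840, "x1": 5760},
        ]

        def target_machine(gaze_x):
            """Return the machine whose monitor contains the horizontal gaze position."""
            for monitor in MONITORS:
                if monitor["x0"] <= gaze_x < monitor["x1"]:
                    return monitor["machine"]
            return None                   # gaze off-screen: keep the current routing

        if __name__ == "__main__":
            print(target_machine(4000))   # -> 'right-pc'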

  • Dec 03, 09

    Introduction
    We report ongoing work on using an eye tracker as an input device in first person shooter (FPS) games. In these games the player moves in a three-dimensional virtual world that is rendered from the player’s point of view. The player interacts with the objects he or she encounters mainly by shooting at them. Typical game storylines reward killing and punish other forms of interaction.
    The reported work is a part of an effort to evaluate a range of input devices in this context. Our results on the other devices in the same game allow us to compare the efficiency of eye trackers as game controllers against more conventional devices. Our goal regarding eye trackers is to see whether they can help players perform better. Some FPS games are played competitively over the Internet. If using an eye tracker gives an edge in competitive play, players may want to acquire eye tracking equipment. Eye trackers as input devices in FPS games have been investigated before (Jönsson, 2005), but that investigation focused on user impressions rather than on the efficiency and effectiveness of eye trackers in this domain. However, Jönsson’s results on eye tracker efficiency in a non-FPS game were encouraging.

  • Dec 02, 09

    ABSTRACT
    Scrolling is an essential part of our everyday computing experience. Contemporary scrolling techniques rely on the explicit initiation of scrolling by the user. The act of scrolling is tightly coupled with the user’s ability to absorb information via the visual channel. The use of eye gaze information is therefore a natural choice for enhancing scrolling techniques. We present several gaze-enhanced scrolling techniques for manual and automatic scrolling which use gaze information as a primary input or as an augmented input. We also introduce the use of off-screen gaze-actuated buttons for document navigation and control.
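
    One simple form of gaze-enhanced automatic scrolling can be sketched as follows: when the gaze has dwelled inside a band near the top or bottom of the viewport, scroll the content in the corresponding direction. The band size, dwell time and scroll rate below are assumed values for illustration, not the techniques presented in the paper.

        # Sketch (illustrative): automatic scrolling driven by gaze position.
        # Scroll when the gaze has stayed inside a band near a viewport edge.

        def auto_scroll_step(gaze_y, viewport_h, dwell_s, band=0.15,
                             min_dwell_s=0.3, speed_px_s=200.0, dt=1 / 60):
            """Return the number of pixels to scroll this frame (positive = down)."""
            if gaze_y > viewport_h * (1 - band) and dwell_s >= min_dwell_s:
                return speed_px_s * dt             # reading near the bottom: scroll down
            if gaze_y < viewport_h * band and dwell_s >= min_dwell_s:
                return -speed_px_s * dt            # near the top: scroll back up
            return 0.0

        if __name__ == "__main__":
            print(auto_scroll_step(gaze_y=1020, viewport_h=1080, dwell_s=0.5))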
