14 Dec 10

ABSTRACT
Previous research on lexical development has aimed to identify the factors that enable accurate initial word-referent mappings based on the assumption that the accuracy of initial word-referent associations is critical for word learning. The present study challenges this assumption. Adult English speakers learned an artificial language within a cross-situational learning paradigm. Visual fixation data were used to assess the direction of visual attention. Participants whose longest fixations in the initial trials fell more often on distracter images performed significantly better at test than participants whose longest fixations fell more often on referent images. Thus, inaccurate initial word-referent mappings may actually benefit learning.

30 Nov 10

ABSTRACT
In this thesis we present an evaluation of machine learning methods for real-time classification of reading in eye movements recorded by an eye tracker. The classification uses the relative positions of fixations in the gaze data. The methods evaluated are hidden Markov models and artificial neural networks. We conclude that real-time classification is indeed possible and that hidden Markov models offer both more predictable and better classification performance. Hidden Markov models are also more flexible, as the number of fixations used as input can be adjusted at runtime to trade off speed against classification performance.
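The abstract gives no implementation details, but the general approach can be illustrated with a minimal sketch: a two-state discrete HMM scored with the forward algorithm against a "random scanning" baseline, over fixation-to-fixation steps discretized as rightward (reading-like) vs. anything else. All probability values here are invented for illustration, not fitted parameters from the thesis.

```python
import math

# Discretized fixation-to-fixation steps:
#   0 = small rightward step (typical of reading)
#   1 = anything else (regressions, large jumps)
# All probabilities below are illustrative guesses, not fitted values.

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM
    (standard forward algorithm)."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[sp] * trans[sp][s] for sp in range(n)) * emit[s][o]
            for s in range(n)
        ]
    return math.log(sum(alpha))

# "Reading" model: strongly favours runs of rightward steps.
reading = dict(
    start=[0.9, 0.1],
    trans=[[0.9, 0.1], [0.6, 0.4]],
    emit=[[0.9, 0.1], [0.3, 0.7]],
)
# Baseline "scanning" model: steps are essentially random.
scanning = dict(
    start=[0.5, 0.5],
    trans=[[0.5, 0.5], [0.5, 0.5]],
    emit=[[0.5, 0.5], [0.5, 0.5]],
)

def classify(obs):
    """Return 'reading' if the reading HMM explains obs better."""
    lr = forward_log_likelihood(obs, **reading)
    ls = forward_log_likelihood(obs, **scanning)
    return "reading" if lr > ls else "scanning"
```

Because the forward pass scores any prefix of the sequence, the number of fixations fed to `classify` can be varied at runtime, which is the speed-versus-accuracy trade-off the thesis describes.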

23 Nov 10

ABSTRACT
Previous research indicates that adult learners are able to use co-occurrence information to learn word-to-object mappings and form object categories simultaneously. The current eye-tracking study investigated the dynamics of attention allocation during concurrent statistical learning of words and categories. The results showed that the participants’ learning performance was associated with the numbers of short and mid-length fixations generated during training. Moreover, the learners’ patterns of attention allocation indicated online interaction and bi-directional bootstrapping between word and category learning processes.

19 Nov 10

ABSTRACT
An experiment was conducted to test the efficacy of a new intelligent hypermedia system, MetaTutor, which is intended to prompt and scaffold the use of self-regulated learning (SRL) processes during learning about a human body system. Sixty-eight (N=68) undergraduate students learned about the human circulatory system under one of three conditions: prompt and feedback (PF), prompt-only (PO), and control (C) condition. The PF condition received timely prompts from animated pedagogical agents to engage in planning processes, monitoring processes, and learning strategies and also received immediate directive feedback from the agents concerning the deployment of the processes. The PO condition received the same timely prompts, but did not receive any feedback following the deployment of the processes. Finally, the control condition learned without any assistance from the agents during the learning session. All participants had two hours to learn using a 41-page hypermedia environment which included texts describing and static diagrams depicting various topics concerning the human circulatory system. Results indicate that the PF condition had significantly higher learning efficiency scores, when compared to the control condition. There were no significant differences between the PF and PO conditions. These results are discussed in the context of development of a fully-adaptive hypermedia learning system intended to scaffold self-regulated learning.

11 Nov 10

ABSTRACT
The paper presents an empirical study with a digital educational game (DEG) called 80Days that aims at teaching geographical content. The goal of the study is twofold: (i) investigating the potential of the eye-tracking approach for evaluating DEG; (ii) studying the issue of vicarious learning in the context of DEG. Twenty-four university students were asked to view the videos of playing two micro-missions of 80Days, which varied with regard to the position of the non-player character (NPC) window (i.e. lower right vs. upper left) and the delivery of cognitive hints (i.e. with vs. without) in this text window. Eye movements of the participants were recorded with an eye-tracker. Learning effect and user experience were measured by questionnaires and interviews. Significant differences between the pre- and post-learning assessment tests suggest that observers can benefit from passive viewing of the recorded gameplay. However, the hypotheses that the game versions with cognitive hints and with the NPC window in the upper left corner would induce stronger visual attention, and thus a better learning effect, were refuted.

11 Nov 10

ABSTRACT
Current research increasingly suggests that spatial cognition in humans is accomplished by many specialized mechanisms, each designed to solve a particular adaptive problem. A major adaptive problem for our hominin ancestors, particularly females, was the need to efficiently gather immobile foods which could vary greatly in quality, quantity, spatial location and temporal availability. We propose a cognitive model of a navigational gathering adaptation in humans and test its predictions in samples from the US and Japan. Our results are uniformly supportive: the human mind appears equipped with a navigational gathering adaptation that encodes the location of gatherable foods into spatial memory. This mechanism appears to be chronically active in women and activated under explicit motivation in men.

10 Sep 10

ABSTRACT
In laboratory experiments, infants can learn patterns of features that co-occur (e.g., Fiser & Aslin, 2002). This finding leaves two questions unanswered: What do infants do with the knowledge acquired from such statistical learning, and which patterns do infants attend to in the noisy and cluttered world outside of the laboratory? Here, we show that 9-month-old infants form expectations about co-occurring features remaining fused, an essential skill for object individuation and recognition (e.g., Goldstone, 2000; Schyns & Rodet, 1997). We also show that though social cues may temporarily divert attention away from learning events, they appear to stimulate infants to display learning better in complex situations than when infants learn on their own without attention cues. These findings suggest that infants can use feature co-occurrence to learn about objects and that social cues shape such foundational learning in a noisy environment during infancy.

10 Sep 10

ABSTRACT
Recent studies show both adults and young children possess powerful statistical learning capabilities to solve the word-to-world mapping problem. However, it is still unclear what underlying mechanisms support this seemingly powerful statistical cross-situational learning. To answer this question, the paper uses an eye tracker to record moment-by-moment eye movement data from 14-month-old infants in statistical learning tasks. A simple associative statistical learning model is applied to the fine-grained eye movement data, and the results are compared with the empirical results from those young learners. A strong correlation between the two shows that a simple associative learning mechanism can account both for the behavioural data at the group level and for individual differences, suggesting that an associative learning mechanism with selective attention can provide a cognitively plausible model of statistical learning. The work represents the first steps toward using eye movement data to infer the underlying learning processes in statistical learning.
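The associative mechanism the abstract refers to can be sketched in a few lines: accumulate word-object co-occurrence counts across individually ambiguous trials, then pick the most strongly associated object at test. This is a generic sketch of associative cross-situational learning, not the paper's exact model, which additionally weights associations by where the infant is looking; the word and object names are invented for the example.

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Minimal associative cross-situational learner: accumulate
    word-object co-occurrence counts across ambiguous trials and
    choose the most strongly associated object at test."""

    def __init__(self):
        self.counts = defaultdict(float)

    def observe(self, words, objects):
        # Each trial associates every heard word with every visible
        # object; no single trial identifies the correct referent.
        for w in words:
            for o in objects:
                self.counts[(w, o)] += 1.0

    def best_referent(self, word, candidates):
        return max(candidates, key=lambda o: self.counts[(word, o)])

learner = CrossSituationalLearner()
# Three ambiguous trials; only "bosa" and the ball co-occur every time.
learner.observe(["bosa", "gasser"], ["ball", "dog"])
learner.observe(["bosa", "manu"], ["ball", "cup"])
learner.observe(["bosa", "gasser"], ["ball", "shoe"])
```

After these trials, `learner.best_referent("bosa", ["ball", "dog", "cup", "shoe"])` resolves to `"ball"`, since its co-occurrence count (3) exceeds every competitor's (1), even though each individual trial was ambiguous.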

16 Aug 10

ABSTRACT
This paper investigates the value of eye tracking in evaluating the usability of a Learning Management System at an open distance learning university where the users’ computer and Web skills vary significantly. Eye tracking utilizes the users’ eye movements while they perform a task to provide information about the nature, sequence and timing of the cognitive operations that took place. This information supplements, but does not replace, standard usability testing with observations, which raises the question of when the added value of eye tracking justifies the added cost and resources. Existing research has indicated significant differences in the usability experienced by experts and non-experts on the same system. The aim of this paper is to go one step further and shed light on the type and severity of the usability problems experienced by non-expert users. Usability testing with eye tracking is a resource-intensive method, but our findings indicate that eye tracking adds concise, summarised evidence of usability problems that justifies the cost when testing special groups such as users deficient in Web and computer skills. The contribution of this paper is to highlight the added value of eye tracking as a usability evaluation method in working with Web non-expert users. Furthermore, the findings improve our understanding of the knowledge differences between expert and non-expert Web users and the practical challenges involved in working with non-expert users.

10 Aug 10

ABSTRACT
In categorization, emphasizing task-relevant information is critical for efficient performance. Such attentional optimization can occur concurrently with learning category structures, or may be delayed until after the categories have been mastered. Thus far, delayed attentional optimization has only been found in rule-based categories. The present studies use eye-tracking to investigate attentional optimization in rule-based (RB) and information integration (II) categories. Because working memory capacity is thought to reflect the ability to suppress task-irrelevant information, we also examined the relationship between Aospan performance and attentional optimization. We found that delayed attentional optimization is not a universal characteristic of RB categories, and that working memory predicts early attentional learning in simple categories, but predicts speed of category learning in complex categories. Working memory capacity's influence on optimization and performance does not differ between RB and II learning.

06 Aug 10

ABSTRACT
Infants’ eye movements were recorded while they watched feeding actions. (1) The latency of goal-directed gaze shifts depended on life-time experience of being fed. (2) The pupil dilated during observation of irrational feeding actions, irrespective of experience. Jointly, these findings suggest that multiple processes guide infants’ everyday action understanding.

06 Aug 10

ABSTRACT
We introduce an algorithm for space-variant filtering of video based on a spatio-temporal Laplacian pyramid and use this algorithm to render videos in order to visualize prerecorded eye movements. Spatio-temporal contrast and colour saturation are reduced as a function of distance to the nearest gaze point of regard, i.e. non-fixated, distracting regions are filtered out, whereas fixated image regions remain unchanged. Results of an experiment in which the eye movements of an expert on instructional videos are visualized with this algorithm, so that the gaze of novices is guided to relevant image locations, show that this visualization technique facilitates the novices' perceptual learning.
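The core idea of the visualization can be illustrated with a stripped-down sketch: attenuate each pixel toward the mean luminance as a function of its distance from the gaze point, so fixated regions stay unchanged while peripheral regions are de-emphasized. This is a single-frame, single-scale simplification of the idea; the paper's actual algorithm operates on a spatio-temporal Laplacian pyramid over video and also reduces colour saturation. The `sigma` fall-off parameter is an illustrative assumption.

```python
import numpy as np

def gaze_contingent_contrast(frame, gaze_xy, sigma=40.0):
    """Reduce contrast with distance from the gaze point: pixels at
    the point of regard are unchanged; distant pixels are pulled
    toward the mean luminance. `frame` is a 2-D grayscale array,
    `gaze_xy` an (x, y) pixel coordinate, `sigma` the fall-off in
    pixels (illustrative value)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    # Weight is 1 at the gaze point and decays toward 0 far away.
    weight = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    mean = frame.mean()
    return mean + weight * (frame - mean)
```

Applied per frame along a recorded scanpath, this leaves each fixated region at full contrast while flattening the periphery, which is the guidance effect the experiment exploits.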

06 Aug 10

ABSTRACT
Purpose – The purpose of this paper is to investigate the implications of usability and learnability in learning management systems (LMS) by considering the experiences of information and communications technology (ICT) experts and non-experts in using the LMS of an open-distance university.
Design/methodology/approach – The paper uses task-based usability testing augmented by eye tracking, post-test questionnaires and interviews; and data captured by video recordings, eye tracking, post-test questionnaires and interviews.
Findings – Usability is critical in LMS where students’ ICT skills vary. The learnability of the LMS was high and providing assistance for first-time users to get past the critical errors, rather than redesigning systems to accommodate low ICT skills, should be considered. Designing an LMS for novices may lead to a less efficient design for regular users.
Research limitations/implications – Usability testing is limited to the LMS of one open-distance university. ICT skills are identified as a determinant of LMS usability.
Practical implications – This paper highlights the effect of ICT skills on the usability of LMSs and eventually learning. ICT skills may be an important factor in inhibiting the learning of students from developing communities. If ICT literacy is not recognised and dealt with, the lack of ICT skills may undermine the efforts to use e-learning in bridging the digital divide.
Originality/value – The effect of ICT skills on the usability of LMSs has not been researched in the context of distance education.

06 Aug 10

ABSTRACT
Human infants develop a variety of attentional mechanisms that allow them to extract relevant information from a cluttered multimodal world. We know that both social and nonsocial cues shift infants’ attention, but not how these cues differentially affect learning of multimodal events. Experiment 1 used social cues to direct 8- and 4-month-olds’ attention to two audiovisual events (i.e., animations of a cat or dog accompanied by particular sounds) while identical distractor events played in another location. Experiment 2 directed 8-month-olds’ attention with colorful flashes to the same events. Experiment 3 measured baseline learning without attention cues both with the familiarization and test trials (no cue condition) and with only the test trials (test control condition). The 8-month-olds exposed to social cues showed specific learning of audiovisual events. The 4-month-olds displayed only general spatial learning from social cues, suggesting that specific learning of audiovisual events from social cues may be a function of experience. Infants cued with the colorful flashes looked indiscriminately to both cued locations during test (similar to the 4-month-olds learning from social cues) despite attending for equal duration to the training trials as the 8-month-olds with the social cues. Results from Experiment 3 indicated that the learning effects in Experiments 1 and 2 resulted from exposure to the different cues and multimodal events. We discuss these findings in terms of the perceptual differences and relevance of the cues.

05 Aug 10

ABSTRACT
Previous studies have suggested that signaling enhances multimedia learning. However, there is not enough evidence showing why signaling leads to better performance. The goal of this study was to examine the effects of signaling on learning outcomes and to reveal the underlying reasons for this effect by using eye movement measures. The participants were 40 undergraduate students who were presented with either signaled or nonsignaled multimedia materials. Labels in the illustration were signaled by temporarily changing the color of the items. The results suggest that the signaled group outperformed the nonsignaled group on transfer and matching tests. Eye movement data shows that signaling guided attention to relevant information and improved the efficiency and effectiveness of finding necessary information.

05 Aug 10

ABSTRACT
In this paper, we describe how eyetracking has been used in exploratory experiments to inform the design of screening tests for dyslexic students by examining their eye gaze while reading Arabic texts. Findings reveal differences in the intensity of eye gaze and reading patterns between dyslexic readers and non-dyslexic controls. Dyslexics consistently exhibited longer fixation durations, shorter saccades, and more regressions. Moreover, results suggest that eye movement patterns are a reflection of the cognitive processes occurring during reading of texts in both Arabic deep and shallow orthographies. Applicability of eye movement analysis in investigating the nature of the reading problems and tailoring interventions to the particular needs of individuals with dyslexia is discussed.

in list: Linguistics

21 Jul 10

ABSTRACT
Learning to identify objects as members of categories is an essential cognitive skill and learning to deploy attention effectively is a core component of that process. The present study investigated an assumption embedded in formal models of categorization: error is necessary for attentional learning. Eye-trackers were used to record participants’ allocation of attention to task-relevant and irrelevant features while learning a complex categorization task. It was found that participants optimized their fixation patterns in the absence of both performance errors and corrective external feedback. Optimization began immediately after each category was mastered and continued for many trials. These results demonstrate that error is neither necessary nor sufficient for all forms of attentional learning.

20 Jul 10

ABSTRACT
Most research on text-based synchronous computer-mediated communication (SCMC) in language learning has used output logs as the sole data source. I review interactionist and sociocultural SCMC research, focusing in particular on the question of technological determinism, and conclude that, from whichever perspective, reliance on output logs leads to an impoverished picture of the experience of SCMC users and of phenomena relevant to learning. The assumption that output logs are an adequate data source fails to give due weight to the specificities of this form of communication, in particular the constraints and affordances of the computer interface. I examine the potential contribution of other data sources, providing by way of illustration an analysis of sample eye-tracker data from a tandem SCMC session.

in list: Linguistics

20 Jul 10

ABSTRACT
The purpose of this study was to examine typically developing infants’ integration of audio-visual sensory information as a fundamental process involved in early word learning. One hundred sixty pre-linguistic children were randomly assigned to watch one of four counterbalanced versions of audio-visual video sequences. The infants’ eye movements were recorded and their looking behavior was analyzed throughout three repetitions of exposure-test phases. The results indicate that the infants were able to learn the covariance between the shapes and colors of arbitrary geometrical objects and the nonsense words corresponding to them. Implications of audio-visual integration in infants and in non-human animals for modeling within speech recognition systems, neural networks and robotics are discussed.

in list: Linguistics

