From the moment they are born, infants seem to prefer orienting to social
stimuli over objects and non-social stimuli. This preference persists into
adulthood and is believed to play a crucial role in social-communicative
development. By following a group of infants at 6, 8, and 12
months, this study explored the role of social orienting in the early
development of joint attention skills. The expected association between social
orienting and joint attention was partially confirmed. Social orienting in
real-life photographs of everyday situations was not related to later joint
attention skills; however, fixation on the eyes of a neutral face was related to
response to joint attention skills, and fixation on the eyes in a dynamic video
clip of a talking person was predictive of initiating joint attention skills.
Several alternative interpretations of the results are discussed.
A conversation is made up of visual and auditory signals in a complex flow of events. What is the relative importance of these components for young children's ability to maintain attention on a conversation? In the present set of experiments, the visual and auditory signals were disentangled in four filmed events. The visual events were accompanied either by the speech sounds of the conversation or by matched motor sounds, and the auditory events by either the natural visual turn-taking of the conversation or a matched turn-taking of toy trucks. A corneal-reflection technique was used to record subjects' gaze patterns while they watched the films. Three age groups of typically developing children were studied: 6-month-olds, 1-year-olds, and 3-year-olds. The results show that children are more attracted by the social component of the conversation, regardless of the kind of sound used. Older children find spoken language more interesting than motor sound, and children look longer at the speaking agent when humans maintain the conversation. The study also revealed that children are more attracted to the mouth than to the eye area, and that the ability to make predictive gaze shifts develops gradually with age.
Previous research on lexical development has aimed to identify the factors that enable accurate initial word-referent mappings based on the assumption that the accuracy of initial word-referent associations is critical for word learning. The present study challenges this assumption. Adult English speakers learned an artificial language within a cross-situational learning paradigm. Visual fixation data were used to assess the direction of visual attention. Participants whose longest fixations in the initial trials fell more often on distracter images performed significantly better at test than participants whose longest fixations fell more often on referent images. Thus, inaccurate initial word-referent mappings may actually benefit learning.
This study relies on eye tracking technology to investigate how humans perceive others' feeding actions. Results demonstrate that 6-month-olds (n = 54) anticipate that food is brought to the mouth when observing an adult feeding herself with a spoon. Still, they fail to anticipate self-propelled (SP) spoons that move toward the mouth and manual combing actions directed toward the head. Ten-month-olds (n = 54) and adults (n = 32) anticipate SP spoons; however, only adults anticipate combing actions. These results suggest that goal anticipation during observation of feeding actions develops earlier and is less dependent on directly perceived actions than goal anticipation during observation of other manual actions. These results are discussed in relation to experience and a possible phylogenetic influence on perception and understanding of feeding.
Previous research indicates that adult learners are able to use co-occurrence information to learn word-to-object mappings and form object categories simultaneously. The current eye-tracking study investigated the dynamics of attention allocation during concurrent statistical learning of words and categories. The results showed that the participants’ learning performance was associated with the numbers of short and mid-length fixations generated during training. Moreover, the learners’ patterns of attention allocation indicated online interaction and bi-directional bootstrapping between word and category learning processes.
Eye movements are of particular interest for studying the specific reading difficulties of individuals with developmental dyslexia. In the present study, dyslexic children were pair-matched with control children in a sentence reading task. The children read sentences in Bulgarian – a Cyrillic-alphabet language with regular orthography. Target nouns of controlled frequency and length were embedded in the sentences. Eye movements revealed highly significant group differences in gaze times and total fixation times, word frequency and word length effects, as well as interactions of both frequency and length with the group factor. These results, especially the frequency effect found in the dyslexic children, are discussed in the context of previous studies.
An eye-tracking paradigm was used to investigate how infants' attention is modulated by observed goal-directed manual grasping actions. In Experiment 1, we presented 3-, 5-, and 7-month-old infants with a static picture of a grasping hand, followed by a target appearing at a location either congruent or incongruent with the grasping direction of the hand. The latency of infants' gaze shifts from the hand to the target was recorded and compared between congruent and incongruent trials. Results demonstrate a congruency effect from 5 months of age. A second experiment showed that the congruency effect of Experiment 1 does not extend to a visually similar mechanical claw (instead of the grasping hand). Together, these two experiments describe the onset of covert attention shifts in response to manual actions and relate these findings to the onset of manual grasping.
How infants learn new words is a fundamental puzzle in language acquisition. To guide their word learning, infants exploit systematic word-learning heuristics that allow them to link new words to likely referents. By 17 months, infants show a tendency to associate a novel noun with a novel object rather than a familiar one, a heuristic known as disambiguation. Yet, the developmental origins of this heuristic remain unknown. We compared disambiguation in 17- to 18-month-old infants from different language backgrounds to determine whether language experience influences its development, or whether disambiguation instead emerges as a result of maturation or social experience. Monolinguals showed strong use of disambiguation, bilinguals showed marginal use, and trilinguals showed no disambiguation. The number of languages being learned, but not vocabulary size, predicted performance. The results point to a key role for language experience in the development of disambiguation, and help to distinguish among theoretical accounts of its emergence.
Four-, 6-, and 11-month-old infants were presented with movies in which two adult actors conversed about everyday events, either facing each other or looking in opposite directions. Infants from 6 months of age made more gaze shifts between the actors, in accordance with the flow of conversation, when the actors were facing each other. A second experiment demonstrated that gaze following alone did not cause this difference. Instead, the results are consistent with a social-cognitive interpretation, suggesting that infants perceive the difference between face-to-face and back-to-back conversations and that they prefer to attend to a typical pattern of social interaction from 6 months of age.
Research demonstrates that individuals with autism process facial information differently than typically developing individuals. Several accounts of the face recognition deficit in autism have been posited, each proposing a different underlying mechanism as the source of the deficit in face recognition skills. The current study proposes a new account: individuals with autism are less sensitive than typically developing individuals at perceiving configural manipulations between faces, leading to their difficulty recognizing faces. A change detection task was used to measure perceptual sensitivity to varying levels of configural manipulation involving the eye and mouth regions. Participants with and without autism, matched on chronological age, verbal IQ, performance IQ, full-scale IQ, visual acuity, and gender, studied upright and inverted faces in a delayed same/different face recognition test. An eye tracker recorded eye gaze throughout the experiment. Results revealed a significant group difference in detection accuracy: the control group was more accurate than the autism group at detecting subtle changes between upright faces, particularly for manipulations of the spatial relation of the eyes. Furthermore, an analysis of detection accuracy within groups revealed that a greater proportion of participants in the control group were able to detect differences at subtler levels of spatial manipulation. Eye-tracking results revealed a significant group difference in the number of fixations to relevant vs. irrelevant areas of interest; however, both groups used eye information more than mouth information to detect changes in both upright and inverted faces. There was also some indication that eye gaze differed within groups, with a small proportion of individuals in both the autism and control groups showing a bias to look more toward the mouth than the eyes. Results are discussed with respect to featural vs. configural processing in autism and the use of eye vs. mouth information.
Four- to ten-month-old infants (n=58) were examined on their ability to match magnitude across modalities. Their looking behaviour was recorded as they were presented with an intensity-modulated auditory stimulus and three possible visual matches. The mean looking times towards a visual target (whose size envelope matched the intensity envelope of the auditory stimulus) and a non-target were calculated. Five-month-olds and seven- to ten-month-olds show a significant looking preference towards the target, as does an adult control group. Four- and six-month-olds do not.
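The preference measure described above (mean looking times toward the target vs. a non-target) can be sketched as a simple proportion score. This is a minimal illustration; the function name and the per-trial looking times below are hypothetical, not the study's data.

```python
def looking_preference(target_ms, nontarget_ms):
    """Mean looking times (ms) across trials and the proportion of looking
    directed at the target; a proportion of 0.5 indicates no preference."""
    mean_t = sum(target_ms) / len(target_ms)
    mean_n = sum(nontarget_ms) / len(nontarget_ms)
    return mean_t, mean_n, mean_t / (mean_t + mean_n)

# Hypothetical per-trial looking times: a proportion reliably above 0.5
# would indicate a preference for the size-matched visual target.
mt, mn, pref = looking_preference([1200, 900, 1500], [800, 700, 1000])
```

A group-level analysis would then test whether such proportions differ significantly from 0.5 within each age group.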
Reaching is an important and early-emerging motor skill that allows infants to interact with the physical and social world. However, few studies have considered how reaching experiences shape infants' own motor development and their perception of actions performed by others. In the current study, two groups of infants received daily parent-guided play sessions over a two-week training period. Using “Sticky Mittens”, one group was enabled to pick up objects independently, whereas the other group only passively observed their parents' actions on objects. Following training, infants' manual and visual exploration of objects, agents, and actions was assessed in both a live and a televised context. Our results showed that only infants who experienced independent object apprehension advanced in their reaching behavior and showed changes in their visual exploration of agents and objects in a live setting. Passive observation was not sufficient to change infants' behavior. To our surprise, the effects of the training did not generalize to the televised observation context. Together, our results suggest that early motor training can jump-start infants' transition into reaching and inform their perception of others' actions.
Early identification efforts are essential for the early treatment of the symptoms of autism but can only occur if robust risk factors are found. Children with autism often engage in repetitive behaviors and anecdotally prefer to visually examine geometric repetition, such as the moving blade of a fan or the spinning of a car wheel. The extent to which a preference for looking at geometric repetition is an early risk factor for autism has yet to be examined.
OBJECTIVE
To determine whether toddlers with an autism spectrum disorder (ASD) aged 14 to 42 months prefer to visually examine dynamic geometric images more than social images, and whether visual fixation patterns can correctly classify a toddler as having an ASD.
DESIGN
Toddlers were presented with a 1-minute movie depicting moving geometric patterns on one side of a video monitor and children in high action, such as dancing or doing yoga, on the other. Using this preferential looking paradigm, total fixation duration and the number of saccades within each movie type were examined using eye-tracking technology.
SETTING
University of California, San Diego Autism Center of Excellence.
PARTICIPANTS
One hundred ten toddlers participated in the final analyses (37 with an ASD, 22 with developmental delay, and 51 typically developing).
MAIN OUTCOME MEASURE
Total fixation time within the geometric patterns or social images and the number of saccades were compared between diagnostic groups.
RESULTS
Overall, toddlers with an ASD as young as 14 months spent significantly more time fixating on dynamic geometric images than the other diagnostic groups. If a toddler spent more than 69% of his or her time fixating on geometric patterns, the positive predictive value for accurately classifying that toddler as having an ASD was 100%.
CONCLUSION
A preference for geometric patterns early in life may be a novel and easily detectable early signature of infants and toddlers at risk for autism.
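The threshold-based classification behind the reported 100% positive predictive value can be sketched as follows. The 69% cutoff is taken from the abstract; the function names and example counts are hypothetical illustrations, not the study's analysis code or data.

```python
def classify_asd(geometric_fixation_fraction, threshold=0.69):
    """Flag a toddler as at risk for ASD when the fraction of fixation
    time spent on geometric patterns exceeds the threshold
    (decision rule as described in the abstract)."""
    return geometric_fixation_fraction > threshold

def positive_predictive_value(true_positives, false_positives):
    """PPV = TP / (TP + FP): of all toddlers flagged by the rule,
    the fraction who truly have an ASD."""
    return true_positives / (true_positives + false_positives)
```

A PPV of 1.0 simply means the rule produced no false positives among the flagged toddlers, which is what the reported 100% implies for this sample.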
In laboratory experiments, infants can learn patterns of features that co-occur (e.g., Fiser & Aslin, 2002). This finding leaves two questions unanswered: What do infants do with the knowledge acquired from such statistical learning, and which patterns do infants attend to in the noisy and cluttered world outside the laboratory? Here, we show that 9-month-old infants form expectations that co-occurring features remain fused, an essential skill for object individuation and recognition (e.g., Goldstone, 2000; Schyns & Rodet, 1997). We also show that although social cues may temporarily divert attention away from learning events, they appear to help infants display learning better in complex situations than when infants learn on their own without attention cues. These findings suggest that infants can use feature co-occurrence to learn about objects and that social cues shape such foundational learning in a noisy environment during infancy.
Recent studies show that both adults and young children possess powerful statistical learning capabilities for solving the word-to-world mapping problem. However, it is still unclear what underlying mechanisms support this seemingly powerful cross-situational statistical learning. To answer this question, the paper uses an eye tracker to record moment-by-moment eye movement data from 14-month-old infants in statistical learning tasks. A simple associative statistical learning model is applied to the fine-grained eye movement data, and the results are compared with empirical results from those young learners. A strong correlation between the two shows that a simple associative learning mechanism can account both for behavioural data at the group level and for individual differences, suggesting that an associative learning mechanism with selective attention can provide a cognitively plausible model of statistical learning. The work represents a first step toward using eye movement data to infer the underlying learning processes in statistical learning.
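A minimal version of the kind of associative cross-situational learner described above can be sketched as simple co-occurrence counting; the correct word-referent pairings emerge because they recur across ambiguous trials while spurious pairings do not. The pseudo-words and objects below are invented for illustration, and the paper's actual model additionally incorporates selective attention weighted by eye movements.

```python
from collections import defaultdict

def cross_situational_learning(trials):
    """Accumulate word-object co-occurrence counts across ambiguous trials.

    Each trial is a (words, objects) pair; every word is credited with
    every object visible on that trial. A word's best guess is the object
    it has co-occurred with most often."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in trials:
        for w in words:
            for o in objects:
                counts[w][o] += 1
    return {w: max(objs, key=objs.get) for w, objs in counts.items()}

# Three ambiguous trials: no single trial identifies any referent,
# but the consistent pairings dominate across trials.
trials = [
    (["bosa", "gasser"], ["dog", "ball"]),
    (["bosa", "manu"], ["dog", "shoe"]),
    (["gasser", "manu"], ["ball", "shoe"]),
]
mappings = cross_situational_learning(trials)
```

Here each word co-occurs with its true referent on two trials but with each distracter only once, so the counting rule recovers the intended mapping.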
Here we report evidence from a new eye-tracking measure of relational memory suggesting that 9-month-old infants can encode memories in terms of the relations among items, a function putatively subserved by the hippocampus. Infants learned about the association between faces that were superimposed on unique scenic backgrounds. During test trials, infants were shown three faces presented on a familiar scene. All three faces were equally familiar; however, one had been presented with the test background earlier. Visual behavior was recorded continuously using a Tobii eye tracker. Infants looked preferentially at the face that matched the test background very early in the trial; however, the time course of this preferential looking effect varied as a function of delay. These results suggest that by 9 months of age infants can form memories that represent the relations among items and maintain them over short delays.
This dissertation explored the questions of when and how infants develop an understanding of intention—that is, an understanding of human behavior as guided by subjective internal states that underlie and are separate from actions and objects in the world. Failed action understanding was used as a marker of intention understanding because, unlike in the case of successful actions, understanding failed actions requires recognizing that the observed pattern of movement is distinct from the intention that motivates it.
To explore the development of intention understanding in the first year of life, two key studies examined an understanding of successful- versus failed-reaching actions. Study 1 used a habituation design to assess both when infants (8-, 10-, and 12-month-olds) understand that a failed action is intentional and whether an understanding of successful actions precedes an understanding of failed actions. Study 2 extended this work to explore the process by which 8- and 10-month-olds develop an understanding of intention. Eye-tracking methodology was used to examine how infants process and predict the goals of ongoing successful and failed reaching actions. Moreover, performance was explored in relation to parent-report measures of infants’ social and motor behaviors.
Three central findings emerged. First, already within the first year of life (by 10 months), infants understand and can predict the goal of a failed-reaching action. Second, during the course of development, understanding successful actions precedes understanding failed actions. Third, failed (but not successful) action understanding is strongly associated with infants’ tendency to initiate joint attention and their ability to locomote independently.
Overall, results from this dissertation support a developmental picture wherein a rudimentary understanding of action as motivated by subjective internal states emerges during the first year of life from an antecedent understanding of action that does not go deeper than surface relations between actions and objects.
Is information from vision and audition mutually facilitative to categorization in infants? Ten-month-old infants can detect categories on the basis of correlations among five attributes of visual stimuli; four- and seven-month-olds are sensitive only to the specific attributes, rather than the correlations. If younger infants can detect specific attributes of visual stimuli, is there a way to facilitate the perception of these attributes as a meaningful correlation, and hence, as a category? The current studies investigate whether integrating information from two domains—speech within the auditory system together with shapes in the visual domain—could facilitate categorization. I hypothesized that 4-month-old infants could categorize audio-visual information by pairing correlation-based stimuli in the auditory domain (monosyllables) with correlation-based stimuli in the visual domain (line-drawn animals). In Experiment 1, infants were exposed to a series of line-drawn animals whose features were correlated to form two animal categories. During test, infants experienced three test trials: a novel member of a previously shown category, a non-member of the categories (that shared similar features), and a completely novel animal. Experiment 2 used the same animals and paradigm, but each animal was presented with a speech stimulus (a repeating monosyllable) whose auditory features were correlated so as to form two categories. In Experiment 3, categorization of the auditory stimuli was investigated in the absence of the correlated visual information. Experiment 4 addressed some potential confounds of the findings from Experiment 2. Results from this series of studies show that 4-month-olds fail to categorize in both the visual-only and auditory-only conditions. However, when each visual exemplar is paired with a corresponding, correlated speech exemplar, infants can categorize; they look longer at a new within-category exemplar than at a new category violator.
These findings provide evidence that infants extract correlations across the visual and auditory domains, and that redundant cross-modal information can facilitate categorization.
Fixation duration for same-race (i.e., Asian) and other-race (i.e., Caucasian) female faces by Asian infants between 4 and 9 months of age was investigated with an eye-tracking procedure. The age range tested corresponded with prior reports of processing differences between same- and other-race faces observed in behavioral looking-time studies, with a preference for same-race faces apparent at 3 months of age and recognition memory differences in favor of same-race faces emerging between 3 and 9 months of age. The eye-tracking results revealed both similarities and differences in infants' processing of own- and other-race faces. There was no overall fixation-time difference between same-race and other-race whole-face stimuli. In addition, although fixation time was greater for the upper half of the face than for the lower half, and trended higher on the right side of the face than on the left, face race did not affect these patterns. However, over the age range tested, there was a gradual decrement in fixation time on the internal features of other-race faces and a maintenance of fixation time on the internal features of same-race faces. Moreover, the decrement in fixation time for the internal features of other-race faces was most prominent on the nose. The findings suggest that (a) same-race preferences may be more readily evidenced in paired-comparison testing formats, (b) the behavioral decline in recognition memory for other-race faces corresponds in timing with a decline in fixation on the internal features of other-race faces, and (c) the center of the face (i.e., the nose) is a differential region for processing same- versus other-race faces by Asian infants.
Background. Research on infant cognition has long been concerned with how infants process static vs. moving objects (e.g. Van de Walle & Spelke, 1996; Rakison & Poulin-Dubois, 2002). We are interested in comparing infants' visual working memory (VWM) for speed and luminance. Here we focus on our revised ‘salience-mapping’ technique (Kaldy & Blaser, 2009) that allows us to generate comparison objects with iso-salient differences from a common baseline object, thereby ensuring fair VWM tests.
Methods. Subjects' age was 5;0-6;30. A Tobii T120 eye tracker measured infants' gaze direction. Experiment 1 (ISM): Salience was calibrated in a preferential looking paradigm by pitting a baseline object (a slowly rotating green star) against a range of objects that increased either in luminance or in speed of rotation. Salience functions were obtained for each of the dimensions, and we chose speed and luminance values that were at the 75% iso-salience level. In this way we defined three objects with the following relationship: the salience difference between the baseline and the luminance comparison equaled that between the baseline and the speed comparison. Experiment 2 (VWM): In this in-progress experiment, two of the three objects so defined are presented for 3.5 seconds. The two objects disappear for 2 seconds, then reappear, with one changed in luminance or speed (by the previously calibrated amount) while the other reappears unchanged. A looking-time preference for the changed (vs. unchanged) object is evidence for memory.
Results. Iso-salient differences for luminance and motion were successfully measured in Experiment 1 using our revised salience-mapping technique. While data collection for Experiment 2 is ongoing, we expect better VWM for motion as opposed to luminance.
Discussion. In service of VWM experiments, we demonstrated an innovative method for producing psychophysically comparable stimulus differences for infants along the dimensions of speed and luminance.
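The iso-salience calibration of Experiment 1 can be sketched as linear interpolation on a measured salience (preference) function: find, for each dimension, the stimulus level at which infants look at the comparison object 75% of the time. All levels and preference values below are hypothetical illustrations, not the study's measurements.

```python
def iso_salience_level(levels, prefs, criterion=0.75):
    """Linearly interpolate the stimulus level at which the proportion of
    looking at the comparison object (its salience function) reaches the
    criterion, e.g. the 75% iso-salience point."""
    points = list(zip(levels, prefs))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= criterion <= y1:
            return x0 + (x1 - x0) * (criterion - y0) / (y1 - y0)
    raise ValueError("criterion outside the measured preference range")

# Hypothetical salience functions for the two dimensions:
speed_match = iso_salience_level([1.0, 2.0, 3.0, 4.0], [0.50, 0.62, 0.78, 0.90])
lum_match = iso_salience_level([10, 20, 30, 40], [0.50, 0.60, 0.72, 0.85])
# By construction, changes to speed_match and lum_match from the baseline
# are equated in salience, allowing a fair VWM comparison across dimensions.
```

Equating the two change magnitudes in salience units is what makes a later memory difference interpretable as a VWM effect rather than a bottom-up salience effect.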