Search results for "Visual Attention"
Showing 10 of 45 documents
Audiovisual attention boosts letter-speech sound integration
2013
We studied attention effects on the integration of written and spoken syllables in fluent adult readers by using event-related brain potentials. Auditory consonant-vowel syllables, including consonant and frequency changes, were presented in synchrony with written syllables or their scrambled images. Participants responded to longer-duration auditory targets (auditory attention), longer-duration visual targets (visual attention), longer-duration auditory and visual targets (audiovisual attention), or counted backwards mentally. We found larger negative responses for spoken consonant changes when they were accompanied by written syllables than when they were accompanied by scrambled text. Th…
Visual Attention in Virtual Reality Settings: An Abstract
2019
Virtual reality (VR) is challenging marketers to understand both brand interaction and the customer experience at the point of sale. In order to test the usefulness of VR in retailing, research must address how the customer’s visual attention in a VR setting affects his/her behavior. There are two main drivers for this research. First, there has been an enormous growth in recent years in the use of neurophysiological methods to measure visual attention. At the store level, previous works show that attention to products measured through eye tracking influences consumer decisions. Second, VR is being increasingly adopted by brands, but research is lacking into comparisons between VR formats. W…
Semantic Analysis of the Driving Environment in Urban Scenarios
2021
Understanding urban scenes requires recognizing the semantic constituents of a scene and the complex interactions between them. In this work, we explore and provide effective representations for understanding urban scenes based on in situ perception, which can be helpful for planning and decision-making in various complex urban environments and under a variety of environmental conditions. We first present a taxonomy of deep learning methods in the area of semantic segmentation, the most studied topic in the literature for understanding urban driving scenes. The methods are categorized based on their architectural structure and further elaborated with a discussion of their advantages, possibl…
Hand-Held texting is less distracting than texting with the phone in a holder: anyway, don't do it
2015
We studied the effects of texting while driving and the effects of mobile phone position (hand-held, holder) on drivers' lane-keeping performance, experienced workload, and in-car glance durations in a motion-platform driving simulator with 24 participants. Overall, we found the known negative effects of texting on lane-keeping performance, workload, and visual attention on road, suggesting that texting on the road in any manner is not risk-free. As a novel finding, we found that hand-held texting led to fewer lane-keeping errors and shorter total glance times off road compared to texting with the phone in a holder. We suggest the explanation is that the drivers had considerably more experi…
Combining Top-down and Bottom-up Visual Saliency for Firearms Localization
2014
Object detection is one of the most challenging issues for computer vision researchers. The analysis of human visual attention mechanisms can help automatic inspection systems discard useless information and improve performance and efficiency. In this paper we propose an attention-based method to estimate the position of firearms in images of people holding them. Both top-down and bottom-up mechanisms are involved in our system. The bottom-up analysis is based on a state-of-the-art approach. The top-down analysis is based on the construction of a probabilistic model of the firearm's position with respect to the person's face position. This model has been created by analyzi…
Dialogue through the eyes: Exploring teachers’ focus of attention during educational dialogue
2020
This study explored teachers’ focus of attention during educational dialogue. Teachers’ focus of attention was recorded in 54 Grade 1 classrooms using a Tobii Pro Glasses 2 mobile eye-tracking device. From the video recordings, episodes of educational dialogue were identified and categorised by quality. Teachers’ focus of attention on students was examined during the dialogue episodes. Results showed that teachers allocated their attention relatively unevenly among the students. More students received visual attention during high-quality educational dialogue than during moderate-quality dialogue. This study provides insight into the quality of educational dialogue by combining assessment …
A Multi-Scale Colour and Keypoint Density-Based Approach for Visual Saliency Detection
2020
In the first seconds of observation of an image, several visual attention processes are involved in the identification of the visual targets that pop out from the scene to our eyes. Saliency is the quality that makes certain regions of an image stand out from the visual field and grab our attention. Saliency detection models, inspired by visual cortex mechanisms, employ both colour and luminance features. Furthermore, both the locations of pixels and the presence of objects influence visual attention processes. In this paper, we propose a new saliency method based on the combination of the distribution of interest points in the image with multiscale analysis, a centre bias module and a machine …
2020
Domain-specific understanding of digitally represented graphs is necessary for successful learning within and across domains in higher education. Two recent studies conducted a cross-sectional analysis of graph understanding in different contexts (physics and finance), task concepts, and question types among students of physics, psychology, and economics. However, neither changes in graph processing nor changes in test scores over the course of one semester have been sufficiently researched so far. This eye-tracking replication study with a pretest-posttest design examines and contrasts changes in physics and economics students' understanding of linear physics and finance graphs. It analyze…
Give us today our daily bread: The effect of hunger on consumers’ visual attention towards bread and the role of time orientation
2021
This study investigated the effect of hunger on consumers’ visual attention during a food choice task, and the role of time orientation (i.e., present and future orientation) in this interplay. A lab-based eye-tracking experiment including 102 participants was conducted, with hunger as the manipulated factor (hungry, satiated). Participants in the satiated condition were served a breakfast buffet before the experimental tasks, whereas participants in the hungry condition were served the buffet after completion of the tasks. Both groups were exposed to a set of planograms depicting supermarket shelves and were asked to choose an option they could consider buying, while their eye mov…
Cognitive predictors of single-digit and procedural calculation skills and their covariation with reading skill.
2006
This study examined the extent to which children’s cognitive abilities in kindergarten and their mothers’ education predict their single-digit and procedural calculation skills and the covariance of these with reading skill in Grade 4. In kindergarten, we assessed children’s (N = 178) basic number skills, linguistic skills, and visual attention. In Grade 4, we assessed their calculation and reading skills. Data on children’s cognitive ability at 5 years of age and their mothers’ level of education were also collected. The results showed that both of the core components of calculation, single-digit and procedural calculation, as well as their covariance with reading, were predicted …