Search results for "Artificial Intelligence"
Showing 10 of 6122 documents
Monocular Versus Binocular Calibrations in Evaluating Fixation Disparity With a Video-Based Eye-Tracker
2015
When measuring fixation disparity (an oculomotor vergence error), the question arises as to whether a monocular or binocular calibration is more precise and physiologically more appropriate. In monocular calibrations, a single eye fixates on a calibration target that is taken as having been projected onto the center of the fovea; the corresponding vergence state represents the heterophoria (the resting vergence position), which has no effect on the calibration procedure. In binocular calibrations, a vergence error may be present and may affect the subsequent measurement of the fixation disparity during binocular recordings. This study includes a test of the precision of both monocular and …
Measuring Perceived Ceiling Height in a Visual Comparison Task
2017
When judging interior space, a dark ceiling is judged to be lower than a light ceiling. Metric judgments (e.g., on a centimetre scale), which have typically been used in such tasks, may reflect a genuine perceptual effect or a cognitively mediated impression. We employed a height-matching method in which perceived ceiling height had to be matched with an adjustable pillar, thus obtaining psychometric functions that allowed for an estimation of the point of subjective equality (PSE) and the difference limen (DL). The height-matching method developed in this paper allows for a direct visual match and does not require metric judgment. It has the added advantage of pro…
Quantifying the Wollaston Illusion
2020
In the early 19th century, William H. Wollaston impressed the Royal Society of London with engravings of portraits. He manipulated facial features, such as the nose, and thereby dramatically changed the perceived gaze direction, although the eye region with iris and eye socket had remained unaltered. This Wollaston illusion has been replicated numerous times but never with the original stimuli. We took the eyes (pupil and iris) from Wollaston’s most prominent engraving and measured their perceived gaze direction in an analog fashion. We then systematically added facial features (eye socket, eyebrows, nose, skull, and hair). These features had the power to divert perceived gaze direction by…
The Auditory Kuleshov Effect: Multisensory Integration in Movie Editing
2016
Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now famous editing experiment in which different objects were added to a given film scene featuring a neutral face. It is said that the audience interpreted the unchanged facial expression as a function of the added object (e.g., an added bowl of soup made the face express hunger). This interaction effect has been dubbed the “Kuleshov effect.” In the current study, we explored the role of sound in the evaluation of facial expressions in films. Thirty participants watched different clips of faces that were intercut with neutral scenes, featuring either happy music, sad music, or no music at all. This was crossed with the facia…
Allocentric time-to-contact and the devastating effect of perspective
2014
With regard to impending object–object collisions, observers may use different sources of information to judge time to contact (tC). We introduced changes of the observer’s vantage point to test among three sets of hypotheses: (1) Observers may use a distance-divided-by-velocity algorithm or, alternatively, elaborated τ-formulae, all of which give exact tC information; (2) observers may use simple τ-formulae (i.e., formulae of the type: visual angle divided by its own first temporal derivative); (3) observers may capitalize on non-τ variables. Hypotheses (2) and (3) imply specific patterns of errors. We presented animated, impending collisions between a moving object and a stationar…
Did you see that? Dissociating advanced visual information and ball flight constrains perception and action processes during one-handed catching
2013
The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised ac…
Neural Correlates of Visual versus Abstract Letter Processing in Roman and Arabic Scripts
2013
In alphabetic orthographies, letter identification is a critical process during the recognition of visually presented words. In the present experiment, we examined whether and when visual form influences letter processing in two very distinct alphabets (Roman and Arabic). Disentangling visual versus abstract letter representations was possible because letters in the Roman alphabet may look visually similar/dissimilar in lowercase and uppercase forms (e.g., c-C vs. r-R) and letters in the Arabic alphabet may look visually similar/dissimilar, depending on their position within a word (e.g., [Formula: see text] - [Formula: see text] vs. [Formula: see text] - [Formula: see text]). We employed a…
The big picture: effects of surround on immersion and size perception
2014
Despite the fear of the entertainment industry that illegal downloads of films might ruin their business, going to the movies continues to be a popular leisure activity. One reason why people prefer to watch movies in cinemas may be the surround of the movie screen or its physically huge size. To disentangle the factors that might contribute to the size impression, we tested several measures of subjective size and immersion in different viewing environments. For this purpose we built a model cinema that provided visual angle information comparable with that of a real cinema. Subjects watched identical movie clips in a real cinema, a model cinema, and on a display monitor in isolation. Wher…
Timing flickers across sensory modalities
2009
In tasks requiring a comparison of the duration of a reference and a test visual cue, the spatial position of the test cue is likely to be implicitly coded, producing a congruency effect or introducing a response bias according to the environmental scale or its vectorial reference. The precise mechanism generating these perceptual shifts in subjective duration is not understood, although several studies suggest that spatial attentional factors may play a critical role. Here we use a duration comparison task within and across sensory modalities to examine whether temporal performance is also modulated when people are exposed to spatial distractors involving different sensory modalities. Di…
Kinematic features of movement tunes perception and action coupling
2005
How do we extrapolate the final position of a hand trajectory that suddenly vanishes behind a wall? Studies showing maintenance of cortical activity after objects in motion disappear suggest that an internal model of action may be recalled to reconstruct the missing part of the trajectory. Although supported by neurophysiological and brain imaging studies, behavioural evidence for this hypothesis is sparse. Further, in humans, it is unknown whether the recall of an internal model of action during motion observation can be tuned to the kinematic features of movement. Here, we propose a novel experiment to address this question. Each stimulus consisted of a dot moving either upwards or downwards, and correspond…