Search results for "gesture"
Showing 10 of 186 documents
Supramodal neural processing of abstract information conveyed by speech and gesture
2013
Abstractness and modality of interpersonal communication have a considerable impact on comprehension. They are relevant for determining thoughts and constituting internal models of the environment. Whereas concrete object-related information can be represented in mind irrespective of language, abstract concepts require a representation in speech. Consequently, modality-independent processing of abstract information can be expected. Here we investigated the neural correlates of abstractness (abstract vs. concrete) and modality (speech vs. gestures), to identify an abstractness-specific supramodal neural network. During fMRI data acquisition 20 participants were presented with videos of an ac…
Teachers' embodied allocations in instructional interaction
2012
This paper describes how teachers employ gaze, head nods and pointing gestures in allocating response turns to students in whole-class instructional interaction. Specifically, it focuses on examining teachers’ embodied allocations – that is, turn-allocations produced (mostly) by embodied means – and the sequential positions in which they are performed within the tripartite instructional sequence of IRE. While prior studies have noted their use in classroom interaction, the way in which they are drawn on by teachers has not been examined in detail. By using conversation analysis in conjunction with the study of embodied interaction, this article aims to show how these ephemeral embodied reso…
Is displacement possible without language? Evidence from preverbal infants and chimpanzees
2013
Is displacement possible without language? This question was addressed in a recent work by Liszkowski and colleagues (Liszkowski, Schafer, Carpenter, & Tomasello, 2009). The authors carried out an experiment to demonstrate that 12-month-old prelinguistic infants can communicate about absent entities by using pointing gestures, while chimpanzees cannot. The main hypothesis of their study is that displacement does not depend on language, yet is exclusively human, depending instead on species-specific social-cognitive skills. Against this hypothesis, we will argue that a symbolic representation is needed to intentionally communicate absence and that this symbolic representa…
La voce, il gesto, la scena. Elementi teatrali nelle commedie latine del XII e XIII secolo
2019
In this volume, after outlining a systematic overview of the so-called Latin elegiac comedies of the twelfth and thirteenth centuries, the long-standing question of whether these texts could actually be staged is discussed, seeking to show (also on the basis of some glosses by Arnolfo di Orléans and a whole series of internal clues and hints in the texts themselves) that a theatrical performance is plausible, at least for some of these comedies. The topics treated here concern the alternation, in the comic-elegiac compositions, between narrative parts and monologic and/or dialogic sections; the "internal" stage directions; possible relations with the jongleur tradition; the coexistence, within …
All Eyes on Me
2020
Duo musicians exhibit a broad variety of bodily gestures, but it is unclear how soloists’ and accompanists’ movements differ and to what extent they attract observers’ visual attention. In Experiment 1, seven musical duos’ body movements were tracked while they performed two pieces in two different conditions. In a congruent condition, soloist and accompanist behaved according to their expected musical roles; in an incongruent condition, the soloist behaved as accompanist and vice versa. Results revealed that behaving as soloist, regardless of the condition, led to more, smoother, and faster head and shoulder movements over a larger area than behaving as accompanist. Moreover, accompanists …
Child–display interaction: Lessons learned on touchless avatar-based large display interfaces
2020
During the last decade, touchless gestural interfaces have been widely studied as one of the most promising interaction paradigms in the context of pervasive displays. In particular, avatars and silhouettes have proved to be effective in making the touchless capacity of displays self-evident. In this paper, we focus on a child–display interaction approach to avatar-based touchless gestural interfaces. We believe that large displays offer an opportunity to stimulate children’s experiences and engagement; for instance, learning about art is very engaging for children but can bring a number of challenges. Our study aims to contribute to the literature on both pervasive displays and chi…
A framework for sign language sentence recognition by common sense context
2007
This correspondence proposes a complete framework for sign language recognition that integrates a commonsense engine in order to handle sentence recognition. The proposed system is based on a multilevel architecture that allows the knowledge of the recognition process to be modeled and managed in a simple and robust way. The final abstraction level of this architecture introduces the semantic context and the analysis of the correctness of a sentence given as a sequence of recognized signs. Experiments are presented using a set of signs from the Italian sign language (LIS) for domotic applications. The implemented system maintains a high recognition rate as the set of signs grows, c…
Multidimensional optical sensing and imaging for displays, computational imaging, optical security, and healthcare
2016
In this invited paper, we present an overview of our recently published work on 3D imaging, visualization, and displays, including: optical security using quantum imaging principles; 3D microscopy; healthcare and automated disease identification with 3D imaging; fatigue-free augmented reality 3D glasses; optical security and authentication using photon counting for IC inspection; polarimetric photon-counting 3D imaging; and 3D human gesture recognition.
Deep Learning-Based Sign Language Digits Recognition From Thermal Images With Edge Computing System
2021
The sign language digits based on hand gestures have been utilized in various applications such as human-computer interaction, robotics, health and medical systems, health assistive technologies, automotive user interfaces, crisis management and disaster relief, entertainment, and contactless communication in smart devices. The color and depth cameras are commonly deployed for hand gesture recognition, but the robust classification of hand gestures under varying illumination is still a challenging task. This work presents the design and deployment of a complete end-to-end edge computing system that can accurately provide the classification of hand gestures captured from thermal images. A th…
The implicit in "In Search of Lost Time": a study of an aspect of Proustian speech
2013
The implicit is defined as content present in speech without being formally expressed. Presupposition and implied content are the two fundamental elements of this concept. They act as information implied in speech whose essence the addressee can grasp or decode using the theories of pragmatics and enunciative linguistics. Proustian speech constitutes a remarkable example of the use of the implicit and its concepts. The present work is entirely devoted to the search for the implicit in Proust's In Search of Lost Time. In our work, the development of this concept emerges especially in the verbal interaction between Proust's characters, as well as through the speech of the narrator, who opts for a ne…