Search results for "gesture"
Showing 10 of 186 documents
Global functorial hypergestures over general skeleta for musical performance
2016
Musical performance theory using Lagrangian formalism, inspired by physical string theory, has been described in previous research. That approach was restricted to zero-addressed hypergestures of local character, and also to digraph skeleta of simple arrow type. In this article, we extend the theory to hypergestures that are defined functorially over general topological categories as addresses, are global, and are also defined for general skeleta. We also prove several versions of the important Escher Theorem for this general setup. This extension is strongly motivated by theoretical and practical musical performance requirements, of which we give concrete examples.
Hypergestures in Complex Time: Creative Performance Between Symbolic and Physical Reality
2015
Musical performance and composition imply hypergestural transformation from symbolic to physical reality and vice versa. But most scores require movements at infinite physical speed that can only be performed approximately by trained musicians. To formally solve this divide between symbolic notation and physical realization, we introduce complex time (ℂ-time) in music. In this way, infinite physical speed is “absorbed” by a finite imaginary speed. Gestures thus comprise thought (in imaginary time) and physical realization (in real time) as a world-sheet motion in space-time, corresponding to ideas from physical string theory. Transformation from imaginary to real time gives us…
Embodied sound design
2018
Embodied sound design is a process of sound creation that involves the designer’s vocal apparatus and gestures. The possibilities of vocal sketching were investigated by means of an art installation. An artist–designer interpreted several vocal self-portraits and rendered the corresponding synthetic sketches using physics-based and concatenative sound synthesis. Both synthesis techniques afforded a broad range of artificial sound objects, from concrete to abstract, all derived from natural vocalisations. The vocal-to-synthetic transformation process was then automated in SEeD, a tool that allows one to set and interactively play with physics- or corpus-based sound models. The voice-dri…
User Evaluation of the Smartphone Screen Reader VoiceOver with Visually Disabled Participants
2018
Touchscreen assistive technology is designed to support speech interaction between visually disabled people and mobile devices, allowing hand gestures to operate a touch user interface. From a global perspective, the World Health Organisation estimates that around 285 million people are visually disabled, with two thirds of them over 50 years old. This paper presents a user evaluation of VoiceOver, a built-in screen reader in Apple Inc. products, with a detailed analysis of gesture interaction, familiarity, and training by visually disabled users, and of the system response. Six participants with prescribed visual disability took part in the tests in a usability laboratory under controlled con…
Evaluation of touchscreen assistive technology for visually disabled users
2017
Touchscreen assistive technology is designed to support speech interaction between visually disabled people and mobile devices, allowing the use of a choreography of gestures to interact with a touch user interface. This paper presents an evaluation of VoiceOver, a screen reader in Apple Inc. products, conducted in the research project “Visually impaired users touching the screen – A user evaluation of assistive technology” together with six visually disabled test participants. The aim was to identify challenges related to the performance of the gestures for screen interaction and to evaluate the system response to the gestures. The main results showed that most of the hand gestures were easy to perf…
Recognizing actions with the associative self-organizing map
2013
When artificial agents interact and cooperate with other agents, either human or artificial, they need to recognize others’ actions and infer their hidden intentions from the sole observation of their surface-level movements. Indeed, action and intention understanding in humans is believed to facilitate a number of social interactions and is supported by a complex neural substrate (i.e. the mirror neuron system). Implementation of such mechanisms in artificial agents would pave the way to the development of a vast range of advanced cognitive abilities, such as social interaction, adaptation, and learning by imitation, to name a few. We present a first step towards a fully-fledged int…
Drawings, Gestures and Discourses: A Case Study with Kindergarten Students Discovering Lego Bricks
2020
This paper presents a study aimed at investigating the didactic potential of an artefact useful for constructing mathematical meanings concerning the coordination of different points of view in the observation of a real object/toy. In our view, the process of meaning construction can be fostered by the use of adequate artefacts, but it requires a teaching/learning model that explicitly takes care of the evolution of meanings, from the personal ones emerging through the activities to the mathematical ones, which are the aims of the teaching intervention. The main hypothesis of this study is that the alternation between different semiotic systems, graphical system, verbal system and system of …
Accessing and selecting menu items by in-air touch
2019
Is it possible to realize a non-visual, purely tactile version of an icon-based menu? Driven by this question, a hierarchical tactile dock was designed for an array of ultrasound emitters. The icons were conceived as spatio-temporal, variable-speed sequences of tactile stimulation points that are passively perceived as trajectories drawn on the palm of the hand. The recognition rate on four icons substantially improved on prior performance results obtained by active haptic exploration. As a result, a four-icon set can be used as the first level of a hierarchy of symbols that can be navigated by touch and gesture. The design process, based on controlled recognition experiments and exploration of dis…
Real-Time Hand Pose Recognition Based on a Neural Network Using Microsoft Kinect
2013
The Microsoft Kinect sensor is widely used to detect and recognize body gestures and layout with sufficient reliability, accuracy, and precision in a fairly simple way. However, the rather low resolution of its optical sensors does not allow the device to detect gestures of smaller body parts, such as the fingers of a hand, with the same straightforwardness. Given the clear application of this technology to the field of user interaction within immersive multimedia environments, there is a real need for a reliable and effective method to detect the pose of such body parts. In this paper we propose a method based on a neural network to detect the hand pose in real time and to recognize whether it…
Child-display interaction: Exploring avatar-based touchless gestural interfaces
2019
During the last decade, touchless gestural interfaces have been widely studied as one of the most promising interaction paradigms in the context of pervasive displays. In particular, avatars and silhouettes have proven effective in communicating the touchless gestural interactivity supported by displays. In this paper, we take a child–display interaction perspective by exploring avatar-based touchless gestural interfaces. We believe that large displays offer an opportunity to stimulate child experience and engagement, for instance when learning about art, while also bringing a number of challenges. The purpose of this study is twofold: 1) identifying the relevant aspects of childr…