Search results for "Action recognition"
Showing 10 of 12 documents
Hankelet-based action classification for motor intention recognition
2017
Powered lower-limb prostheses require a natural, easy-to-use interface for communicating the amputee’s motor intention, in order to select the appropriate motor program in any given context, or simply to switch from the active (powered) to the passive mode of functioning. To be widely accepted, such an interface should not put additional cognitive load on the end-user, and it should be reliable and minimally invasive. In this paper we present one such interface, based on a robust method for detecting and recognizing motor actions from a low-cost wearable sensor network mounted on the sound leg, providing inertial (accelerometer, gyrometer and magnetometer) data in real time. We assume that the sensor…
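As an illustration of the kind of processing such an interface implies (not the authors' implementation), the sketch below segments a stream of 9-axis inertial samples (accelerometer, gyrometer, magnetometer) into overlapping windows for downstream action detection; the window length, overlap and sampling rate are illustrative assumptions.

    # Minimal sketch (not the authors' implementation): segmenting a stream of
    # 9-axis inertial samples (accelerometer, gyrometer, magnetometer) into
    # fixed-length, overlapping windows for downstream action detection.
    import numpy as np

    def sliding_windows(samples, window_size=128, step=64):
        """samples: array of shape (T, 9), one row per time step.
        Yields overlapping windows of shape (window_size, 9)."""
        for start in range(0, len(samples) - window_size + 1, step):
            yield samples[start:start + window_size]

    # Example: 10 seconds of synthetic data at an assumed 100 Hz sampling rate.
    stream = np.random.randn(1000, 9)
    windows = list(sliding_windows(stream))
    print(len(windows), windows[0].shape)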
Encoding of human action in Broca's area.
2009
Broca's area has been considered, for over a century, as the brain centre responsible for speech production. Modern neuroimaging and neuropsychological evidence suggest that this area plays a wider functional role. In addition to the evidence that it is involved in syntactical analysis, mathematical calculation and music processing, it has recently been shown that Broca's area may play some role in language comprehension and, more generally, in understanding the actions of other individuals. As shown by functional magnetic resonance imaging, Broca's area is one of the cortical areas activated by hand/mouth action observation and it has been proposed that it may …
Simulating Actions with the Associative Self-Organizing Map
2013
We present a system that can learn to represent actions as well as to internally simulate the likely continuation of their initial parts. The method we propose is based on the Associative Self-Organizing Map (A-SOM), a variant of the Self-Organizing Map. By emulating the way the human brain is thought to perform pattern recognition tasks, the A-SOM learns to associate its activity with different inputs over time, where inputs are observations of others' actions. Once the A-SOM has learnt to recognize actions, it uses this learning to predict the continuation of an observed initial movement of an agent, in this way reading its intentions. We evaluate the system's ability to simulate actions …
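A minimal sketch of the plain Self-Organizing Map update on which the A-SOM builds; the associative (temporal) connections that let the A-SOM predict action continuations are omitted, and all parameter values are illustrative assumptions rather than the paper's settings.

    # Minimal sketch of the plain SOM weight update (the A-SOM's associative
    # connections are omitted); parameter values are illustrative assumptions.
    import numpy as np

    def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
        rows, cols = grid
        weights = np.random.rand(rows, cols, data.shape[1])
        # Grid coordinates of every unit, used by the neighbourhood function.
        coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
        n_steps, step = epochs * len(data), 0
        for _ in range(epochs):
            for x in data:
                # Best-matching unit: the unit whose weight vector is closest to x.
                dists = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(dists), dists.shape)
                # Linearly decay the learning rate and neighbourhood radius.
                frac = step / n_steps
                lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
                # Gaussian neighbourhood centred on the BMU.
                grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
                h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
                weights += lr * h * (x - weights)
                step += 1
        return weights

    # Example: 200 random "posture" vectors of dimension 30.
    W = train_som(np.random.rand(200, 30))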
Gesture Modeling by Hanklet-Based Hidden Markov Model
2015
In this paper we propose a novel approach for gesture modeling. We aim at decomposing a gesture into sub-trajectories that are the output of a sequence of atomic linear time invariant (LTI) systems, and we use a Hidden Markov Model to model the transitions from one LTI system to another. For this purpose, we represent the human body motion in a temporal window as a set of body joint trajectories that we assume are the output of an LTI system. We describe the set of trajectories in a temporal window by the corresponding Hankel matrix (Hanklet), which embeds the observability matrix of the LTI system that produced it. We train a set of HMMs (one for each gesture class) with a discriminative a…
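A minimal sketch of the Hankel-matrix construction the abstract refers to: the block-Hankel matrix of a joint-trajectory window, whose structure relates to the observability matrix of the underlying LTI system. The number of block rows is an illustrative choice, not the paper's setting.

    # Minimal sketch: the block-Hankel matrix of a joint-trajectory window;
    # the number of block rows is an illustrative choice, not the paper's setting.
    import numpy as np

    def hankel_matrix(trajectory, block_rows=4):
        """trajectory: (T, d) joint coordinates per frame.
        Returns the block-Hankel matrix of shape (block_rows * d, T - block_rows + 1)."""
        T, d = trajectory.shape
        n_cols = T - block_rows + 1
        H = np.zeros((block_rows * d, n_cols))
        for i in range(block_rows):
            for j in range(n_cols):
                H[i * d:(i + 1) * d, j] = trajectory[i + j]
        return H

    # Example: a 20-frame window of a single 3D joint.
    traj = np.cumsum(np.random.randn(20, 3), axis=0)
    print(hankel_matrix(traj).shape)  # (12, 17)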
Convolutional Neural Network-Based Human Movement Recognition Algorithm in Sports Analysis
2021
To analyse the sports psychology of athletes and to identify the psychology expressed in their movements, a human action recognition (HAR) algorithm is designed in this study. First, a HAR model is established based on a convolutional neural network (CNN) to classify the current action state by analysing the action information of a task in the collected videos. Secondly, the psychology of basketball players displaying fake actions during the offensive and defensive process is investigated by drawing on related sports psychology theories. Then, the psychology of athletes is also analysed through the collected videos, so as to predict the next response action of the …
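A minimal sketch (in PyTorch, and not the paper's architecture) of a small CNN that maps a stack of video frames to action-class scores; the frame count, input resolution and number of classes are assumptions for illustration.

    # Minimal sketch (PyTorch; not the paper's architecture): a small CNN mapping
    # a stack of video frames to action-class scores. Frame count, resolution and
    # class count are assumptions for illustration.
    import torch
    import torch.nn as nn

    class SimpleHARNet(nn.Module):
        def __init__(self, num_frames=8, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3 * num_frames, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # global average pooling
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x):
            # x: (batch, 3 * num_frames, H, W) -- frames stacked along channels.
            return self.classifier(self.features(x).flatten(1))

    # Example forward pass on a random two-clip batch.
    model = SimpleHARNet()
    print(model(torch.randn(2, 24, 112, 112)).shape)  # torch.Size([2, 10])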
Sensorimotor Coarticulation in the Execution and Recognition of Intentional Actions
2017
Humans excel at recognizing (or inferring) another's distal intentions, and recent experiments suggest that this may be possible using only subtle kinematic cues elicited during early phases of movement. Still, the cognitive and computational mechanisms underlying the recognition of intentional (sequential) actions are incompletely known and it is unclear whether kinematic cues alone are sufficient for this task, or if it instead requires additional mechanisms (e.g., prior information) that may be more difficult to fully characterize in empirical studies. Here we present a computationally-guided analysis of the execution and recognition of intentional actions that is rooted in theories of m…
How do we understand other's intentions? - An implementation of mindreading in artificial systems -
Action Recognition based on Hierarchical Self-Organizing Maps
2014
We propose a hierarchical neural architecture able to recognise observed human actions. Each layer in the architecture represents increasingly complex human activity features. The first layer consists of a SOM which performs dimensionality reduction and clustering of the feature space. It represents the dynamics of the stream of posture frames in action sequences as activity trajectories over time. The second layer in the hierarchy consists of another SOM which clusters the activity trajectories of the first-layer SOM and thus it learns to represent action prototypes independent of how long the activity trajectories last. The third layer of the hierarchy consists of a neural network that le…
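A minimal sketch of one plausible data flow for the first layer (an assumption, not the authors' code): posture frames are mapped onto a trained SOM, and the resulting sequence of best-matching-unit coordinates forms the activity trajectory that the second layer would cluster.

    # Minimal sketch (an assumed data flow, not the authors' code): map posture
    # frames onto a trained first-layer SOM and collect the best-matching-unit
    # coordinates over time as the activity trajectory.
    import numpy as np

    def activity_trajectory(frames, som_weights):
        """frames: (T, d) posture vectors; som_weights: (rows, cols, d) trained map.
        Returns a (T, 2) array of BMU grid coordinates, one per frame."""
        traj = []
        for x in frames:
            dists = np.linalg.norm(som_weights - x, axis=-1)
            traj.append(np.unravel_index(np.argmin(dists), dists.shape))
        return np.array(traj)

    # Example with random weights and frames (posture dimension 45, e.g. 15 joints x 3).
    weights = np.random.rand(10, 10, 45)
    frames = np.random.rand(30, 45)
    print(activity_trajectory(frames, weights).shape)  # (30, 2)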
Hierarchies of Self-Organizing Maps for action recognition
2016
We propose a hierarchical neural architecture able to recognise observed human actions. Each layer in the architecture represents increasingly complex human activity features. The first layer consists of a SOM which performs dimensionality reduction and clustering of the feature space. It represents the dynamics of the stream of posture frames in action sequences as activity trajectories over time. The second layer in the hierarchy consists of another SOM which clusters the activity trajectories of the first-layer SOM and learns to represent action prototypes. The third - and last - layer of the hierarchy consists of a neural network that learns to label action prototypes of the second-laye…
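For the third layer, a simple illustrative stand-in (not the authors' network): a softmax classifier trained by gradient descent to label second-layer activity patterns with action classes.

    # Illustrative stand-in for the third-layer labelling step (not the authors'
    # network): a softmax classifier trained by gradient descent to map
    # second-layer activity patterns to action labels.
    import numpy as np

    def train_softmax(X, y, num_classes, lr=0.1, epochs=200):
        """X: (N, d) activity patterns; y: (N,) integer action labels."""
        W, b = np.zeros((X.shape[1], num_classes)), np.zeros(num_classes)
        Y = np.eye(num_classes)[y]                        # one-hot targets
        for _ in range(epochs):
            logits = X @ W + b
            logits -= logits.max(axis=1, keepdims=True)   # numerical stability
            P = np.exp(logits)
            P /= P.sum(axis=1, keepdims=True)
            grad = (P - Y) / len(X)                       # cross-entropy gradient
            W -= lr * (X.T @ grad)
            b -= lr * grad.sum(axis=0)
        return W, b

    def predict(X, W, b):
        return np.argmax(X @ W + b, axis=1)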
3D skeleton-based human action classification: A survey
2016
In recent years, there has been a proliferation of works on human action classification from depth sequences. These works generally present methods and/or feature representations for the classification of actions from sequences of 3D locations of human body joints and/or other sources of data, such as depth maps and RGB videos. This survey highlights motivations and challenges of this very recent research area by presenting technologies and approaches for 3D skeleton-based action classification. The work focuses on aspects such as data pre-processing, publicly available benchmarks and commonly used accuracy measurements. Furthermore, this survey introduces a categorization of the most recent…
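A minimal sketch of the data format this survey covers, a 3D skeleton sequence of per-frame joint locations, together with root-centring, a common pre-processing step in such pipelines; the joint count and root index are illustrative.

    # Minimal sketch of a 3D skeleton sequence and root-centring, a common
    # pre-processing step; the joint count and root index are illustrative.
    import numpy as np

    def centre_on_root(sequence, root_joint=0):
        """sequence: (T, J, 3) -- T frames, J joints, (x, y, z) per joint.
        Subtracts the root-joint position from every joint in every frame."""
        return sequence - sequence[:, root_joint:root_joint + 1, :]

    # Example: 60 frames of a 20-joint skeleton.
    seq = np.random.randn(60, 20, 3)
    print(centre_on_root(seq).shape)  # (60, 20, 3)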