AUTHOR

Jorma Laaksonen

Detecting Hand-Head Occlusions in Sign Language Video

Much current linguistic research on sign language is based on analyzing large corpora of video recordings. This requires either manual or automatic annotation of the videos. In this paper we introduce methods for automatically detecting and classifying hand-head occlusions in sign language videos. Linguistically, hand-head occlusions are an important and interesting subject of study as the head is a structural place of articulation in many signs. Our method combines easily calculable local video properties with more global hand tracking. The experiments carried out with videos of the Suvi on-line dictionary of Finnish Sign Language show that the sensitivity of the proposed local …
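The abstract does not spell out the detector's internals, but one simple local-plus-tracking combination it could build on is checking, frame by frame, whether a tracked hand bounding box overlaps the detected head region. The sketch below is illustrative only; the box format, threshold, and function names are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: flag frames where a tracked hand box overlaps the
# detected head box. Boxes are (x, y, width, height) in pixels; the IoU
# threshold is an illustrative choice, not a value from the paper.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def occluded_frames(hand_boxes, face_boxes, threshold=0.05):
    """Indices of frames where the hand and head regions overlap."""
    return [i for i, (h, f) in enumerate(zip(hand_boxes, face_boxes))
            if iou(h, f) > threshold]
```

A real detector would refine such candidate frames with the local video properties the paper mentions; this only shows the tracking-based overlap test.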

research product

Head Pose Estimation for Sign Language Video

We address the problem of estimating three head pose angles in sign language video using the Pointing04 data set as training data. The proposed model employs facial landmark points and Support Vector Regression learned from the training set to identify yaw and pitch angles independently. A simple geometric approach is used for the roll angle. As a novel development, we propose to use the detected skin tone areas within the face bounding box as additional features for head pose estimation. The accuracy level of the estimators we obtain compares favorably with published results on the same data, but the smaller number of pose angles in our setup may explain some of the observed advantage.
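The described pipeline — independent regressors for yaw and pitch over facial-landmark features, plus a geometric roll estimate — can be sketched roughly as below. The feature layout, synthetic training data, and the choice of eye centers for roll are assumptions for illustration, not the paper's actual setup.

```python
# Hedged sketch: SVR regressors for yaw and pitch from flattened landmark
# coordinates, and a geometric roll estimate from the two eye centers.
import math
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))           # 5 landmark (x, y) pairs per face, flattened
yaw = rng.uniform(-90, 90, size=50)     # synthetic training angles (illustrative)
pitch = rng.uniform(-60, 60, size=50)

yaw_model = SVR(kernel="rbf").fit(X, yaw)       # one independent regressor
pitch_model = SVR(kernel="rbf").fit(X, pitch)   # per pose angle

def roll_from_eyes(left_eye, right_eye):
    """Roll angle in degrees from the line joining the two eye centers."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

In practice the landmark features would come from a face-alignment step on each video frame, and the regressors would be trained on Pointing04-style annotated poses rather than random data.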

research product

Estimating head pose and state of facial elements for sign language video

In this work we present methods for automatic estimation of non-manual gestures in sign language videos. More specifically, we study the estimation of three head pose angles (yaw, pitch, roll) and the state of facial elements (eyebrow position, eye openness, and mouth state). This kind of estimation facilitates automatic annotation of sign language videos and promotes more prolific production of annotated sign language corpora. The proposed estimation methods are incorporated in our publicly available SLMotion software package for sign language video processing and analysis. Our method implements a model-based approach: for head pose we employ facial landmarks and skin masks as features, a…
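For the facial-element states, one commonly used landmark-based measure of eye openness (not necessarily the one used in SLMotion) is an eye-aspect-ratio style quantity: the vertical extent of the eye contour divided by its horizontal extent. The landmark ordering and threshold below are illustrative assumptions.

```python
# Illustrative sketch: eye openness from six contour landmarks p1..p6,
# ordered corner, upper lid (x2), corner, lower lid (x2). The "closed"
# threshold is an example value, not one from the paper.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_openness(p1, p2, p3, p4, p5, p6):
    """Ratio of vertical eye extent to horizontal extent; small = closed."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def eye_state(ratio, closed_below=0.2):
    return "closed" if ratio < closed_below else "open"
```

Analogous scalar measures (e.g. eyebrow-to-eye distance, lip aperture) can feed a per-element state classifier in the same model-based spirit.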

research product

Annotated Video Corpus of FinSL with Kinect and Computer-Vision Data

This paper presents an annotated video corpus of Finnish Sign Language (FinSL) augmented with Kinect and computer-vision data. The video material consists of signed retellings of the stories Snowman and Frog, where are you?, elicited from 12 native FinSL signers in a dialogue setting. The recordings were carried out with 6 cameras directed toward the signers from different angles, and 6 signers were also recorded with one Kinect motion- and depth-sensing input device. All the material has been annotated in ELAN for signs, translations, grammar and prosody. To further facilitate research into FinSL prosody, computer-vision data describing the head movements and the aperture change…

research product

On the rhythm of head movements in Finnish and Swedish Sign Language sentences

This paper investigates, with the help of computer-vision technology, the similarities and differences in the rhythm of the movements of the head in sentences in Finnish Sign Language (FinSL) and Swedish Sign Language (SSL). The results show that the movement of the head in the two languages is often very similar: in both languages, the instances when the movement of the head changes direction were distributed similarly with regard to clause boundaries, and the contours of the roll (tilting-like) motion of the head during the sentences were similar. Concerning differences, direction changes were found to be used more effectively in the marking of clause boundaries in FinSL, and in SSL the head moved near…
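Locating the direction changes that the comparison relies on can be sketched as finding sign flips in the frame-to-frame differences of a head-position signal. The function below is a minimal illustration under that assumption; the paper's actual tracking and smoothing pipeline is not described in the abstract.

```python
# Hedged sketch: indices where a 1-D head-position signal (e.g. the vertical
# coordinate over frames) reverses direction. Flat stretches are skipped so
# that a plateau does not count as a reversal.
def direction_changes(signal):
    """Frame indices where the sign of the frame-to-frame difference flips."""
    changes = []
    prev = 0
    for i in range(1, len(signal)):
        diff = signal[i] - signal[i - 1]
        if diff == 0:
            continue
        cur = 1 if diff > 0 else -1
        if prev and cur != prev:
            changes.append(i)
        prev = cur
    return changes
```

The resulting indices could then be compared against annotated clause boundaries, e.g. by measuring their temporal distance to the nearest boundary in the ELAN annotations.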

research product