Search results for "Robot"
Showing 10 of 1,036 documents
Resolving ambiguities in a grounded human-robot interaction
2009
In this paper we propose a trainable system that learns grounded language models from examples with a minimum of user intervention and without feedback. We have focused on the acquisition of grounded meanings of spatial and adjective/noun terms. The system has been used to understand, and subsequently to generate, appropriate natural language descriptions of real objects and to engage in verbal interactions with a human partner. We have also addressed the problem of resolving any ambiguities that arise during verbal interaction through an information-theoretic approach.
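The information-theoretic idea in this abstract can be illustrated with a minimal sketch: when a description matches several objects, ask about the attribute whose distribution over the candidates has the highest entropy, since its answer splits the set most evenly. All object names and attributes below are hypothetical, not the paper's actual system.

```python
import math
from collections import Counter

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def best_attribute(candidates):
    """Pick the attribute whose values best split the candidate set,
    i.e. the one with maximum entropy over the candidates."""
    def attr_entropy(attr):
        counts = Counter(obj[attr] for obj in candidates)
        total = sum(counts.values())
        return entropy([c / total for c in counts.values()])
    return max(candidates[0].keys(), key=attr_entropy)

# Three objects match "the ball"; colour (~1.58 bits) separates the
# candidates better than size (~0.92 bits), so ask about colour.
candidates = [
    {"colour": "red",   "size": "small"},
    {"colour": "green", "size": "small"},
    {"colour": "blue",  "size": "large"},
]
print(best_attribute(candidates))  # -> colour
```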
An Application of Iterative Identification and Control in the Robotics Field
2006
The plant model appropriate for designing the control strongly depends on the requirements. Simple models are enough to compute nondemanding controls. The parameters of well-defined structural models of flexible robot manipulators are difficult to determine because their effect is only visible if the manipulator is under strong actions or with high-frequency excitation. Thus, in this chapter, an iterative approach is suggested. This approach is applied to a one-degree-of-freedom flexible robot manipulator, first using some well-known models and then controlling a lab prototype. This approach can be used with a variety of control design and/or identification techniques.
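The iterative identify-then-control loop described above can be sketched in a few lines: alternate between fitting a simple model to closed-loop data and redesigning the control for that model. The plant, model class, and control law here are hypothetical one-parameter stand-ins, not the chapter's flexible-manipulator models.

```python
def identify_gain(inputs, outputs):
    """Least-squares fit of a static gain model y = k * u."""
    num = sum(u * y for u, y in zip(inputs, outputs))
    den = sum(u * u for u in inputs)
    return num / den

def iterate(true_gain=2.0, setpoint=1.0, rounds=3):
    k_hat, u = 1.0, 1.0              # initial model and input guess
    for _ in range(rounds):
        # run a short experiment under the current controller
        inputs = [u, 0.5 * u, 1.5 * u]
        outputs = [true_gain * ui for ui in inputs]   # noiseless toy plant
        k_hat = identify_gain(inputs, outputs)        # identification step
        u = setpoint / k_hat                          # control redesign step
    return k_hat, u

k_hat, u = iterate()
print(k_hat, u)  # -> 2.0 0.5 (recovers the true gain, then the right input)
```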
A Direct Approach to Robot Soccer Agents: Description for the Team Mainz Rolling Brains Simulation League of RoboCup ’98
1999
In the team described in this paper we realize a direct approach to soccer agents for the simulation league of the RoboCup '98 tournament. Its backbone is formed by a detailed world model. Based on information reconstructed at the world-model level, the rule-based decision levels choose a relevant action. The goalie's architecture differs from that of the regular players, introducing heterogeneity into the team and combining the advantages of the different control strategies.
The role of synergies within generative models of action execution and recognition: A computational perspective
2015
Controlling the body – given its huge number of degrees of freedom – poses severe computational challenges. Mounting evidence suggests that the brain alleviates this problem by exploiting “synergies”, or patterns of muscle activities (and/or movement dynamics and kinematics) that can be combined to control action, rather than controlling individual muscles or joints [1–10]. D’Ausilio et al. [11] explain how this view of motor organization based on synergies can profoundly change the way we interpret studies of action recognition in humans and monkeys, and in particular the controversy on the “granularity” of the mirror neuron system (MNS): whether it encodes either (lower) kinematic aspects…
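The synergy idea summarized above has a simple linear-algebra reading: instead of commanding each of N muscles independently, activations are linear combinations of a few basis patterns, so control lives in a much lower-dimensional space. The patterns and weights below are hypothetical, purely for illustration.

```python
# Two hypothetical synergy patterns over four muscles: the controller
# chooses 2 weights instead of 4 independent muscle commands.
synergies = [
    [1.0, 0.5, 0.0, 0.0],   # pattern 1
    [0.0, 0.0, 1.0, 0.8],   # pattern 2
]

def muscle_activations(weights):
    """Combine synergy patterns with per-pattern weights into
    one activation level per muscle."""
    return [sum(w * s[m] for w, s in zip(weights, synergies))
            for m in range(len(synergies[0]))]

print(muscle_activations([1.0, 0.5]))  # -> [1.0, 0.5, 0.5, 0.4]
```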
Automatic place detection and localization in autonomous robotics
2007
This paper presents an approach for the simultaneous learning and recognition of places applied to autonomous robotics. While noteworthy results have been achieved with off-line training processes for appearance-based navigation, novel issues arise when recognition and learning are simultaneous and unsupervised processes. The approach adopted here uses a Gaussian mixture model, estimated by a novel incremental MML-EM algorithm, to model the probability distribution of features extracted by image preprocessing. A place detector decides which features belong to which place by integrating odometric information and a hidden Markov model. Tests demonstrate that the proposed system performs as well as …
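The combination described above (per-place likelihood models plus an HMM over time) can be sketched with fixed, hypothetical parameters: a single 1-D Gaussian per place stands in for the full mixture model, and a sticky transition prior smooths the per-frame likelihoods over the sequence. This is not the paper's incremental MML-EM, only the filtering idea.

```python
import math

# Hypothetical per-place models: (mean, std) of a 1-D appearance feature.
places = {
    "corridor": (0.0, 1.0),
    "lab":      (4.0, 1.0),
}

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def hmm_filter(observations, stay=0.9):
    """HMM forward filtering: a sticky transition model makes the place
    estimate follow the bulk of the evidence rather than single frames."""
    names = list(places)
    belief = {p: 1.0 / len(names) for p in names}
    for x in observations:
        # predict: stay put with prob `stay`, else move uniformly
        pred = {p: stay * belief[p]
                   + (1 - stay) * sum(belief[q] for q in names if q != p)
                     / (len(names) - 1)
                for p in names}
        # update with the observation likelihood, then renormalise
        post = {p: pred[p] * gauss_pdf(x, *places[p]) for p in names}
        z = sum(post.values())
        belief = {p: post[p] / z for p in names}
    return max(belief, key=belief.get)

print(hmm_filter([0.1, -0.3, 3.9, 0.2]))  # -> corridor
```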
Multimodal 2D Image to 3D Model Registration via a Mutual Alignment of Sparse and Dense Visual Features
2018
Many fields of application could benefit from an accurate registration of measurements of different modalities over a known 3D model. However, aligning a 2D image to a 3D model is a challenging task and is even more complex when the two have a different modality. Most 2D/3D registration methods are based on either geometric or dense visual features. Both have their own advantages and their own drawbacks. We propose, in this paper, to mutually exploit the advantages of one feature type to reduce the drawbacks of the other one. For this, a hybrid registration framework has been designed to mutually align geometrical and dense visual features in order to obtain …
Evaluating State-Based Intention Recognition Algorithms against Human Performance
2014
In this paper, we describe a novel intention recognition approach based on the representation of state information in a cooperative human-robot environment. We compare the output of the intention recognition algorithms to the results of an experiment involving humans attempting to recognize the same intentions in a manufacturing kitting domain. States are represented by a combination of spatial relationships in a Cartesian frame along with cardinal direction information. Based upon a set of predefined high-level state relationships that must be true for future actions to occur, a robot can use the approaches described in this paper to infer the likelihood of subsequent actions occurring. This wo…
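The precondition-based inference in this abstract (and in the related ontology-based entry below) can be sketched as follows: each action lists the state relationships that must hold before it can occur, and actions are ranked by how many of their preconditions the observed state already satisfies. The kitting predicates and scoring rule here are hypothetical stand-ins for the paper's representation.

```python
# Hypothetical kitting domain: each action lists the high-level state
# relationships that must be true before it can occur.
actions = {
    "grasp_part":   {"part_on_table", "gripper_empty"},
    "place_in_kit": {"part_in_gripper", "kit_open"},
    "close_kit":    {"kit_open", "kit_full"},
}

def likely_next_actions(observed_state):
    """Rank actions by the fraction of preconditions satisfied -- a
    simple stand-in for the likelihood of subsequent actions."""
    scores = {a: len(pre & observed_state) / len(pre)
              for a, pre in actions.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

state = {"part_on_table", "gripper_empty", "kit_open"}
for action, score in likely_next_actions(state):
    print(f"{action}: {score:.2f}")   # grasp_part ranks first (1.00)
```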
On the advantages of combining differential algorithms and log-polar vision for detection of self-motion from a mobile robot
2001
This paper describes the design and implementation on programmable hardware (FPGAs) of an algorithm for the detection of self-mobile objects as seen from a mobile robot. In this context, ‘self-mobile’ refers to those objects that change in the image plane due to their own movement, and not due to the movement of the camera on board the mobile robot. The method consists of adapting the original algorithm from Chen and Nandhakumar [A simple scheme for motion boundary detection, in: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 1994] by using foveal images obtained with a special camera whose optical axis points towards the direction of advance. It i…
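As background for this entry, the most basic form of image-plane motion detection is temporal differencing: flag pixels whose intensity changes between frames beyond a threshold. This is a generic sketch only, not the Chen-Nandhakumar scheme or the paper's FPGA implementation, and it ignores the ego-motion compensation that distinguishing self-mobile objects requires.

```python
def motion_mask(prev, curr, threshold=10):
    """Flag pixels whose intensity change between two frames
    exceeds a threshold (pure-Python, lists of rows)."""
    return [[abs(c - p) > threshold for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

frame_t0 = [[10, 10, 10],
            [10, 10, 10]]
frame_t1 = [[10, 50, 10],   # one pixel brightened: something moved
            [10, 10, 10]]
print(motion_mask(frame_t0, frame_t1))
# -> [[False, True, False], [False, False, False]]
```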
Ontology-based state representations for intention recognition in human–robot collaborative environments
2013
In this paper, we describe a novel approach for representing state information for the purpose of intention recognition in cooperative human-robot environments. States are represented by a combination of spatial relationships in a Cartesian frame along with cardinal direction information. This approach is applied to a manufacturing kitting operation, where humans and robots are working together to develop kits. Based upon a set of predefined high-level state relationships that must be true for future actions to occur, a robot can use the detailed state information described in this paper to infer the probability of subsequent actions occurring. This would allow the robot to better help the …