Search results for "VISION"
Showing 10 of 5066 documents
Vertex Distinguishing Edge- and Total-Colorings of Cartesian and other Product Graphs
2012
This paper studies edge- and total-colorings of graphs in which (all or only adjacent) vertices are distinguished by their sets of colors. We provide bounds for the minimum number of colors needed for such colorings for the Cartesian product of graphs along with exact results for generalized hypercubes. We also present general bounds for the direct, strong and lexicographic products.
Multiple Structured Light-Based Depth Sensors for Human Motion Analysis: A Review
2012
Human motion analysis is an increasingly important and active research domain with various applications in surveillance, human-machine interaction and human posture analysis. The recent developments in depth sensor technology, especially with the release of the Kinect device, have attracted significant attention to the question of how to take advantage of this technology in order to achieve accurate motion tracking and action detection in marker-less approaches. In this paper, we review the benefits and limitations deriving from the adoption of structured light-based depth sensors in human motion analysis applications. Surveying the relevant literature, we have identified in calibration, interf…
Evolutionary-based 3D reconstruction using an uncalibrated stereovision system: application of building a panoramic object view
2010
In this paper, we propose an original evolutionary-based method for 3D panoramic reconstruction from an uncalibrated stereovision system (USS). The USS is composed of five cameras located on an arc of a circle around the object to be analyzed. The main originality of this work concerns the process of computing the 3D information: with our method, 3D coordinates are obtained directly, without any prior estimation of the fundamental matrix. The method operates in two steps. First, points of interest detected in pairs of images acquired by two consecutive cameras of the USS are matched. Second, using evolutionary algorithms, we jointly compute the transformed matr…
Deep Reinforcement Learning with Omnidirectional Images: application to UAV Navigation in Forests
2022
Deep Reinforcement Learning (DRL) is highly efficient for solving complex tasks such as drone obstacle avoidance using cameras. However, these methods are often limited by the camera's perception capabilities. In this paper, we demonstrate that point-goal navigation performance can be improved by using cameras with a wider Field-Of-View (FOV). To this end, we present a DRL solution based on equirectangular images and demonstrate its relevance, especially compared to its perspective counterpart. Several visual modalities are compared: ground-truth depth, RGB, and depth directly estimated from these 360° RGB images using Deep Learning methods. Next, we propose a spherical adaptation to take into …
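As a minimal illustration of the equirectangular representation this abstract relies on (a generic sketch, not the paper's code), a pixel of a W×H panorama can be mapped to a unit viewing ray on the sphere; the longitude range and axis conventions below are illustrative assumptions:

```python
import math

# Hypothetical sketch: convert an equirectangular pixel (u, v) of a
# W x H panorama into a unit direction on the sphere. Longitude spans
# [-pi, pi) across image columns; latitude spans [pi/2, -pi/2] down rows.
def pixel_to_ray(u, v, W, H):
    lon = (u / W) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / H) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The image center maps to the forward direction (0, 0, 1).
```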
On Keyframe Positioning for Pose Graphs Applied to Visual SLAM
2013
In this work, a new method is introduced for localization and keyframe identification to solve a Simultaneous Localization and Mapping (SLAM) problem. The proposed approach is based on a dense spherical acquisition system that synthesizes spherical intensity and depth images at arbitrary locations. The images are related by a graph of 6 degrees-of-freedom (DOF) poses, which are estimated through spherical registration. A direct image-based method is provided to estimate pose by using both depth and color information simultaneously. A new keyframe identification method is proposed to build the map of the environment by using the covariance matrix between relative 6 DOF…
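The covariance-based keyframe idea mentioned in this abstract can be sketched generically (this is not the paper's criterion; the trace test and threshold are illustrative assumptions): a new keyframe is created once the uncertainty of the relative 6-DOF pose grows too large.

```python
# Hypothetical sketch: trigger a new keyframe when the uncertainty of
# the relative 6-DOF pose, summarized here by the trace of its 6x6
# covariance matrix, exceeds a threshold. Names are illustrative.
def needs_new_keyframe(covariance, threshold=0.05):
    """covariance: 6x6 matrix (list of lists) of the relative pose."""
    trace = sum(covariance[i][i] for i in range(6))
    return trace > threshold

def diag6(value):
    """Helper: 6x6 diagonal matrix with `value` on the diagonal."""
    return [[value if i == j else 0.0 for j in range(6)] for i in range(6)]
```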
Time to Contact Estimation on Paracatadioptric Cameras
2012
Time to contact or time to collision (TTC) is the time available to a robot before reaching an object. In this paper, we propose to estimate this time using a catadioptric camera embedded on the robot. Indeed, whereas many works have shown the utility of this kind of camera in robotic applications (monitoring, localisation, motion, ...), few works address the problem of time-to-contact estimation with it. Thus, in this paper, we propose a new method to define and estimate the TTC on catadioptric cameras. The method is validated on simulated and real data.
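For intuition, the classical perspective-camera definition of TTC (not the paper's catadioptric formulation) can be computed from the expansion rate of an object's apparent size between frames:

```python
# Hypothetical sketch: classical pinhole TTC approximation,
# TTC ~ s / (ds/dt), where s is the object's apparent size in pixels.
# This is the textbook formulation, not the catadioptric method above.
def time_to_contact(size_prev, size_curr, dt):
    expansion_rate = (size_curr - size_prev) / dt
    if expansion_rate <= 0:
        return float("inf")  # object not approaching
    return size_curr / expansion_rate

# An object growing from 100 px to 110 px over 0.1 s gives
# TTC = 110 / (10 / 0.1) = 1.1 s.
```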
Performance evaluation of Wireless Sensor Networks based on ZigBee technology in smart home
2013
Wireless Sensor Networks (WSNs) have diverse application domains such as smart homes, smart care, and industry. In this paper, we present a WSN system based on the ZigBee technology (IEEE 802.15.4) in a smart home. To ensure good communication in a sensor network deployed in a smart home, studying the operating performance of this network is important. In this work, we investigate the performance of our ZigBee sensor network. The performance study is based on measurements of the Received Signal Strength Indicator (RSSI) in different parts of the home. We also discuss the impact of electromagnetic noise on the communication performance of a ZigBee Sensor Network in…
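As background for interpreting such RSSI measurements (a standard model, not taken from the paper), the log-distance path-loss model predicts how RSSI decays with distance; the reference power, reference distance, and path-loss exponent below are illustrative assumptions:

```python
import math

# Hypothetical sketch: log-distance path-loss model,
# RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0),
# with rssi_d0 the RSSI (dBm) measured at reference distance d0 (m)
# and n the path-loss exponent (~2 in free space, higher indoors).
def rssi_at(d, rssi_d0=-40.0, d0=1.0, n=2.0):
    return rssi_d0 - 10.0 * n * math.log10(d / d0)

# With these defaults, moving from 1 m to 10 m drops the RSSI
# from -40 dBm to -60 dBm.
```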
Stratified Autocalibration of Cameras with Euclidean Image Plane
2020
This paper tackles the problem of stratified autocalibration of a moving camera with Euclidean image plane (i.e. zero skew and unit aspect ratio) and constant intrinsic parameters. We show that with these assumptions, in addition to the polynomial derived from the so-called modulus constraint, each image pair provides a new quartic polynomial in the unknown plane at infinity. For three or more images, the plane at infinity estimation is stated as a constrained polynomial optimization problem that can efficiently be solved using Lasserre's hierarchy of semidefinite relaxations. The calibration parameters and thus a metric reconstruction are subsequently obtained by so…
Perspective-n-Learned-Point: Pose Estimation from Relative Depth
2019
In this paper, we present an online camera pose estimation method that combines Content-Based Image Retrieval (CBIR) and pose refinement based on a learned representation of the scene geometry extracted from monocular images. Our pose estimation method is two-step: we first retrieve an initial 6 Degrees of Freedom (DoF) location of an unknown-pose query by retrieving the most similar candidate in a pool of geo-referenced images. In a second step, we refine the query pose with a Perspective-n-Point (PnP) algorithm, where the 3D points are obtained from a depth map generated from the retrieved image candidate. We make our method fast and lightweight by using a commo…
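The depth-to-3D step that feeds the PnP stage can be sketched generically (standard pinhole back-projection, not the paper's implementation; parameter names are illustrative): each pixel of the generated depth map is lifted to a 3D point through the camera intrinsics.

```python
# Hypothetical sketch: back-project a pixel (u, v) with depth d through
# pinhole intrinsics (fx, fy: focal lengths; cx, cy: principal point)
# to obtain the 3D point used as a PnP correspondence.
def backproject(u, v, d, fx, fy, cx, cy):
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    return (x, y, d)

# The principal point back-projects onto the optical axis:
# backproject(cx, cy, d, ...) == (0, 0, d).
```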
A multicriteria approach for weed characterization by imaging
2019
Reducing the use of plant protection products is one of the major challenges facing the agricultural sector. The French government plans Ecophyto, Ecophyto II and Ecophyto II+ aim to sharply reduce their use, and current solutions do not achieve the expected results. Weed detection by imaging is one of the lines of work intended to enable this reduction. The quality of crop/weed discrimination is strongly tied to the type of methods used, the spatial resolution of the images, and the growth stage of the plants present. The objective of this work is therefore to evaluate the impact of the different criteria that can be extracted from images acquired …