Search results for "Vision"

Showing 10 of 5,066 documents

Adaptive Neural Control of MIMO Nonstrict-Feedback Nonlinear Systems with Time Delay

2016

In this paper, an adaptive neural output-feedback tracking controller is designed for a class of multiple-input multiple-output (MIMO) nonstrict-feedback nonlinear systems with time delay. Both the system coefficients and the uncertain functions of the considered systems are unknown. By employing neural networks to approximate the unknown function entries and constructing a new input-driven filter, a backstepping-based design method for the tracking controller is developed for the systems under consideration. The proposed controller guarantees that all signals in the closed-loop system are ultimately bounded and that the time-varying target signal is tracked within a small error. The main con…
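The neural-approximation step mentioned in the abstract can be illustrated with a minimal sketch: unknown function entries are commonly modeled as W^T S(x), with S(x) a vector of Gaussian radial basis functions. The centers, width, and sample function below are illustrative assumptions, not the paper's actual design; in the controller itself the weights would be updated online by an adaptation law rather than fitted offline.

```python
import numpy as np

# Hedged sketch: adaptive neural control schemes commonly approximate an
# unknown function entry as f(x) ~= W^T S(x), with S(x) a vector of Gaussian
# radial basis functions. Centers, width, and the sample function below are
# illustrative, not taken from the paper.
centers = np.linspace(-2.0, 2.0, 9)   # RBF centers over the operating range
width = 0.5                           # shared Gaussian width

def basis(x):
    """Gaussian RBF vector S(x)."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# Fit "ideal" weights offline for illustration; a real adaptive controller
# would update W online through an adaptation law.
xs = np.linspace(-2.0, 2.0, 200)
f = xs * np.sin(xs)                   # stand-in for the unknown function
S = np.stack([basis(x) for x in xs])  # (200, 9) design matrix
W, *_ = np.linalg.lstsq(S, f, rcond=None)

max_err = float(np.max(np.abs(S @ W - f)))
print(max_err)                        # small residual over the whole domain
```

The least-squares fit stands in for the "ideal" weight vector whose existence the universal-approximation argument relies on.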

Keywords: adaptive tracking; multiple-input and multiple-output (MIMO); output-feedback controller; neural networks; backstepping; nonlinear control; control theory; adaptive systems; filter (signal processing); control and systems engineering; computer vision and pattern recognition; human-computer interaction; electrical and electronic engineering; software; information systems

Smart sensing and adaptive reasoning for enabling industrial robots with interactive human-robot capabilities in dynamic environments — a case study

2019

Traditional industry is seeing an increasing demand for more autonomous and flexible manufacturing in unstructured settings, a shift away from the fixed, isolated workspaces where robots perform predefined actions repetitively. This work presents a case study in which a robotic manipulator, namely a KUKA KR90 R3100, is provided with smart sensing capabilities such as vision and adaptive reasoning for real-time collision avoidance and online path planning in dynamically changing environments. A machine vision module based on low-cost cameras and color detection in the hue, saturation, value (HSV) space is developed to make the robot aware of its changing environment. Therefore, this vision a…
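The HSV-space color detection described above can be sketched in a few lines. The hue range, saturation/value floors, and the pure-Python pixel loop below are illustrative assumptions rather than the paper's actual pipeline, which works on camera images and would run vectorized.

```python
import colorsys
import numpy as np

# Hedged sketch of HSV-space color detection: threshold pixels by hue (with
# saturation/value floors) to flag a colored object. Thresholds are
# illustrative, not the paper's.
def hsv_mask(rgb_image, h_lo, h_hi, s_min=0.4, v_min=0.2):
    """Boolean mask of pixels whose hue lies in [h_lo, h_hi] (0..1 scale)."""
    height, width, _ = rgb_image.shape
    mask = np.zeros((height, width), dtype=bool)
    for i in range(height):
        for j in range(width):
            r, g, b = rgb_image[i, j] / 255.0
            hh, ss, vv = colorsys.rgb_to_hsv(r, g, b)
            mask[i, j] = (h_lo <= hh <= h_hi) and ss >= s_min and vv >= v_min
    return mask

# Tiny 1x2 test image: one pure-red pixel, one pure-blue pixel.
img = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
m = hsv_mask(img, 0.0, 0.05)   # red hue sits near 0
print(m)                       # → [[ True False]]
```

Working in HSV rather than RGB is what makes the detection comparatively robust to illumination changes, since brightness mostly moves the V channel.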

Keywords: machine vision; smart sensing; adaptive reasoning; human-robot interaction; robot manipulator; robot control; motion planning; path planning; collision avoidance; dynamic environments; workspace; real-time computing; instrumentation

Selective visual odometry for accurate AUV localization

2015

In this paper, we present a stereo visual odometry system developed for autonomous underwater vehicle localization tasks. The main idea is to use only highly reliable data in the estimation process, employing a robust keypoint tracking approach and an effective keyframe selection strategy, so that camera movements are estimated with high accuracy even over long paths. Furthermore, in order to limit drift error, camera pose estimation is referred to the last keyframe, which is selected by analyzing the feature temporal flow. The proposed system was tested on the KITTI evaluation framework and on the New Tsukuba stereo dataset to assess its effectiveness on long tracks and different illumina…
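The keyframe-selection idea can be sketched minimally. The paper's actual criterion analyzes the feature temporal flow, which the abstract does not fully specify; the sketch below uses a common proxy instead, promoting a new keyframe whenever the fraction of features still tracked from the last keyframe drops below a threshold.

```python
# Hedged sketch of keyframe selection in visual odometry, using a
# tracked-feature-ratio proxy (an assumption, not the paper's exact rule).
def select_keyframes(tracked_ratio_per_frame, min_ratio=0.6):
    """Return indices of frames promoted to keyframes (frame 0 always is).

    tracked_ratio_per_frame[i] is the fraction of the last keyframe's
    features still successfully tracked at frame i.
    """
    keyframes = [0]
    for idx, ratio in enumerate(tracked_ratio_per_frame[1:], start=1):
        if ratio < min_ratio:   # too few features survive: re-anchor the pose
            keyframes.append(idx)
    return keyframes

ratios = [1.0, 0.9, 0.75, 0.55, 0.95, 0.58]
print(select_keyframes(ratios))   # → [0, 3, 5]
```

Referring pose estimates to the last keyframe rather than the previous frame is what limits drift accumulation over long paths.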

Keywords: visual odometry; AUV; underwater; stereo camera; keyframe selection; feature matching; RANSAC; pose estimation; odometry; computer vision; artificial intelligence; autonomous robots

Rapid and robust on-site evaluation of articulated arm coordinate measuring machine performance

2018


Keywords: coordinate-measuring machine; on-site performance evaluation; mechanical engineering; computer vision; applied mathematics; instrumentation; optics

2D/3D Object Recognition and Categorization Approaches for Robotic Grasping

2017

Object categorization and manipulation are critical tasks for a robot operating in a household environment. In this paper, we propose new methods for visual recognition and categorization. We describe a 2D object database and 3D point clouds with 2D/3D local descriptors, which we quantize with the k-means clustering algorithm to obtain a Bag of Words (BoW) representation. Moreover, we develop a new global descriptor called VFH-Color that combines the original Viewpoint Feature Histogram (VFH) descriptor with a color quantization histogram, thus adding appearance information that improves the recognition rate. The acquired 2D and 3D features are used for train…
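The Bag-of-Words step described above can be sketched concisely: local descriptors are clustered with k-means into a visual vocabulary, and each object becomes a normalized histogram of word assignments. The descriptor values, dimensionality, and vocabulary size below are synthetic stand-ins, not the paper's settings.

```python
import numpy as np

# Hedged sketch of the BoW pipeline on synthetic descriptors.
rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means: returns (centers, labels)."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):       # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def bow_histogram(descriptors, centers):
    """L1-normalised histogram of visual-word assignments."""
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

X = rng.normal(size=(120, 8))             # stand-in local descriptors
centers, _ = kmeans(X, k=5)               # 5-word vocabulary (illustrative)
h = bow_histogram(X[:30], centers)        # one object's descriptors
print(round(float(h.sum()), 6))           # → 1.0
```

The resulting fixed-length histograms are what get fed to a classifier, regardless of how many local descriptors each object produced.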

Keywords: object recognition; object categorization; robotic grasping; bag-of-words model; color quantization; histogram; deep belief network; cluster analysis; classifier; computer vision; artificial intelligence

Visual tracking with omnidirectional cameras: an efficient approach

2011

An effective technique for applying visual tracking algorithms to omnidirectional image sequences is presented. The method is based on a spherical image representation that takes into account the distortions and nonlinear resolution of omnidirectional images. Experimental results show that both deterministic and probabilistic tracking methods can be effectively adapted to robustly track an object with an omnidirectional camera.
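One common concrete form of such a spherical representation is lifting each pixel of an equirectangular (panoramic) image to a direction on the unit sphere, so tracking operates on directions rather than distorted pixel coordinates. The mapping below is that standard equirectangular lift, offered as an assumption; the paper's exact parameterization may differ.

```python
import math

# Hedged sketch: lift an equirectangular pixel (u, v) in a WxH image to a
# 3D unit direction, so distortions are handled uniformly on the sphere.
def pixel_to_sphere(u, v, width, height):
    theta = 2.0 * math.pi * (u / width) - math.pi   # longitude in (-pi, pi]
    phi = math.pi * (v / height) - math.pi / 2.0    # latitude in [-pi/2, pi/2]
    x = math.cos(phi) * math.cos(theta)
    y = math.cos(phi) * math.sin(theta)
    z = math.sin(phi)
    return (x, y, z)

d = pixel_to_sphere(320, 240, 640, 480)
print(d)   # → (1.0, 0.0, 0.0): the image centre maps to the forward axis
```

On the sphere, neighborhood sizes and distances are uniform, which is what lets both deterministic and probabilistic trackers run unmodified apart from the change of domain.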

Keywords: visual tracking; omnidirectional camera; spherical image representation; optics; robotics; computer vision; artificial intelligence; electrical and electronic engineering

Real-time human collision detection for industrial robot cells

2017

A collision detection system triggered by human motion was developed using the Robot Operating System (ROS) and the Point Cloud Library (PCL). ROS served as the core of the programs and handled communication with an industrial robot. The depth fields from the 3D cameras were combined using PCL, which was also the underlying tool for segmenting the human from the registered point clouds. Several collision algorithms were benchmarked in order to compare the solution. The registration process gave satisfactory results when testing the repeatability and accuracy of the implementation. The segmentation algorithm was able to segment a person represente…
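The final collision-trigger step can be sketched minimally: once the human is segmented from the registered cloud, the minimum point-to-robot distance is compared against a safety threshold. The point values, tool-center-point name, and 0.5 m threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch of a distance-based collision trigger on a segmented
# human point cluster. Threshold and coordinates are illustrative.
def min_distance(human_points, robot_point):
    """Smallest Euclidean distance (m) from any human point to the robot."""
    return float(np.min(np.linalg.norm(human_points - robot_point, axis=1)))

def collision_alert(human_points, robot_point, safety_m=0.5):
    """True when the human enters the safety envelope around the robot."""
    return min_distance(human_points, robot_point) < safety_m

human = np.array([[1.0, 0.0, 1.2],    # segmented human cluster (metres)
                  [0.4, 0.1, 1.0]])
tcp = np.array([0.0, 0.0, 1.0])       # assumed robot tool-centre position
print(collision_alert(human, tcp))    # → True (closest point ≈ 0.41 m away)
```

In a real cell this check would run against the whole robot geometry per control cycle, not a single point, which is where the benchmarked collision algorithms come in.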

Keywords: collision detection; collision avoidance; industrial robot; point cloud; segmentation; benchmarking; computer vision
Published in: 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)

3D Point Cloud Descriptor for Posture Recognition

2018


Keywords: posture recognition; point cloud; 3D descriptor; image processing; computer vision; artificial intelligence

Quality Assessment of Reconstruction and Relighting from RTI Images: Application to Manufactured Surfaces

2019

In this paper, we propose to evaluate the quality of reconstruction and relighting from images acquired by a Reflectance Transformation Imaging (RTI) device. Three relighting models, namely PTM, HSH and DMD, are evaluated using PSNR and SSIM. A visual assessment of how the reconstructed surfaces are perceived is also carried out through a sensory experiment. This study allows us to estimate the relevance of these models for reproducing the appearance of manufactured surfaces. It also shows that DMD produces the reconstruction/relighting closest to the acquired measurement, and that a higher sampling density does not necessarily imply a higher perceptual quality.
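Of the two objective metrics named above, PSNR is simple enough to sketch directly; SSIM needs a windowed, luminance/contrast/structure computation and is omitted here. The tiny test image is synthetic, and only the standard PSNR formula is assumed.

```python
import numpy as np

# Hedged sketch of the PSNR part of the evaluation: a relit image is
# compared against the acquired reference. Images here are synthetic.
def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")    # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100, dtype=np.uint8)   # flat reference patch
relit = ref.copy()
relit[0, 0] = 110                            # one perturbed pixel
print(round(psnr(ref, relit), 2))            # → 40.17
```

PSNR measures pixel-wise fidelity only, which is exactly why the paper complements it with SSIM and a sensory experiment: a higher PSNR need not match perceived quality.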

Keywords: quality assessment; surface reconstruction; relighting; polynomial texture mapping; visual assessment; sampling density; computer vision
Published in: 2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)

Static and Dynamic Objects Analysis as a 3D Vector Field

2017

In the context of scene modelling, understanding, and landmark-based robot navigation, the knowledge of static scene parts and moving objects with their motion behaviours plays a vital role. We present a complete framework to detect and extract the moving objects to reconstruct a high quality static map. For a moving 3D camera setup, we propose a novel 3D Flow Field Analysis approach which accurately detects the moving objects using only 3D point cloud information. Further, we introduce a Sparse Flow Clustering approach to effectively and robustly group the motion flow vectors. Experiments show that the proposed Flow Field Analysis algorithm and Sparse Flow Clusterin…
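The abstract does not spell out how Sparse Flow Clustering groups the motion flow vectors, so the sketch below substitutes a simple greedy pass that merges vectors whose directions agree within a cosine-similarity threshold; treat it as an assumed stand-in, not the paper's algorithm.

```python
import numpy as np

# Hedged sketch: greedily group 3D motion flow vectors by direction.
# Threshold and the clustering rule itself are assumptions.
def cluster_flows(flows, cos_thresh=0.95):
    """Return a cluster label per flow vector."""
    clusters = []   # list of (mean_direction, member_vectors)
    labels = []
    for f in flows:
        d = f / np.linalg.norm(f)
        for idx, (mean_dir, members) in enumerate(clusters):
            if float(d @ mean_dir) >= cos_thresh:   # directions agree
                members.append(f)
                new_mean = np.mean(members, axis=0)
                clusters[idx] = (new_mean / np.linalg.norm(new_mean), members)
                labels.append(idx)
                break
        else:
            clusters.append((d, [f]))               # start a new cluster
            labels.append(len(clusters) - 1)
    return labels

flows = np.array([[1.0, 0.0, 0.0],
                  [0.99, 0.05, 0.0],    # nearly parallel to the first
                  [0.0, 1.0, 0.0]])     # a different motion
print(cluster_flows(flows))             # → [0, 0, 1]
```

Grouping coherent flow vectors this way separates independently moving objects from the dominant (camera-induced) flow, after which the static points can be kept for the map.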

Keywords: motion detection; motion estimation; point cloud; flow field analysis; segmentation; cluster analysis; robotics; computer vision and pattern recognition
Published in: 2017 International Conference on 3D Vision (3DV)