Search results for "Robot"

Showing 10 of 1,036 documents

Localization Based on Parallel Robots Kinematics as an Alternative to Trilateration

2022

In this article, a new scheme for range-based localization is proposed. The goal is to estimate the position of a mobile point from distance measurements to fixed devices, called anchors, and from inertial measurements. Because the problem is nonlinear, no analytic relation exists to compute the position from these measurements, and trilateration methods, generally based on least-squares algorithms, are often used instead. The proposed scheme models the localization process as a parallel robot, so that methodologies and control algorithms from the robotics field can be exploited. In particular, a closed-loop control system is designed for trac…
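
The least-squares trilateration baseline that the abstract contrasts with can be sketched as follows. This is an illustrative linearization under assumed noiseless ranges, not the paper's parallel-robot scheme; all names are hypothetical.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position from anchor positions and range measurements.

    Linearizes |x - a_i|^2 = d_i^2 by subtracting the first equation,
    leaving the linear system
        2 (a_i - a_1) . x = |a_i|^2 - |a_1|^2 - (d_i^2 - d_1^2).
    """
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (a[1:] - a[0])
    b = np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2) - (d[1:] ** 2 - d[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four anchors and noiseless ranges to the point (1.0, 2.0, 0.5)
anchors = [(0, 0, 0), (5, 0, 0), (0, 5, 0), (0, 0, 5)]
target = np.array([1.0, 2.0, 0.5])
dists = [np.linalg.norm(target - np.asarray(p)) for p in anchors]
estimate = trilaterate(anchors, dists)
```

With four non-coplanar anchors the linearized system is fully determined, so the noiseless ranges recover the target position exactly.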

Computer science, Parallel manipulator, Accelerometers, Estimation error, Kinematics, Localization, Location awareness, Parallel robots, Position measurement, Range-based measurements, Robot kinematics, Ultra-wide band devices, Tracking error, Exponential stability, Rate of convergence, Settore ING-INF/04 - Automatica, Control and Systems Engineering, Position (vector), Control system, Electrical and Electronic Engineering, Algorithm, Trilateration

Three-dimensional Cross-Platform Planning for Complex Spinal Procedures

2017

STUDY DESIGN A feasibility study. OBJECTIVE To develop a method based on the DICOM standard that transfers complex three-dimensional (3D) trajectories and objects from external planning software to any navigation system for the planning and intraoperative guidance of complex spinal procedures. SUMMARY OF BACKGROUND DATA There have been many reports about navigation systems with embedded planning solutions, but only a few on how to transfer planning data generated in external software. MATERIALS AND METHODS Patients' computed tomography and/or magnetic resonance volume data sets of the affected spinal segments were imported into Amira software, reconstructed into 3D images, and fused with magnetic reson…

Computer science, Patient care planning, DICOM, Imaging (three-dimensional), Clinical medicine, Software, Virtual patient, Cross-platform, Humans, Orthopedic procedures, Orthopedics and Sports Medicine, Computer vision, Navigation system, Prostheses and implants, Robotics, Spine, Visualization, Oncology & carcinogenesis, Surgery, Neurology (clinical), Artificial intelligence, Tomography, Guidance system, Neurology & neurosurgery
Published in: Clinical Spine Surgery: A Spine Publication

Embedded Processing and Compression of 3D Sensor Data for Large Scale Industrial Environments

2019

This paper presents a scalable embedded solution for processing and transferring 3D point cloud data. Sensors based on the time-of-flight principle generate data which are processed on a local embedded computer and compressed using an octree-based scheme. The compressed data are transferred to a central node, where the individual point clouds from several nodes are decompressed and filtered based on a novel method for generating intensity values for sensors which do not natively produce such a value. The paper presents experimental results from a relatively large industrial robot cell with an approximate size of 10 m × …
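
The core idea of octree-based point cloud compression can be illustrated with a minimal fixed-depth sketch: quantizing points to leaf cells collapses duplicates, which is where the savings come from for dense time-of-flight data. This is a generic illustration with hypothetical names, not the paper's actual scheme.

```python
import numpy as np

def octree_encode(points, origin, size, depth):
    """Quantize points into sorted, deduplicated leaf keys of a fixed-depth octree.

    Points falling into the same leaf collapse to a single key.
    """
    cells = 1 << depth
    idx = np.floor((np.asarray(points, float) - origin) / size * cells).astype(np.int64)
    idx = np.clip(idx, 0, cells - 1)
    # Pack the three cell indices into one integer key (depth bits each)
    keys = (idx[:, 0] << (2 * depth)) | (idx[:, 1] << depth) | idx[:, 2]
    return np.unique(keys)

def octree_decode(keys, origin, size, depth):
    """Recover the leaf-centre coordinates encoded by octree_encode."""
    cells = 1 << depth
    mask = cells - 1
    idx = np.stack([(keys >> (2 * depth)) & mask,
                    (keys >> depth) & mask,
                    keys & mask], axis=1)
    return origin + (idx + 0.5) * (size / cells)

# 1000 random points in a 10 m cube, encoded at depth 8 (about 3.9 cm cells)
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(1000, 3))
keys = octree_encode(pts, np.zeros(3), 10.0, 8)
centres = octree_decode(keys, np.zeros(3), 10.0, 8)
```

Decoding yields one representative point per occupied leaf, so the round trip is lossy but stable: re-encoding the decoded centres reproduces the same key set.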

Computer science, Point clouds, Time-of-flight, Biochemistry, Analytical Chemistry, Computational science, Industrial robot, Octree, Denoising, Electrical and Electronic Engineering, Instrumentation, Lidar, Scalability, Local area network, Networking & telecommunications, Software engineering, 3D sensors, Compression, Atomic and Molecular Physics and Optics
Published in: Sensors (Basel, Switzerland)

Incomplete 3D motion trajectory segmentation and 2D-to-3D label transfer for dynamic scene analysis

2017

Knowledge of the static scene parts and the moving objects in a dynamic scene plays a vital role in scene modelling, understanding, and landmark-based robot navigation. The key information for these tasks lies in the semantic labels of the scene parts and the motion trajectories of the dynamic objects. In this work, we propose a method that segments 3D feature trajectories based on their motion behaviours and assigns them semantic labels using 2D-to-3D label transfer. These feature trajectories are constructed with the proposed trajectory recovery algorithm, which takes the loss of feature tracking into account. We introduce a complete framework for static-m…

Computer science, Scene understanding, Computer vision, Motion (physics), Robotics, Segmentation, Motion segmentation, 3D reconstruction, 2D-to-3D conversion, Feature (computer vision), Trajectory, Robot, Artificial intelligence
Published in: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Uncalibrated Reconstruction: An Adaptation to Structured Light Vision

2003

Euclidean reconstruction from two uncalibrated stereoscopic views is achievable given geometrical constraints on the environment. Unfortunately, these constraints may be quite difficult to obtain. In this paper, we propose an approach based on structured lighting, which has the advantage of providing geometrical constraints independent of the scene geometry. Moreover, the use of structured light yields a unique solution to the tricky correspondence problem of stereo vision. The projection matrices are first computed using a canonical representation, and a projective reconstruction is performed. Then, several constraints are generated from the image an…

Computer science, Stereoscopy, Projection (mathematics), Artificial intelligence, Euclidean geometry, Computer vision, Correspondence problem, Mobile robot navigation, Signal processing, Computer Vision and Pattern Recognition, Affine transformation, Software, Structured light

Mediated learning materials: visibility checks in telepresence robot mediated classroom interaction

2021

Videoconferencing is increasingly used in education as a way to support distance learning. This article contributes to the emerging interactional literature on video-mediated educational interaction by exploring how a telepresence robot is used to facilitate remote participation in university-level foreign language teaching. A telepresence robot differs from commonly used videoconferencing set-ups in that it allows mobility and remote camera control. A remote student can thus move a classroom-based robot from a distance in order to shift attention between people, objects and environmental structures during classroom activities. Using multimodal conversation analysis, we focus on how partici…

Computer science, Teaching method, Distance education, Computer-assisted instruction, Classroom interaction, Education, Multimodality, Videoconferencing, Human–computer interaction, Remote participation, Learning materials, Telerobotics, Language teaching, Visibility (geometry), Educational technology, Video-mediated interaction, Telepresence robot, Computer-assisted language learning, Robots, Educational sciences

Dynamic Augmented Kalman Filtering for Human Motion Tracking under Occlusion Using Multiple 3D Sensors

2020

In this paper, real-time human motion tracking using multiple 3D sensors is demonstrated in a relatively large industrial robot work cell. The proposed solution extends the state of the art by augmenting the constant-velocity model and Kalman filter with low-pass-filtered velocity states. The presented method handles occlusions by dynamically including in the Kalman filter only those 3D sensors that provide valid human position data. Human motion tracking was achieved at a frame rate of 20 Hz, with a typical delay of 50 ms to 100 ms and an estimation accuracy of typically 0.10 m to 0.15 m.
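
The dynamic-inclusion idea can be illustrated with a minimal one-axis constant-velocity Kalman filter that simply skips occluded sensors during the update. This is a generic sketch with hypothetical names and parameters; it omits the paper's low-pass-filtered velocity states.

```python
import numpy as np

def kf_step(x, P, z_list, dt, q=1.0, r=0.05 ** 2):
    """One predict/update cycle of a 1-axis constant-velocity Kalman filter.

    x = [position, velocity]; z_list holds position readings from several
    3D sensors, with None marking an occluded sensor. Only valid readings
    enter the update, mirroring the dynamic-inclusion idea.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])          # white-acceleration noise
    H = np.array([[1.0, 0.0]])                     # sensors measure position only
    x = F @ x                                      # predict
    P = F @ P @ F.T + Q
    for z in z_list:
        if z is None:                              # occluded sensor: no update
            continue
        S = H @ P @ H.T + r                        # innovation covariance
        K = P @ H.T / S                            # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P
```

For example, tracking a target moving at 1 m/s at 20 Hz, with the second of two sensors intermittently occluded, the estimate still follows the true position.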

Computer science, Kalman filter, 3D sensor, Tracking, Human motion, Frame rate, Industrial robot, Position (vector), Occlusion, Computer vision, Artificial intelligence
Published in: 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA)

Visual Marker Guided Point Cloud Registration in a Large Multi-Sensor Industrial Robot Cell

2018

This paper presents a benchmark and accuracy analysis of 3D sensor calibration in a large industrial robot cell. The sensors used were Kinect v2 devices, which contain both an RGB camera and an IR camera measuring depth based on the time-of-flight principle. The approach combines Aruco visual markers with region-of-interest and iterative-closest-point methods in a novel procedure. The sensors are calibrated pairwise, exploiting the fact that time-of-flight sensors can have some overlap in the generated point cloud data. For a volume measuring 10 m × 14 m × 5 m, a typical point cloud accuracy of 5–10 cm was achieved using six sensor nodes.
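
The inner alignment step that iterative closest point repeats after each re-pairing of closest points can be illustrated with the standard SVD (Kabsch) solution for known correspondences. A generic sketch, not the paper's calibration pipeline.

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t with dst ≈ R @ src + t.

    Assumes known point correspondences; this SVD (Kabsch) solve is the
    step that ICP repeats after re-pairing closest points.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Given a point set and a rotated-and-translated copy of it, the function recovers the applied transform exactly; in real ICP the correspondences are only approximate, so the solve is iterated.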

Computer science, Point cloud, Iterative closest point, Cloud computing, Visualization, Industrial robot, Benchmark (computing), Calibration, RGB color model, Computer vision, Artificial intelligence
Published in: 2018 14th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA)

Smart Manufacturing Testbed for the Advancement of Wireless Adoption in the Factory

2020

Wireless communication is a key enabling technology central to the advancement of the goals of the Industry 4.0 smart manufacturing concept. Researchers at the National Institute of Standards and Technology are constructing a testbed to aid the adoption of wireless technology within the factory workcell and other harsh industrial radio environments. In this paper, the authors present a new industrial wireless testbed design that motivates academic research and is relevant to the needs of industry. The testbed is designed to serve as both a demonstration and a research platform for the wireless workcell. The work leverages lessons learned from past testbed incarnations that included a dual r…

Computer science, Testbed, Radio propagation, Systems engineering, Factory, Robot, Wireless, Workcell, Robotic arm

Bio-Inspired Polarization Vision Techniques for Robotics Applications

2015

Researchers have been inspired by nature to build the next generation of smart robots. Based on mechanisms adopted by the animal kingdom, research teams have developed solutions to common problems that autonomous robots face while performing basic tasks. Polarization-based behaviour is one of the most distinctive features of some species of the animal kingdom. Light-polarization parameters significantly expand the visual capabilities of autonomous robots. Polarization vision can be used for most tasks of color vision, such as object recognition, contrast enhancement, camouflage breaking, and signal detection and discrimination. In this chapter, the authors briefly cover polarization-based vis…

Computer science, Computer vision, Robotics, Artificial intelligence, Polarization (waves)