Search results for "Sensor fusion"
Showing 10 of 64 documents
Enabling Technologies on Hybrid Camera Networks for Behavioral Analysis of Unattended Indoor Environments and Their Surroundings
2008
This paper presents a layered network architecture and the enabling technologies for accomplishing vision-based behavioral analysis of unattended environments. Specifically, the vision network covers both the attended environment and its surroundings by means of multi-modal cameras. The layer overlooking the surroundings is deployed outdoors and tracks people, monitoring entrance/exit points. It recovers the geometry of the site under surveillance and communicates people's positions to a higher-level layer. The layer monitoring the unattended environment undertakes similar goals, with the addition of maintaining a global mosaic of the observed scene for further understanding. Moreover, it merges …
Improvement of multimodal images classification based on DSMT using visual saliency model fusion with SVM
2019
Multimodal images carry information that can be complementary or redundant and, when modeled and combined, overcomes the various problems attached to the unimodal classification task. Although this classification gives acceptable results, it still does not reach the level of the human visual perception model, which can easily classify an observed scene thanks to the powerful mechanisms of the human brain.
In order to improve the classification task in the multimodal image area, we propose a methodology based on Dezert-Smarandache Theory (DSmT), allowing the fusion of the combined spectral and dense SURF features extracted …
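The abstract does not detail the DSmT combination rule it uses, but the core idea of evidence fusion can be illustrated with the simpler classic Dempster-Shafer rule that DSmT generalizes. The sketch below is a minimal, hypothetical example: the two mass functions, the class names, and their values are invented for illustration.

```python
from itertools import product

def ds_combine(m1, m2):
    """Dempster's rule of combination for two basic belief assignments.

    Each mass function maps frozenset hypotheses -> mass. The classic
    Dempster-Shafer rule is shown here; DSmT generalizes it to better
    handle highly conflicting and overlapping hypotheses.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to the empty set
    k = 1.0 - conflict                   # normalize by non-conflicting mass
    return {h: v / k for h, v in combined.items()}

# Hypothetical beliefs from two feature modalities over two classes
m_spectral = {frozenset({"building"}): 0.7,
              frozenset({"building", "vegetation"}): 0.3}
m_texture = {frozenset({"building"}): 0.6,
             frozenset({"vegetation"}): 0.2,
             frozenset({"building", "vegetation"}): 0.2}
fused = ds_combine(m_spectral, m_texture)
```

After combination, the fused masses again sum to one, and agreement between the two modalities sharpens the belief in the shared hypothesis.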
Fusing optical and SAR time series for LAI gap filling with multioutput Gaussian processes
2019
The availability of satellite optical information is often hampered by the natural presence of clouds, which can be problematic for many applications. Persistent clouds over agricultural fields can mask key stages of crop growth, leading to unreliable yield predictions. Synthetic Aperture Radar (SAR) provides all-weather imagery which can potentially overcome this limitation, but given its high and distinct sensitivity to different surface properties, the fusion of SAR and optical data still remains an open challenge. In this work, we propose the use of Multi-Output Gaussian Process (MOGP) regression, a machine learning technique that automatically learns the statistical relationships among…
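The multi-output formulation in the paper is more involved, but the gap-filling mechanism can be sketched with a plain single-output GP posterior mean: observations on cloud-free days constrain a smooth kernel, which then interpolates the cloudy gap. All signal parameters below (kernel length scale, noise level, the synthetic LAI-like curve, the cloud mask) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf(x1, x2, length=20.0):
    """Squared-exponential (RBF) kernel between two 1-D time vectors."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_fill(t_obs, y_obs, t_query, noise=0.05):
    """Standard GP posterior mean: k(q, o) @ (K + s^2 I)^-1 y."""
    K = rbf(t_obs, t_obs) + noise ** 2 * np.eye(len(t_obs))
    alpha = np.linalg.solve(K, y_obs)
    return rbf(t_query, t_obs) @ alpha

t_all = np.arange(0.0, 100.0)
truth = 3.0 + 2.0 * np.sin(2 * np.pi * t_all / 100.0)  # LAI-like seasonal curve
obs = np.r_[0:40, 60:100]                              # days 40-59 masked by clouds
mean = truth[obs].mean()
lai_filled = mean + gp_fill(t_all[obs], truth[obs] - mean, t_all)
```

In the MOGP setting, the SAR time series would enter as a second correlated output, so the gap in the optical output is constrained by radar observations inside the gap as well, not only by the temporal smoothness used here.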
Sensor Fusion Combining 3-D and 2-D for Depth Data Enhancement
2012
Time-of-Flight (ToF) cameras are known to be cost-efficient 3-D sensing systems capable of providing full scene depth information at a high frame rate. Among many other advantages, ToF cameras are able to provide distance information regardless of the illumination conditions and with no texture dependency, which makes them very suitable for computer vision and robotic applications where reliable distance measurements are required. However, the resolution of the given depth maps is far below the resolution given by standard 2-D video cameras which, indeed, restricts the use of ToF cameras in real applications such as those for safety and surveillance. In this thesis, we therefore investigate…
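A common way to fuse low-resolution ToF depth with a high-resolution 2-D image, in the spirit of the problem this thesis studies, is joint bilateral upsampling: each high-resolution pixel averages nearby coarse depth samples, weighted by both spatial distance and guidance-image similarity, so depth edges snap to intensity edges. The following is a minimal sketch under assumed aligned sensors and invented filter parameters, not the thesis's actual method.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide, scale, sigma_s=1.5, sigma_r=0.1, radius=2):
    """Upsample a low-res depth map using a high-res guidance image.

    Each high-res pixel averages nearby low-res depth samples, weighted
    by spatial proximity and by guidance-intensity similarity.
    """
    H, W = guide.shape
    h_lr, w_lr = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale        # position in low-res grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ys, xs = int(round(yl)) + dy, int(round(xl)) + dx
                    if 0 <= ys < h_lr and 0 <= xs < w_lr:
                        # guidance intensity at the matching high-res pixel
                        gy = min(int(ys * scale), H - 1)
                        gx = min(int(xs * scale), W - 1)
                        w_sp = np.exp(-((ys - yl) ** 2 + (xs - xl) ** 2) / (2 * sigma_s ** 2))
                        w_rg = np.exp(-(guide[y, x] - guide[gy, gx]) ** 2 / (2 * sigma_r ** 2))
                        num += w_sp * w_rg * depth_lr[ys, xs]
                        den += w_sp * w_rg
            out[y, x] = num / den if den > 0 else depth_lr[min(int(yl), h_lr - 1),
                                                           min(int(xl), w_lr - 1)]
    return out

# Toy scene: a vertical intensity edge with a matching depth step
guide = np.zeros((16, 16))
guide[:, 8:] = 1.0
depth_true = np.where(guide > 0.5, 2.0, 1.0)
depth_lr = depth_true[::4, ::4]          # coarse 4x4 ToF-style depth map
up = joint_bilateral_upsample(depth_lr, guide, 4)
```

Because the range weight suppresses depth samples from across the intensity edge, the upsampled map keeps a sharp step exactly at the guidance edge instead of blurring it.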
Global Long-Term Brightness Temperature Record from L-Band SMOS and Smap Observations
2021
Passive microwave remote sensing observations at L-band provide key and global information on surface soil moisture (SM) and vegetation optical depth (VOD), which are related to the Earth water and carbon cycles. Only two spaceborne L-band sensors are currently operating: SMOS, launched at the end of 2009 and thus now providing an 11-year global dataset, and SMAP, launched at the beginning of 2015. To ensure SM and L-VOD data continuity in the event of failure of one of the spaceborne SMOS or SMAP sensors, we developed a consistent brightness temperature (TB) record by first producing consistent 40° SMOS and SMAP TB estimates based on SMOS-IC and SMAP enhanced data, respectively, and then fusing them via linear f…
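The abstract's truncated final step, fusing the two records via a linear fit, can be sketched as an inter-calibration over the overlap period: fit a gain and offset mapping one sensor's TB onto the other's basis, then use the rescaled series to extend the record. The simulated series, bias values, and noise level below are all invented for illustration.

```python
import numpy as np

# Simulated overlapping brightness-temperature series (K) from two sensors,
# where sensor B carries a small gain/offset bias relative to sensor A
rng = np.random.default_rng(0)
tb_a = 250 + 20 * rng.random(500)                    # reference sensor record
tb_b = 1.02 * tb_a - 3.0 + rng.normal(0, 0.3, 500)   # biased second sensor

# Least-squares linear mapping B -> A fitted over the overlap period
gain, offset = np.polyfit(tb_b, tb_a, 1)
tb_b_cal = gain * tb_b + offset                      # rescaled to A's basis
```

After calibration, the residual between the two sensors is reduced to the noise level, so the rescaled series can continue the TB record if the reference sensor fails.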
Cutting Conditions and Work Material State Identification through Acoustic Emission Methods
1992
Summary This paper examines the problem of in-process monitoring of metal cutting operations carried out on aluminum alloys for aeronautical applications. Turning tests were conducted on annealed and heat-treated aluminum alloy bars, using carbide tools. For both work material conditions, different combinations of cutting parameters were used. During cutting tests, acoustic emission (AE) and cutting force sensor data were detected and processed. The comparison between AE responses from the annealed and heat-treated aluminum alloys allowed, with the help of force sensor data, verification of the applicability of AE and sensor fusion techniques for in-process and real-time identification of work ma…
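A simple form of the sensor fusion described here is feature-level fusion: summary descriptors are computed from each sensor's signal and concatenated into one feature vector for a downstream classifier. The sketch below uses synthetic stand-ins for the AE and force signals; the signal statistics and the particular features chosen are illustrative assumptions, not the paper's processing chain.

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude, a standard AE activity descriptor."""
    return float(np.sqrt(np.mean(np.square(x))))

# Synthetic stand-ins for one cutting pass (values hypothetical)
rng = np.random.default_rng(1)
ae_signal = rng.normal(0.0, 0.8, 2048)            # acoustic-emission activity
force_signal = 120 + rng.normal(0.0, 5.0, 2048)   # cutting force (N)

# Feature-level fusion: descriptors from both sensors in one vector
features = np.array([rms(ae_signal),
                     np.ptp(ae_signal),           # AE peak-to-peak
                     np.mean(force_signal),       # mean cutting force
                     np.std(force_signal)])       # force fluctuation
```

A classifier trained on such fused vectors can then separate annealed from heat-treated work material in process, which a single sensor's features may not do reliably on their own.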
Integration of 3D and multispectral data for cultural heritage applications: Survey and perspectives
2013
Cultural heritage is increasingly put through imaging systems such as multispectral cameras and 3D scanners. Though these acquisition systems are often used independently, they collect complementary information (spectral vs. spatial) used for the study, archiving and visualization of cultural heritage. Recording 3D and multispectral data in a single coordinate system enhances the potential insights in data analysis. We present the state of the art of such acquisition systems and their applications for the study of cultural heritage. We also describe existing registration techniques that can be used to obtain 3D models with multispectral texture and explore the idea…
Multiple Classifiers and Data Fusion for Robust Diagnosis of Gearbox Mixed Faults
2019
Detection and isolation of single and mixed faults in a gearbox are very important to enhance the system reliability, lifetime, and service availability. This paper proposes a hybrid learning algorithm, consisting of multilayer perceptron (MLP)- and convolutional neural network (CNN)-based classifiers, for diagnosis of gearbox mixed faults. Domain knowledge features are required to train the MLP classifier, while the CNN classifier can learn features itself, reducing the knowledge features required for the counterpart. Vibration data from an experimental setup with gearbox mixed faults is used to validate the effectiveness of the algorithms and compare them with conventional metho…
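One standard way to combine an MLP and a CNN classifier, as in the hybrid scheme described here, is decision-level fusion: average the two models' class-probability outputs and take the argmax. The sketch below uses invented softmax outputs and an equal-weight assumption; the paper's actual combination rule may differ.

```python
import numpy as np

def fuse_decisions(prob_mlp, prob_cnn, w_mlp=0.5):
    """Decision-level fusion: weighted average of two classifiers'
    class-probability outputs, followed by argmax."""
    fused = w_mlp * prob_mlp + (1.0 - w_mlp) * prob_cnn
    return fused, fused.argmax(axis=-1)

# Hypothetical softmax outputs for 3 samples over 4 fault classes
p_mlp = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.25, 0.40, 0.25, 0.10],
                  [0.10, 0.20, 0.30, 0.40]])
p_cnn = np.array([[0.60, 0.20, 0.10, 0.10],
                  [0.10, 0.70, 0.10, 0.10],
                  [0.35, 0.30, 0.25, 0.10]])
fused, labels = fuse_decisions(p_mlp, p_cnn)
```

The third sample shows why fusion helps with mixed faults: the two classifiers disagree individually, but the averaged distribution still yields a single consistent decision.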
Camera-LiDAR Data Fusion for Autonomous Mooring Operation
2020
Author's accepted manuscript. © 2020 IEEE. The use of camera and LiDAR sensors to sense the environment has gained increasing popularity in robotics. Individual sensors, such as cameras and LiDARs, fail to meet the growing challenges in complex autonomous systems. One such scenario is autonomous mooring, where the ship has to …
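A prerequisite for any camera-LiDAR fusion is geometric alignment: transforming LiDAR points into the camera frame and projecting them through the intrinsics onto the image plane. The sketch below assumes a calibrated rigid extrinsic transform and a pinhole model; the intrinsic values and test points are hypothetical, not from the paper.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3-D LiDAR points into camera pixel coordinates.

    points_lidar : (N, 3) points in the LiDAR frame
    T_cam_lidar  : (4, 4) rigid transform, LiDAR frame -> camera frame
    K            : (3, 3) pinhole camera intrinsic matrix
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # into camera frame
    in_front = pts_cam[:, 2] > 0                         # positive depth only
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                          # perspective divide
    return uv, in_front

# Identity extrinsics and simple intrinsics (all values hypothetical)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0],    # straight ahead -> image centre
                [1.0, 0.0, 10.0]])   # 1 m to the right at 10 m depth
uv, mask = project_lidar_to_image(pts, T, K)
```

Once each LiDAR point has a pixel coordinate, its range can be associated with image detections of the mooring structure, which is the association step such fusion pipelines build on.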
Monitoring water and carbon fluxes at fine spatial scales using HyspIRI-like measurements
2012
Remotely sensed observations in the visible to the shortwave infrared (VSWIR) and thermal infrared (TIR) regions of the electromagnetic spectrum can be used synergistically to provide valuable products of land surface properties for reliable assessments of carbon and water fluxes. The high spatial, spectral and temporal resolution VSWIR and TIR observations provided by the proposed Hyperspectral InfraRed (HyspIRI) mission will enable a new era of global agricultural monitoring, critical for addressing growing issues of food insecurity. To enable predictions at fine spatial resolution (<100 m), modeling efforts must rely on a combination of high-frequency temporal and high-resolution spa…