Search results for "Computer Science - Computer Vision and Pattern Recognition"
Showing 10 of 105 documents
Resolving gas bubbles ascending in liquid metal from low-SNR neutron radiography images
2021
We demonstrate a new image processing methodology for resolving gas bubbles travelling through liquid metal from dynamic neutron radiography images with an intrinsically low signal-to-noise ratio. Image pre-processing, denoising and bubble segmentation are described in detail, with practical recommendations. Experimental validation is presented: stationary and moving reference bodies with neutron-transparent cavities are radiographed with imaging conditions representative of the cases with bubbles in liquid metal. The new methods are applied to our experimental data from previous and recent imaging campaigns, and the performance of the methods proposed in this paper is compared against our p…
Multispectral image denoising with optimized vector non-local mean filter
2016
Nowadays, many applications rely on high-quality images to perform their tasks well. However, noise works against this objective, as it is unavoidable in most applications. It is therefore essential to develop techniques that attenuate the impact of noise while preserving the relevant information in images. In this work we propose to extend the Non-Local Means (NLM) filter to the vector case and apply it to denoising multispectral images. The objective is to benefit from the additional information brought by multispectral imaging systems. The NLM filter exploits the redundancy of information in an image to remove noise. A …
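The redundancy-based averaging that NLM performs can be sketched for the scalar (single-channel) case; the paper's vector extension is not reproduced here, and the `patch`, `search`, and `h` parameters are illustrative assumptions:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.3):
    """Minimal single-channel Non-Local Means sketch (illustrative only).

    Each pixel is replaced by a weighted average of pixels in a search
    window, with weights exp(-||P_i - P_j||^2 / h^2) comparing the patches
    centred on the two pixels.
    """
    pad = patch // 2          # patch half-width
    sw = search // 2          # search-window half-width
    padded = np.pad(img, pad + sw, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            # reference patch centred on (y, x), in padded coordinates
            cy, cx = y + pad + sw, x + pad + sw
            ref = padded[cy - pad:cy + pad + 1, cx - pad:cx + pad + 1]
            num, den = 0.0, 0.0
            for dy in range(-sw, sw + 1):
                for dx in range(-sw, sw + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = padded[ny - pad:ny + pad + 1, nx - pad:nx + pad + 1]
                    # patch similarity -> averaging weight
                    w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                    num += w * padded[ny, nx]
                    den += w
            out[y, x] = num / den
    return out
```

For a multispectral image, the vector idea amounts to comparing patches across all bands jointly rather than band by band; the sketch above handles one band only.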
Extending the Unmixing methods to Multispectral Images
2021
In the past few decades, there has been intensive research on the unmixing of hyperspectral images. Methods such as NMF, VCA, and N-FINDR have become standards, as they are robust in unmixing hyperspectral images. Research on the unmixing of multispectral images, however, is relatively scarce. We therefore extend several unmixing methods to multispectral images. In this paper, we create two simulated multispectral datasets from two hyperspectral datasets whose ground truths are given, then apply the unmixing methods (VCA, NMF, N-FINDR) to these two datasets. By comparing and analyzing the results, we have been able to demonstrate some…
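As a rough sketch of one of the named methods, linear unmixing with NMF can be written as multiplicative updates on the mixing model X ≈ EA, where E holds the endmember spectra and A the per-pixel abundances. Variable names and iteration counts are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def nmf_unmix(X, p, iters=200, eps=1e-9, seed=0):
    """Illustrative NMF unmixing via Lee-Seung multiplicative updates.

    X    : (bands, pixels) non-negative data matrix
    p    : assumed number of endmembers
    Returns E (bands, p) endmember spectra and A (p, pixels) abundances.
    """
    rng = np.random.default_rng(seed)
    b, n = X.shape
    E = rng.random((b, p)) + eps
    A = rng.random((p, n)) + eps
    for _ in range(iters):
        # alternating multiplicative updates keep E and A non-negative
        A *= (E.T @ X) / (E.T @ E @ A + eps)
        E *= (X @ A.T) / (E @ A @ A.T + eps)
    return E, A
```

VCA and N-FINDR instead search for extreme points (simplex vertices) in the data cloud; only the NMF variant is sketched here.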
Depth-Adapted CNN for RGB-D cameras
2020
Conventional 2D Convolutional Neural Networks (CNN) extract features from an input image by applying linear filters. These filters compute spatial coherence by weighting photometric information over a fixed neighborhood, without taking geometric information into account. We tackle the problem of improving classical RGB CNN methods by using the depth information provided by RGB-D cameras. State-of-the-art approaches use depth as an additional channel or image (HHA), or pass from 2D CNN to 3D CNN. This paper proposes a novel and generic procedure to articulate both photometric and geometric information in a CNN architecture. The depth data is represented as a 2D offset to adapt …
Qualitative Comparison of Community Detection Algorithms
2011
Community detection is a very active field in complex network analysis, consisting of identifying groups of nodes that are more densely interconnected relative to the rest of the network. Existing algorithms are usually tested and compared on real-world and artificial networks, their performance being assessed through some partition similarity measure. However, the realism of artificial networks can be questioned, and the appropriateness of those measures is not obvious. In this study, we take advantage of recent advances in the characterization of community structures to tackle these questions. We first generate networks with the most realistic model available to date. Their analysis r…
Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis
2016
This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on higher levels of a dCNN feature pyramid, controlling the image layout at an abstract level. We apply the method to both photographic and non-photo-realistic (artwork) synthesis tasks. The MRF regularizer prevents over-excitation artifacts and reduces implausible feature mixtures common to previous dCNN inversion approaches, permitting the synthesis of photographic content with increased visual plausibility. Unlike standard MRF-based texture synthesis, the combined system can both match and adap…
Microstructure reconstruction using entropic descriptors
2009
A multi-scale approach to the inverse reconstruction of a pattern's microstructure is reported. Instead of a correlation function, a pair of entropic descriptors (EDs) is proposed for a stochastic optimization method. The first measures spatial inhomogeneity for a binary pattern, or compositional inhomogeneity for a greyscale image. The second quantifies spatial or compositional statistical complexity. The EDs reveal structural information that is, at least in part, dissimilar to that given by correlation functions at almost all discrete length scales. The method is tested on a few digitized binary and greyscale images. In each case, the persuasive reconstruction of the mic…
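As a loose illustration only (the paper's exact entropic descriptors are not reproduced here), one simple inhomogeneity proxy for a binary pattern is the Shannon entropy of the distribution of white-pixel counts over non-overlapping cells at a given length scale:

```python
import numpy as np

def cell_entropy(pattern, cell):
    """Shannon entropy (bits) of white-pixel counts over non-overlapping
    cells of side `cell` -- a crude spatial-inhomogeneity proxy, NOT the
    paper's entropic descriptors. 0 means every cell holds the same count.
    """
    H, W = pattern.shape
    counts = []
    for y in range(0, H - cell + 1, cell):
        for x in range(0, W - cell + 1, cell):
            counts.append(int(pattern[y:y + cell, x:x + cell].sum()))
    hist = np.bincount(np.array(counts))      # how often each count occurs
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins for the log
    return float(-(p * np.log2(p)).sum())
```

Evaluating such a measure across a range of cell sizes gives the multi-scale profile that a stochastic optimizer (e.g. simulated annealing over pixel swaps) could match during reconstruction.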
RGB-Event Fusion for Moving Object Detection in Autonomous Driving
2022
Moving Object Detection (MOD) is a critical vision task for successfully achieving safe autonomous driving. Despite plausible results of deep learning methods, most existing approaches are only frame-based and may fail to reach reasonable performance when dealing with dynamic traffic participants. Recent advances in sensor technologies, especially the Event camera, can naturally complement the conventional camera approach to better model moving objects. However, event-based works often adopt a pre-defined time window for event representation, and simply integrate it to estimate image intensities from events, neglecting much of the rich temporal information from the available asynchronous ev…
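The fixed-time-window representation the abstract criticizes can be sketched as a plain accumulation of signed event polarities into a single frame; the field order and names below are assumptions for illustration, not the paper's event format:

```python
import numpy as np

def events_to_frame(events, shape, t0, dt):
    """Accumulate events with timestamps in [t0, t0 + dt) into one frame.

    events : iterable of (t, x, y, polarity) rows, polarity in {-1, +1}
    shape  : (height, width) of the output frame
    The asynchronous timing inside the window is discarded -- the
    limitation the abstract points at.
    """
    frame = np.zeros(shape, dtype=np.int32)
    for t, x, y, pol in events:
        if t0 <= t < t0 + dt:
            frame[int(y), int(x)] += int(pol)
    return frame
```

Events outside the window are simply dropped, and two opposite-polarity events at the same pixel cancel, which is why richer temporal representations are of interest.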
Robust RGB-D Fusion for Saliency Detection
2022
Efficiently exploiting multi-modal inputs for accurate RGB-D saliency detection is a topic of high interest. Most existing works leverage cross-modal interactions to fuse the two streams of RGB-D and enhance intermediate features. In this process, a practical aspect, the low quality of the available depths, has not been fully considered yet. In this work, we aim for RGB-D saliency detection that is robust to low-quality depths, which primarily appear in two forms: inaccuracy due to noise, and misalignment with RGB. To this end, we propose a robust RGB-D fusion method that benefits from (1) layer-wise, and (2) trident spatial, attention mechanisms. On the one hand, layer-wise atten…
N-QGN: Navigation Map from a Monocular Camera using Quadtree Generating Networks
2022
Monocular depth estimation has been a popular area of research for several years, especially since self-supervised networks have shown increasingly good results in bridging the gap with supervised and stereo methods. However, these approaches focus on dense 3D reconstruction and sometimes on tiny details that are superfluous for autonomous navigation. In this paper, we propose to address this issue by estimating the navigation map under a quadtree representation. The objective is to create an adaptive depth map prediction that extracts only the details essential for obstacle avoidance. Other 3D space, which leaves large room for navigation, will be provided with approxi…
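The adaptive-resolution idea behind a quadtree depth map can be illustrated with a plain recursive subdivision; the paper predicts such a structure with a network, whereas the sketch below simply builds it from a given depth map, with `max_var` and `min_size` as illustrative assumptions:

```python
import numpy as np

def quadtree_depth(depth, max_var=0.01, min_size=2):
    """Piecewise-constant quadtree approximation of a depth map.

    A square tile is split into four quadrants while its variance exceeds
    `max_var`; homogeneous tiles are replaced by their mean depth.
    Assumes a square map whose side is a power of two.
    """
    out = np.empty_like(depth, dtype=float)

    def split(y, x, s):
        tile = depth[y:y + s, x:x + s]
        if s <= min_size or tile.var() <= max_var:
            # flat enough: one coarse value covers the whole tile
            out[y:y + s, x:x + s] = tile.mean()
        else:
            h = s // 2
            for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
                split(y + dy, x + dx, h)

    split(0, 0, depth.shape[0])
    return out
```

Large navigable regions collapse to a single coarse depth value, while detail is kept only where the variance (here a stand-in for obstacle relevance) demands it.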