Search results for "image processing"
Showing 10 of 3,285 documents
Space variant vision and pipelined architecture for time to impact computation
2002
Image analysis is one of the most interesting ways for a mobile vehicle to understand its environment. One task of an autonomous vehicle is to obtain accurate information about what lies in front of it, to avoid collisions or find a way to a target. This task carries real-time constraints that depend on the vehicle's speed and on external object movement. The use of normal cameras, with a homogeneous (square) pixel distribution, for real-time image processing usually requires high-performance computing and high image rates. A different approach makes use of a CMOS space-variant camera that yields a high frame rate with low data bandwidth. The camera also performs the log-polar transform, simplifyin…
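The log-polar transform mentioned in this abstract can be illustrated with a minimal NumPy sketch. This is not the camera's hardware mapping; the grid sizes and the nearest-neighbour lookup are arbitrary choices for illustration, but they show why the representation reduces data bandwidth (a 64×64 image collapses to a 32×64 grid):

```python
import numpy as np

def logpolar_sample(img, n_rho=32, n_theta=64):
    """Resample a square grayscale image onto a log-polar grid
    centred on the image midpoint (nearest-neighbour lookup).
    Illustrative sketch only; parameters are arbitrary."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)
    # Log-spaced radii from 1 pixel out to the image border.
    rhos = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    out = np.empty((n_rho, n_theta), dtype=img.dtype)
    for i, r in enumerate(rhos):
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, w - 1)
        out[i] = img[ys, xs]
    return out
```

The foveated sampling keeps fine detail near the centre and coarse detail at the periphery, which is what makes the low-bandwidth, high-frame-rate trade-off possible.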
An FPGA-based design for real-time Super Resolution Reconstruction
2018
For several decades, camera spatial resolution has been gradually increasing with the evolution of CMOS technology. Image sensors provide more and more pixels, imposing new constraints on the accompanying optics. As an alternative, promising solutions propose Super Resolution (SR) image reconstruction to extend the image size without modifying the sensor architecture. Convincing state-of-the-art studies demonstrate that these methods can even be implemented in real time. Nevertheless, artifacts can be observed in highly textured areas of the image. In this paper, we propose a Local Adaptive Spatial Super Resolution (LASSR) method to fix this limitation. A real-time texture analysis is include…
Speeding-Up Differential Motion Detection Algorithms Using a Change-Driven Data Flow Processing Strategy
2007
A constraint of real-time implementation of differential motion detection algorithms is the large amount of data to be processed. Full-image processing is usually the classical approach for these algorithms: spatial and temporal derivatives are calculated for all pixels in the image, despite the fact that the majority of image pixels may not have changed from one frame to the next. By contrast, the data-flow model works in a totally different way, as instructions are only fired when the data needed for those instructions are available. Here we present a method to speed up low-level motion detection algorithms. This method is based on pixel change instead of full-image processing, and good spee…
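The change-driven idea described here can be sketched in a few lines: instead of computing the per-pixel motion test everywhere, first build a mask of pixels that actually changed between consecutive frames and fire the computation only there. This is a toy sketch (the threshold and the trivial motion test are placeholders, not the paper's algorithm):

```python
import numpy as np

def change_driven_diff(prev, curr, ref, thresh=10):
    """Fire the per-pixel motion computation only at pixels whose
    value changed between consecutive frames; 'ref' is a reference
    (background) frame. Hypothetical minimal sketch."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    motion = np.zeros(curr.shape, dtype=bool)
    # Only the 'changed' pixels are processed, mimicking data-flow firing.
    motion[changed] = np.abs(curr[changed].astype(int)
                             - ref[changed].astype(int)) > thresh
    return motion, int(changed.sum())
```

The speed-up comes from the second comparison touching only the changed subset, which is typically a small fraction of the frame.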
Unsupervised deep feature extraction of hyperspectral images
2014
This paper presents an effective unsupervised sparse feature learning algorithm to train deep convolutional networks on hyperspectral images. Deep convolutional hierarchical representations are learned and then used for pixel classification. Features in lower layers present less abstract representations of data, while higher layers represent more abstract and complex characteristics. We successfully illustrate the performance of the extracted representations in a challenging AVIRIS hyperspectral image classification problem, compared to standard dimensionality reduction methods like principal component analysis (PCA) and its kernel counterpart (kPCA). The proposed method largely outperforms…
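For reference, the PCA baseline this abstract compares against can be sketched directly: treat each pixel's spectrum as an observation vector and project onto the top principal components. A minimal SVD-based sketch (component count arbitrary):

```python
import numpy as np

def pca_features(X, n_components=3):
    """Project pixel spectra (rows of X, one spectrum per pixel)
    onto the top principal components — the standard
    dimensionality-reduction baseline."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data; rows of Vt are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

The deep convolutional features of the paper replace this linear projection with learned hierarchical, nonlinear representations.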
A crop field modeling to simulate agronomic images
2010
In precision agriculture, crop/weed discrimination is often based on image analysis, but although several algorithms using spatial information have been proposed, none has been tested on relevant databases. A simple model that simulates virtual fields is developed to evaluate these algorithms. Virtual fields are made of crops, arranged according to agricultural practices and represented by simple patterns, and weeds that are spatially distributed using a statistical approach. Then, experimental devices using cameras are simulated with a pinhole model. Its ability to characterize the spatial reality is demonstrated through different pairs (real, virtual) of pictures. Two spatial descriptors …
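The pinhole camera model used to simulate the experimental devices is the standard perspective projection. A minimal sketch with made-up intrinsics (focal length in pixels and principal point are illustrative, not the paper's values):

```python
import numpy as np

def pinhole_project(points_cam, f=800.0, cx=320.0, cy=240.0):
    """Project 3-D points given in the camera frame (Z pointing
    forward) to pixel coordinates with an ideal pinhole model."""
    X, Y, Z = points_cam.T
    # Perspective division, then shift by the principal point.
    u = f * X / Z + cx
    v = f * Y / Z + cy
    return np.stack([u, v], axis=1)
```

Simulating the virtual field then amounts to placing crop/weed patterns in 3-D and projecting them through this model to obtain synthetic agronomic images.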
Shape Description for Content-Based Image Retrieval
2000
The present work focuses on a global image characterization based on a description of the 2D displacements of the different shapes present in the image, which can be employed for CBIR applications. To this aim, a recognition system has been developed that automatically detects image ROIs containing single objects and classifies them as belonging to a particular class of shapes. In our approach we make use of the eigenvalues of the covariance matrix computed from the pixel rows of a single ROI. These quantities are arranged in vector form and classified using Support Vector Machines (SVMs). The selected feature allows us to recognize shapes in a robust fashion, despite rotations or…
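The feature described here — eigenvalues of a covariance matrix built from the pixel rows of a ROI — can be sketched as follows. This is a plausible reading of the abstract, not the authors' exact implementation; the ordering convention is a choice:

```python
import numpy as np

def row_cov_eigenvalues(roi):
    """Shape feature for a ROI: eigenvalues of the covariance matrix
    computed from its pixel rows, sorted in descending order."""
    # Each row of the ROI is treated as one variable (rowvar=True).
    C = np.cov(roi, rowvar=True)
    vals = np.linalg.eigvalsh(C)  # ascending order
    return vals[::-1]
```

The resulting fixed-length vector would then be fed to an SVM classifier, as the abstract describes.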
Automatic analysis of speckle photography fringes
1997
Speckle interferometry is a technique well suited to metrological problems such as the measurement of object deformation. An automatic system for analyzing such measurements is presented; it consists of a computer-controlled motorized x-y plate positioner, a CCD video camera, and software for image analysis. A fringe-recognition algorithm determines the spacing and orientation of the fringes and permits the calculation of the magnitude and direction of the displacement of the analyzed object point in images with variable degrees of illumination. For a 256 x 256 pixel image resolution, the procedure allows one to analyze from three fringes to a number of fringes that corresponds to 3 pixels/fri…
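A common way to estimate fringe spacing automatically is to look for the dominant frequency in the Fourier spectrum of an intensity profile. The following is a simplified 1-D sketch of that idea, not the paper's full fringe-recognition algorithm (which also recovers orientation):

```python
import numpy as np

def fringe_spacing(profile):
    """Estimate fringe spacing (pixels per fringe) from a 1-D
    intensity profile via the dominant FFT frequency."""
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    k = np.argmax(spectrum[1:]) + 1  # skip the DC bin
    return len(profile) / k
```

With the spacing and orientation of the fringes known, the magnitude and direction of the local displacement follow from the interferometric relation.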
Estimating intrinsic image from successive images by solving underdetermined and overdetermined systems of the dichromatic model
2020
Estimating an intrinsic image from a sequence of successive images taken of an object at different angles of illumination can be used in various applications such as object recognition, color classification, and the like, since it can provide more visual information. Meanwhile, according to the well-known dichromatic model, each image can be considered a linear combination of three components: the intrinsic image, a shading factor, and specularity. In this study, at first, two simple independent constrained and parallelized quadratic programming steps were used for computing values of the shading factor and the specularity of each successive of…
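The dichromatic model's linear combination can be made concrete with a toy per-pixel least-squares step. Here the body and illuminant colours are assumed known, which is a strong simplification of the paper's constrained quadratic-programming formulation; the sketch only shows why the per-pixel system is overdetermined (3 equations, 2 unknowns):

```python
import numpy as np

def dichromatic_coeffs(pixel_rgb, body_color, illum_color):
    """Solve I = m_b * c_b + m_s * c_s for the shading (m_b) and
    specular (m_s) coefficients of one pixel, given assumed body
    and illuminant colours (toy overdetermined least squares)."""
    A = np.stack([body_color, illum_color], axis=1)  # 3x2 system
    m, *_ = np.linalg.lstsq(A, pixel_rgb, rcond=None)
    return m  # [m_b, m_s]
```

In the actual method the unknown colours across successive images are what make the combined system alternately under- and overdetermined.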
Adapted processing of catadioptric images using polarization imaging
2009
A non-parametric method that defines a pixel neighborhood within catadioptric images is presented in this paper. It is based on an accurate modeling of the mirror shape using polarization imaging. Unlike most current processing methods in the literature, this method is non-parametric and can deal with the deformation of catadioptric images. This paper demonstrates how an appropriate neighborhood can be derived from the polarization parameters by estimating the degree of polarization and the angle of polarization, which in turn directly provide an adapted neighborhood of each pixel that can be used to perform image derivation, edge detection, interest point detection and namely…
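The degree and angle of polarization used here are standard quantities of linear polarization imaging, computable from intensity images taken through a polarizer at three orientations. A minimal Stokes-parameter sketch (the mirror-shape estimation built on top of these maps is the paper's contribution and is not shown):

```python
import numpy as np

def polarization_params(i0, i45, i90):
    """Degree and angle of linear polarization from intensity
    images acquired through a polarizer at 0°, 45° and 90°."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90
    s2 = 2.0 * i45 - s0
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)
    return dop, aop
```

Per-pixel maps of these two parameters are what the method converts into an adapted neighborhood for each pixel.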
Accuracy of stereotactic coordinate transformation using a localisation frame and computed tomographic imaging
1999
The accuracy of coordinate transformation from the computed tomographic (CT) space to the stereotactic frame space was analysed for frame-based stereotactic systems which use a localisation frame and coordinate transformation based on matrix calculation. The coordinate transformation was divided into three consecutive steps: (1) transforming the localisation frame into the CT image built up from pixels with distinct attenuation values, (2) determining the rod centres of the localisation frame in the CT image, and (3) coordinate transformation from the image to the frame space using the centres of the rods in the image space and algebraic, matrix-based calculation. The error contribution at …
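The matrix-based transformation of step (3) can be sketched as a least-squares affine fit between the rod centres located in image space and their known positions in frame space. This is a generic illustration of the algebraic step, with hypothetical point sets, not the specific stereotactic system analysed in the paper:

```python
import numpy as np

def fit_affine(image_pts, frame_pts):
    """Least-squares affine map image→frame space from matched
    fiducial (rod-centre) points; needs >= 4 non-coplanar points."""
    n = len(image_pts)
    A = np.hstack([image_pts, np.ones((n, 1))])  # homogeneous coords
    # Solve A @ M ≈ frame_pts column-wise in the least-squares sense.
    M, *_ = np.linalg.lstsq(A, frame_pts, rcond=None)
    return M  # 4x3: first 3 rows linear part, last row translation

def apply_affine(M, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

Errors in any of the three steps (localising the frame in the CT image, finding the rod centres, and this fit) propagate into the final target coordinates, which is what the paper's error analysis decomposes.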