Search results for "Pixel"
Showing 10 of 421 documents
Gridding artifacts on medium-resolution satellite image time series: MERIS case study
2011
Earth observation satellites provide a valuable source of data which, when conveniently processed, can be used to better understand the Earth system dynamics. In this regard, one of the prerequisites for the analysis of satellite image time series is that the images are spatially coregistered, so that the resulting multitemporal pixel entities offer a true temporal view of the area under study. This implies that all the observations must be mapped to a common system of grid cells. This process is known as gridding and, in practice, two common grids can be used as a reference: 1) a grid defined by some kind of external data set (e.g., an existing land-cover map) or 2) a grid defined by one of t…
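The gridding step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the grid spacing, origin, and function names (`grid_cell`) are hypothetical, and a real implementation would work with the satellite's map projection rather than raw latitude/longitude.

```python
import math

def grid_cell(lat, lon, cell_deg=0.01, lat0=90.0, lon0=-180.0):
    """Map a geographic observation to the (row, col) cell of a fixed
    reference grid with cell_deg spacing (hypothetical parameters)."""
    row = int(math.floor((lat0 - lat) / cell_deg))
    col = int(math.floor((lon - lon0) / cell_deg))
    return row, col

# Observations from different acquisition dates that fall into the same
# cell form one multitemporal pixel entity for the time series.
obs = [(41.385, 2.173, "2005-06-01"), (41.389, 2.176, "2005-06-11")]
cells = {}
for lat, lon, date in obs:
    cells.setdefault(grid_cell(lat, lon), []).append(date)
```

Once all images are mapped onto the same cell system, the per-cell lists are exactly the "multitemporal pixel entities" the abstract refers to.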
Space variant vision and pipelined architecture for time to impact computation
2002
Image analysis is one of the most interesting ways for a mobile vehicle to understand its environment. One of the tasks of an autonomous vehicle is to obtain accurate information about what lies in front of it, in order to avoid collisions or find a way to a target. This task imposes real-time constraints that depend on the vehicle speed and the movement of external objects. The use of normal cameras, with a homogeneous (squared) pixel distribution, for real-time image processing usually requires high-performance computing and high image rates. A different approach makes use of a CMOS space-variant camera that yields a high frame rate with low data bandwidth. The camera also performs the log-polar transform, simplifyin…
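The log-polar transform mentioned here can be sketched in a few lines. This is a generic textbook mapping, not the camera's actual readout; the fovea radius `rho0` and the function name are assumptions for illustration.

```python
import math

def log_polar(x, y, cx, cy, rho0=1.0):
    """Map a Cartesian pixel (x, y) to log-polar coordinates (rho, theta)
    around a fovea centre (cx, cy). rho0 is an assumed fovea radius."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    rho = math.log(max(r, rho0) / rho0)   # radial rings grow exponentially
    theta = math.atan2(dy, dx)
    return rho, theta
```

The point of the mapping is data compression: doubling the distance from the centre adds only a constant log(2) to rho, so the periphery is sampled ever more coarsely while the fovea keeps full resolution.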
Real-time low level feature extraction for on-board robot vision systems
2006
Robot vision systems notoriously require large computing capabilities, rarely available on physical devices. Robots have limited embedded hardware, and almost all sensory computation is delegated to remote machines. Emerging gigascale integration technologies offer the opportunity to explore alternative computing architectures that can deliver a significant boost to on-board computing when implemented in embedded, reconfigurable devices. This paper explores the mapping of low level feature extraction on one such architecture, the Georgia Tech SIMD Pixel Processor (SIMPil). The Fast Boundary Web Extraction (fBWE) algorithm is adapted and mapped on SIMPil as a fixed-point, data parallel imple…
An FPGA-based design for real-time Super Resolution Reconstruction
2018
For several decades, camera spatial resolution has been increasing steadily with the evolution of CMOS technology. Image sensors provide more and more pixels, imposing new constraints on the accompanying optics. As an alternative, promising solutions propose Super Resolution (SR) image reconstruction to extend the image size without modifying the sensor architecture. Convincing state-of-the-art studies demonstrate that these methods can even be implemented in real time. Nevertheless, artifacts can be observed in highly textured areas of the image. In this paper, we propose a Local Adaptive Spatial Super Resolution (LASSR) method to fix this limitation. A real-time texture analysis is include…
Three-dimensional display by smart pseudoscopic-to-orthoscopic conversion with tunable focus.
2014
The original aim of the integral-imaging concept, reported by Gabriel Lippmann more than a century ago, is the capture of images of 3D scenes for their projection onto an autostereoscopic display. In this paper we report a new algorithm for the efficient generation of microimages for direct projection onto an integral-imaging monitor. Like our previous algorithm, the smart pseudoscopic-to-orthoscopic conversion (SPOC) algorithm, this one produces microimages ready for 3D display with full parallax. However, the new algorithm is much simpler than the previous one, produces microimages free of black pixels, and permits fixing at will, between certain limits, the reference …
Speeding-Up Differential Motion Detection Algorithms Using a Change-Driven Data Flow Processing Strategy
2007
A constraint on the real-time implementation of differential motion detection algorithms is the large amount of data to be processed. Full image processing is usually the classical approach for these algorithms: spatial and temporal derivatives are calculated for all pixels in the image, despite the fact that the majority of image pixels may not have changed from one frame to the next. By contrast, the data flow model works in a totally different way, as instructions are only fired when the data needed for those instructions are available. Here we present a method to speed up low-level motion detection algorithms. This method is based on pixel change instead of full image processing, and good spee…
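The change-driven idea in this abstract can be sketched as follows: derivatives fire only for pixels whose intensity changed beyond a threshold. The threshold `tau` and the 1-D frame representation are assumptions for illustration, not the paper's implementation.

```python
def changed_pixels(prev, curr, tau=8):
    """Return indices where |I_t - I_{t-1}| exceeds tau; only these
    'fire' further computation (hypothetical threshold)."""
    return [i for i, (a, b) in enumerate(zip(prev, curr)) if abs(a - b) > tau]

def temporal_derivative(prev, curr, active):
    # Compute the derivative only on the active set, not the full frame.
    return {i: curr[i] - prev[i] for i in active}

prev = [10, 10, 10, 10]
curr = [10, 30, 10, 11]
active = changed_pixels(prev, curr)            # only pixel 1 changed enough
dt = temporal_derivative(prev, curr, active)
```

When most of the scene is static, the active set is far smaller than the frame, which is where the reported speed-up comes from.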
A class-separability-based method for multi/hyperspectral image color visualization
2010
In this paper, a new color visualization technique for multi- and hyperspectral images is proposed. This method is based on a maximization of the perceptual distance between the scene endmembers as well as natural constancy of the resulting images. The stretched CMF principle is used to transform reflectance into values in the CIE L*a*b* colorspace combined with an a priori known segmentation map for separability enhancement between classes. Boundaries are set in the a*b* subspace to balance the natural palette of colors in order to ease interpretation by a human expert. Convincing results on two different images are shown.
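A building block of the perceptual-distance maximisation described here is a colour difference measured directly in CIE L*a*b*. The sketch below uses the simple CIE76 Euclidean formula as a stand-in; the paper's actual separability criterion and stretched-CMF transform are not reproduced.

```python
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b*, the simplest
    proxy for the perceptual distance between two endmember colours."""
    return math.dist(lab1, lab2)

# Two endmember colours with the same lightness but different a*b* chroma.
endmember_a = (50.0, 0.0, 0.0)
endmember_b = (50.0, 3.0, 4.0)
```

Maximising such pairwise distances between the class colours, subject to staying inside a natural-palette region of the a*b* plane, is the optimisation the abstract alludes to.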
Deep Learning-Based Sign Language Digits Recognition From Thermal Images With Edge Computing System
2021
The sign language digits based on hand gestures have been utilized in various applications such as human-computer interaction, robotics, health and medical systems, health assistive technologies, automotive user interfaces, crisis management and disaster relief, entertainment, and contactless communication in smart devices. Color and depth cameras are commonly deployed for hand gesture recognition, but robust classification of hand gestures under varying illumination is still a challenging task. This work presents the design and deployment of a complete end-to-end edge computing system that can accurately provide the classification of hand gestures captured from thermal images. A th…
Unsupervised deep feature extraction of hyperspectral images
2014
This paper presents an effective unsupervised sparse feature learning algorithm to train deep convolutional networks on hyperspectral images. Deep convolutional hierarchical representations are learned and then used for pixel classification. Features in lower layers present less abstract representations of data, while higher layers represent more abstract and complex characteristics. We successfully illustrate the performance of the extracted representations in a challenging AVIRIS hyperspectral image classification problem, compared to standard dimensionality reduction methods like principal component analysis (PCA) and its kernel counterpart (kPCA). The proposed method largely outperforms…
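The PCA baseline that the deep features are compared against can be sketched in closed form for two-band pixel features. This is a toy stand-in with hypothetical names, not the paper's pipeline (which uses full PCA/kPCA on hyperspectral bands).

```python
import math

def pca_first_component(X):
    """First principal component of 2-D pixel features, using the
    closed form for the eigenvector of a 2x2 covariance matrix."""
    n = len(X)
    mx = sum(p[0] for p in X) / n
    my = sum(p[1] for p in X) / n
    sxx = sum((p[0] - mx) ** 2 for p in X) / n
    syy = sum((p[1] - my) ** 2 for p in X) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in X) / n
    # Orientation of the principal axis of the covariance ellipse.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return math.cos(theta), math.sin(theta)
```

Projecting pixels onto such components gives the low-dimensional representation that the learned convolutional features are claimed to outperform.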
A support vector domain method for change detection in multitemporal images
2010
This paper formulates the problem of distinguishing changed from unchanged pixels in multitemporal remote sensing images as a minimum enclosing ball (MEB) problem with changed pixels as target class. The definition of the sphere-shaped decision boundary with minimal volume that embraces changed pixels is approached in the context of the support vector formalism adopting a support vector domain description (SVDD) one-class classifier. SVDD maps the data into a high dimensional feature space where the spherical support of the high dimensional distribution of changed pixels is computed. Unlike the standard SVDD, the proposed formulation of the SVDD uses both target and outlier samples for defi…
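The minimum-enclosing-ball decision rule in this abstract can be sketched crudely as below. This centroid-plus-max-radius ball is only an illustration of the membership test; the paper's SVDD solves a proper optimisation in a kernel-induced feature space, and the point coordinates and names here are hypothetical.

```python
import math

def fit_ball(points):
    """Crude enclosing ball: centroid as centre, max distance as radius.
    A stand-in for the SVDD/MEB optimisation, not the paper's method."""
    n = len(points)
    dim = len(points[0])
    c = tuple(sum(p[i] for p in points) / n for i in range(dim))
    r = max(math.dist(p, c) for p in points)
    return c, r

def is_changed(x, ball, slack=0.0):
    c, r = ball
    return math.dist(x, c) <= r + slack   # inside the ball -> 'changed' class

changed = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]   # hypothetical difference-image features
ball = fit_ball(changed)
```

A multitemporal pixel whose feature vector falls inside the ball is labelled changed; everything outside is unchanged, which is the one-class decision boundary the SVDD formalises.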