Search results for "Pixel"

Showing 10 of 421 documents

Gridding artifacts on medium-resolution satellite image time series: MERIS case study

2011

Earth observation satellites provide a valuable source of data which, when conveniently processed, can be used to better understand the Earth system dynamics. In this regard, one of the prerequisites for the analysis of satellite image time series is that the images are spatially coregistered so that the resulting multitemporal pixel entities offer a true temporal view of the area under study. This implies that all the observations must be mapped to a common system of grid cells. This process is known as gridding and, in practice, two common grids can be used as a reference: 1) a grid defined by some kind of external data set (e.g., an existing land-cover map) or 2) a grid defined by one of t…
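
The gridding step described here (mapping every observation to a common system of grid cells) can be sketched as follows. The regular lat/lon grid, the cell-averaging rule, and all names are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def grid_observations(lats, lons, values, lat0, lon0, cell_size, shape):
    """Map scattered observations onto a regular grid by cell averaging.

    Hypothetical illustration of gridding: each observation is assigned
    to the grid cell containing its coordinates, and cells that receive
    several observations keep the mean value; empty cells stay NaN.
    """
    rows = np.floor((lats - lat0) / cell_size).astype(int)
    cols = np.floor((lons - lon0) / cell_size).astype(int)
    grid = np.full(shape, np.nan)
    sums = np.zeros(shape)
    counts = np.zeros(shape)
    for r, c, v in zip(rows, cols, values):
        if 0 <= r < shape[0] and 0 <= c < shape[1]:
            sums[r, c] += v
            counts[r, c] += 1
    mask = counts > 0
    grid[mask] = sums[mask] / counts[mask]
    return grid
```

Once two acquisitions are gridded against the same reference, each cell stacks into the multitemporal pixel entity the abstract mentions.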

Pixel · Computer science · Imaging spectrometer · Land cover · Grid cell · Grid · Earth observation satellite · Data set · General Earth and Planetary Sciences · Satellite · Satellite Image Time Series · Electrical and Electronic Engineering · Image resolution · Remote sensing · IEEE Transactions on Geoscience and Remote Sensing

Space variant vision and pipelined architecture for time to impact computation

2002

Image analysis is one of the most interesting ways for a mobile vehicle to understand its environment. One of the tasks of an autonomous vehicle is to get accurate information about what lies in front of it, to avoid collisions or find a way to a target. This task imposes real-time restrictions depending on the vehicle speed and external object movement. The use of normal cameras with a homogeneous (square) pixel distribution for real-time image processing usually requires high-performance computing and high image rates. A different approach makes use of a CMOS space-variant camera that yields a high frame rate with low data bandwidth. The camera also performs the log-polar transform, simplifyin…
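
A minimal sketch of the log-polar sampling that such a space-variant camera performs in hardware: ring radii grow exponentially, so the center is sampled densely and the periphery coarsely, which is what reduces the data bandwidth. The nearest-neighbour sampling and parameter names are assumptions for illustration:

```python
import numpy as np

def log_polar_sample(image, n_rings, n_sectors, r_min=1.0):
    """Resample a square image onto a log-polar grid (illustrative sketch)."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # Exponentially spaced radii: r_k = r_min * (r_max/r_min)**(k/(n_rings-1))
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = 2 * np.pi * np.arange(n_sectors) / n_sectors
    out = np.zeros((n_rings, n_sectors), dtype=image.dtype)
    for i, r in enumerate(radii):
        for j, a in enumerate(angles):
            # Nearest-neighbour lookup of the Cartesian pixel on this ring
            y = int(round(cy + r * np.sin(a)))
            x = int(round(cx + r * np.cos(a)))
            out[i, j] = image[y, x]
    return out
```

The output has `n_rings * n_sectors` samples regardless of the input resolution, so the downstream pipeline processes a small, fixed-size array per frame.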

Pixel · Computer science · Computation · Bandwidth (signal processing) · Image processing · Remotely operated underwater vehicle · Frame rate · Computer Vision and Pattern Recognition · Digital image processing · Computer vision · Artificial intelligence · Field-programmable gate array

Real-time low level feature extraction for on-board robot vision systems

2006

Robot vision systems notoriously require large computing capabilities, rarely available on physical devices. Robots have limited embedded hardware, and almost all sensory computation is delegated to remote machines. Emerging gigascale integration technologies offer the opportunity to explore alternative computing architectures that can deliver a significant boost to on-board computing when implemented in embedded, reconfigurable devices. This paper explores the mapping of low level feature extraction on one such architecture, the Georgia Tech SIMD Pixel Processor (SIMPil). The Fast Boundary Web Extraction (fBWE) algorithm is adapted and mapped on SIMPil as a fixed-point, data parallel imple…

Pixel · Computer science · Computation · Real-time vision systems · Feature extraction · Null (SQL) · Computer architecture · Embedded system · Robot · SIMD · Architecture · Unconventional computing

An FPGA-based design for real-time Super Resolution Reconstruction

2018

For several decades, camera spatial resolution has been gradually increasing with the evolution of CMOS technology. Image sensors provide more and more pixels, generating new constraints on the suitable optics. As an alternative, promising solutions propose Super Resolution (SR) image reconstruction to extend the image size without modifying the sensor architecture. Convincing state-of-the-art studies demonstrate that these methods could even be implemented in real-time. Nevertheless, artifacts can be observed in highly textured areas of the image. In this paper, we propose a Local Adaptive Spatial Super Resolution (LASSR) method to fix this limitation. A real-time texture analysis is include…
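
The "local adaptive" idea hinges on a texture analysis that flags the image regions where plain SR produces artifacts. A minimal sketch of such a detector, using block variance as the texture measure; the block size, threshold, and names are assumed values, not the LASSR method itself:

```python
import numpy as np

def texture_mask(image, block=8, threshold=100.0):
    """Flag highly textured blocks via local variance (illustrative sketch).

    A block is marked 'textured' when its pixel variance exceeds a
    threshold; an adaptive SR scheme could then switch reconstruction
    strategy on exactly those blocks.
    """
    h, w = image.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            tile = image[i * block:(i + 1) * block, j * block:(j + 1) * block]
            mask[i, j] = tile.var() > threshold
    return mask
```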

Pixel · Computer science · Software engineering · Iterative reconstruction · Image (mathematics) · CMOS · Image texture · Electrical engineering · Artificial intelligence & image processing · Computer vision · Artificial intelligence · Image sensor · Field-programmable gate array · Image resolution · Engineering Sciences [physics] / Signal and Image processing

Three-dimensional display by smart pseudoscopic-to-orthoscopic conversion with tunable focus.

2014

The original aim of the integral-imaging concept, reported by Gabriel Lippmann more than a century ago, is the capture of images of 3D scenes for their projection onto an autostereoscopic display. In this paper we report a new algorithm for the efficient generation of microimages for direct projection onto an integral-imaging monitor. Like our previous algorithm, the smart pseudoscopic-to-orthoscopic conversion (SPOC) algorithm, this algorithm produces microimages ready for 3D display with full parallax. However, this new algorithm is much simpler than the previous one, produces microimages free of black pixels, and permits fixing at will, between certain limits, the reference …

Pixel · Computer science · Field of view · Stereo display · Atomic and Molecular Physics and Optics · Optics · Autostereoscopy · Electrical and Electronic Engineering · Parallax · Focus (optics) · Projection (set theory) · Engineering (miscellaneous) · Image resolution · Applied optics

Speeding-Up Differential Motion Detection Algorithms Using a Change-Driven Data Flow Processing Strategy

2007

A constraint on the real-time implementation of differential motion detection algorithms is the large amount of data to be processed. Full-image processing is the classical approach for these algorithms: spatial and temporal derivatives are calculated for all pixels in the image despite the fact that the majority of image pixels may not have changed from one frame to the next. By contrast, the data flow model works in a totally different way, as instructions are only fired when the data needed for these instructions are available. Here we present a method to speed up low-level motion detection algorithms. This method is based on pixel change instead of full-image processing and good spee…
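
The change-driven idea (fire computation only where pixel data actually changed) can be sketched with a simple temporal-difference update. The threshold rule and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def changed_pixel_diff(prev, curr, prev_diff, threshold=10):
    """Update a temporal-difference map only at pixels that changed.

    Instead of recomputing the derivative for every pixel, the update
    'fires' only where |curr - prev| exceeds a threshold; unchanged
    pixels keep their previous result. Returns the updated map and the
    number of pixels that triggered computation.
    """
    delta = np.abs(curr.astype(int) - prev.astype(int))
    changed = delta > threshold
    out = prev_diff.copy()
    out[changed] = delta[changed]   # recompute only where data changed
    return out, int(changed.sum())
```

On largely static scenes the trigger count is a small fraction of the pixel count, which is where the speed-up in the abstract comes from.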

Pixel · Computer science · Image processing · Motion detection · Data flow diagram · Motion field · Computer Vision and Pattern Recognition · Motion estimation · Digital image processing · Computer vision · Artificial intelligence · Algorithm · Feature detection (computer vision)

A class-separability-based method for multi/hyperspectral image color visualization

2010

In this paper, a new color visualization technique for multi- and hyperspectral images is proposed. This method is based on a maximization of the perceptual distance between the scene endmembers as well as on the natural constancy of the resulting images. The stretched CMF principle is used to transform reflectance into values in the CIE L*a*b* colorspace, combined with an a priori known segmentation map for separability enhancement between classes. Boundaries are set in the a*b* subspace to balance the natural palette of colors in order to ease interpretation by a human expert. Convincing results on two different images are shown.
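
The perceptual-distance objective relies on CIE L*a*b*, where Euclidean distance approximates perceived color difference. A sketch using the standard CIE conversion formulas and the CIE76 distance; this is textbook colorimetry, not the paper's stretched-CMF pipeline:

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):
    """Convert CIE XYZ tristimulus values to CIE L*a*b* (standard formulas).

    Default white point is D65; inputs are normalized against it.
    """
    xyz = np.asarray(xyz, dtype=float) / np.asarray(white)
    eps, kappa = 216 / 24389, 24389 / 27
    f = np.where(xyz > eps, np.cbrt(xyz), (kappa * xyz + 16) / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e(lab1, lab2):
    """Euclidean (CIE76) distance between two L*a*b* colors."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))
```

Maximizing `delta_e` between endmember colors is what "perceptual distance between the scene endmembers" amounts to in this space.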

Pixel · Computer science · Palette (computing) · Hyperspectral imaging · Image segmentation · Color space · Visualization · Segmentation · Computer vision · Artificial intelligence · Subspace topology · 2010 IEEE International Conference on Image Processing

Deep Learning-Based Sign Language Digits Recognition From Thermal Images With Edge Computing System

2021

The sign language digits based on hand gestures have been utilized in various applications such as human-computer interaction, robotics, health and medical systems, health assistive technologies, automotive user interfaces, crisis management and disaster relief, entertainment, and contactless communication in smart devices. The color and depth cameras are commonly deployed for hand gesture recognition, but the robust classification of hand gestures under varying illumination is still a challenging task. This work presents the design and deployment of a complete end-to-end edge computing system that can accurately provide the classification of hand gestures captured from thermal images. A th…

Pixel · Computer science · Deep learning · Robotics · Sign language · Gesture recognition · Computer vision · Artificial intelligence · Electrical and Electronic Engineering · Instrumentation · Edge computing · Test data · Gesture · IEEE Sensors Journal

Unsupervised deep feature extraction of hyperspectral images

2014

This paper presents an effective unsupervised sparse feature learning algorithm to train deep convolutional networks on hyperspectral images. Deep convolutional hierarchical representations are learned and then used for pixel classification. Features in lower layers present less abstract representations of data, while higher layers represent more abstract and complex characteristics. We successfully illustrate the performance of the extracted representations in a challenging AVIRIS hyperspectral image classification problem, compared to standard dimensionality reduction methods like principal component analysis (PCA) and its kernel counterpart (kPCA). The proposed method largely outperforms…
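
The PCA baseline that the deep features are compared against can be written in a few lines; this is standard PCA on pixel spectra (eigendecomposition of the band covariance matrix), not the paper's convolutional network:

```python
import numpy as np

def pca_features(pixels, n_components):
    """Project pixel spectra onto the top principal components.

    `pixels` is an (n_pixels, n_bands) array of spectra. The returned
    features are the projections onto the eigenvectors of the band
    covariance matrix with the largest eigenvalues.
    """
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = centered.T @ centered / (len(pixels) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    components = eigvecs[:, ::-1][:, :n_components]  # top eigenvectors first
    return centered @ components
```

The kernel counterpart (kPCA) replaces `cov` with a centered kernel matrix over the pixels; the deep network instead learns the mapping from data.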

Pixel · Computer science · Dimensionality reduction · Feature extraction · Hyperspectral imaging · Pattern recognition · Discriminative model · Kernel (image processing) · Principal component analysis · Computer vision · Artificial intelligence · Feature learning · 2014 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)

A support vector domain method for change detection in multitemporal images

2010

This paper formulates the problem of distinguishing changed from unchanged pixels in multitemporal remote sensing images as a minimum enclosing ball (MEB) problem with changed pixels as target class. The definition of the sphere-shaped decision boundary with minimal volume that embraces changed pixels is approached in the context of the support vector formalism adopting a support vector domain description (SVDD) one-class classifier. SVDD maps the data into a high dimensional feature space where the spherical support of the high dimensional distribution of changed pixels is computed. Unlike the standard SVDD, the proposed formulation of the SVDD uses both target and outlier samples for defi…
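
The minimum enclosing ball at the core of SVDD can be approximated with the simple Badoiu-Clarkson scheme sketched below. This is an illustrative stand-in in input space, whereas the paper solves a kernelized version in a high-dimensional feature space and also uses outlier samples:

```python
import numpy as np

def minimum_enclosing_ball(points, n_iter=1000):
    """Approximate the minimum enclosing ball (Badoiu-Clarkson scheme).

    Repeatedly move the center a shrinking step toward the farthest
    point; after k iterations the radius is within a (1 + 1/k) factor
    of optimal. Returns (center, radius).
    """
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)
    for k in range(1, n_iter + 1):
        dists = np.linalg.norm(points - center, axis=1)
        far = np.argmax(dists)                 # farthest point so far
        center += (points[far] - center) / (k + 1)
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, radius
```

A pixel would then be labeled "changed" when it falls inside the ball around the changed-class samples, which is the sphere-shaped decision boundary the abstract describes.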

Pixel · Computer science · Feature vector · Thresholding · Multispectral pattern recognition · Support vector machine · Kernel method · Artificial Intelligence · Computer Vision and Pattern Recognition · Signal Processing · Outlier · Decision boundary · Computer vision · Artificial intelligence · Software · Change detection · Pattern Recognition Letters