Search results for "data compression"

Showing 10 of 99 documents

Implementation of JPEG2000 arithmetic decoder using dynamic reconfiguration of FPGA

2005

This paper describes the implementation of part of the JPEG2000 algorithm (the MQ-decoder and arithmetic decoder) on an FPGA board using dynamic reconfiguration. A comparison between static and dynamic reconfiguration is presented, and new analysis criteria (time performance, logic cost, spatio-temporal efficiency) are defined. The MQ-decoder and arithmetic decoder fall into the most attractive case for a dynamic reconfiguration implementation: applications without parallelism between functions. The implementation is carried out on an architecture designed to study dynamic reconfiguration of FPGAs: the ARDOISE architecture. The implementation obtained, based on four partial configurations of the arithmetic decoder…
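
The MQ-coder at the heart of this design is a context-adaptive binary arithmetic coder. As a rough illustration of the interval subdivision such a decoder performs, here is a minimal floating-point sketch with a single fixed probability; the context modelling, integer renormalisation and the FPGA mapping discussed in the paper are all omitted, so this is only a conceptual stand-in, not the MQ-coder itself.

```python
# Minimal sketch, not the JPEG2000 MQ-coder: floating-point binary arithmetic
# coding with one fixed probability p0 for symbol 0. Works for short messages.

def encode(bits, p0=0.7):
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * p0
        if b == 0:
            high = split          # symbol 0 takes the lower sub-interval
        else:
            low = split           # symbol 1 takes the upper sub-interval
    return (low + high) / 2       # any number inside the final interval

def decode(code, n, p0=0.7):
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        split = low + (high - low) * p0
        if code < split:          # mirror the encoder's interval choice
            out.append(0)
            high = split
        else:
            out.append(1)
            low = split
    return out

message = [0, 1, 0, 0, 1, 1, 0]
assert decode(encode(message), len(message)) == message
```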

Soft-decision decoder · Computer science · JPEG 2000 · Control reconfiguration · Arithmetic and logic structures · Arithmetic · Field-programmable gate array · Decoding methods · Data compression
2004 International Conference on Image Processing, 2004. ICIP '04.

Lossless and near-lossless image compression based on multiresolution analysis

2013

There are applications of data compression where quality control is of the utmost importance. Certain features in the decoded signal must be recovered exactly, or very accurately, yet one would like to be as economical as possible with respect to storage and speed of computation. In this paper, we present a multi-scale data-compression algorithm within Harten's interpolatory framework for multiresolution that gives a specific estimate of the precise error between the original and the decoded signal, when measured in the L^∞ and in the L^p (p = 1, 2) discrete norms. The proposed algorithm does not rely on a tensor-product strategy to compress two-dimensional signals, and it provides a priori bound…
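
As a toy illustration of the interpolatory framework and the kind of L^∞ guarantee involved, here is a single-level 1D sketch under stated assumptions (odd-length signal, linear prediction, uniform quantisation of the details); it is not the paper's tensor-product-free 2D algorithm, which also controls how errors accumulate across levels.

```python
# Single-level interpolatory multiresolution step, assuming an odd-length 1D
# signal. Even samples are kept exactly; odd samples become prediction errors.
# Quantising the details with step eps bounds the per-sample error by eps/2.
import numpy as np

def decompose(signal):
    coarse = signal[::2]                          # even samples, kept exactly
    pred = 0.5 * (coarse[:-1] + coarse[1:])       # linear interpolation at odd positions
    details = signal[1::2] - pred                 # prediction errors (detail coefficients)
    return coarse, details

def reconstruct(coarse, details, eps=0.0):
    q = details if eps == 0 else eps * np.round(details / eps)   # quantised details
    odd = 0.5 * (coarse[:-1] + coarse[1:]) + q
    out = np.empty(len(coarse) + len(odd))
    out[::2] = coarse
    out[1::2] = odd
    return out

x = np.sin(np.linspace(0, 3, 129)) + np.linspace(0, 1, 129) ** 2
c, d = decompose(x)
assert np.max(np.abs(reconstruct(c, d, eps=0.01) - x)) <= 0.005 + 1e-12
```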

Lossless compression · Applied Mathematics · Multiresolution analysis · Image processing and computer vision · Data compression ratio · Coding and information theory · Lossy compression · Peak signal-to-noise ratio · Computational Mathematics · Quantization (image processing) · Algorithm · Mathematics · Image compression · Data compression
Journal of Computational and Applied Mathematics

Subjective image fidelity metric based on bit allocation of the human visual system in the DCT domain

1997

Until now, subjective image distortion measures have only partially used diverse empirical facts concerning human perception: non-linear perception of luminance, masking of impairments by a highly textured surround, linear filtering by the threshold contrast frequency response of the visual system, and non-linear post-filtering amplitude corrections in the frequency domain. In this work, we develop a frequency- and contrast-dependent metric in the DCT domain using a fully non-linear and suprathreshold contrast perception model: the Information Allocation Function (IAF) of the visual system. It is derived from experimental data on frequency and contrast incremental thresholds, and it is cons…
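
To show where a perceptual weighting enters such a DCT-domain comparison, here is a hedged sketch. The paper's IAF weighting is non-linear, contrast-dependent and fitted to psychophysical threshold data; the simple exponential frequency falloff below is only a hypothetical placeholder for that component.

```python
# Frequency-weighted squared error between two 8x8 blocks in the DCT domain.
# The weight w is a placeholder, not the paper's Information Allocation Function.
import numpy as np
from scipy.fft import dctn

def weighted_dct_distortion(block_a, block_b, alpha=0.1):
    A = dctn(block_a, norm="ortho")               # 2D DCT of each 8x8 block
    B = dctn(block_b, norm="ortho")
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    w = np.exp(-alpha * (u + v))                  # placeholder frequency weighting
    return float(np.sum(w * (A - B) ** 2))        # weighted squared error

rng = np.random.default_rng(0)
original = rng.random((8, 8))
distorted = original + 0.01 * rng.standard_normal((8, 8))
print(weighted_dct_distortion(original, distorted))
```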

Frequency response · Image quality · Distortion · Frequency domain · Signal Processing · Metric (mathematics) · Human visual system model · Contrast (vision) · Computer vision · Computer Vision and Pattern Recognition · Artificial intelligence · Data compression · Mathematics

Image compression based on a multi-directional map-dependent algorithm

2007

This work is devoted to the construction of a new multi-directional, edge-adapted compression algorithm for images. It is based on a multi-scale transform performed in two steps: a detection step that produces a map of edges, and a prediction/multi-resolution step that takes into account the information given by the map. A short analysis of the multi-scale transform is performed, and an estimate of the error associated with the largest coefficients for a piecewise regular function with Lipschitz edges is provided. Comparisons between this map-dependent algorithm and several classical algorithms are given.
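
As a caricature of the two-step structure only, here is a 1D sketch under assumed details: a detection step flags coarse intervals whose jump exceeds a threshold, and the prediction step switches from a centred average to a one-sided value across flagged intervals. The paper's algorithm is 2D and multi-directional; none of that is reproduced here.

```python
# 1D map-dependent prediction: detect edges on the coarse grid, then adapt
# the predictor to the edge map. The one-sided rule at edges is an assumption.
import numpy as np

def detect_edges(coarse, tau):
    return np.abs(np.diff(coarse)) > tau          # edge map over coarse intervals

def map_dependent_predict(coarse, edge_map):
    left, right = coarse[:-1], coarse[1:]
    centred = 0.5 * (left + right)                # smooth-region prediction
    return np.where(edge_map, left, centred)      # one-sided prediction at edges

coarse = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])  # piecewise smooth with one jump
emap = detect_edges(coarse, tau=1.0)
print(map_dependent_predict(coarse, emap))
```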

Lossless compression · Work (thermodynamics) · Texture compression · Applied Mathematics · Piecewise · Function (mathematics) · Lipschitz continuity · Algorithm · Mathematics · Image compression · Data compression
Applied and Computational Harmonic Analysis

Massively Parallel ANS Decoding on GPUs

2019

In recent years, graphics processors have enabled significant advances in the fields of big data and streamed deep learning. In order to keep control of rapidly growing amounts of data and to achieve sufficient throughput rates, compression features are a key part of many applications including popular deep learning pipelines. However, as most of the respective APIs rely on CPU-based preprocessing for decoding, data decompression frequently becomes a bottleneck in accelerated compute systems. This establishes the need for efficient GPU-based solutions for decompression. Asymmetric numeral systems (ANS) represent a modern approach to entropy coding, combining superior compression results wit…
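
For orientation, here is a hedged sketch of rANS, one common ANS variant, in its simplest form: Python's arbitrary-precision integers stand in for the fixed-width state, and the renormalisation, stream interleaving and GPU-specific layout that the paper is actually about are all omitted. The frequency table is assumed static.

```python
# Plain rANS with a static frequency table; encode in reverse so the decoder
# emits symbols in forward order. No renormalisation or parallel streams.

def build_tables(freqs):
    cum, c = {}, 0
    for s in sorted(freqs):
        cum[s] = c                # cumulative start of each symbol's slot range
        c += freqs[s]
    return cum, c                 # cumulative starts, total M

def rans_encode(message, freqs):
    cum, M = build_tables(freqs)
    x = 0
    for s in reversed(message):
        x = (x // freqs[s]) * M + cum[s] + (x % freqs[s])
    return x

def rans_decode(x, n, freqs):
    cum, M = build_tables(freqs)
    out = []
    for _ in range(n):
        slot = x % M
        s = next(t for t in freqs if cum[t] <= slot < cum[t] + freqs[t])
        out.append(s)
        x = freqs[s] * (x // M) + slot - cum[s]
    return out

freqs = {'a': 5, 'b': 2, 'c': 1}
msg = list("abacabaa")
assert rans_decode(rans_encode(msg, freqs), len(msg), freqs) == msg
```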

Distributed computing · Computer science · Networking & telecommunications · Coding and information theory · Parallel computing · CUDA · Scalability · Codec · SIMD · Entropy encoding · Massively parallel · Decoding methods · Data compression
Proceedings of the 48th International Conference on Parallel Processing

Compression of binary images based on covering

1995

The paper describes a new technique for compressing binary images based on an image-covering algorithm. The idea is that binary images can always be covered by rectangles, each uniquely described by a vertex and two adjacent edges (an L-shape). Some optimisations are necessary to handle degenerate configurations. The method has been tested on several images representing drawings and typed texts. The comparison with existing image file compression techniques shows the good performance of our approach. Further optimisations are under development.
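
A hedged greedy sketch of the covering idea follows: the foreground of a binary image is covered with axis-aligned rectangles, each recorded by a corner vertex and two edge lengths. The paper's L-shape description, degenerate-case handling and optimisations are not reproduced.

```python
# Greedy rectangle covering of the foreground (1-pixels) of a binary image.
import numpy as np

def cover_with_rectangles(img):
    img = img.astype(bool).copy()
    rects = []
    while True:
        ones = np.argwhere(img)
        if ones.size == 0:
            break
        r, c = ones[0]                            # first uncovered foreground pixel
        w = 1
        while c + w < img.shape[1] and img[r, c + w]:
            w += 1                                # grow to the right
        h = 1
        while r + h < img.shape[0] and img[r + h, c:c + w].all():
            h += 1                                # grow downward while rows stay full
        img[r:r + h, c:c + w] = False             # mark as covered
        rects.append((int(r), int(c), h, w))      # vertex plus two edge lengths
    return rects

image = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 1],
                  [0, 0, 1, 1]])
print(cover_with_rectangles(image))
```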

Vertex (computer graphics) · Medial axis · Computer science · Compression (functional analysis) · Binary image · Image processing and computer vision · Image file formats · Algorithm · Data compression · Image compression · Image (mathematics)

Data Compression with ENO Schemes: A Case Study

2001

We study the compression properties of ENO-type nonlinear multiresolution transformations on digital images. Specific error control algorithms are used to ensure a prescribed accuracy. The numerical results reveal that these methods strongly outperform the more classical wavelet decompositions in the case of piecewise smooth geometric images.
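
The defining nonlinearity of ENO-type schemes is stencil selection. Here is a hedged 1D sketch of that idea only, under the assumption of midpoint prediction from quadratic interpolation; the paper's image transforms and error control are not shown.

```python
# ENO-style midpoint prediction: for each interval choose the 3-point stencil
# (left- or right-biased) whose second difference is smaller, so interpolation
# stencils avoid straddling a discontinuity. Assumes len(c) >= 3.
import numpy as np

def eno_midpoint_predict(c):
    c = np.asarray(c, dtype=float)
    pred = np.empty(len(c) - 1)
    for i in range(len(c) - 1):
        if i == 0:                      # no left-biased stencil at the boundary
            use_left = False
        elif i == len(c) - 2:           # no right-biased stencil at the boundary
            use_left = True
        else:
            use_left = abs(c[i-1] - 2*c[i] + c[i+1]) <= abs(c[i] - 2*c[i+1] + c[i+2])
        if use_left:   # quadratic through c[i-1], c[i], c[i+1] at the midpoint
            pred[i] = -0.125 * c[i-1] + 0.75 * c[i] + 0.375 * c[i+1]
        else:          # quadratic through c[i], c[i+1], c[i+2] at the midpoint
            pred[i] = 0.375 * c[i] + 0.75 * c[i+1] - 0.125 * c[i+2]
    return pred

print(eno_midpoint_predict([0.0, 0.0, 0.0, 1.0, 1.0, 1.0]))
```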

Nonlinear system · Digital image · Wavelet · Theoretical computer science · Applied Mathematics · Numerical analysis · Image processing and computer vision · Piecewise · Error detection and correction · Algorithm · Computer graphics · Mathematics · Data compression
Applied and Computational Harmonic Analysis

Merging the transform step and the quantization step for Karhunen-Loeve transform based image compression

2000

Transform coding is one of the most important methods for lossy image compression. The optimal linear transform, known as the Karhunen-Loeve transform (KLT), was difficult to implement in the classical way. Now, due to continuous improvements in neural network performance, the KLT method is more topical than ever. We propose a new scheme in which the quantization step is merged with the transform step during the learning phase. The new method is tested for different levels of quantization and for different types of quantizers. Experimental results presented in the paper show that the proposed scheme consistently gives better results than the state-of-the-art solution.
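
For context, here is a baseline sketch of the two steps being merged: a KLT (PCA) transform followed by uniform quantisation of the retained coefficients, computed with a plain eigendecomposition on flattened image blocks. The paper's contribution, folding quantisation into a neural network's learning phase, is not reproduced.

```python
# Baseline KLT transform coding: project centred blocks onto the top-`keep`
# eigenvectors of their covariance, quantise uniformly, and reconstruct.
import numpy as np

def klt_code_decode(blocks, keep, step):
    mean = blocks.mean(axis=0)
    X = blocks - mean
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    basis = eigvecs[:, np.argsort(eigvals)[::-1][:keep]]   # top-`keep` KLT vectors
    coeffs = X @ basis                                     # transform step
    q = step * np.round(coeffs / step)                     # quantization step
    return q @ basis.T + mean                              # reconstruction

rng = np.random.default_rng(1)
blocks = rng.random((500, 64))                             # e.g. 500 flattened 8x8 blocks
recon = klt_code_decode(blocks, keep=16, step=0.05)
print(float(np.mean((recon - blocks) ** 2)))               # reconstruction MSE
```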

Fractal transform · Vector quantization · Top-hat transform · Pattern recognition · Artificial intelligence · Quantization (image processing) · S transform · Transform coding · Fractional Fourier transform · Data compression · Mathematics
Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium

Boosting Textual Compression in Optimal Linear Time

2005

We provide a general boosting technique for textual data compression. Qualitatively, it takes a good compression algorithm and turns it into an algorithm with a better compression performance guarantee. It displays the following remarkable properties: (a) it can turn any memoryless compressor into a compression algorithm that uses the “best possible” contexts; (b) it is very simple and optimal in terms of time; and (c) it admits a decompression algorithm that is again optimal in time. To the best of our knowledge, this is the first boosting technique displaying these properties. Technically, our boosting technique builds upon three main ingredients: the Burrows-Wheeler Transform, the Suffix Tree d…
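
Since the Burrows-Wheeler Transform is the first ingredient named above, here is a hedged naive sketch of it via rotation sorting, with a sentinel character assumed not to occur in the input. The boosting technique itself (suffix-tree-driven partitioning of the BWT before applying a base compressor) is not shown.

```python
# Naive Burrows-Wheeler Transform and its inverse, using a sentinel terminator.

def bwt(s, sentinel="\x00"):
    s += sentinel                                      # unique end marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)       # last column

def inverse_bwt(last, sentinel="\x00"):
    table = [""] * len(last)
    for _ in range(len(last)):                         # repeatedly prepend and sort
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith(sentinel))
    return row[:-1]                                    # drop the sentinel

assert inverse_bwt(bwt("banana")) == "banana"
```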

Theoretical computer science · Burrows-Wheeler transform · Suffix tree · String (computer science) · Coding and information theory · Substring · Arithmetic coding · Lempel-Ziv compressors · Artificial Intelligence · Hardware and Architecture · Control and Systems Engineering · Text compression · Empirical entropy · Greedy algorithm · Time complexity · Algorithm · Software · Information Systems · Mathematics · Data compression

Overlapped moving windows followed by principal component analysis to extract information from chromatograms and application to classification analys…

2015

Variable generation from chromatograms is conveniently accomplished using unsupervised rather than manual techniques. With unsupervised techniques, there is no need to select a few peaks for manual integration, and valuable information is collected quickly and efficiently. The generation of variables can be performed using either peak-searching or moving-window (MW) strategies. With an MW approach, the peaks are ignored and many variables are generated, only some of which carry information. Thus, variable generation by MWs should be followed by data compression to generate the variables to be further used for classification or quantitation purposes. In this work, unsupervised proces…
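
To illustrate the pipeline described above, here is a hedged sketch: overlapped moving windows summarise each 1D chromatogram into variables, which PCA then compresses into scores for a downstream classifier. The window width, step and the per-window sum are illustrative assumptions, not the paper's tuned settings.

```python
# Overlapped moving-window variable generation followed by PCA compression.
import numpy as np

def moving_window_variables(chromatogram, width=50, step=25):
    starts = range(0, len(chromatogram) - width + 1, step)   # overlapped windows
    return np.array([chromatogram[s:s + width].sum() for s in starts])

def pca_scores(X, n_components=3):
    Xc = X - X.mean(axis=0)                                  # centre the variables
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T                          # compressed scores

rng = np.random.default_rng(2)
samples = rng.random((20, 1000))                             # 20 simulated chromatograms
X = np.vstack([moving_window_variables(s) for s in samples])
print(pca_scores(X).shape)                                   # (20, 3)
```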

Chromatography · General Chemical Engineering · General Engineering · Pattern recognition · Moving window · Linear discriminant analysis · Analytical Chemistry · Variable (computer science) · Window width · Principal component analysis · Range (statistics) · Flame ionization detector · Artificial intelligence · Data compression · Mathematics
Analytical Methods