Search results for "Data Compression"

Showing 10 of 99 documents

On parsing optimality for dictionary-based text compression—the Zip case

2013

Dictionary-based compression schemes have been the most commonly used data compression schemes since they appeared in the foundational 1977 paper of Ziv and Lempel, and are generally referred to as LZ77. Their work is the basis of Zip, gZip, 7-Zip and many other compression utilities. Some of these compression schemes use variants of the greedy approach to parse the text into dictionary phrases; others have abandoned the greedy approach to improve the compression ratio. Recently, two bit-optimal parsing algorithms have been presented, filling the gap between theory and best practice. We present a survey on the parsing problem for dictionary-based text compression, identifying noticeable results …
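As a rough illustration of the greedy approach the abstract mentions, the sketch below emits, at each position, the longest match found in a sliding window (a minimal toy, not the parsing used by Zip or the bit-optimal algorithms being surveyed):

```python
def lz77_greedy_parse(text, window=4096):
    """Greedy LZ77-style parse: at each position emit the longest
    match found in the sliding window, else a literal symbol."""
    i, phrases = 0, []
    while i < len(text):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            # matches may overlap the current position (self-reference)
            while (i + length < len(text)
                   and text[j + length] == text[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= 2:                 # copy phrase (offset, length)
            phrases.append((best_off, best_len))
            i += best_len
        else:                             # literal symbol
            phrases.append(text[i])
            i += 1
    return phrases

print(lz77_greedy_parse("abababab"))      # → ['a', 'b', (2, 6)]
```

Note the self-referential copy `(2, 6)`: the greedy match is allowed to run past the current position, a classic LZ77 feature.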

Keywords: dictionary-based text compression, parsing optimality, lossless compression, LZ77 algorithm, DEFLATE, compression ratio, data compression. Published in: Journal of Discrete Algorithms.

Dictionary-symbolwise flexible parsing

2012

Linear-time optimal parsing algorithms are rare in the dictionary-based branch of data compression theory. A recent result is the Flexible Parsing algorithm of Matias and Sahinalp (1999), which works when the dictionary is prefix-closed and the encoding of dictionary pointers has a constant cost. We present the Dictionary-Symbolwise Flexible Parsing algorithm, which is optimal for prefix-closed dictionaries and any symbolwise compressor under some natural hypotheses. In the case of LZ78-like algorithms with variable costs and any (linear, as usual) symbolwise compressor, we show how to implement our parsing algorithm in linear time. In the case of LZ77-like dictionaries and any symbol…
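The classical reduction behind optimal parsing can be sketched as a shortest path in a DAG whose nodes are text positions and whose edges are dictionary phrases weighted by their bit cost. The dictionary and cost function below are toy assumptions, not the paper's dictionary-symbolwise scheme:

```python
def optimal_parse(text, dictionary, cost):
    """Optimal parsing as a shortest path in a DAG: node i is a text
    position; each dictionary word w matching at i adds an edge
    i -> i + len(w) weighted by cost(w). Positions are already in
    topological order, so one forward sweep suffices."""
    n = len(text)
    dist = [float("inf")] * (n + 1)
    back = [None] * (n + 1)
    dist[0] = 0
    for i in range(n):
        if dist[i] == float("inf"):
            continue
        for w in dictionary:
            j = i + len(w)
            if text.startswith(w, i) and dist[i] + cost(w) < dist[j]:
                dist[j] = dist[i] + cost(w)
                back[j] = i
    # walk the back-pointers to recover the phrase boundaries
    phrases, i = [], n
    while i > 0:
        phrases.append(text[back[i]:i])
        i = back[i]
    return phrases[::-1], dist[n]

# toy dictionary with variable costs: 4 bits per literal, 6 per phrase
phrases, bits = optimal_parse("ababab", {"a", "b", "ab", "aba", "bab"},
                              lambda w: 4 if len(w) == 1 else 6)
print(phrases, bits)                      # → ['aba', 'bab'] 12
```

With variable costs the cheapest parse need not be the greedy one, which is exactly the gap the flexible-parsing line of work addresses.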

Keywords: optimal parsing, dictionary-based compression, symbolwise text compression, directed acyclic graphs, lossless compression, stringology, time complexity. Published in: Journal of Discrete Algorithms.

Text Compression Using Antidictionaries

1999

We give a new text compression scheme based on Forbidden Words ("antidictionary"). We prove that our algorithms attain the entropy for balanced binary sources. They run in linear time. Moreover, one of the main advantages of this approach is that it produces very fast decompressors. A second advantage is a synchronization property that is helpful to search compressed data and allows parallel compression. Our algorithms can also be presented as "compilers" that create compressors dedicated to any previously fixed source. The techniques used in this paper are from Information Theory and Finite Automata.

Keywords: data compression, information theory, forbidden words, finite automata, pattern matching, entropy, time complexity.

Kolmogorov superposition theorem for image compression

2012

The authors present a novel approach for image compression based on an unconventional representation of images. The proposed approach is different from most of the existing techniques in the literature because the compression is not directly performed on the image pixels, but is rather applied to an equivalent monovariate representation of the wavelet-transformed image. More precisely, the authors have considered an adaptation of Kolmogorov superposition theorem proposed by Igelnik and known as the Kolmogorov spline network (KSN), in which the image is approximated by sums and compositions of specific monovariate functions. Using this representation, the authors trad…

Keywords: image compression, Kolmogorov superposition theorem, splines, wavelet transform, JPEG 2000, data compression.

The Burrows-Wheeler Transform between Data Compression and Combinatorics on Words

2013

The Burrows-Wheeler Transform (BWT) is a tool of fundamental importance in Data Compression and, recently, has found many applications well beyond its original purpose. The main goal of this paper is to highlight the mathematical and combinatorial properties on which the outstanding versatility of the BWT is based, i.e. its reversibility and the clustering effect on the output. Such properties have aroused curiosity and fervent interest in the scientific world, both for their theoretical aspects and for their practical effects. In particular, in this paper we are interested both in surveying the theoretical research issues which, taking their cue from Data Compression, have been developed in the conte…
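The two properties named above, reversibility and the clustering effect, are easy to see on a toy implementation that sorts explicit rotations (quadratic in the input; practical implementations use suffix arrays):

```python
def bwt(s, sentinel="$"):
    """Burrows-Wheeler Transform: the last column of the sorted
    rotations of s + sentinel."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t, sentinel="$"):
    """Inverse BWT: repeatedly prepend the transform to the table and
    sort, rebuilding the rotation table column by column; the answer
    is the row ending with the sentinel."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(t[i] + table[i] for i in range(len(t)))
    return next(row for row in table if row.endswith(sentinel))[:-1]

print(bwt("banana"))                      # → "annb$aa"
print(ibwt("annb$aa"))                    # → "banana"
```

Equal symbols cluster in the output (`nn`, `aa`), which is what makes the BWT such an effective front end for move-to-front and run-length coding.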

Keywords: Burrows-Wheeler transform, clustering effect, combinatorial properties, combinatorics on words, data compression.

Mesh connectivity compression using convection reconstruction

2007

During a highly productive period running from 1995 to about 2002, research in lossless compression of 3D meshes mainly consisted in a hard battle for the best bitrates. For the past few years, however, compression rates have stabilized around 1.5 bits per vertex for the connectivity coding of usual meshes, and more and more work is dedicated to remeshing, lossy compression, or gigantic mesh compression, where memory and CPU optimizations are the new priority. However, the size of 3D models keeps growing, and many application fields keep requiring lossless compression. In this paper, we present a new contribution for single-rate lossless connectivity compression, which first …

Keywords: mesh connectivity compression, lossless compression, polygon meshes, Delaunay triangulation, computational geometry. Published in: Proceedings of the 2007 ACM symposium on Solid and physical modeling.

Compression of binary images based on covering

1995

The paper describes a new technique to compress binary images based on an image covering algorithm. The idea is that binary images can always be covered by rectangles, uniquely described by a vertex and two adjacent edges (L-shape). Some optimisations are necessary to handle degenerate configurations. The method has been tested on several images representing drawings and typed texts. The comparison with existing image file compression techniques shows that our approach performs well. Further optimisations are under development.
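A crude sketch of the covering idea follows, using plain axis-aligned rectangles grown greedily; the paper's actual method uses L-shapes (a vertex plus two adjacent edges) and further optimisations:

```python
def cover_rectangles(img):
    """Greedily cover the 1-pixels of a binary image with axis-aligned
    rectangles, each recorded as (row, col, height, width), i.e. a
    corner vertex plus two side lengths."""
    rows, cols = len(img), len(img[0])
    covered = [[False] * cols for _ in range(rows)]
    rects = []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 1 and not covered[r][c]:
                w = 1                     # grow rightwards first
                while c + w < cols and img[r][c + w] == 1 and not covered[r][c + w]:
                    w += 1
                h = 1                     # then grow downwards
                while (r + h < rows and
                       all(img[r + h][c + k] == 1 and not covered[r + h][c + k]
                           for k in range(w))):
                    h += 1
                for dr in range(h):       # mark the rectangle as covered
                    for dc in range(w):
                        covered[r + dr][c + dc] = True
                rects.append((r, c, h, w))
    return rects

img = [[1, 1, 0],
       [1, 1, 0],
       [0, 0, 1]]
print(cover_rectangles(img))              # → [(0, 0, 2, 2), (2, 2, 1, 1)]
```

Each rectangle costs a fixed number of coordinates regardless of area, so large solid regions (common in drawings and typed text) compress well.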

Keywords: binary images, image covering, image compression, medial axis, image file formats, data compression.

Maximum likelihood difference scaling of image quality in compression-degraded images.

2007

Lossy image compression techniques allow arbitrarily high compression rates, but at the price of poor image quality. We applied maximum likelihood difference scaling to evaluate image quality of nine images, each compressed via vector quantization to ten different levels, within two different color spaces, RGB and CIE 1976 L*a*b*. In L*a*b* space, images could be compressed on average by 32% more than in RGB space, with little additional loss in quality. Further compression led to marked perceptual changes. Our approach permits a rapid, direct measurement of the consequences of image compression for human observers.
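The vector-quantization step can be illustrated by its core operation: mapping each pixel vector to the index of its nearest codebook entry. The two-entry codebook below is a hypothetical example; real codebooks are trained, e.g. with LBG/k-means:

```python
def vector_quantize(vectors, codebook):
    """Map each vector to the index of its nearest codebook entry
    under squared Euclidean distance -- the core step of VQ
    compression (only indices are stored, not the vectors)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: d2(v, codebook[k]))
            for v in vectors]

# hypothetical black/white codebook applied to two RGB pixels
codebook = [(0, 0, 0), (255, 255, 255)]
print(vector_quantize([(10, 20, 5), (250, 240, 255)], codebook))  # → [0, 1]
```

The compression level in the study corresponds to the codebook size: fewer entries mean fewer bits per index but coarser reproduction.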

Keywords: image quality, image compression, vector quantization, color spaces, RGB color model, image processing.

Visualization of Memory Map Information in Embedded System Design

2018

Data compression is a common requirement for displaying large amounts of information. The goal is to reduce visual clutter. The approach given in this paper uses an analysis of a data set to construct a visual representation. The visualization is compressed using the address ranges of the memory structure. This method produces a compressed version of the initial visualization, retaining the same information as the original. The presented method has been implemented as a Memory Designer tool for ASIC, FPGA and embedded systems using IP-XACT. The Memory Designer is a user-friendly tool for model based embedded system design, providing access and adjustment of the memory layout from a single v…
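The address-range compression can be pictured as merging contiguous ranges into single visual blocks, so the memory map is drawn with fewer elements while losing no information. This is an illustrative sketch; the merging rule and names are assumptions, not the Memory Designer's actual algorithm:

```python
def merge_ranges(ranges):
    """Merge overlapping or contiguous (start, end) address ranges so
    a memory map can be drawn with fewer visual blocks."""
    out = []
    for start, end in sorted(ranges):
        # contiguous means the next range starts at most one past
        # the previous range's end
        if out and start <= out[-1][1] + 1:
            out[-1] = (out[-1][0], max(out[-1][1], end))
        else:
            out.append((start, end))
    return out

regions = [(0x4000, 0x4FFF), (0x0000, 0x0FFF), (0x1000, 0x1FFF)]
print(merge_ranges(regions))   # → [(0, 0x1FFF), (0x4000, 0x4FFF)]
```

Two adjacent 4 KiB regions collapse into one block, while the disjoint region at 0x4000 stays separate, so the compressed view retains the same information as the original.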

Keywords: data visualization, memory map, embedded system design, ASIC, FPGA. Published in: 2018 21st Euromicro Conference on Digital System Design (DSD).

Use of H.264 real-time video encoding to reduce display wall system bandwidth consumption

2015

This paper compares the DXT and JPEG image compression techniques used in the display wall solutions SAGE and DisplayCluster with the hardware-accelerated H.264 video encoding used in the display wall system developed by the authors of this paper. The processing-power usage and bandwidth measurements presented in this paper demonstrate that hardware-accelerated H.264 encoding offers multiple benefits over software-implemented H.264, DXT and JPEG.
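To see why raw display-wall streams need compression at all, compare the bandwidth of uncompressed frame data with a typical encoded bitrate. The figures below (30 fps, 24-bit RGB, a ~10 Mbit/s H.264 target) are illustrative assumptions, not measurements from the paper:

```python
def raw_bandwidth_mbps(width, height, fps, bits_per_pixel):
    """Raw (uncompressed) video bandwidth in Mbit/s."""
    return width * height * fps * bits_per_pixel / 1e6

# Uncompressed 1080p at 30 fps, 24-bit RGB:
raw = raw_bandwidth_mbps(1920, 1080, 30, 24)
print(raw)                                # → 1492.992 Mbit/s
# A hardware H.264 encoder might target on the order of 10 Mbit/s for
# the same stream: roughly two orders of magnitude less bandwidth.
```

Multiply this by the many tiles of a display wall and the raw numbers quickly exceed commodity network capacity, which is the motivation for comparing codecs in the first place.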

Keywords: video encoding, bandwidth, JPEG, transform coding, data compression. Published in: 2015 IEEE 3rd Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE).