Search results for "COD"

Showing 10 of 2985 documents

Boosting Textual Compression in Optimal Linear Time

2005

We provide a general boosting technique for Textual Data Compression. Qualitatively, it takes a good compression algorithm and turns it into an algorithm with a better compression performance guarantee. It displays the following remarkable properties: (a) it can turn any memoryless compressor into a compression algorithm that uses the “best possible” contexts; (b) it is very simple and optimal in terms of time; and (c) it admits a decompression algorithm again optimal in time. To the best of our knowledge, this is the first boosting technique displaying these properties. Technically, our boosting technique builds upon three main ingredients: the Burrows–Wheeler Transform, the Suffix Tree d…
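The first ingredient named above, the Burrows–Wheeler Transform, can be sketched in a few lines. This is an illustrative, naive sorted-rotations version with a `$` sentinel; it is quadratic-time, not the suffix-tree-based linear-time construction the paper builds on:

```python
def bwt(s: str) -> str:
    """Burrows-Wheeler Transform via sorted rotations.

    Appends a sentinel '$' (assumed smaller than every character of s),
    sorts all cyclic rotations, and returns the last column.
    """
    s = s + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)
```

For example, `bwt("banana")` yields `"annb$aa"`: the transform groups characters with similar right-contexts, which is what makes it useful as a front end for context-based compressors.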

Theoretical computer science; Burrows–Wheeler transform; Suffix tree; String (computer science); Coding and Information Theory; Substring; Arithmetic coding; Lempel–Ziv compressors; Artificial Intelligence; Hardware and Architecture; Control and Systems Engineering; text compression; empirical entropy; Greedy algorithm; Time complexity; Algorithm; Software; Information Systems; Mathematics; Data compression
researchProduct

OpenCMISS: A multi-physics & multi-scale computational infrastructure for the VPH/Physiome project

2011

The VPH/Physiome Project is developing the model encoding standards CellML (cellml.org) and FieldML (fieldml.org) as well as web-accessible model repositories based on these standards (models.physiome.org). Freely available open source computational modelling software is also being developed to solve the partial differential equations described by the models and to visualise results. The OpenCMISS code (opencmiss.org), described here, has been developed by the authors over the last six years to replace the CMISS code that has supported a number of organ system Physiome projects. OpenCMISS is designed to encompass multiple sets of physical equations and to link subcellular and tissue-level b…

Theoretical computer science; Computer science; 0206 medical engineering; Biophysics; 02 engineering and technology; Models, Biological; Biophysical Phenomena; Domain (software engineering); Computational science; 03 medical and health sciences; Software; Encoding (memory); Humans; Computer Simulation; Molecular Biology; Physiological Phenomena; 030304 developmental biology; 0303 health sciences; CellML; Data structure; 020601 biomedical engineering; Elasticity; Finite element method; Electrophysiological Phenomena; Physiome; Flow (mathematics); Progress in Biophysics and Molecular Biology

A study on graph representations for genetic programming

2020

Graph representations promise several desirable properties for Genetic Programming (GP); multiple-output programs, natural representations of code reuse and, in many cases, an innate mechanism for neutral drift. Each graph GP technique provides a program representation, genetic operators and overarching evolutionary algorithm. This makes it difficult to identify the individual causes of empirical differences, both between these methods and in comparison to traditional GP. In this work, we empirically study the behavior of Cartesian Genetic Programming (CGP), Linear Genetic Programming (LGP), Evolving Graphs by Graph Programming (EGGP) and traditional GP. By fixing some aspects of the config…
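For readers unfamiliar with the representations compared above, a minimal sketch of how a Cartesian GP genotype might be evaluated follows. The `(function_index, src_a, src_b)` gene layout, the binary-function assumption, and the single-output convention are illustrative simplifications, not the paper's exact encoding:

```python
def eval_cgp(genotype, inputs, functions):
    """Evaluate a (simplified) Cartesian GP genotype.

    Each gene is (function_index, src_a, src_b); sources index either
    program inputs or earlier nodes, so the genotype forms a DAG.
    The last node's value is taken as the single program output.
    """
    values = list(inputs)                     # input nodes come first
    for f_idx, a, b in genotype:
        values.append(functions[f_idx](values[a], values[b]))
    return values[-1]

# Hypothetical function set: add, subtract, multiply.
FUNCS = [lambda x, y: x + y, lambda x, y: x - y, lambda x, y: x * y]
```

With inputs `(3, 4)` and genotype `[(0, 0, 1), (2, 2, 2)]`, node 2 computes `3 + 4 = 7` and node 3 computes `7 * 7 = 49`, illustrating both multiple reuse of a node and the innate code-reuse structure the abstract mentions.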

Theoretical computer science; Computer science; Code reuse; Evolutionary algorithm; Genetic programming; 0102 computer and information sciences; 02 engineering and technology; Genetic operator; 01 natural sciences; Graph; Operator (computer programming); 010201 computation theory & mathematics; Problem domain; Linear genetic programming; 0202 electrical engineering, electronic engineering, information engineering; 020201 artificial intelligence & image processing; Proceedings of the 2020 Genetic and Evolutionary Computation Conference

On parsing optimality for dictionary-based text compression—the Zip case

2013

Dictionary-based compression schemes have been the most commonly used data compression schemes since they appeared in the foundational 1977 paper of Ziv and Lempel, and are generally referred to as LZ77. Their work is the basis of Zip, gZip, 7-Zip and many other compression software utilities. Some of these compression schemes use variants of the greedy approach to parse the text into dictionary phrases; others have moved beyond the greedy approach to improve the compression ratio. Recently, two bit-optimal parsing algorithms have been presented, filling the gap between theory and best practice. We present a survey on the parsing problem for dictionary-based text compression, identifying noticeable results …
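The greedy approach mentioned above can be sketched as a toy LZ77-style parser. The window size, the minimum match length of 2, and the `(offset, length)` phrase format are illustrative choices for exposition, not the actual Zip/DEFLATE format:

```python
def lz77_greedy(text, window=255):
    """Greedy LZ77-style parse: at each position, emit the longest
    match (offset, length) found in the sliding window, or a literal
    character when no match of length >= 2 exists."""
    i, phrases = 0, []
    while i < len(text):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            # Overlapping (self-referential) matches are allowed,
            # as in real LZ77.
            while i + k < len(text) and text[j + k] == text[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= 2:
            phrases.append((best_off, best_len))
            i += best_len
        else:
            phrases.append(text[i])
            i += 1
    return phrases
```

On `"abababab"` this yields `['a', 'b', (2, 6)]`: after two literals, one overlapping back-reference covers the remaining six characters. Greedy parses like this are fast but, as the survey discusses, not always bit-optimal.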

Theoretical computer science; Computer science; Coding and Information Theory; Top-down parsing; Theoretical Computer Science; Parsing optimality; Compression (functional analysis); Discrete Mathematics and Combinatorics; Lossless compression; Parsing; LZ77 algorithm; Settore INF/01 - Informatica; Deflate algorithm; Dictionary-based text compression; Computational Theory and Mathematics; Data compression; DEFLATE; Compression ratio; Artificial intelligence; Natural language processing; Bottom-up parsing; Journal of Discrete Algorithms

On the role of non-effective code in linear genetic programming

2019

In linear variants of Genetic Programming (GP) like linear genetic programming (LGP), structural introns can emerge, which are nodes that are not connected to the final output and do not contribute to the output of a program. There are claims that such non-effective code is beneficial for search, as it can store relevant and important evolved information that can be reactivated in later search phases. Furthermore, introns can increase diversity, which leads to higher GP performance. This paper studies the role of non-effective code by comparing the performance of LGP variants that deal differently with non-effective code for standard symbolic regression problems. As we find no decrease in p…
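Structural introns as described above can be detected with a single backward sweep over the program. The `(dest_reg, src_regs)` instruction encoding and the single-output-register convention below are simplifying assumptions for illustration:

```python
def effective_instructions(program, output_reg=0):
    """Return the structurally effective instructions of a linear GP
    program; everything else is a structural intron.

    Each instruction is (dest_reg, src_regs). Sweeping backwards, an
    instruction is effective iff its destination register is still
    needed; its sources then become needed in turn.
    """
    needed = {output_reg}
    effective = []
    for dest, srcs in reversed(program):
        if dest in needed:
            effective.append((dest, srcs))
            needed.discard(dest)      # this write satisfies the need
            needed.update(srcs)       # ...but its inputs are now needed
    effective.reverse()
    return effective
```

In the program `[(1, (2, 3)), (0, (1, 4)), (2, (5, 6)), (0, (0, 7))]`, the instruction writing register 2 never reaches the output, so it is a structural intron and is dropped; the other three instructions form the effective code.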

Theoretical computer science; Computer science; Intron; Contrast (statistics); Genetic programming; 0102 computer and information sciences; 02 engineering and technology; 01 natural sciences; 010201 computation theory & mathematics; Linear genetic programming; 0202 electrical engineering, electronic engineering, information engineering; Code (cryptography); 020201 artificial intelligence & image processing; Symbolic regression; Proceedings of the Genetic and Evolutionary Computation Conference

Dictionary-symbolwise flexible parsing

2012

Linear-time optimal parsing algorithms are rare in the dictionary-based branch of data compression theory. A recent result is the Flexible Parsing algorithm of Matias and Sahinalp (1999), which works when the dictionary is prefix-closed and the encoding of dictionary pointers has a constant cost. We present the Dictionary-Symbolwise Flexible Parsing algorithm, which is optimal for prefix-closed dictionaries and any symbolwise compressor under some natural hypotheses. In the case of LZ78-like algorithms with variable costs and any (as usual, linear-time) symbolwise compressor, we show how to implement our parsing algorithm in linear time. In the case of LZ77-like dictionaries and any symbol…
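Optimal parsing problems of this kind are conventionally viewed as a shortest path in a DAG whose nodes are text positions and whose edges are dictionary phrases. The following naive dynamic-programming sketch illustrates that view; the explicit phrase list and cost function are illustrative assumptions, and this quadratic scan is not the paper's linear-time construction:

```python
def optimal_parse(text, phrases, cost):
    """Minimum-cost parse of text as a shortest path in the parse DAG.

    Node i is text position i; for every dictionary phrase p matching
    at i there is an edge i -> i + len(p) of weight cost(p). A single
    left-to-right DP computes the cheapest path from 0 to len(text).
    """
    n = len(text)
    INF = float("inf")
    best = [0.0] + [INF] * n     # best[i]: min cost to encode text[:i]
    choice = [None] * (n + 1)    # phrase used to reach position i
    for i in range(n):
        if best[i] == INF:
            continue
        for p in phrases:
            # startswith with an offset also rejects matches that
            # would run past the end of the text.
            if text.startswith(p, i) and best[i] + cost(p) < best[i + len(p)]:
                best[i + len(p)] = best[i] + cost(p)
                choice[i + len(p)] = p
    parse, j = [], n             # walk back to recover the parse
    while j > 0:
        parse.append(choice[j])
        j -= len(choice[j])
    return best[n], parse[::-1]
```

With a flat cost of one codeword per phrase, parsing `"abab"` against the dictionary `["a", "b", "ab", "abab"]` correctly prefers the single phrase `"abab"` over any greedy multi-phrase alternative.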

Theoretical computer science; Computer science; [INFO.INFO-DS] Computer Science [cs]/Data Structures and Algorithms [cs.DS]; Coding and Information Theory; 0102 computer and information sciences; 02 engineering and technology; 01 natural sciences; Directed acyclic graph; Theoretical Computer Science; Constant (computer programming); 020204 information systems; Encoding (memory); Optimal parsing; 0202 electrical engineering, electronic engineering, information engineering; Discrete Mathematics and Combinatorics; Stringology; Symbolwise text compression; Time complexity; Lossless compression; Parsing; Settore INF/01 - Informatica; Dictionary-based compression; Prefix; Computational Theory and Mathematics; Text compression; 010201 computation theory & mathematics; Algorithm; Bottom-up parsing; Data compression; Journal of Discrete Algorithms

Text Compression Using Antidictionaries

1999

We give a new text compression scheme based on Forbidden Words ("antidictionary"). We prove that our algorithms attain the entropy for balanced binary sources, and they run in linear time. Moreover, one of the main advantages of this approach is that it produces very fast decompressors. A second advantage is a synchronization property that is helpful for searching compressed data and allows parallel compression. Our algorithms can also be presented as "compilers" that create compressors dedicated to any previously fixed source. The techniques used in this paper come from Information Theory and Finite Automata.
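The core idea can be sketched for a binary source: whenever one of the two possible next bits would create a forbidden word, the actual bit is fully determined, so it need not be transmitted. This toy compressor illustrates only that erasure step (a suffix scan over a plain Python set stands in for the finite automaton the paper uses, and the decoder side is omitted):

```python
def compress_with_antidictionary(text, antidict):
    """Drop the predictable bits of a binary string.

    A bit is erased when flipping it would create a factor from the
    antidictionary: the decoder, knowing the same forbidden words,
    can reconstruct the erased bit for free.
    """
    out = []
    for i, bit in enumerate(text):
        other = "1" if bit == "0" else "0"
        # Would the complementary bit end in a forbidden word?
        flipped_context = text[:i] + other
        forced = any(flipped_context.endswith(w) for w in antidict)
        if not forced:
            out.append(bit)
    return "".join(out)
```

With the antidictionary `{"11"}` (no two consecutive ones), `"0101010"` compresses to `"0111"`: every `0` that follows a `1` is forced and vanishes, which hints at why sources well described by forbidden words compress well under this scheme.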

Theoretical computer science; Finite-state machine; Computer science; [INFO.INFO-DS] Computer Science [cs]/Data Structures and Algorithms [cs.DS]; 010102 general mathematics; forbidden word; Coding and Information Theory; 0102 computer and information sciences; Information theory; 01 natural sciences; finite automaton; Parallel compression; pattern matching; 010201 computation theory & mathematics; Entropy (information theory); 0101 mathematics; Time complexity; Algorithm; Data compression

Analog Multiple Description Joint Source-Channel Coding Based on Lattice Scaling

2015

Joint source-channel coding schemes based on analog mappings for point-to-point channels have recently gained attention for their simplicity and low delay. In this paper, these schemes are extended to scenarios both with and without side information at the decoders, in order to transmit multiple descriptions of a Gaussian source over independent parallel channels. They are based on a lattice scaling approach together with bandwidth-reduction analog mappings adapted to this multiple-description scenario. The rationale behind lattice scaling is to improve performance through bandwidth expansion. Another important contribution of this paper is the proof of the separation theorem for the communication …

Theoretical computer science; Gaussian; Bandwidth (signal processing); Coding and Information Theory; Topology; Additive white Gaussian noise; Bandwidth expansion; Signal Processing; Separation theorem; Electrical and Electronic Engineering; Scaling; Decoding methods; Information Theory; Mathematics; IEEE Transactions on Signal Processing

Code Interoperability and Standard Data Formats in Quantum Chemistry and Quantum Dynamics: The Q5/Q5cost Data Model

2014

Code interoperability and the search for domain-specific standard data formats represent critical issues in many areas of computational science. The advent of novel computing infrastructures such as computational grids and clouds makes these issues even more urgent. The design and implementation of a common data format for quantum chemistry (QC) and quantum dynamics (QD) computer programs is discussed with reference to the research performed in the course of two Collaboration in Science and Technology Actions. The specific data models adopted, Q5Cost and D5Cost, are shown to work for a number of interoperating codes, regardless of the type and amount of information (small or large datasets) …

Theoretical computer science; Grid computing; Computer science; Distributed computing; Interoperability; 010402 general chemistry; 01 natural sciences; Data type; Data modeling; Quantum chemistry; Quantum dynamics; Code interoperability; 0103 physical sciences; Program interoperability; Common Data Format; 010304 chemical physics; General Chemistry; Grid; Data format; 0104 chemical sciences; [CHIM.THEO] Chemical Sciences/Theoretical and/or physical chemistry; Computational Mathematics; Data model; Proof of concept; J. Comput. Chem.

Kolmogorov superposition theorem for image compression

2012

The authors present a novel approach to image compression based on an unconventional representation of images. The proposed approach differs from most existing techniques in the literature because the compression is not performed directly on the image pixels, but is instead applied to an equivalent monovariate representation of the wavelet-transformed image. More precisely, the authors consider an adaptation of the Kolmogorov superposition theorem proposed by Igelnik and known as the Kolmogorov spline network (KSN), in which the image is approximated by sums and compositions of specific monovariate functions. Using this representation, the authors trad…

Theoretical computer science; Image compression; Image Processing and Computer Vision; 02 engineering and technology; Superposition theorem; E.4 Coding and Information Theory; 01 natural sciences; Wavelet; [INFO.INFO-TI] Computer Science [cs]/Image Processing [eess.IV]; 0202 electrical engineering, electronic engineering, information engineering; 0101 mathematics; Electrical and Electronic Engineering; Mathematics; Pixel; 010102 general mathematics; Wavelet transform; Spline (mathematics); Signal Processing; JPEG 2000; Kolmogorov superposition theorem; 020201 artificial intelligence & image processing; Computer Vision and Pattern Recognition; Algorithm; Software; Data compression