Search results for "data compression"

Showing 10 of 99 documents

Data Compression Using Wavelet and Local Cosine Transforms

2015

The chapter describes an algorithm that compresses two-dimensional data arrays which are piecewise smooth in one direction and have oscillating events in the other. Seismic, hyperspectral and fingerprint data, for example, have such a mixed structure. The transform part of the compression process is an algorithm that combines the wavelet transform with the local cosine transform (LCT). The quantization and entropy coding parts of the compression are taken from the SPIHT codec. To apply the SPIHT codec efficiently to a mixed coefficients array, the LCT coefficients are reordered. On data arrays with this mixed structure, the algorithm outperforms other algorithms that a…

Keywords: Discrete wavelet transform; Wavelet transform; Wavelet; Wavelet packet decomposition; Set partitioning in hierarchical trees; Discrete cosine transform; JPEG 2000; Image processing and computer vision; Coding and information theory; Algorithm; Computer science; Data compression
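A minimal sketch of the mixed-transform idea described above, assuming hypothetical 2-D input and using a plain DCT as a stand-in for the local cosine transform; the SPIHT coding and coefficient reordering are not reproduced (requires numpy, pywt, scipy):

```python
# Sketch: 1-D wavelet transform along the piecewise-smooth axis,
# DCT along the oscillatory axis (stand-in for the LCT).
import numpy as np
import pywt
from scipy.fft import dct

def mixed_transform(data, wavelet="db2"):
    """data: 2-D array, smooth along axis 0, oscillatory along axis 1."""
    # Single-level discrete wavelet transform along the smooth direction.
    approx, detail = pywt.dwt(data, wavelet, axis=0)
    # Cosine transform along the oscillatory direction.
    approx = dct(approx, type=2, norm="ortho", axis=1)
    detail = dct(detail, type=2, norm="ortho", axis=1)
    return approx, detail

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = np.cumsum(rng.standard_normal((64, 64)), axis=0)  # smooth in axis 0
    a, d = mixed_transform(demo)
    print(a.shape, d.shape)
```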

The effect of wavelet and discrete cosine transform compression of digital radiographs on the detection of subtle proximal caries. ROC analysis.

2007

The study compared the diagnostic performance of two image compression methods, JPEG (discrete cosine transform; Joint Photographic Experts Group standard) and JPEG2000 (discrete wavelet transform), both at a compression ratio of 12:1 relative to the original uncompressed TIFF radiograph, with respect to the detection of non-cavitated carious lesions. To this end, 100 approximal surfaces of 50 tooth pairs were evaluated on the radiographs by 10 experienced observers using a 5-point confidence scale. Observations were carried out on a standardized viewing monitor under subdued light conditions. The proportion of diseased surfaces was balanced to approximately 50% to avoid bias. Tr…

Keywords: Discrete wavelet transform; Discrete cosine transform; Wavelet; Transform coding; Lossless JPEG; JPEG; JPEG 2000; Image compression; Data compression; ROC curve; Sensitivity and specificity; Observer variation; Differential diagnosis; Dental caries; Caries research; Dental enamel; Dentin; Microscopy; Microtomy; Digital dental radiography; Humans; General dentistry; Pattern recognition; Artificial intelligence; Multimedia; Mathematics; Algorithms
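A minimal sketch of the ROC analysis step only, with purely hypothetical confidence ratings and ground-truth labels; the clinical data and the compression itself are not reproduced (requires scikit-learn):

```python
# Sketch: ROC AUC from 5-point observer confidence ratings.
from sklearn.metrics import roc_auc_score

# Hypothetical data: 1 = carious surface, 0 = sound surface.
truth = [1, 0, 1, 1, 0, 0, 1, 0]
# Observer ratings on a 5-point scale (5 = definitely carious).
scores_jpeg = [4, 2, 5, 3, 1, 2, 4, 3]
scores_jp2k = [5, 1, 4, 4, 2, 1, 5, 2]

print("JPEG AUC:    ", roc_auc_score(truth, scores_jpeg))
print("JPEG2000 AUC:", roc_auc_score(truth, scores_jp2k))
```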

Compression Methods for Microclimate Data Based on Linear Approximation of Sensor Data

2019

Edge computing is currently one of the main research topics in the field of the Internet of Things. Edge computing requires lightweight and computationally simple algorithms for sensor data analytics. Sensing edge devices are often battery powered and have a wireless connection. In designing edge devices, energy efficiency needs to be taken into account. Pre-processing the data locally in the edge device reduces the amount of data and thus decreases the energy consumption of wireless data transmission. The sensor data compression algorithms presented in this paper are mainly based on data linearity. Microclimate data is nearly linear within a short time window, and thus simple linear approximation based …

Keywords: Edge computing; Edge device; Internet of things; Sensor networks; Wireless network; Energy consumption; Energy efficiency; Compression algorithm; Linear approximation; Data compression; Algorithms; Electronic engineering; Computer science
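A minimal sketch of one simple error-bounded linear-approximation scheme of the kind described above; this is an illustrative greedy variant with a hypothetical tolerance eps, not necessarily the paper's exact algorithm:

```python
# Sketch: keep only samples that deviate from the local linear trend by
# more than eps; dropped samples are reconstructed by interpolating
# between the kept anchor points.
def compress_linear(samples, eps=0.1):
    """samples: list of (timestamp, value); returns the kept anchor points."""
    if len(samples) <= 2:
        return list(samples)
    kept = [samples[0]]
    anchor = samples[0]
    for i in range(1, len(samples) - 1):
        t0, v0 = anchor
        t1, v1 = samples[i + 1]
        t, v = samples[i]
        # Value predicted by the line from the last kept point to the next sample.
        predicted = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        if abs(v - predicted) > eps:
            kept.append(samples[i])
            anchor = samples[i]
    kept.append(samples[-1])
    return kept

readings = [(0, 20.0), (1, 20.1), (2, 20.2), (3, 21.5), (4, 21.6), (5, 21.7)]
print(compress_linear(readings, eps=0.2))  # drops the near-linear interior points
```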

Multimedia applications in industrial networks: integration of image processing in profibus

2003

This paper analyzes the transmission of images over Profibus for image-processing applications integrated in control systems, through a detailed study of two real cases with different bandwidth requirements. We analyze the special features of an artificial vision system that uses Profibus as its transport system, the scheduling of traffic so that both the images and the control traffic are delivered before their deadlines, the protocol used, and the compression techniques usable in this kind of system, which reduce the bandwidth the applications need without degrading the operation of the image processing application. We p…

Keywords: Profibus; Machine vision; Image processing; Real-time computing; Scheduling (computing); Control system; Automation; Control and systems engineering; Electrical and electronic engineering; Data compression; Engineering
Published in: IEEE Transactions on Industrial Electronics
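A minimal worked sketch of the kind of deadline check discussed above, with purely hypothetical numbers for image size, compression ratio, usable bandwidth and deadline (not figures from the paper):

```python
# Sketch: does a compressed image fit into its delivery deadline?
image_bits = 640 * 480 * 8          # hypothetical raw greyscale image
compression_ratio = 10              # hypothetical compression factor
usable_bandwidth_bps = 500_000      # hypothetical share left for image traffic
deadline_s = 1.0                    # hypothetical delivery deadline

transfer_time = (image_bits / compression_ratio) / usable_bandwidth_bps
print(f"transfer time: {transfer_time:.3f} s, "
      f"meets deadline: {transfer_time <= deadline_s}")
```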

Efficiency improvement of DC* through a Genetic Guidance

2017

DC∗ is a method for generating interpretable fuzzy information granules from pre-classified data. It is based on the sequential application of LVQ1 for data compression and an ad hoc procedure based on A∗ to represent the data with the minimum number of fuzzy information granules satisfying some interpretability constraints. Although DC∗ is efficient in tackling several problems, the A∗ procedure it includes can require long computation times, because the A∗ algorithm has exponential time complexity in the worst case. In this paper, we approach the problem of driving the search process of A∗ by suggesting a close-to-optimal solution that is produced through a Genetic Algorithm (GA). E…

Keywords: Fuzzy logic; Genetic algorithm; Interpretability; Data compression; Exponential complexity; Mathematical optimization; Computation; Process (computing); Algorithm; Mathematics
Published in: 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)
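A minimal sketch of the LVQ1 compression step mentioned above, with hypothetical prototypes and learning rate; the A∗ granulation and the genetic guidance are not reproduced (requires numpy):

```python
# Sketch: one LVQ1 update. The winning prototype moves toward a sample of
# the same class and away from a sample of a different class.
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.05):
    """prototypes: (k, d) array; proto_labels: (k,); x: (d,); y: class label."""
    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    sign = 1.0 if proto_labels[winner] == y else -1.0
    prototypes[winner] += sign * lr * (x - prototypes[winner])
    return prototypes

protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
protos = lvq1_step(protos, labels, np.array([0.2, 0.1]), y=0)
print(protos)  # prototype 0 has moved slightly toward the sample
```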

P2D: a self-supervised method for depth estimation from polarimetry

2021

Monocular depth estimation is a recurring subject in the field of computer vision. Its ability to describe scenes via a depth map while reducing the constraints related to the formulation of perspective geometry tends to favor its use. However, despite the constant improvement of algorithms, most methods exploit only colorimetric information. Consequently, robustness to events to which the modality is not sensitive, such as specularity or transparency, is neglected. In response to this phenomenon, we propose using polarimetry as an input for a self-supervised monodepth network. Therefore, we propose exploiting polarization cues to encourage accurate reconstruction of scenes. Furthermore, we…

Keywords: Polarimetry; Monocular; Depth map; Specularity; Transparency; Robustness (computer science); Regularization (mathematics); Computer vision; Computer Vision and Pattern Recognition (cs.CV); Image processing and computer vision; Artificial intelligence; Computer and information sciences
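A minimal sketch of the standard polarization cues that such a network can exploit, assuming hypothetical intensity images captured behind linear polarizers at 0°, 45°, 90° and 135°; the self-supervised depth network itself is not reproduced (requires numpy):

```python
# Sketch: degree and angle of linear polarization from four polarizer images.
import numpy as np

def polarization_cues(i0, i45, i90, i135):
    """Each input is an intensity image taken behind a linear polarizer."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)                     # total intensity (Stokes S0)
    s1 = i0 - i90                                          # Stokes S1
    s2 = i45 - i135                                        # Stokes S2
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                        # angle of linear polarization
    return dolp, aolp

rng = np.random.default_rng(0)
imgs = [rng.uniform(0.1, 1.0, (4, 4)) for _ in range(4)]
dolp, aolp = polarization_cues(*imgs)
print(dolp.shape, aolp.shape)
```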

Novel Results on the Number of Runs of the Burrows-Wheeler-Transform

2021

The Burrows-Wheeler-Transform (BWT), a reversible string transformation, is one of the fundamental components of many current data structures in string processing. It is central in data compression, as well as in efficient query algorithms for sequence data, such as webpages, genomic and other biological sequences, or indeed any textual data. The BWT lends itself well to compression because its number of equal-letter-runs (usually referred to as $r$) is often considerably lower than that of the original string; in particular, it is well suited for strings with many repeated factors. In fact, much attention has been paid to the $r$ parameter as a measure of repetitiveness, especially to evalua…

Keywords: Burrows-Wheeler transform; Combinatorics on words; Repetitiveness; Compressed data structures; String indexing; Search engine indexing; String (computer science); Data structure; Measure (mathematics); Formal Languages and Automata Theory (cs.FL); Data Structures and Algorithms (cs.DS); Algorithm; Computer science; Data compression
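A minimal sketch of the parameter $r$ discussed above, using a naive rotation-sorting BWT that is fine for short strings but is not the construction studied in the paper:

```python
# Sketch: Burrows-Wheeler transform via sorted rotations, then count
# the number of equal-letter runs r.
def bwt(s):
    s = s + "$"                       # unique end-of-string sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def runs(s):
    return 1 + sum(1 for a, b in zip(s, s[1:]) if a != b)

text = "banana"
transformed = bwt(text)
print(transformed, "r =", runs(transformed))   # annb$aa  r = 5
```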

The rightmost equal-cost position problem.

2013

LZ77-based compression schemes compress the input text by replacing factors in the text with an encoded reference to a previous occurrence, given as the pair (length, offset). For a given factor, the smaller the offset, the smaller the resulting compression ratio. This is optimally achieved by using the rightmost occurrence of a factor in the previous text. Given a cost function, for instance the minimum number of bits used to represent an integer, we define the Rightmost Equal-Cost Position (REP) problem as the problem of finding one of the occurrences of a factor whose cost is equal to the cost of the rightmost one. We present the Multi-Layer Suffix Tree data structure that, for…

Keywords: LZ77 compression; Dictionary text compression; Text compression; Lossless compression; Compression scheme; Compression ratio; Offset (computer science); Suffix tree; Suffix tree data structures; Full-text index; Data structure; Prefix; Constant time; Log-log plot; Combinatorics; Information Theory (cs.IT); Data Structures and Algorithms (cs.DS); Computer science; Data compression
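A minimal sketch of the rightmost-occurrence idea described above, using a naive quadratic scan; the paper's Multi-Layer Suffix Tree is what makes this efficient, and it is not reproduced here:

```python
# Sketch: LZ77-style factorization that references the rightmost previous
# occurrence of each factor, so the offset (and hence its bit cost) is minimal.
def factorize_rightmost(text):
    factors, i = [], 0
    while i < len(text):
        length = 0
        # Greedily extend the factor while it still occurs in text[:i].
        while i + length < len(text) and text[:i].find(text[i:i + length + 1]) != -1:
            length += 1
        if length == 0:
            factors.append(("literal", text[i]))
            i += 1
        else:
            pos = text[:i].rfind(text[i:i + length])   # rightmost occurrence
            factors.append((length, i - pos))          # (length, offset)
            i += length
    return factors

# The second (3, 3) factor references the copy at position 3, not position 0,
# keeping the offset small.
print(factorize_rightmost("abcabcabc"))
```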

Constructing Antidictionaries in Output-Sensitive Space

2021

A word $x$ that is absent from a word $y$ is called minimal if all its proper factors occur in $y$. Given a collection of $k$ words $y_1,y_2,\ldots,y_k$ over an alphabet $\Sigma$, we are asked to compute the set $\mathrm{M}^{\ell}_{y_{1}\#\ldots\#y_{k}}$ of minimal absent words of length at most $\ell$ of the word $y=y_1\#y_2\#\ldots\#y_k$, $\#\notin\Sigma$. In data compression, this corresponds to computing the antidictionary of $k$ documents. In bioinformatics, it corresponds to computing words that are absent from a genome of $k$ chromosomes. This computation generally requires $\Omega(n)$ space for $n=|y|$ using any of the many available $\mathcal{O}(n)$-time algorithms. This is because a…

Keywords: Antidictionaries; Absent words; Output-sensitive algorithms; String algorithms; Combinatorics; Alphabet; Word (group theory); Space (mathematics); Data Structures and Algorithms (cs.DS); Analysis of algorithms and problem complexity; Data compression
Published in: 2019 Data Compression Conference (DCC)
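A minimal sketch of the notion of minimal absent words used above, by naive enumeration for a small length bound; the paper's contribution is computing them in output-sensitive space, which this sketch does not attempt:

```python
# Sketch: a word w = a·u·b is a minimal absent word of y if w does not
# occur in y but both its proper factors a·u and u·b do.
from itertools import product

def minimal_absent_words(y, alphabet, max_len):
    factors = {y[i:j] for i in range(len(y))
               for j in range(i, min(i + max_len, len(y)) + 1)}
    maws = []
    for length in range(2, max_len + 1):
        for w in map("".join, product(alphabet, repeat=length)):
            if w not in factors and w[:-1] in factors and w[1:] in factors:
                maws.append(w)
    return maws

print(minimal_absent_words("abaab", "ab", max_len=4))  # ['bb', 'aaa', 'bab', 'aaba']
```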

Large-scale compression of genomic sequence databases with the Burrows-Wheeler transform

2012

Motivation: The Burrows-Wheeler transform (BWT) is the foundation of many algorithms for compression and indexing of text data, but the cost of computing the BWT of very large string collections has prevented these techniques from being widely applied to the large sets of sequences often encountered as the outcome of DNA sequencing experiments. In previous work, we presented a novel algorithm that allows the BWT of human genome scale data to be computed on very moderate hardware, thus enabling us to investigate the BWT as a tool for the compression of such datasets. Results: We first used simulated reads to explore the relationship between the level of compression and the error rate, the leng…

Keywords: Burrows-Wheeler transform; Data compression; Next-generation sequencing; Genomics (q-bio.GN); Sequence analysis, DNA; Genome, Human; Escherichia coli; Humans; Databases, Nucleic Acid; String (computer science); Sorting; Search engine indexing; Data mining; Computer simulation; Overhead (computing); Coding and information theory; Data Structures and Algorithms (cs.DS); Statistics and probability; Biochemistry; Molecular biology; Computational mathematics; Computer science applications; Algorithms
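A minimal sketch of the collection BWT underlying the work above, concatenating reads with a sentinel and run-length encoding the transform; the paper's point is doing this at genome scale on modest hardware, which this naive version does not address:

```python
# Sketch: BWT of a sentinel-separated read collection, then run-length
# encode it; the equal-letter runs are what compress well for similar reads.
def collection_bwt(reads):
    text = "$".join(reads) + "$"          # one '$' terminates each read
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def run_length_encode(s):
    runs = []
    for c in s:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1
        else:
            runs.append([c, 1])
    return [(c, n) for c, n in runs]

reads = ["ACGT", "ACGT", "ACGA"]          # hypothetical, highly similar reads
transformed = collection_bwt(reads)
print(transformed, run_length_encode(transformed))
```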