Search results for "Image compression"

Showing 10 of 53 documents

The impact of irreversible image data compression on post-processing algorithms in computed tomography

2020

PURPOSE: We aimed to evaluate the influence of irreversible image compression at varying levels on image post-processing algorithms (3D volume rendering of angiographs, computer-assisted detection of lung nodules, segmentation and volumetry of liver lesions, and automated evaluation of functional cardiac imaging) in computed tomography (CT). METHODS: Uncompressed CT image data (30 angiographs of the lower limbs, 38 lung exams, 20 liver exams and 30 cardiac exams) were anonymized and subsequently compressed using the JPEG2000 algorithm with compression ratios of 8:1, 10:1, and 15:1. Volume renderings of CT angiographies obtained from compressed and uncompressed data were compared using objec…
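
The study compares renderings obtained from compressed and uncompressed data using objective metrics (the list is truncated above). As a minimal illustration of one common objective image metric, here is a plain-Python PSNR sketch; the pixel values below are invented for demonstration and are not from the study:

```python
import math

def psnr(original, compressed, max_val=255):
    """Peak signal-to-noise ratio between two equally sized 8-bit images,
    given as flat lists of pixel values. Higher PSNR means less distortion."""
    if len(original) != len(compressed):
        raise ValueError("images must have the same number of pixels")
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Toy example: mild distortion, as might result from irreversible compression
orig = [100, 120, 130, 140]
comp = [101, 119, 131, 139]
print(round(psnr(orig, comp), 2))
```

At higher compression ratios the MSE grows and the PSNR drops, which is how the distortion introduced by 8:1 vs. 15:1 compression can be quantified.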

Keywords: Image processing; Nuclear medicine & medical imaging; Medical and health sciences; Clinical medicine; General Radiology; Image Processing, Computer-Assisted; Humans; Medicine; Radiology, Nuclear Medicine and Imaging; Lung; Cardiac imaging; Retrospective Studies; Reproducibility of Results; Extremities; Heart; Volume rendering; Data compression; Uncompressed video; Liver; Tomography, X-Ray Computed; Cardiology and Cardiovascular Medicine; Algorithm; Volume (compression); Image compression; Diagnostic and Interventional Radiology

Low Complexity Image Compression using Pruned 8-point DCT Approximation in Wireless Visual Sensor Networks

2017

Since transmitting an uncompressed image in a wireless visual sensor network (WVSN) consumes more energy than transmitting the compressed image, developing energy-aware compression algorithms is mandatory to extend the camera node's lifetime and thereby the lifetime of the whole network. The present paper studies a low-complexity image compression algorithm in the context of WVSNs. The algorithm applies a pruning approach to a DCT approximation transform. The scheme is investigated in terms of computation cycles, processing time, energy consumption and image quality. Experiments are conducted using the Atmel ATmega128 processor o…
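
As a rough sketch of the pruning idea (not the paper's specific DCT approximation), the following computes only the first K coefficients of an exact 8-point DCT-II, skipping the high-frequency outputs that pruning discards:

```python
import math

def pruned_dct8(x, keep=4):
    """8-point DCT-II, computing only the first `keep` coefficients.
    Pruning skips the high-frequency outputs entirely, saving arithmetic
    on the sensor node; low frequencies carry most image energy."""
    assert len(x) == 8
    out = []
    for k in range(keep):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / 16) for n in range(8))
        scale = math.sqrt(1 / 8) if k == 0 else math.sqrt(2 / 8)
        out.append(scale * s)
    return out

row = [52, 55, 61, 66, 70, 61, 64, 73]
coeffs = pruned_dct8(row, keep=4)  # 4 coefficients instead of 8
```

A real approximate DCT would also replace the cosine multiplications with additions and shifts; here only the pruning aspect is shown.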

Keywords: Image quality; Computer science; Real-time computing; Transform; Mathematics [math]; Low-complexity algorithms; Pruned 8-point DCT; Discrete cosine transform; Transform coding; Approximate DCT; Energy conservation; Energy consumption; Pruning approach; Engineering Sciences [physics]/Electronics [SPI.TRON]; Uncompressed video; WVSNs; Artificial intelligence & image processing; Wireless sensor network; Data compression; Image compression

Improving Karhunen-Loeve based transform coding by using square isometries

2002

We propose, for an image compression system based on the Karhunen-Loeve transform implemented by neural networks, to take into account the 8 square isometries of an image block. Applying the proper isometry puts the 8×8 square image block into a standard position before the block is fed to the neural network architecture. The standard position is defined by the variances of the block's four 4×4 sub-blocks (quadrant partition): it brings the sub-block with the greatest variance into a specific corner and the sub-block with the second-greatest variance into a specific adjoining corner (if this is not possible, the third is considered). The use of this "preprocessing" phase was e…
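
A sketch of the standard-position search, assuming a simplified selection rule (greatest variance in the top-left 4×4 sub-block, ties broken by the adjoining top-right sub-block); the paper's exact corner convention may differ:

```python
def rot90(b):
    """Rotate a square block 90 degrees clockwise."""
    return [list(row) for row in zip(*b[::-1])]

def flip(b):
    """Mirror a block horizontally."""
    return [row[::-1] for row in b]

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def quadrant(b, r0, c0, h):
    """Flatten the h*h sub-block whose top-left corner is (r0, c0)."""
    return [b[r][c] for r in range(r0, r0 + h) for c in range(c0, c0 + h)]

def standard_position(block):
    """Try all 8 square isometries (4 rotations x optional mirror) and return
    the one whose top-left 4x4 sub-block has the greatest variance, breaking
    ties with the variance of the adjoining top-right sub-block."""
    best, best_key = None, None
    b = block
    for _ in range(4):
        for cand in (b, flip(b)):
            key = (variance(quadrant(cand, 0, 0, 4)),
                   variance(quadrant(cand, 0, 4, 4)))
            if best_key is None or key > best_key:
                best, best_key = cand, key
        b = rot90(b)
    return best
```

Since an isometry only permutes pixels, the block's content is preserved; only its orientation is normalized before it reaches the network.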

Keywords: Karhunen–Loève theorem; Theoretical computer science; Artificial neural network; Compression (functional analysis); Algorithm; Square (algebra); Transform coding; Data compression; Mathematics; Block (data storage); Image compression

Lossless and near-lossless image compression based on multiresolution analysis

2013

There are applications of data compression where quality control is of utmost importance: certain features of the decoded signal must be recovered exactly, or very accurately, yet one would like to be as economical as possible with respect to storage and speed of computation. In this paper, we present a multi-scale data-compression algorithm within Harten's interpolatory framework for multiresolution that gives a specific estimate of the error between the original and the decoded signal, measured in the L∞ and Lp (p = 1, 2) discrete norms. The proposed algorithm does not rely on a tensor-product strategy to compress two-dimensional signals, and it provides a priori bound…
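
The interpolatory framework can be illustrated with a one-level 1D sketch: keep the even samples as the coarse signal, predict the odd samples by linear interpolation, and zero any detail coefficient below a tolerance eps, which bounds the pointwise (L∞) reconstruction error by eps. This is only the simplest instance of Harten's framework, not the paper's 2D non-tensor-product scheme:

```python
def decompose(signal, eps):
    """One level of a 1D interpolatory multiresolution transform:
    coarse = even samples; details = odd samples minus their linear
    prediction. Details with magnitude <= eps are zeroed (near-lossless)."""
    coarse = signal[0::2]
    details = []
    for i, odd in enumerate(signal[1::2]):
        left = coarse[i]
        right = coarse[i + 1] if i + 1 < len(coarse) else coarse[i]
        d = odd - (left + right) / 2
        details.append(d if abs(d) > eps else 0.0)
    return coarse, details

def reconstruct(coarse, details):
    """Invert decompose: re-predict the odd samples and add the details."""
    out = []
    for i, c in enumerate(coarse):
        out.append(c)
        if i < len(details):
            right = coarse[i + 1] if i + 1 < len(coarse) else c
            out.append((c + right) / 2 + details[i])
    return out
```

Because only detail coefficients of magnitude at most eps are discarded, every reconstructed sample differs from the original by at most eps, which is exactly the kind of a priori error guarantee the abstract describes.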

Keywords: Lossless compression; Applied Mathematics; Multiresolution analysis; Data compression ratio; Lossy compression; Peak signal-to-noise ratio; Computational Mathematics; Quantization (image processing); Algorithm; Mathematics; Image compression; Data compression
Journal of Computational and Applied Mathematics

Statistical Modeling of Huffman Tables Coding

2005

An innovative algorithm for the automatic generation of Huffman coding tables for semantic classes of digital images is presented. By collecting statistics over a large dataset of corresponding images, we generated Huffman tables for three image classes: landscape, portrait and document. Comparisons between the new tables and the JPEG standard coding tables, also at different quality settings, show the effectiveness of the proposed strategy in terms of final bit size (i.e., compression ratio).
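
The core step, building a Huffman table from symbol statistics gathered over a class of images, can be sketched with the standard heap-based construction; the symbol names and frequencies below are invented for illustration:

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code (symbol -> bitstring) from symbol frequencies,
    e.g. coefficient statistics collected over one image class."""
    # heap entries: (frequency, tiebreak counter, partial codebook)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

# Hypothetical statistics for a "document" image class
stats = Counter({"zero-run": 60, "small-AC": 25, "DC": 10, "large-AC": 5})
code = huffman_code(stats)
```

Tables generated from class-specific statistics assign shorter codewords to the symbols that class actually produces most often, which is where the bit-size savings over the generic JPEG tables come from.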

Keywords: Lossless compression; Computer science; Tunstall coding; Statistical model; Huffman coding; JPEG; Digital image; Canonical Huffman code; Data mining; Image compression; Algorithm; Coding

Efficient image compression using directionlets

2007

Directionlets are built as basis functions of critically sampled perfect-reconstruction transforms with directional vanishing moments imposed along different directions. We combine the directionlets with the space-frequency quantization (SFQ) image compression method, originally based on the standard two-dimensional wavelet transform. We show that our new compression method outperforms the standard SFQ as well as the state-of-the-art image compression methods, such as SPIHT and JPEG-2000, in terms of the quality of compressed images, especially in a low-rate compression regime. We also show that the order of computational complexity remains the same, as compared to the complexity of the sta…
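
SFQ operates on wavelet coefficients; as the simplest instance of the underlying 2D wavelet transform (not the directionlet transform itself), here is one level of a separable 2D Haar decomposition:

```python
def haar1d(v):
    """One level of the 1D Haar transform: pairwise averages (low-pass)
    followed by pairwise half-differences (high-pass)."""
    return ([(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)] +
            [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)])

def haar2d(img):
    """One level of the separable 2D Haar transform: rows then columns,
    producing the LL, LH, HL and HH sub-bands."""
    rows = [haar1d(r) for r in img]
    cols = [haar1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

Directionlets generalize this by imposing vanishing moments along directions other than horizontal and vertical, so that oriented edges produce fewer large coefficients to quantize.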

Keywords: Lossless compression; Texture compression; Wavelet transform; Set partitioning in hierarchical trees; Wavelet; Computer vision; Artificial intelligence; Quantization (image processing); Algorithm; Mathematics; Data compression; Image compression
2007 6th International Conference on Information, Communications & Signal Processing

Image compression based on a multi-directional map-dependent algorithm

2007

This work is devoted to the construction of a new multi-directional, edge-adapted compression algorithm for images. It is based on a multi-scale transform performed in two steps: a detection step that produces a map of edges, and a prediction/multi-resolution step that takes the information given by the map into account. A short analysis of the multi-scale transform is carried out, and an estimate of the error associated with the largest coefficients is provided for piecewise regular functions with Lipschitz edges. Comparisons between this map-dependent algorithm and several classical algorithms are given.
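
A 1D sketch of the two-step idea, with a hypothetical threshold-based detector and a prediction stencil that never crosses a flagged edge; the paper's multi-directional 2D scheme is more elaborate:

```python
def edge_map(signal, tau):
    """Detection step: flag positions where the jump between neighbouring
    samples exceeds tau (a crude stand-in for the paper's edge detector).
    edges[i] marks an edge between samples i and i+1."""
    return [abs(signal[i + 1] - signal[i]) > tau for i in range(len(signal) - 1)]

def predict(signal, edges):
    """Prediction step: estimate each interior sample from its neighbours,
    switching to one-sided prediction when the map flags an edge, so the
    stencil never straddles a discontinuity."""
    pred = [signal[0]]
    for i in range(1, len(signal) - 1):
        if edges[i]:          # edge just to the right: use left data only
            pred.append(signal[i - 1])
        elif edges[i - 1]:    # edge just to the left: use right data only
            pred.append(signal[i + 1])
        else:                 # smooth region: centred linear prediction
            pred.append((signal[i - 1] + signal[i + 1]) / 2)
    pred.append(signal[-1])
    return pred
```

On a piecewise-constant signal the edge-aware stencil predicts every sample exactly, so the prediction residuals (the coefficients to encode) vanish away from, and even at, the jump.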

Keywords: Lossless compression; Texture compression; Applied Mathematics; Piecewise; Function (mathematics); Lipschitz continuity; Algorithm; Mathematics; Image compression; Data compression
Applied and Computational Harmonic Analysis

Performance and Implementation Modeling of Gated Linear Networks on FPGA for Lossless Image Compression

2020

In recent years, imaging systems have seen an explosive increase in resolution. This trend presents a challenge for resource-constrained embedded imaging devices: efficient image compression is essential to reduce bandwidth consumption and to increase on-board storage capacity, especially in imaging systems where information loss is not allowed, for example in medical, military and remote-sensing applications. This paper explores the use of Gated Linear Networks (GLNs) for the development of embedded lossless compression systems. GLNs have proved themselves in the PAQ archiver series, which has ranked among the top performers across several lossless compression benchmarks. We propose an a…
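
A minimal sketch of a single gated linear neuron, the building block of a GLN: a context index selects one weight vector, input probabilities are mixed in logit space, and the weights are updated online under logistic loss. This is an illustrative model, not the paper's FPGA implementation:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-max(-30.0, min(30.0, x))))

def logit(p):
    p = min(max(p, 1e-6), 1 - 1e-6)
    return math.log(p / (1 - p))

class GatedLinearNeuron:
    """One GLN neuron. A context function (here just an index chosen by the
    caller) gates which weight vector is active; the neuron geometrically
    mixes input bit-probabilities and learns online."""
    def __init__(self, n_inputs, n_contexts, lr=0.1):
        self.w = [[1.0 / n_inputs] * n_inputs for _ in range(n_contexts)]
        self.lr = lr

    def predict(self, probs, ctx):
        z = sum(w * logit(p) for w, p in zip(self.w[ctx], probs))
        return sigmoid(z)

    def update(self, probs, ctx, bit):
        """Gradient step on logistic loss for the observed bit (0 or 1)."""
        p = self.predict(probs, ctx)
        for i, q in enumerate(probs):
            self.w[ctx][i] -= self.lr * (p - bit) * logit(q)
```

Because each context has its own weights and the update touches only the active context, the per-bit work is a small dot product, which is what makes the model attractive for FPGA pipelines.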

Keywords: Lossless compression; Computer science; Emphasis (telecommunications); Information loss; Scalability; Bandwidth (computing); Field-programmable gate array; Throughput; Computer hardware; Image compression
2020 9th Mediterranean Conference on Embedded Computing (MECO)

A new minimum spanning tree-based method for shape description and matching working in Discrete Cosine space

2009

In this article, a new minimum spanning tree-based method for shape description and matching is proposed. Its properties are assessed on the problem of graphical symbol recognition. Recognition invariance under shift, and for multi-oriented noisy objects, was studied in the context of small, low-resolution binary images. The approach has many desirable properties, even though the construction of the graphs incurs a high algorithmic cost. To reduce computing time, an alternative solution based on image compression concepts is provided: recognition is performed in a compact space, namely the Discrete Cosine space. The use of the block discrete cosine transform is discuss…
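
The MST part of the descriptor can be sketched with Prim's algorithm over foreground pixel coordinates; the Discrete Cosine compact-space step is omitted here:

```python
import math

def mst_weight(points):
    """Total weight of the Euclidean minimum spanning tree of a point set
    (Prim's algorithm). The edge lengths of this tree are the kind of
    shape signature an MST-based descriptor builds on."""
    if not points:
        return 0.0
    in_tree = {0}
    dist = [math.dist(points[0], p) for p in points]  # distance to the tree
    total = 0.0
    for _ in range(len(points) - 1):
        # attach the closest point not yet in the tree
        j = min((i for i in range(len(points)) if i not in in_tree),
                key=lambda i: dist[i])
        total += dist[j]
        in_tree.add(j)
        for i in range(len(points)):
            if i not in in_tree:
                dist[i] = min(dist[i], math.dist(points[j], points[i]))
    return total
```

The MST is invariant under shift and rotation of the point set, which matches the invariance properties studied in the article; its O(n²) construction also illustrates why a cheaper compact-space alternative is attractive.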

Keywords: Matching (graph theory); Binary image; Feature extraction; Networking & telecommunications; Pattern recognition; Minimum spanning tree; Artificial Intelligence; Robustness (computer science); Discrete cosine transform; Computer Vision and Pattern Recognition; Software; Transform coding; Mathematics; Image compression

The Reconstruction of Polyominoes from Approximately Orthogonal Projections

2001

The reconstruction of discrete two-dimensional pictures from their projections is one of the central problems in medical diagnostics, computer-aided tomography, pattern recognition, image processing and data compression. In this note, we determine the computational complexity of reconstructing polyominoes from their approximately orthogonal projections. We prove that the problem is NP-complete for polyominoes, horizontal convex polyominoes and vertical convex polyominoes. Moreover, we give a polynomial algorithm for the reconstruction of hv-convex polyominoes with time complexity O(m³n³).
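
The basic reconstruction problem, with exact rather than approximate projections, can be sketched with the classical Ryser-style greedy fill, which succeeds whenever the row and column sums are consistent; the hv-convex and approximate variants the paper studies require more machinery:

```python
def reconstruct_binary(rows, cols):
    """Greedy reconstruction of a binary matrix from exact row and column
    sums (Ryser-style): process rows by decreasing sum and place each row's
    1s in the columns with the most remaining demand.
    Returns None when the projections are inconsistent."""
    m, n = len(rows), len(cols)
    remaining = list(cols)
    pic = [[0] * n for _ in range(m)]
    for r in sorted(range(m), key=lambda i: -rows[i]):
        order = sorted(range(n), key=lambda j: -remaining[j])[:rows[r]]
        for j in order:
            if remaining[j] == 0:
                return None  # not enough column capacity left
            pic[r][j] = 1
            remaining[j] -= 1
    return pic if all(v == 0 for v in remaining) else None
```

This general binary version is easy; the note's point is that adding the polyomino connectivity requirement (with approximate projections) pushes the problem to NP-completeness, while hv-convexity restores a polynomial algorithm.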

Keywords: Polyomino; Computational complexity theory; Computer science; Orthographic projection; Regular polygon; Vector projection; Combinatorics; Computational geometry; Discrete mathematics; Projection (mathematics); Tomography; Algorithm; Time complexity; Formal languages and automata theory; Image compression