Search results for "Data_CODINGANDINFORMATIONTHEORY"

Showing 10 of 196 documents

Efficient pipeline FFT processors for WLAN MIMO-OFDM systems

2005

The most area-efficient pipeline FFT processors for WLAN MIMO-OFDM systems are presented. It is shown that although the R2³SDF architecture is the most area-efficient approach for implementing pipeline FFT processors, RrMDC architectures are more efficient in MIMO-OFDM systems when more than three channels are used.

Engineering; business.industry; Orthogonal frequency-division multiplexing; Pipeline (computing); ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS; Fast Fourier transform; Data_CODINGANDINFORMATIONTHEORY; Integrated circuit; MIMO-OFDM; law.invention; law; Embedded system; Wireless LAN; Circuit architecture; Wi-Fi; Hardware_ARITHMETICANDLOGICSTRUCTURES; Electrical and Electronic Engineering; business; Computer hardware; Electronics Letters
researchProduct

The design of measurement-based underwater acoustic channel simulators using the INLSA algorithm

2015

This paper utilizes the iterative nonlinear least square approximation (INLSA) algorithm for designing measurement-based wideband shallow underwater acoustic (UWA) channel simulators. Measurement-based channel simulators are essential for the test, optimization, and performance analysis of UWA communication systems. The aim is to fit the time-variant channel impulse response (TVCIR) of the simulation model to that of the measured UWA channel. The performance of the designed UWA channel simulator is assessed by comparing the time-frequency correlation function (TFCF), the power delay profile (PDP), and the probability density function (PDF) of the channel envelope with the corresponding quan…

Engineering; business.industry; Rayleigh distribution; Probability density function; Data_CODINGANDINFORMATIONTHEORY; Propagation delay; Correlation function (quantum field theory); Electronic engineering; Algorithm design; Wideband; business; Power delay profile; Algorithm; Computer Science::Information Theory; Communication channel; OCEANS 2015 - Genova
researchProduct

Adaptive learning of compressible strings

2020

Suppose an oracle knows a string $S$ that is unknown to us and that we want to determine. The oracle can answer queries of the form "Is $s$ a substring of $S$?". In 1995, Skiena and Sundaram showed that, in the worst case, any algorithm needs to ask the oracle $\sigma n/4 - O(n)$ queries to reconstruct the hidden string, where $\sigma$ is the size of the alphabet of $S$ and $n$ its length, and they gave an algorithm that uses $(\sigma-1)n + O(\sigma \sqrt{n})$ queries to reconstruct $S$. The main contribution of our paper is to improve this upper bound in the setting where the string is compressible. We first present a universal algorithm that, given a (computable) compre…
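The query model above can be illustrated with a naive baseline (a hypothetical sketch, not the paper's compression-aware algorithm): extend a known substring to the right until it becomes a suffix of the hidden string, then extend it to the left, spending at most $\sigma$ queries per recovered character.

```python
# Hypothetical sketch of substring-query reconstruction. Assumes the oracle
# is given as a predicate is_substring(s) and the length n of S is known.
def reconstruct(is_substring, alphabet, n):
    # Find any single character occurring in S.
    s = next(c for c in alphabet if is_substring(c))
    grew = True
    while grew and len(s) < n:        # extend right; stops once s is a suffix of S
        grew = False
        for c in alphabet:
            if is_substring(s + c):
                s, grew = s + c, True
                break
    grew = True
    while grew and len(s) < n:        # then extend left to recover all of S
        grew = False
        for c in alphabet:
            if is_substring(c + s):
                s, grew = c + s, True
                break
    return s
```

This uses on the order of $\sigma n$ queries; the paper's point is that adaptive, compression-aware strategies can do much better on compressible strings.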

FOS: Computer and information sciences; Centroid decomposition; General Computer Science; String compression; Adaptive learning; Kolmogorov complexity; Context (language use); Data_CODINGANDINFORMATIONTHEORY; String reconstruction; Theoretical Computer Science; Combinatorics; String learning; Lempel-Ziv; Suffix tree; Integer; Computer Science - Data Structures and Algorithms; Order (group theory); Data Structures and Algorithms (cs.DS); Time complexity; Computer Science::Databases; Mathematics; Settore INF/01 - Informatica; Linear space; String (computer science); Substring; Bounded function
researchProduct

End-to-end Optimized Image Compression

2016

We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer.…
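A standard continuous relaxation of uniform scalar quantization, of the kind the abstract alludes to, replaces rounding with additive uniform noise during training (a sketch under that assumption; the paper's exact proxy may differ in detail):

```python
import numpy as np

def quantize(y):
    # Test-time uniform scalar quantizer: gradient is zero almost everywhere,
    # so it cannot be used directly with stochastic gradient descent.
    return np.round(y)

def train_time_proxy(y, rng):
    # Adding U(-1/2, 1/2) noise yields a differentiable surrogate whose
    # marginal density continuously interpolates the discrete distribution
    # produced by rounding.
    return y + rng.uniform(-0.5, 0.5, size=y.shape)
```

During optimization the noisy values stand in for the quantized ones, making the rate-distortion objective differentiable end to end.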

FOS: Computer and information sciences; Computer Science - Information Theory; Computer Vision and Pattern Recognition (cs.CV); Information Theory (cs.IT); Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Data_CODINGANDINFORMATIONTHEORY
researchProduct

Nash codes for noisy channels

2012

This paper studies the stability of communication protocols that deal with transmission errors. We consider a coordination game between an informed sender and an uninformed decision maker, the receiver, who communicate over a noisy channel. The sender's strategy, called a code, maps states of nature to signals. The receiver's best response is to decode the received channel output as the state with highest expected receiver payoff. Given this decoding, an equilibrium or "Nash code" results if the sender encodes every state as prescribed. We show two theorems that give sufficient conditions for Nash codes. First, a receiver-optimal code defines a Nash code. A second, more surprising observati…
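The receiver's best response described above can be sketched in a toy setting (the channel, prior, and payoffs here are illustrative assumptions, not taken from the paper): decode the channel output as the state maximizing expected receiver payoff, weighting each state by its prior and the likelihood of the observed output given that state's codeword.

```python
from math import prod

def bsc_likelihood(x, y, eps=0.1):
    """Binary symmetric channel: each bit flips independently with prob eps."""
    return prod((1 - eps) if a == b else eps for a, b in zip(x, y))

def best_response_decode(y, states, prior, code, payoff, eps=0.1):
    # Decode y as the state s_hat with highest expected receiver payoff.
    def expected_payoff(s_hat):
        return sum(prior[s] * bsc_likelihood(code[s], y, eps) * payoff(s, s_hat)
                   for s in states)
    return max(states, key=expected_payoff)
```

With a 0/1 payoff for guessing the state exactly, this reduces to maximum a posteriori decoding.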

FOS: Computer and information sciences; Computer Science::Computer Science and Game Theory; Theoretical computer science; Computer science; Information Theory (cs.IT); Computer Science - Information Theory; Stochastic game; jel:C72; jel:D82; Stability (learning theory); Data_CODINGANDINFORMATIONTHEORY; Management Science and Operations Research; sender-receiver game; communication; noisy channel; 91A28; Computer Science Applications; Computer Science - Computer Science and Game Theory; Best response; Code (cryptography); Coordination game; QA Mathematics; Decoding methods; Communication channel; Computer Science and Game Theory (cs.GT); Computer Science::Information Theory
researchProduct

Quantum autoencoders via quantum adders with genetic algorithms

2017

The quantum autoencoder is a recent paradigm in the field of quantum machine learning, which may enable an enhanced use of resources in quantum technologies. To this end, quantum neural networks with fewer nodes in the inner than in the outer layers have been considered. Here, we propose a useful connection between quantum autoencoders and quantum adders, which approximately add two unknown quantum states supported in different quantum systems. Specifically, this link allows us to employ optimized approximate quantum adders, obtained with genetic algorithms, for the implementation of quantum autoencoders for a variety of initial states. Furthermore, we can also directly optimize the quantum autoe…

FOS: Computer and information sciences; Computer Science::Machine Learning; 0301 basic medicine; Computer Science - Machine Learning; Adder; Physics and Astronomy (miscellaneous); Quantum machine learning; Field (physics); Computer science; Materials Science (miscellaneous); Computer Science::Neural and Evolutionary Computation; FOS: Physical sciences; Data_CODINGANDINFORMATIONTHEORY; Topology; 01 natural sciences; Machine Learning (cs.LG); Statistics::Machine Learning; 03 medical and health sciences; Quantum state; 0103 physical sciences; Neural and Evolutionary Computing (cs.NE); Electrical and Electronic Engineering; 010306 general physics; Quantum; Quantum Physics; Artificial neural network; Computer Science - Neural and Evolutionary Computing; TheoryofComputation_GENERAL; Autoencoder; Atomic and Molecular Physics and Optics; Quantum technology; 030104 developmental biology; ComputerSystemsOrganization_MISCELLANEOUS; Quantum Physics (quant-ph)
researchProduct

Improving table compression with combinatorial optimization

2002

We study the problem of compressing massive tables within the partition-training paradigm introduced by Buchsbaum et al. [SODA'00], in which a table is partitioned by an off-line training procedure into disjoint intervals of columns, each of which is compressed separately by a standard, on-line compressor like gzip. We provide a new theory that unifies previous experimental observations on partitioning and heuristic observations on column permutation, all of which are used to improve compression rates. Based on the theory, we devise the first on-line training algorithms for table compression, which can be applied to individual files, not just continuously operating sources; and also a new, …
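The partition-then-compress idea can be sketched minimally (a hypothetical CSV-like serialization with `zlib` standing in for gzip; this is not Buchsbaum et al.'s training procedure): each contiguous interval of columns is serialized and compressed separately, and candidate partitions are compared by total compressed size.

```python
import zlib

def partition_size(table, intervals):
    # table: list of rows, each a list of string cells.
    # intervals: disjoint (lo, hi) column ranges covering the table.
    total = 0
    for lo, hi in intervals:
        block = "\n".join(",".join(row[lo:hi]) for row in table)
        total += len(zlib.compress(block.encode()))
    return total
```

An off-line trainer would then search over partitions, e.g. by dynamic programming over interval boundaries, to minimize this quantity.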

FOS: Computer and information sciences; Computer science; Heuristic (computer science); Data_CODINGANDINFORMATIONTHEORY; Disjoint sets; Travelling salesman problem; Permutation; Artificial Intelligence; Compression (functional analysis); Computer Science - Data Structures and Algorithms; Data Structures and Algorithms (cs.DS); E.4; F.1.3; F.2.2; G.2.1; H.1.1; H.1.8; H.2.7; Dynamic programming; Hardware and Architecture; Control and Systems Engineering; Combinatorial optimization; Table (database); Algorithm; Software; Information Systems; Journal of the ACM
researchProduct

On Combinatorial Generation of Prefix Normal Words

2014

A prefix normal word is a binary word with the property that no substring has more 1s than the prefix of the same length. This class of words is important in the context of binary jumbled pattern matching. In this paper we present an efficient algorithm for exhaustively listing the prefix normal words with a fixed length. The algorithm is based on the fact that the language of prefix normal words is a bubble language, a class of binary languages with the property that, for any word w in the language, exchanging the first occurrence of 01 by 10 in w results in another word in the language. We prove that each prefix normal word is produced in O(n) amortized time, and conjecture, based on expe…
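The defining property lends itself to a direct quadratic-time membership test (a simple sketch for illustration, not the paper's generation algorithm):

```python
def is_prefix_normal(w):
    """True iff no substring of w has more 1s than w's prefix of the same length."""
    n = len(w)
    ones = [0]                        # ones[i] = number of 1s in w[:i]
    for c in w:
        ones.append(ones[-1] + (c == "1"))
    for length in range(1, n + 1):
        heaviest = max(ones[i + length] - ones[i] for i in range(n - length + 1))
        if heaviest > ones[length]:   # some substring beats the prefix
            return False
    return True
```

For example, "1101" is prefix normal, while "1011" is not: its substring "11" has two 1s but the length-2 prefix "10" has only one.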

FOS: Computer and information sciences; Discrete Mathematics (cs.DM); Computer Science - Data Structures and Algorithms; Data Structures and Algorithms (cs.DS); Data_CODINGANDINFORMATIONTHEORY; Computer Science - Discrete Mathematics
researchProduct

Normal, Abby Normal, Prefix Normal

2014

A prefix normal word is a binary word with the property that no substring has more 1s than the prefix of the same length. This class of words is important in the context of binary jumbled pattern matching. In this paper we present results about the number $pnw(n)$ of prefix normal words of length $n$, showing that $pnw(n) =\Omega\left(2^{n - c\sqrt{n\ln n}}\right)$ for some $c$ and $pnw(n) = O \left(\frac{2^n (\ln n)^2}{n}\right)$. We introduce efficient algorithms for testing the prefix normal property and a "mechanical algorithm" for computing prefix normal forms. We also include games which can be played with prefix normal words. In these games Alice wishes to stay normal but Bob wants t…

FOS: Computer and information sciences; Discrete Mathematics (cs.DM); Formal Languages and Automata Theory (cs.FL); Computer Science - Data Structures and Algorithms; FOS: Mathematics; Mathematics - Combinatorics; Data Structures and Algorithms (cs.DS); Computer Science - Formal Languages and Automata Theory; Combinatorics (math.CO); Data_CODINGANDINFORMATIONTHEORY; Computer Science - Discrete Mathematics
researchProduct

Generating a Gray code for prefix normal words in amortized polylogarithmic time per word

2020

A prefix normal word is a binary word with the property that no substring has more $1$s than the prefix of the same length. By proving that the set of prefix normal words is a bubble language, we can exhaustively list all prefix normal words of length $n$ as a combinatorial Gray code, where successive strings differ by at most two swaps or bit flips. This Gray code can be generated in $O(\log^2 n)$ amortized time per word, while the best generation algorithm hitherto has $O(n)$ running time per word. We also present a membership tester for prefix normal words, as well as a novel characterization of bubble languages.

FOS: Computer and information sciences; General Computer Science; Formal Languages and Automata Theory (cs.FL); Property (programming); combinatorial Gray code; Computer Science - Formal Languages and Automata Theory; Data_CODINGANDINFORMATIONTHEORY; 0102 computer and information sciences; 02 engineering and technology; Characterization (mathematics); 01 natural sciences; Theoretical Computer Science; Combinatorics; Set (abstract data type); Gray code; Computer Science - Data Structures and Algorithms; 0202 electrical engineering electronic engineering information engineering; Data Structures and Algorithms (cs.DS); Mathematics; Amortized analysis; Settore INF/01 - Informatica; prefix normal words; Substring; combinatorial generation; Prefix; jumbled pattern matching; 010201 computation theory & mathematics; 020201 artificial intelligence & image processing; binary languages; Word (computer architecture)
researchProduct