Search results for "A* algorithm"

Showing 10 of 2,538 documents

Forecasting time series with missing data using Holt's model

2009

This paper deals with the prediction of time series with missing data using an alternative formulation of Holt's model with additive errors. This formulation simplifies both the computation of maximum likelihood estimates of all the unknowns in the model and the computation of point forecasts. In the presence of missing data, the EM algorithm is used to obtain maximum likelihood estimates and point forecasts. Based on this application we propose a leave-one-out algorithm for the data-transformation selection problem, which allows us to analyse Holt's model with multiplicative errors. Numerical results show the performance of these procedures for obtaining robust forecasts.
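The Holt recursions underlying this kind of model can be sketched with a naive missing-data rule, in which a missing observation is replaced by its one-step forecast so that it contributes no innovation. This is only an illustration of the mechanics, not the paper's EM-based maximum likelihood procedure; the smoothing constants `alpha` and `beta` and the initialisation are arbitrary choices here.

```python
def holt_forecast(y, alpha=0.5, beta=0.3, h=1):
    """Holt's linear-trend exponential smoothing with a naive
    missing-data rule: a missing observation (None) is replaced by its
    one-step forecast, so it contributes no innovation.
    Assumes y[0] and y[1] are observed (crude initialisation)."""
    level, trend = y[1], y[1] - y[0]
    for t in range(2, len(y)):
        forecast = level + trend                     # one-step-ahead forecast
        obs = y[t] if y[t] is not None else forecast # impute missing value
        new_level = alpha * obs + (1 - alpha) * forecast
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return level + h * trend                         # h-step-ahead forecast
```

On a perfectly linear series the recursion tracks the trend exactly, missing values included, so `holt_forecast([1, 2, 3, None, 5, 6], h=1)` returns 7.0.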

Statistics and Probability; Applied Mathematics; Autocorrelation; Exponential smoothing; Linear model; Data transformation (statistics); Estimator; Missing data; Expectation–maximization algorithm; Statistics; Statistics, Probability and Uncertainty; Additive model; Algorithm; Mathematics; Journal of Statistical Planning and Inference

Asymptotic optimality of myopic information-based strategies for Bayesian adaptive estimation

2016

This paper presents a general asymptotic theory of sequential Bayesian estimation, giving results for the strongest mode of convergence, almost sure convergence. We show that under certain smoothness conditions on the probability model, the greedy information-gain maximization algorithm for adaptive Bayesian estimation is asymptotically optimal in the sense that the determinant of the posterior covariance in a certain neighborhood of the true parameter value is asymptotically minimal. Using this result, we also obtain an asymptotic expression for the posterior entropy, based on a novel definition of almost sure convergence on "most trials" (meaning that the convergence holds on a fraction of trials that converge…
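The greedy (myopic) information-gain rule analysed here can be illustrated on fully discrete grids: at each step, pick the design whose observation is expected to reduce posterior entropy the most. A minimal sketch, assuming discrete parameter, outcome and design spaces (the paper works in a much more general setting); the data layout `likelihood[d][y][theta]` is this example's convention, not the paper's.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def greedy_design(prior, designs, likelihood):
    """One step of the myopic information-gain rule: return the design
    maximizing the expected reduction in posterior entropy.
    likelihood[d][y][theta] = P(y | theta, d) on discrete grids."""
    best_d, best_gain = None, -float('inf')
    h0 = entropy(prior)
    for d in designs:
        gain = 0.0
        for y_liks in likelihood[d]:              # iterate over outcomes y
            joint = [l * p for l, p in zip(y_liks, prior)]
            py = sum(joint)                       # marginal probability of y
            if py > 0:
                post = [j / py for j in joint]    # posterior given (d, y)
                gain += py * (h0 - entropy(post))
        if gain > best_gain:
            best_d, best_gain = d, gain
    return best_d, best_gain
```

With two equiprobable hypotheses, an uninformative design (both outcomes equally likely under either hypothesis) yields zero expected gain, while a perfectly discriminating design yields the full log 2 nats.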

Statistics and Probability; Asymptotic analysis; Mathematical optimization; Posterior probability; Bayesian probability; Statistics Theory (math.ST); Differential entropy; Active data selection; Active learning; Cost of observation; Decision theory; Mathematics; D-optimality; Bayes estimator; Sequential estimation; Bayesian adaptive estimation; Asymptotically optimal algorithm; Convergence of random variables; Asymptotic optimality

Algorithms and tools for protein-protein interaction networks clustering, with a special focus on population-based stochastic methods

2014

Motivation: Protein–protein interaction (PPI) networks are powerful models for representing the pairwise protein interactions of an organism. Clustering PPI networks can be useful for isolating groups of interacting proteins that participate in the same biological processes or that together perform specific biological functions. Evolutionary orthologies can be inferred this way, as can functions and properties of yet uncharacterized proteins. Results: We present an overview of the main state-of-the-art clustering methods that have been applied to PPI networks over the past decade. We distinguish five specific categories of approaches, describe and compare their main features and …

Statistics and Probability; Computer science; Population-based methods; Machine learning; Biochemistry; Protein–protein interaction (PPI) networks; Genetic algorithms; Bioinformatics; Clustering; Biological networks; Complex detection; Protein Interaction Mapping; Animals; Humans; Cluster analysis; Molecular Biology; Proteins; Computer Science Applications; Computational Mathematics; Computational Theory and Mathematics; Artificial intelligence; Data mining; Algorithms

Comments on “Unobservable Selection and Coefficient Stability”

2019

We establish a link between the approaches proposed by Oster (2019) and by Pei, Pischke, and Schwandt (2019), which contribute to the development of inferential procedures for causal effects in the challenging and empirically relevant situation where the unknown data-generating process is not included in the set of models considered by the investigator. We use the general misspecification framework recently proposed by De Luca, Magnus, and Peracchi (2018) to analyze and understand the implications of the restrictions imposed by the two approaches.

Statistics and Probability; Economics and Econometrics; Testing; Settore SECS-P/05 - Econometria; OLS; Inconsistency; Unobservables; Bias; Stability theory; Econometrics; Selection; Causal effects; Confounding; Mean squared error (MSE); Misspecification; Statistics, Probability and Uncertainty; Psychology; Social Sciences (miscellaneous); Journal of Business and Economic Statistics

A heuristic method for estimating attribute importance by measuring choice time in a ranking task

2012

The evaluation of a product or service in terms of its attributes has been studied broadly in marketing, management and the decision sciences. However, methods for finding important attributes have theoretical and practical limitations: the former are related to the selection of the most appropriate model, the latter to the large number of variables that affect the specific experimental context. This study presents a new methodology that captures attribute preferences from a respondent; in particular, by using the choice time in a ranking task, it allows the importance weights of several tested attributes to be obtained indirectly through a simple, fast and inexpensive procedure. More…

Statistics and Probability; Economics and Econometrics; Heuristics; Computer science; Settore SECS-S/02 - Statistica per la Ricerca Sperimentale e Tecnologica; Settore SECS-S/01 - Statistica; Attributes; Ranking task; Respondent; Data mining; Finance; Choice time; Response time; Response latency; Attribute rating; Choice models; Statistics, Probability and Uncertainty

Maximum likelihood estimation for the exponential power function parameters

1995

This paper addresses the problem of obtaining maximum likelihood estimates for the three parameters of the exponential power function. The information matrix is derived and the covariance matrix presented, and the regularity conditions which ensure asymptotic normality and efficiency are examined. A numerical investigation explores the bias and variance of the maximum likelihood estimates and their dependence on sample size and shape parameter.
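Under one common parameterisation of the exponential power (generalized normal) density, f(x; μ, σ, β) = β / (2σΓ(1/β)) · exp(−(|x−μ|/σ)^β), the log-likelihood to be maximized is straightforward to evaluate; the paper's parameterisation may differ, so this is a sketch of the objective only, not of the paper's estimation procedure.

```python
import math

def ep_loglik(x, mu, sigma, beta):
    """Log-likelihood of n observations under the exponential power
    (generalized normal) density
        f(x) = beta / (2 * sigma * Gamma(1/beta)) * exp(-(|x - mu| / sigma)**beta).
    beta = 2 recovers a normal distribution, beta = 1 a Laplace."""
    n = len(x)
    const = n * (math.log(beta) - math.log(2 * sigma) - math.lgamma(1 / beta))
    return const - sum((abs(xi - mu) / sigma) ** beta for xi in x)
```

As a sanity check, for β = 2, μ = 0, σ = 1 the density at the mode is 1/√π, so the log-likelihood of the single observation x = 0 is −½ log π.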

Statistics and Probability; Estimation theory; Restricted maximum likelihood; Maximum likelihood sequence estimation; Likelihood principle; Estimation of covariance matrices; Modeling and Simulation; Statistics; Expectation–maximization algorithm; Fisher information; Likelihood function; Mathematics; Communications in Statistics - Simulation and Computation

Establishing some order amongst exact approximations of MCMCs

2016

Exact approximations of Markov chain Monte Carlo (MCMC) algorithms are a general emerging class of sampling algorithms. One of the main ideas behind exact approximations consists of replacing intractable quantities required to run standard MCMC algorithms, such as the target probability density in a Metropolis-Hastings algorithm, with estimators. Perhaps surprisingly, such approximations lead to powerful algorithms which are exact in the sense that they are guaranteed to have correct limiting distributions. In this paper we discover a general framework which allows one to compare, or order, performance measures of two implementations of such algorithms. In particular, we establish an order …
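The replacement described above can be made concrete in a few lines: a random-walk Metropolis–Hastings sampler in which the (unnormalised) target density is only available through a nonnegative unbiased estimator, and the estimate at the current state is recycled rather than refreshed. A sketch under those assumptions, not the paper's framework itself; the function and parameter names are this example's own.

```python
import math
import random

def pseudo_marginal_mh(density_estimator, x0, n_iters, step=1.0, seed=0):
    """Pseudo-marginal Metropolis-Hastings sketch: the intractable target
    density pi(x) is replaced by a nonnegative unbiased estimate, and the
    estimate at the current state is recycled (never recomputed), which
    is what makes the limiting distribution exact."""
    rng = random.Random(seed)
    x, pi_hat = x0, density_estimator(x0, rng)
    chain = []
    for _ in range(n_iters):
        y = x + rng.gauss(0.0, step)          # symmetric random-walk proposal
        pi_hat_y = density_estimator(y, rng)  # fresh estimate at the proposal only
        if rng.random() < min(1.0, pi_hat_y / pi_hat):
            x, pi_hat = y, pi_hat_y           # the estimate travels with the state
        chain.append(x)
    return chain
```

Passing a noise-free estimator (e.g. `lambda x, rng: math.exp(-x * x / 2)`) recovers plain Metropolis–Hastings on a standard normal, which is a convenient way to check the machinery.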

Statistics and Probability; Mathematical optimization; Monotonic function; Markov chain; Estimator; Markov chain Monte Carlo; Pseudo-marginal algorithm; Martingale coupling; Convex order; Order condition; Asymptotic variance; Martingale (probability theory); Delta method; Gibbs sampling; Applied mathematics; Probability (math.PR); Statistics - Computation (stat.CO); Statistics, Probability and Uncertainty; Mathematics

Adaptive reference-free compression of sequence quality scores

2014

Motivation: Rapid technological progress in DNA sequencing has stimulated interest in compressing the vast datasets that are now routinely produced. Relatively little attention has been paid to compressing the quality scores that are assigned to each sequence, even though these scores may be harder to compress than the sequences themselves. By aggregating a set of reads into a compressed index, we find that the majority of bases can be predicted from the sequence of bases that are adjacent to them and hence are likely to be less informative for variant calling or other applications. The quality scores for such bases are aggressively compressed, leaving a relatively small number at full reso…
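The core idea above, i.e. spending bits only on quality scores at positions that cannot be predicted from their context, can be caricatured in a few lines. The bin edges and the predictability mask here are placeholders for illustration, not the tool's actual policy.

```python
import bisect

def quantize_qualities(quals, predictable, coarse_bins=(0, 20, 40)):
    """Lossy sketch: quality scores at bases flagged as predictable from
    their sequence context are snapped down to a coarse bin (cheap to
    compress), the rest are kept at full resolution. `predictable` is a
    boolean mask; in the real tool it would come from a compressed index
    of the reads."""
    out = []
    for q, is_pred in zip(quals, predictable):
        if is_pred:
            # snap to the largest bin value <= q
            i = bisect.bisect_right(coarse_bins, q) - 1
            out.append(coarse_bins[max(i, 0)])
        else:
            out.append(q)
    return out
```

For example, with the placeholder bins, qualities `[37, 5, 37]` with mask `[True, True, False]` become `[20, 0, 37]`: the two predictable positions collapse onto bin values, the unpredictable one survives unchanged.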

Statistics and Probability; Computer science; Reference-free compression; Biochemistry; DNA sequencing; Redundancy (information theory); BWT; Quality scores; Data Structures and Algorithms (cs.DS); Genomics (q-bio.GN); Animals; Humans; Caenorhabditis elegans; Molecular Biology; Settore INF/01 - Informatica; High-Throughput Nucleotide Sequencing; Genomics; Sequence Analysis, DNA; Data Compression; Computer Science Applications; Computational Mathematics; Computational Theory and Mathematics; Data mining; Metagenomics; Reference genome; Algorithms

Can the Adaptive Metropolis Algorithm Collapse Without the Covariance Lower Bound?

2011

The Adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the following time-dependent covariance matrix at step $n+1$ \[ S_n = Cov(X_1,...,X_n) + \epsilon I, \] that is, the sample covariance matrix of the history of the chain plus a (small) constant $\epsilon>0$ multiple of the identity matrix $I$. The lower bound on the eigenvalues of $S_n$ induced by the factor $\epsilon I$ is theoretically convenient, but practically cumbersome, as a good value for the parameter $\epsilon$ may not always be easy to choose. This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of $S_n$ away …
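The proposal covariance in the display above can be computed directly; a NumPy sketch (the function name and the default ε are this example's own choices):

```python
import numpy as np

def am_proposal_cov(history, eps=1e-6):
    """S_n = Cov(X_1, ..., X_n) + eps * I, the Adaptive Metropolis
    proposal covariance discussed above. `history` holds the chain's past
    states as rows; at least two rows are needed for a sample covariance.
    The eps * I term bounds the eigenvalues of S_n away from zero."""
    X = np.asarray(history, dtype=float)   # shape (n, d)
    d = X.shape[1]
    # reshape handles the d = 1 case, where np.cov returns a scalar
    return np.cov(X, rowvar=False).reshape(d, d) + eps * np.eye(d)
```

Even when the history is perfectly collinear (sample covariance singular), the ε I term keeps every eigenvalue of S_n at least ε, which is exactly the lower bound whose removal the paper studies.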

Statistics and Probability; Identity matrix; Statistics Theory (math.ST); Upper and lower bounds; Combinatorics; Law of large numbers; Stochastic approximation; Eigenvalues and eigenvectors; Metropolis algorithm; Covariance; Stability; Uniform continuity; Bounded function; Adaptive Markov chain Monte Carlo; Probability (math.PR); Statistics - Computation (stat.CO); Statistics, Probability and Uncertainty; Mathematics

Importance sampling correction versus standard averages of reversible MCMCs in terms of the asymptotic variance

2017

We establish an ordering criterion for the asymptotic variances of two consistent Markov chain Monte Carlo (MCMC) estimators: an importance sampling (IS) estimator, based on an approximate reversible chain and subsequent IS weighting, and a standard MCMC estimator, based on an exact reversible chain. Essentially, we relax the criterion of the Peskun type covariance ordering by considering two different invariant probabilities, and obtain, in place of a strict ordering of asymptotic variances, a bound of the asymptotic variance of IS by that of the direct MCMC. Simple examples show that IS can have arbitrarily better or worse asymptotic variance than Metropolis-Hastings and delayed-acceptanc…

Statistics and Probability; Delayed acceptance; Markov chains; Asymptotic variance; Unbiased estimator; Stochastic processes; Estimation; Numerical methods; Pseudo-marginal algorithm; Estimator; Markov chain Monte Carlo; Covariance; Infimum and supremum; Weighting; Delta method; Monte Carlo methods; Importance sampling; Bounded function; Modeling and Simulation; Applied Mathematics; Probability (math.PR); Statistics - Computation (stat.CO); Mathematics