0000000001085831

AUTHOR

Matti Vihola


On the stability and ergodicity of adaptive scaling Metropolis algorithms

2011

The stability and ergodicity properties of two adaptive random walk Metropolis algorithms are considered. Both algorithms adjust the scaling of the proposal distribution continuously based on the observed acceptance probability. Unlike the previously proposed forms of the algorithms, the adapted scaling parameter is not constrained to a predefined compact interval. The first algorithm is based on scale adaptation only, while the second also incorporates covariance adaptation. A strong law of large numbers is shown to hold assuming that the target density is smooth enough and has either compact support or super-exponentially decaying tails.
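The scale adaptation described above can be illustrated with a minimal sketch (not the paper's actual implementation): a random-walk Metropolis sampler whose log proposal scale follows a Robbins–Monro recursion driven by the observed acceptance probability, with no projection onto a compact interval. All function and parameter names here are illustrative assumptions.

```python
import math
import random

def adaptive_scaling_metropolis(log_target, x0, n_iter, alpha_star=0.234, seed=0):
    """Random-walk Metropolis with an adaptively scaled proposal.

    The log proposal scale follows a Robbins-Monro recursion driven by the
    observed acceptance probability; note there is no projection onto a
    predefined compact interval, matching the unconstrained variant.
    """
    rng = random.Random(seed)
    x, log_s = x0, 0.0
    samples = []
    for n in range(1, n_iter + 1):
        prop = x + math.exp(log_s) * rng.gauss(0.0, 1.0)
        alpha = min(1.0, math.exp(log_target(prop) - log_target(x)))
        if rng.random() < alpha:
            x = prop
        # Diminishing step sizes gamma_n = n^{-2/3}: the scale drifts up when
        # the acceptance probability is above the target rate, down when below.
        log_s += n ** (-2.0 / 3.0) * (alpha - alpha_star)
        samples.append(x)
    return samples, math.exp(log_s)

# Standard normal target (log density up to a constant).
samples, final_scale = adaptive_scaling_metropolis(lambda x: -0.5 * x * x, 0.0, 5000)
```

The diminishing step sizes ensure the adaptation slows down, which is the standard route to the strong law of large numbers mentioned in the abstract.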

Statistics and Probability; Stochastic approximation; Statistics Theory (math.ST); Probability (math.PR); Law of large numbers; Multiple-try Metropolis; Stability (probability); Adaptive Markov chain Monte Carlo; Scaling; Metropolis algorithm; Metropolis–Hastings algorithm; Rejection sampling; Ergodicity; Covariance; Random walk; Applied Mathematics; Modelling and Simulation; 65C40; 60J27; 93E15; 93E35; Stochastic Processes and their Applications
researchProduct

Identifying territories using presence-only citizen science data : An application to the Finnish wolf population

2022

Citizens, community groups and local institutions participate in voluntary biological monitoring of population status and trends by providing species data, e.g. for regulation and conservation. Sophisticated statistical methods are required to unlock the potential of such data in the assessment of wildlife populations. We develop a statistical modelling framework for identifying territories based on presence-only citizen science data. The framework can be used to jointly estimate the number of active animal territories and their locations over time. Our approach is based on a data-generating model which consists of a dynamic submodel for the appearance/removal of territories and an observatio…

territories; Bayesian methods; citizen science data; wolf; spatial data analysis; sequential Monte Carlo; animal populations; Bayesian statistics; territory identification; Monte Carlo methods; populations; citizen observations; citizen science; presence-only data; spatio-temporal model; Ecological Modelling
researchProduct

Grapham: Graphical models with adaptive random walk Metropolis algorithms

2008

Recently developed adaptive Markov chain Monte Carlo (MCMC) methods have been applied successfully to many problems in Bayesian statistics. Grapham is a new open source implementation covering several such methods, with emphasis on graphical models for directed acyclic graphs. The implemented algorithms include the seminal Adaptive Metropolis algorithm adjusting the proposal covariance according to the history of the chain and a Metropolis algorithm adjusting the proposal scale based on the observed acceptance probability. Different variants of the algorithms allow one, for example, to use these two algorithms together, employ delayed rejection and adjust several parameters of the algorithm…

Statistics and Probability; Markov chain; Adaptive algorithm; Markov chain Monte Carlo; Multiple-try Metropolis; Metropolis–Hastings algorithm; Rejection sampling; Graphical model; Gibbs sampling; Computation (stat.CO); Applied Mathematics; Computational Mathematics; Computational Theory and Mathematics; Computational Statistics & Data Analysis
researchProduct

On resampling schemes for particle filters with weakly informative observations

2022

We consider particle filters with weakly informative observations (or `potentials') relative to the latent state dynamics. The particular focus of this work is on particle filters used to approximate time-discretisations of continuous-time Feynman--Kac path integral models -- a scenario that arises naturally when addressing filtering and smoothing problems in continuous time -- but our findings are also indicative of weakly informative settings beyond this context. We study the performance of different resampling schemes, such as systematic resampling, SSP (Srinivasan sampling process) and stratified resampling, as the time-discretisation becomes finer and also identify their continuous-time l…
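Two of the resampling schemes compared above can be sketched as follows; this is a generic textbook formulation, not code from the paper. Both map sorted uniform positions through the inverse of the cumulative weight distribution; they differ only in how the positions are drawn.

```python
import random

def stratified_resample(weights, rng):
    """Stratified resampling: one independent uniform per stratum [i/N, (i+1)/N)."""
    n = len(weights)
    positions = [(i + rng.random()) / n for i in range(n)]
    return _inverse_cdf(weights, positions)

def systematic_resample(weights, rng):
    """Systematic resampling: a single uniform shifted across all strata."""
    n = len(weights)
    u = rng.random()
    positions = [(i + u) / n for i in range(n)]
    return _inverse_cdf(weights, positions)

def _inverse_cdf(weights, positions):
    # Positions are increasing, so a single pass over the cumulative weights suffices.
    indices, cum, i = [], weights[0], 0
    for p in positions:
        while p > cum:
            i += 1
            cum += weights[i]
        indices.append(i)
    return indices

rng = random.Random(1)
w = [0.1, 0.2, 0.3, 0.4]
idx_sys = systematic_resample(w, rng)
idx_str = stratified_resample(w, rng)
```

Both schemes guarantee that the offspring count of particle i lies within one of N times its weight, which is what limits the extra variance introduced by resampling.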

Statistics and Probability; Hidden Markov model; particle filter; resampling; sampling; Feynman–Kac model; path integral; Markov chains; statistical models; stochastic processes; numerical analysis; Probability (math.PR); Computation (stat.CO); Methodology (stat.ME); 65C35 (Primary); 65C05, 65C60, 60J25 (Secondary); Statistics, Probability and Uncertainty
researchProduct

Quantitative convergence rates for subgeometric Markov chains

2015

We provide explicit expressions for the constants involved in the characterisation of ergodicity of subgeometric Markov chains. The constants are determined in terms of those appearing in the assumed drift and one-step minorisation conditions. The results are fundamental for the study of some algorithms where uniform bounds for these constants are needed for a family of Markov kernels. Our results also accommodate some classes of inhomogeneous chains.

Statistics and Probability; Discrete mathematics; Markov chain; Markov chain mixing time; Ergodicity; Subgeometric ergodicity; Polynomial ergodicity; Inhomogeneous chains; Convergence; 60J05; 60J22; General Mathematics; Statistics, Probability and Uncertainty
researchProduct

Stochastic order characterization of uniform integrability and tightness

2013

We show that a family of random variables is uniformly integrable if and only if it is stochastically bounded in the increasing convex order by an integrable random variable. This result is complemented by proving analogous statements for the strong stochastic order and for power-integrable dominating random variables. In particular, we show that whenever a family of random variables is stochastically bounded by a p-integrable random variable for some p>1, there is no distinction between the strong order and the increasing convex order. These results also yield new characterizations of relative compactness in the Wasserstein and Prohorov metrics.

Statistics and Probability; Discrete mathematics; Pure mathematics; Random field; Multivariate random variable; Random function; Random element; Random variate; Stochastic ordering; Convergence of random variables; Stochastic simulation; Probability (math.PR); Functional Analysis (math.FA); 60E15; 60B10; 60F25; Statistics, Probability and Uncertainty; Statistics & Probability Letters
researchProduct

Hierarchical log Gaussian Cox process for regeneration in uneven-aged forests

2021

We propose a hierarchical log Gaussian Cox process (LGCP) for point patterns, where a set of points x affects another set of points y but not vice versa. We use the model to investigate the effect of large trees on the locations of seedlings. In the model, every point in x has a parametric influence kernel or signal, which together form an influence field. Conditionally on the parameters, the influence field acts as a spatial covariate in the intensity of the model, and the intensity itself is a non-linear function of the parameters. Points outside the observation window may affect the influence field inside the window. We propose an edge correction to account for this missing data. The par…

Statistics and Probability; MCMC; Gaussian; Bayesian inference; Bayesian methods; Markov chains; Markov chain Monte Carlo; Cox process; spatial random effects; competition kernel; Kernel (statistics); Laplace approximation; Laplace's method; Missing data; Monte Carlo methods; Parametric statistics; regeneration (biology); tree regeneration; tree stand; forest management; mathematical models; Applications (stat.AP); Methodology (stat.ME); 62F15 (Primary); 62M30, 60G55 (Secondary); General Environmental Science; Statistics, Probability and Uncertainty
researchProduct

Establishing some order amongst exact approximations of MCMCs

2016

Exact approximations of Markov chain Monte Carlo (MCMC) algorithms are a general emerging class of sampling algorithms. One of the main ideas behind exact approximations consists of replacing intractable quantities required to run standard MCMC algorithms, such as the target probability density in a Metropolis-Hastings algorithm, with estimators. Perhaps surprisingly, such approximations lead to powerful algorithms which are exact in the sense that they are guaranteed to have correct limiting distributions. In this paper we discover a general framework which allows one to compare, or order, performance measures of two implementations of such algorithms. In particular, we establish an order …
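The "exact approximation" idea above, replacing the target density with a non-negative unbiased estimator while keeping the correct limiting distribution, can be sketched as a minimal pseudo-marginal Metropolis–Hastings sampler. The noisy density below (log-normal multiplicative noise with mean one) is a hypothetical example estimator, not one from the paper.

```python
import math
import random

def pseudo_marginal_mh(log_density_estimate, x0, n_iter, step=1.0, seed=0):
    """Metropolis-Hastings where the target density is replaced by a
    non-negative unbiased estimator.  The current estimate is recycled across
    iterations, which is exactly what preserves the correct limiting
    distribution despite the noise."""
    rng = random.Random(seed)
    x = x0
    log_est = log_density_estimate(x, rng)
    chain = []
    for _ in range(n_iter):
        prop = x + step * rng.gauss(0.0, 1.0)
        log_est_prop = log_density_estimate(prop, rng)
        if rng.random() < math.exp(min(0.0, log_est_prop - log_est)):
            x, log_est = prop, log_est_prop
        chain.append(x)
    return chain

def noisy_log_density(x, rng):
    # Hypothetical estimator of a standard normal density: true log density
    # plus log-normal noise; exp(z - sigma^2/2) has mean one, so the
    # estimator is unbiased on the density scale.
    z = rng.gauss(0.0, 0.3)
    return -0.5 * x * x + z - 0.5 * 0.3 ** 2

chain = pseudo_marginal_mh(noisy_log_density, 0.0, 4000)
```

Despite the noise, the chain targets the exact standard normal, so sample moments approach those of N(0, 1); the paper's ordering results quantify how the estimator's noise inflates the asymptotic variance.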

Statistics and Probability; Mathematical optimization; Monotonic function; Pseudo-marginal algorithm; Martingale coupling; Convex order; Order condition; Asymptotic variance; Estimator; Markov chain; Markov chain Monte Carlo; Gibbs sampling; Delta method; algorithms; mathematics; Probability (math.PR); Computation (stat.CO); 65C40 (Primary); 60J05, 65C05 (Secondary); 60E15; 60J22; Statistics, Probability and Uncertainty
researchProduct

An Adaptive Parallel Tempering Algorithm

2013

Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis-Hastings algorithms often fail. The mixing properties of the sampler depend strongly on the choice of tuning parameters, such as the temperature schedule and the proposal distribution used for local exploration. We propose an adaptive algorithm with a fixed number of temperatures which tunes both the temperature schedule and the parameters of the random-walk Metropolis kernel automatically. We prove the convergence of the adaptation and a strong law of large numbers for the algorithm under general conditions. We also prove as a side…
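The state-swap move at the heart of parallel tempering can be written down compactly. As a hedged sketch (not the paper's adaptive algorithm), the log acceptance probability for swapping the states of two chains targeting tempered versions pi(x)^beta of the same density is:

```python
import math

def swap_log_prob(log_target, x_i, x_j, beta_i, beta_j):
    """Log acceptance probability for swapping the states of two tempered
    chains with inverse temperatures beta_i and beta_j, each targeting
    pi(x)^beta for the same base density pi."""
    return min(0.0, (beta_i - beta_j) * (log_target(x_j) - log_target(x_i)))

# Standard normal base target, up to an additive constant.
log_target = lambda x: -0.5 * x * x
p_swap = swap_log_prob(log_target, 0.0, 2.0, 1.0, 0.5)
prob_swap = math.exp(p_swap)
```

Swaps moving a high-density state toward the colder (larger beta) chain are always accepted; the adaptive algorithm in the paper tunes the beta schedule so that swap acceptance rates stay balanced.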

Statistics and Probability; Mathematical optimization; Schedule; Adaptive algorithm; Ergodicity; Mixing (mathematics); Law of large numbers; Kernel (statistics); Convergence; Parallel tempering; Discrete Mathematics and Combinatorics; Statistics, Probability and Uncertainty; Journal of Computational and Graphical Statistics
researchProduct

Convergence of Markovian Stochastic Approximation with discontinuous dynamics

2016

This paper is devoted to the convergence analysis of stochastic approximation algorithms of the form $\theta_{n+1} = \theta_n + \gamma_{n+1} H_{\theta_n}({X_{n+1}})$, where ${\left\{ {\theta}_n, n \in {\mathbb{N}} \right\}}$ is an ${\mathbb{R}}^d$-valued sequence, ${\left\{ {\gamma}_n, n \in {\mathbb{N}} \right\}}$ is a deterministic stepsize sequence, and ${\left\{ {X}_n, n \in {\mathbb{N}} \right\}}$ is a controlled Markov chain. We study the convergence under weak assumptions on smoothness-in-$\theta$ of the function $\theta \mapsto H_{\theta}({x})$. It is usually assumed that this function is continuous for any $x$; in this work, we relax this condition. Our results are illustrated by c…
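The recursion $\theta_{n+1} = \theta_n + \gamma_{n+1} H_{\theta_n}(X_{n+1})$ can be sketched in a few lines. The example below is illustrative only: it uses i.i.d. draws rather than a controlled Markov chain, and its field $H$ is deliberately discontinuous in $\theta$ (a sign function whose mean-field root is the median of the noise), matching the relaxation discussed in the abstract.

```python
import random

def stochastic_approximation(H, x_sampler, theta0, n_iter, seed=0):
    """Robbins-Monro style recursion
        theta_{n+1} = theta_n + gamma_{n+1} * H(theta_n, X_{n+1})
    with deterministic step sizes gamma_n = 1/n.  Here X_n is drawn i.i.d.
    for simplicity; the paper's setting allows a controlled Markov chain."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_iter + 1):
        x = x_sampler(theta, rng)
        theta = theta + (1.0 / n) * H(theta, x)
    return theta

theta_hat = stochastic_approximation(
    # Discontinuous in theta: a step function of the comparison x < theta.
    H=lambda theta, x: -1.0 if x < theta else 1.0,
    x_sampler=lambda theta, rng: rng.gauss(0.0, 1.0),
    theta0=2.0,
    n_iter=20000,
)
```

With standard normal noise the mean field is $1 - 2\Phi(\theta)$, whose root is 0, so the iterates converge to the median even though $\theta \mapsto H_\theta(x)$ is discontinuous for every $x$.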

Control and Optimization; Stochastic approximation; Markovian stochastic approximation; controlled Markov chain; Markov process; discontinuous dynamics; state-dependent noise; convergence; Sequence; Function (mathematics); Statistics Theory (math.ST); 62L20; Applied Mathematics
researchProduct

Coupled conditional backward sampling particle filter

2020

The conditional particle filter (CPF) is a promising algorithm for general hidden Markov model smoothing. Empirical evidence suggests that the variant of the CPF with backward sampling (CBPF) performs well even with long time series. Previous theoretical results have not been able to demonstrate the improvement brought by backward sampling, whereas we provide rates showing that the CBPF can remain effective with a fixed number of particles, independent of the time horizon. Our result is based on the analysis of a new coupling of two CBPFs, the coupled conditional backward sampling particle filter (CCBPF). We show that the CCBPF has good stability properties in the sense that, with a fixed number of particles, …

Statistics and Probability; conditional particle filter; Particle filter; Backward sampling; coupling; Hidden Markov model; Smoothing; unbiased; convergence rate; Rate of convergence; Time horizon; Stability (probability); Sampling (statistics); Markov chains; Monte Carlo methods; stochastic processes; numerical analysis; Probability (math.PR); Computation (stat.CO); 65C05 (Primary); 60J05, 65C35, 65C40 (Secondary); Statistics, Probability and Uncertainty
researchProduct

Importance sampling correction versus standard averages of reversible MCMCs in terms of the asymptotic variance

2017

We establish an ordering criterion for the asymptotic variances of two consistent Markov chain Monte Carlo (MCMC) estimators: an importance sampling (IS) estimator, based on an approximate reversible chain and subsequent IS weighting, and a standard MCMC estimator, based on an exact reversible chain. Essentially, we relax the criterion of the Peskun type covariance ordering by considering two different invariant probabilities, and obtain, in place of a strict ordering of asymptotic variances, a bound of the asymptotic variance of IS by that of the direct MCMC. Simple examples show that IS can have arbitrarily better or worse asymptotic variance than Metropolis-Hastings and delayed-acceptanc…
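The importance-sampling correction compared above admits a short sketch: run a reversible chain targeting an approximate density, then reweight by the ratio of the exact to the approximate density. The targets and tuning values below are hypothetical, chosen only to make the example self-contained.

```python
import math
import random

def rwm(log_target, x0, n_iter, step, rng):
    """Plain random-walk Metropolis used as the approximate-target chain."""
    x, out = x0, []
    for _ in range(n_iter):
        prop = x + step * rng.gauss(0.0, 1.0)
        if rng.random() < math.exp(min(0.0, log_target(prop) - log_target(x))):
            x = prop
        out.append(x)
    return out

def is_corrected_mean(chain, log_target, log_approx, f):
    """Self-normalised importance-sampling correction: reweight draws from
    the approximate chain by pi(x)/pi_approx(x) to estimate E_pi[f(X)]."""
    logw = [log_target(x) - log_approx(x) for x in chain]
    m = max(logw)                      # stabilise the exponentials
    w = [math.exp(lw - m) for lw in logw]
    return sum(wi * f(x) for wi, x in zip(w, chain)) / sum(w)

rng = random.Random(2)
log_pi = lambda x: -0.5 * x * x        # exact target: N(0, 1)
log_pi_approx = lambda x: -0.25 * x * x  # approximate target: N(0, 2)
chain = rwm(log_pi_approx, 0.0, 8000, 1.5, rng)
est = is_corrected_mean(chain, log_pi, log_pi_approx, lambda x: x)
```

The paper's result bounds the asymptotic variance of such weighted estimators by that of a direct MCMC on the exact target, under the stated conditions on the two invariant probabilities.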

Statistics and Probability; importance sampling; delayed acceptance; pseudo-marginal algorithm; asymptotic variance; unbiased estimator; Estimator; estimation; Covariance; Weighting; Bounded function; Infimum and supremum; Delta method; Markov chains; Markov chain Monte Carlo; Monte Carlo methods; stochastic processes; numerical methods; Applied Mathematics; Modeling and Simulation; Probability (math.PR); Computation (stat.CO); 60J22; 65C05
researchProduct

Conditional particle filters with diffuse initial distributions

2020

Conditional particle filters (CPFs) are powerful smoothing algorithms for general nonlinear/non-Gaussian hidden Markov models. However, CPFs can be inefficient or difficult to apply with diffuse initial distributions, which are common in statistical applications. We propose a simple but generally applicable auxiliary variable method, which can be used together with the CPF in order to perform efficient inference with diffuse initial distributions. The method only requires simulatable Markov transitions that are reversible with respect to the initial distribution, which can be improper. We focus in particular on random-walk type transitions which are reversible with respect to a uniform init…

Statistics and Probability; Conditional particle filter; Particle filter; Diffuse initialisation; Smoothing; Hidden Markov model; State space model; Bayesian inference; Bayesian methods; Adaptive Markov chain Monte Carlo; Markov chains; Random walk; Autoregressive model; Compartment model; Covariance; Gaussian; statistics; mathematical methods; Methodology (stat.ME); Computation (stat.CO); Theoretical Computer Science; Computational Theory and Mathematics; Statistics, Probability and Uncertainty
researchProduct

Can the Adaptive Metropolis Algorithm Collapse Without the Covariance Lower Bound?

2011

The Adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the following time-dependent covariance matrix at step $n+1$: \[ S_n = \operatorname{Cov}(X_1,\dots,X_n) + \epsilon I, \] that is, the sample covariance matrix of the history of the chain plus a (small) constant $\epsilon>0$ multiple of the identity matrix $I$. The lower bound on the eigenvalues of $S_n$ induced by the factor $\epsilon I$ is theoretically convenient, but practically cumbersome, as a good value for the parameter $\epsilon$ may not always be easy to choose. This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of $S_n$ away …
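The covariance recursion behind $S_n$ can be sketched in one dimension using Welford's running mean/variance update. This follows the usual AM convention with scaling $s_d = (2.38)^2/d$ (here $d = 1$); the function name and the placement of $\epsilon$ are illustrative assumptions, not the paper's code.

```python
def am_proposal_variances(history, epsilon=1e-6, s_d=5.76):
    """Scalar Adaptive Metropolis proposal variance after each step:
    s_d * (sample variance of the history so far + epsilon), computed with
    Welford's numerically stable recursion.  s_d = (2.38)^2 / d with d = 1."""
    mean, m2, variances = 0.0, 0.0, []
    for n, x in enumerate(history, start=1):
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)                 # Welford's update of sum of squares
        sample_var = m2 / (n - 1) if n > 1 else 0.0
        variances.append(s_d * (sample_var + epsilon))
    return variances

hist = [0.0, 1.0, 2.0, 4.0]
vars_ = am_proposal_variances(hist)
```

The recursion reproduces the batch sample variance exactly at every step, which is why the single pass suffices in an MCMC run of arbitrary length.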

Statistics and Probability; adaptive Markov chain Monte Carlo; Metropolis algorithm; stochastic approximation; stability; Law of large numbers; Covariance; Identity matrix; Eigenvalues and eigenvectors; Upper and lower bounds; Bounded function; Uniform continuity; Statistics Theory (math.ST); Probability (math.PR); Computation (stat.CO); 65C40; 60J27; 93E15; 93E35; Statistics, Probability and Uncertainty
researchProduct

On the stability of some controlled Markov chains and its applications to stochastic approximation with Markovian dynamic

2015

We develop a practical approach to establish the stability, that is, the recurrence in a given set, of a large class of controlled Markov chains. These processes arise in various areas of applied science and encompass important numerical methods. We show in particular how individual Lyapunov functions and associated drift conditions for the parametrized family of Markov transition probabilities and the parameter update can be combined to form Lyapunov functions for the joint process, leading to the proof of the desired stability property. Of particular interest is the fact that the approach applies even in situations where the two components of the process present a time-scale separation, w…

Statistics and Probability; controlled Markov chains; stochastic approximation; adaptive Markov chain Monte Carlo; Lyapunov function; Stability; Markov chain; Markov process; Computational statistics; Statistics Theory (math.ST); Methodology (stat.ME); 65C05; 60J05; 60J22; Statistics, Probability and Uncertainty
researchProduct

Conditional convex orders and measurable martingale couplings

2014

Strassen's classical martingale coupling theorem states that two real-valued random variables are ordered in the convex (resp. increasing convex) stochastic order if and only if they admit a martingale (resp. submartingale) coupling. By analyzing topological properties of spaces of probability measures equipped with a Wasserstein metric and applying a measurable selection theorem, we prove a conditional version of this result for real-valued random variables conditioned on a random element taking values in a general measurable space. We also provide an analogue of the conditional martingale coupling theorem in the language of probability kernels and illustrate how this result can be appli…

Statistics and Probability; Stochastic ordering; convex stochastic order; increasing convex stochastic order; martingale coupling; conditional coupling; pointwise coupling; coupling; probability kernel; Martingale (probability theory); Random element; Random variable; Probability measure; Wasserstein metric; Markov chain Monte Carlo; vectors (mathematics); stochastic processes; mathematics; Probability (math.PR); 60E15
researchProduct

Unbiased Inference for Discretely Observed Hidden Markov Model Diffusions

2021

We develop a Bayesian inference method for diffusions observed discretely and with noise, which is free of discretisation bias. Unlike existing unbiased inference methods, our method does not rely on exact simulation techniques. Instead, our method uses standard time-discretised approximations of diffusions, such as the Euler--Maruyama scheme. Our approach is based on particle marginal Metropolis--Hastings, a particle filter, randomised multilevel Monte Carlo, and importance sampling type correction of approximate Markov chain Monte Carlo. The resulting estimator leads to inference without a bias from the time-discretisation as the number of Markov chain iterations increases. We give conver…
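The Euler–Maruyama time-discretisation mentioned above is the basic building block of the method and is easy to sketch; the Ornstein–Uhlenbeck example coefficients below are illustrative, not from the paper.

```python
import random

def euler_maruyama(drift, diffusion, x0, t_end, n_steps, rng):
    """Euler-Maruyama time discretisation of the SDE
        dX_t = drift(X_t) dt + diffusion(X_t) dW_t
    over [0, t_end] with n_steps equal steps."""
    dt = t_end / n_steps
    x = x0
    path = [x]
    for _ in range(n_steps):
        # Brownian increment over dt has standard deviation sqrt(dt).
        x = x + drift(x) * dt + diffusion(x) * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Ornstein-Uhlenbeck sketch: dX = -X dt + 0.5 dW, started at X_0 = 2.
rng = random.Random(3)
path = euler_maruyama(lambda x: -x, lambda x: 0.5, 2.0, 1.0, 100, rng)
```

The scheme is biased for any fixed step size; the paper's point is that combining such discretisations with randomised multilevel Monte Carlo and importance-sampling correction removes this bias in the limit of the MCMC run.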

Statistics and Probability; Bayesian inference; Bayesian methods; Hidden Markov model; diffusion; diffusion (physical phenomena); Discretization; Inference; sequential Monte Carlo; Markov chains; Markov chain Monte Carlo; Monte Carlo methods; multilevel Monte Carlo; importance sampling; Particle filter; Noise; mathematics; mathematical methods; mathematical models; Probability (math.PR); Computation (stat.CO); Methodology (stat.ME); Discrete Mathematics and Combinatorics; Applied Mathematics; Modeling and Simulation; 65C05 (Primary); 60H35, 65C35, 65C40 (Secondary); Statistics, Probability and Uncertainty; SIAM/ASA Journal on Uncertainty Quantification
researchProduct

On the use of approximate Bayesian computation Markov chain Monte Carlo with inflated tolerance and post-correction

2020

Approximate Bayesian computation allows for inference of complicated probabilistic models with intractable likelihoods using model simulations. The Markov chain Monte Carlo implementation of approximate Bayesian computation is often sensitive to the tolerance parameter: low tolerance leads to poor mixing and large tolerance entails excess bias. We consider an approach using a relatively large tolerance for the Markov chain Monte Carlo sampler to ensure its sufficient mixing, and post-processing the output leading to estimators for a range of finer tolerances. We introduce an approximate confidence interval for the related post-corrected estimators, and propose an adaptive approximate Bayesi…
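The inflate-then-post-correct idea can be sketched as follows: run ABC-MCMC with a relatively large tolerance, store the distance attached to each accepted state, and afterwards restrict to states within any finer tolerance. The toy model, prior, and all names here are hypothetical, and the post-correction shown is the simplest cut-off variant rather than the paper's weighted estimators.

```python
import math
import random

def abc_mcmc(log_prior, simulate, y_obs, tol, theta0, n_iter, rng, step=1.0):
    """ABC-MCMC with an inflated tolerance `tol`: a move is accepted when the
    simulated pseudo-data falls within `tol` of the observation and the usual
    prior-ratio Metropolis test passes.  Each state stores the distance of its
    accepted pseudo-data, which is what enables post-correction."""
    theta, d = theta0, 0.0   # assume the initial pseudo-data is within tolerance
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.gauss(0.0, 1.0)
        d_prop = abs(simulate(prop, rng) - y_obs)
        if d_prop <= tol and rng.random() < math.exp(min(0.0, log_prior(prop) - log_prior(theta))):
            theta, d = prop, d_prop
        chain.append((theta, d))
    return chain

def post_corrected_mean(chain, finer_tol):
    """Posterior-mean estimator at a finer tolerance, by restricting the
    stored chain to states whose distance is below finer_tol."""
    kept = [t for t, d in chain if d <= finer_tol]
    return sum(kept) / len(kept) if kept else float("nan")

rng = random.Random(4)
log_prior = lambda t: -t * t / 20.0               # N(0, 10) prior
simulate = lambda t, rng: t + rng.gauss(0.0, 1.0)  # toy model: y ~ N(theta, 1)
chain = abc_mcmc(log_prior, simulate, 1.0, 1.0, 0.5, 5000, rng)
est_coarse = post_corrected_mean(chain, 1.0)
est_fine = post_corrected_mean(chain, 0.3)
```

A single run at the large tolerance thus yields estimators for a whole range of finer tolerances, at the cost of using fewer effective samples as the tolerance shrinks.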

Statistics and Probability; approximate Bayesian computation; tolerance choice; adaptive algorithm; Markov chains; Markov chain Monte Carlo; Monte Carlo methods; importance sampling; Estimator; confidence interval; Mixing (mathematics); Inference; Bayesian methods; algorithms; General Mathematics; Applied Mathematics; Computation (stat.CO); Agricultural and Biological Sciences (miscellaneous); General Agricultural and Biological Sciences; Statistics, Probability and Uncertainty
researchProduct

bssm: Bayesian Inference of Non-linear and Non-Gaussian State Space Models in R

2021

We present an R package bssm for Bayesian non-linear/non-Gaussian state space modelling. Unlike the existing packages, bssm allows for easy-to-use approximate inference based on Gaussian approximations such as the Laplace approximation and the extended Kalman filter. The package also accommodates discretely observed latent diffusion processes. The inference is based on fully automatic, adaptive Markov chain Monte Carlo (MCMC) on the hyperparameters, with optional importance sampling post-correction to eliminate any approximation bias. The package also implements a direct pseudo-marginal MCMC and a delayed acceptance pseudo-marginal MCMC using intermediate approximations. The package offers …

Statistics and Probability; modelling; Bayesian methods; Markov chains; Monte Carlo methods; state space models; mathematics; mathematical models; Numerical Analysis; Computation (stat.CO); Statistics, Probability and Uncertainty
researchProduct

Uniform ergodicity of the iterated conditional SMC and geometric ergodicity of particle Gibbs samplers

2018

We establish quantitative bounds for rates of convergence and asymptotic variances for iterated conditional sequential Monte Carlo (i-cSMC) Markov chains and associated particle Gibbs samplers. Our main findings are that the essential boundedness of potential functions associated with the i-cSMC algorithm provide necessary and sufficient conditions for the uniform ergodicity of the i-cSMC Markov chain, as well as quantitative bounds on its (uniformly geometric) rate of convergence. Furthermore, we show that the i-cSMC Markov chain cannot even be geometrically ergodic if this essential boundedness does not hold in many applications of interest. Our sufficiency and quantitative bounds rely on…

Statistics and Probability; iterated conditional sequential Monte Carlo; particle Gibbs; Gibbs sampling; Metropolis-within-Gibbs; uniform ergodicity; geometric ergodicity; Ergodicity; Ergodic theory; Gibbs measure; Rate of convergence; Markov chain; Markov chain Monte Carlo; Particle filter; Probability (math.PR); 65C40 (Primary); 60J05, 65C05 (Secondary)
researchProduct

Importance sampling type estimators based on approximate marginal Markov chain Monte Carlo

2020

We consider importance sampling (IS) type weighted estimators based on Markov chain Monte Carlo (MCMC) targeting an approximate marginal of the target distribution. In the context of Bayesian latent variable models, the MCMC typically operates on the hyperparameters, and the subsequent weighting may be based on IS or sequential Monte Carlo (SMC), but allows for multilevel techniques as well. The IS approach provides a natural alternative to delayed acceptance (DA) pseudo-marginal/particle MCMC, and has many advantages over DA, including a straightforward parallelisation and additional flexibility in MCMC implementation. We detail minimal conditions which ensure strong consistency of the sug…

Statistics and Probability; importance sampling; Markov chain Monte Carlo; Hyperparameter; Bayesian probability; Strong consistency; Estimator; Particle filter; Algorithm; Statistics, Probability and Uncertainty; Scandinavian Journal of Statistics
researchProduct

Unbiased Estimators and Multilevel Monte Carlo

2018

Multilevel Monte Carlo (MLMC) and unbiased estimators recently proposed by McLeish (Monte Carlo Methods Appl., 2011) and Rhee and Glynn (Oper. Res., 2015) are closely related. This connection is elaborated by presenting a new general class of unbiased estimators, which admits previous debiasing schemes as special cases. New lower variance estimators are proposed, which are stratified versions of earlier unbiased schemes. Under general conditions, essentially when MLMC admits the canonical square root Monte Carlo error rate, the proposed new schemes are shown to be asymptotically as efficient as MLMC, both in terms of variance and cost. The experiments demonstrate that the variance reduction…
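The debiasing connection described above can be illustrated with a coupled-sum estimator in the spirit of Rhee and Glynn. As a hedged sketch: draw a random truncation level K and sum the level increments divided by the tail probabilities of K. Here the increments are plain numbers for clarity; in MLMC they would be coupled Monte Carlo correction terms, and the function name is an illustrative assumption.

```python
import random

def unbiased_estimate(deltas, tail_probs, rng):
    """Coupled-sum debiased estimator: draw a truncation level K with
    P(K >= k) = tail_probs[k] and return sum_{k <= K} deltas[k] / tail_probs[k],
    which is unbiased for sum_k deltas[k] (the exact infinite-level limit)."""
    u = rng.random()
    K = 0
    # K = max{ k : u < P(K >= k) }, so that P(K >= k) = tail_probs[k].
    while K + 1 < len(deltas) and u < tail_probs[K + 1]:
        K += 1
    return sum(deltas[k] / tail_probs[k] for k in range(K + 1))

deltas = [1.0, 0.5, 0.25]   # level increments; the "exact" value is their sum, 1.75
tails = [1.0, 0.5, 0.25]    # P(K >= k); must be positive wherever deltas is nonzero
rng = random.Random(5)
avg = sum(unbiased_estimate(deltas, tails, rng) for _ in range(20000)) / 20000
```

Stratifying the levels, as the paper proposes, keeps this unbiasedness while reducing the variance toward that of plain MLMC.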

Monte Carlo methods; multilevel Monte Carlo; unbiased estimators; variance reduction; stratification; efficiency; stochastic differential equations; differential equations; stochastic processes; Estimator; Variance; Square root; Management Science and Operations Research; Computer Science Applications; Probability (math.PR); Computation (stat.CO); 65C05 (Primary); 65C30 (Secondary); Operations Research
researchProduct

Adaptive Metropolis algorithm using variational Bayesian adaptive Kalman filter

2013

Markov chain Monte Carlo (MCMC) methods are powerful computational tools for analysis of complex statistical problems. However, their computational efficiency is highly dependent on the chosen proposal distribution, which is generally difficult to find. One way to solve this problem is to use adaptive MCMC algorithms which automatically tune the statistics of a proposal distribution during the MCMC run. A new adaptive MCMC algorithm, called the variational Bayesian adaptive Metropolis (VBAM) algorithm, is developed. The VBAM algorithm updates the proposal covariance matrix using the variational Bayesian adaptive Kalman filter (VB-AKF). A strong law of large numbers for the VBAM algorithm is…

Statistics and Probability; Markov chain Monte Carlo; Metropolis–Hastings algorithm; Rejection sampling; Covariance matrix; Kalman filter; Kernel adaptive filter; Bayesian probability; Convergence; Mathematical optimization; Statistics Theory (math.ST); Applied Mathematics; Computational Mathematics; Computational Theory and Mathematics; Computational Statistics & Data Analysis
researchProduct

On the convergence of unconstrained adaptive Markov chain Monte Carlo algorithms

2010

Keywords: Monte Carlo methods, Markov processes, Markov chains, algorithms
researchProduct

Conditional particle filters with bridge backward sampling

2022

Conditional particle filters (CPFs) with backward/ancestor sampling are powerful methods for sampling from the posterior distribution of the latent states of a dynamic model such as a hidden Markov model. However, the performance of these methods deteriorates with models involving weakly informative observations and/or slowly mixing dynamics. Both of these complications arise when sampling finely time-discretised continuous-time path integral models, but can occur with hidden Markov models too. Multinomial resampling, which is commonly employed with CPFs, resamples excessively for weakly informative observations and thereby introduces extra variance. Furthermore, slowly mixing dynamics rend…
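As a hedged illustration of the resampling issue raised above, the sketch below contrasts multinomial resampling with systematic resampling (one standard low-variance alternative; the paper's bridge backward-sampling machinery is not reproduced here). With exactly uniform weights, systematic resampling keeps every particle, whereas multinomial resampling typically duplicates some, which is the "extra variance" the abstract refers to.

```python
import numpy as np

def multinomial_resample(weights, rng):
    """Draw ancestor indices i.i.d. from the normalised weights."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return rng.choice(len(w), size=len(w), p=w)

def systematic_resample(weights, rng):
    """Systematic resampling: a single uniform offset gives a stratified,
    low-variance draw; with exactly uniform weights it returns every
    index once, i.e. no superfluous resampling."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    n = len(w)
    positions = (rng.uniform() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)
```

With weakly informative observations the weights are close to uniform, so systematic resampling leaves the particle system essentially untouched while multinomial resampling still reshuffles it.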

Keywords: Computation (stat.CO), Methodology (stat.ME)
researchProduct

Theoretical and methodological aspects of MCMC computations with noisy likelihoods

2018

Approximate Bayesian computation (ABC) [11, 42] is a popular method for Bayesian inference when the likelihood function is intractable, or expensive to evaluate, but simulation from the model is easy. The method consists of defining an alternative likelihood function which is also in general intractable but naturally lends itself to pseudo-marginal computations [5], making the approach of practical interest. The aim of this chapter is to show the connections of ABC Markov chain Monte Carlo with pseudo-marginal algorithms, review their existing theoretical results, and discuss how these can inform practice and hopefully lead to fruitful methodological developments.
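A minimal sketch of the ABC-MCMC/pseudo-marginal connection, assuming a toy one-dimensional model and a simple distance threshold (the function names and defaults here are illustrative, not from the chapter): the intractable likelihood is replaced by the unbiased estimate $\mathbf{1}\{|y_{\mathrm{sim}} - y_{\mathrm{obs}}| \le \epsilon\}$, re-simulated at each proposal, which is exactly a pseudo-marginal likelihood estimate.

```python
import numpy as np

def abc_mcmc(observed, simulate, log_prior, n_iter, step, tol, seed=0):
    """ABC-MCMC with a symmetric random-walk proposal: the likelihood
    is replaced by the pseudo-marginal estimate 1{dist <= tol},
    re-drawn by simulating fresh data at every proposed parameter."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    # initialise from a state whose simulated data is within tolerance
    hit = False
    while not hit:
        hit = abs(simulate(theta, rng) - observed) <= tol
    chain = np.empty(n_iter)
    for n in range(n_iter):
        prop = theta + step * rng.standard_normal()
        ok = abs(simulate(prop, rng) - observed) <= tol
        # accept only if the fresh simulation hits, then apply the
        # prior ratio (proposal is symmetric, so it cancels)
        if ok and np.log(rng.uniform()) < log_prior(prop) - log_prior(theta):
            theta = prop
        chain[n] = theta
    return chain
```

The indicator plays the role of the nonnegative unbiased likelihood estimate in a pseudo-marginal chain, which is why the theory for the latter transfers to ABC-MCMC.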

Keywords: probability theory, Bayesian methods, likelihoods, Bayesian computation
researchProduct

Importance sampling type estimators based on approximate marginal Markov chain Monte Carlo

2020

We consider importance sampling (IS) type weighted estimators based on Markov chain Monte Carlo (MCMC) targeting an approximate marginal of the target distribution. In the context of Bayesian latent variable models, the MCMC typically operates on the hyperparameters, and the subsequent weighting may be based on IS or sequential Monte Carlo (SMC), but allows for multilevel techniques as well. The IS approach provides a natural alternative to delayed acceptance (DA) pseudo-marginal/particle MCMC, and has many advantages over DA, including a straightforward parallelisation and additional flexibility in MCMC implementation. We detail minimal conditions which ensure strong consistency of the sug…
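The IS-type weighting can be illustrated with a self-normalised importance-sampling correction; the sketch below assumes i.i.d. draws from a tractable approximation (the paper's setting, MCMC targeting an approximate marginal with IS/SMC weighting, is more general), and all names are illustrative.

```python
import numpy as np

def is_correct(samples, log_target, log_approx, f):
    """Self-normalised IS correction: reweight draws from an
    approximate distribution so that weighted averages are consistent
    for expectations under the exact target."""
    lw = np.array([log_target(x) - log_approx(x) for x in samples])
    w = np.exp(lw - lw.max())   # stabilise before normalising
    w /= w.sum()
    return np.sum(w * np.array([f(x) for x in samples]))
```

For example, draws from N(0, 1.5^2) reweighted towards a standard normal target recover E[X^2] = 1; the same correction applies unchanged when the draws come from an approximate-marginal MCMC chain rather than i.i.d. sampling, under suitable ergodicity conditions.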

Keywords: Monte Carlo methods, Bayesian methods, statistical methods, Markov chains, Markov chain Monte Carlo (MCMC), Bayesian analysis, sampling, estimation
researchProduct

Prediction of leukocyte counts during paediatric acute lymphoblastic leukaemia maintenance therapy

2019

Maintenance chemotherapy with oral 6-mercaptopurine and methotrexate remains a cornerstone of modern therapy for acute lymphoblastic leukaemia. The dosage and intensity of therapy are based on surrogate markers such as peripheral blood leukocyte and neutrophil counts. Dosage-based leukocyte count predictions could support the dosage decisions clinicians face when trying to find and maintain an appropriate dosage for the individual patient. We present two Bayesian nonlinear state-space models for predicting patient leukocyte counts during maintenance therapy. The models simplify some aspects of previously proposed models but allow for some extra flexibility. Our second model is an ext…

Keywords: time series, neutrophils, leukocytes, leukocyte count, biomarkers, biological models, maintenance chemotherapy, antineoplastic combined chemotherapy protocols, drug dosage calculations, drug therapy, mercaptopurine, methotrexate, paediatric cancer, acute lymphoblastic leukaemia (precursor cell lymphoblastic leukemia-lymphoma), cancers, statistical models, stochastic modelling, stochastic processes, Bayesian methods, Bayes theorem, predictions
researchProduct

Can the adaptive Metropolis algorithm collapse without the covariance lower bound?

2011

The Adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the following time-dependent covariance matrix at step $n+1$: \[ S_n = \operatorname{Cov}(X_1,\dots,X_n) + \epsilon I, \] that is, the sample covariance matrix of the history of the chain plus a (small) constant $\epsilon>0$ multiple of the identity matrix $I$. The lower bound on the eigenvalues of $S_n$ induced by the factor $\epsilon I$ is theoretically convenient, but practically cumbersome, as a good value for the parameter $\epsilon$ may not always be easy to choose. This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of $S_n$ away …
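The AM update with the $\epsilon I$ lower bound can be sketched in a few lines. This is an illustrative implementation under simplifying assumptions (a standard recursive running-covariance update and the common $2.38^2/d$ proposal scaling), not one of the unconstrained variants the article studies:

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter=5000, eps=1e-6, seed=0):
    """Random-walk Metropolis whose proposal covariance is the running
    sample covariance of the chain history plus eps * I (AM-style)."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    x = np.asarray(x0, float)
    mean, cov = x.copy(), np.eye(d)
    chain = np.empty((n_iter, d))
    for n in range(n_iter):
        # S_n = Cov(history) + eps * I, with the usual 2.38^2/d scaling
        prop_cov = (2.38**2 / d) * cov + eps * np.eye(d)
        y = rng.multivariate_normal(x, prop_cov)
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        chain[n] = x
        # recursive mean/covariance update of the chain history
        w = 1.0 / (n + 2)
        diff = x - mean
        mean = mean + w * diff
        cov = (1 - w) * cov + w * np.outer(diff, diff)
    return chain
```

Setting `eps = 0` here gives the unconstrained variant whose potential collapse is the question posed in the title.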

Keywords: stability, Metropolis algorithm, adaptive Markov chain Monte Carlo, stochastic approximation
researchProduct