Search results for "Statistics::Computation"
Showing 10 of 48 documents
Anti-tempered Layered Adaptive Importance Sampling
2017
Monte Carlo (MC) methods are widely used for Bayesian inference in signal processing, machine learning and statistics. In this work, we introduce an adaptive importance sampler which combines the benefits of the Importance Sampling (IS) and Markov Chain Monte Carlo (MCMC) approaches. Different parallel MCMC chains provide the location parameters of the proposal probability density functions (pdfs) used in an IS method. The MCMC algorithms consider a tempered version of the posterior distribution as invariant density. We also provide exhaustive theoretical support explaining why, in the presented technique, even an anti-tempering strategy (reducing the scaling of the posterior) can …
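The layered construction the abstract describes (parallel MCMC chains on a tempered target supplying the locations of IS proposals) can be sketched minimally. Everything below is illustrative, not the paper's actual algorithm: the bimodal target, the unit-scale Gaussian proposals, and the tempering exponent `beta` are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D bimodal unnormalized target (two Gaussian kernels at +/-1.5).
def log_target(x):
    return np.logaddexp(-0.5 * (x - 1.5) ** 2, -0.5 * (x + 1.5) ** 2)

# One Metropolis step on the tempered target pi(x)^beta (beta assumed here).
def mh_step(x, beta, step=1.0):
    prop = x + step * rng.standard_normal()
    if np.log(rng.uniform()) < beta * (log_target(prop) - log_target(x)):
        return prop
    return x

# Parallel chains supply the location parameters of Gaussian IS proposals.
chains = rng.standard_normal(4)
samples, logw = [], []
for _ in range(2000):
    chains = np.array([mh_step(x, beta=1.2) for x in chains])
    for mu in chains:
        z = mu + rng.standard_normal()  # draw from the N(mu, 1) proposal
        # Deterministic-mixture IS weight: target over the mixture of all
        # current proposals (standard in layered/population AIS schemes).
        log_mix = np.log(np.mean(np.exp(-0.5 * (z - chains) ** 2))) \
                  - 0.5 * np.log(2 * np.pi)
        samples.append(z)
        logw.append(log_target(z) - log_mix)

w = np.exp(np.array(logw) - np.max(logw))
est_mean = np.sum(w * np.array(samples)) / np.sum(w)  # self-normalized IS estimate
```

For this symmetric target the true mean is 0, so the weighted estimate should land near zero when the chains visit both modes.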
Test problems for large-scale nonsmooth minimization
2007
Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of hundreds or thousands of variables with various constraints. However, there exist only a few large-scale academic test problems for the nonsmooth case, and there is no established practice for testing solvers for large-scale nonsmooth optimization. For this reason, we now collect the nonsmooth test problems used in our previous numerical experiments and also give some new problems. Namely, we give problems for unconstrained, bound constrained, and inequality constrained nonsmooth minimization.
Importance sampling type estimators based on approximate marginal Markov chain Monte Carlo
2020
We consider importance sampling (IS) type weighted estimators based on Markov chain Monte Carlo (MCMC) targeting an approximate marginal of the target distribution. In the context of Bayesian latent variable models, the MCMC typically operates on the hyperparameters, and the subsequent weighting may be based on IS or sequential Monte Carlo (SMC), but allows for multilevel techniques as well. The IS approach provides a natural alternative to delayed acceptance (DA) pseudo-marginal/particle MCMC, and has many advantages over DA, including a straightforward parallelisation and additional flexibility in MCMC implementation. We detail minimal conditions which ensure strong consistency of the sug…
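The core idea in this abstract, running MCMC on a cheap approximate marginal and then reweighting the chain toward the exact target, can be sketched in a few lines. The target, the surrogate, and the estimand below are toy assumptions chosen so the IS weights stay bounded; they are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Expensive" exact (unnormalized) log-target and a cheap surrogate of it.
def log_target(x):
    return -0.5 * x ** 2 + np.log1p(0.5 * np.cos(x) ** 2)

def log_approx(x):  # plain standard-normal surrogate
    return -0.5 * x ** 2

# Metropolis-Hastings targeting the *approximate* marginal only.
x, chain = 0.0, []
for _ in range(20000):
    prop = x + rng.standard_normal()
    if np.log(rng.uniform()) < log_approx(prop) - log_approx(x):
        x = prop
    chain.append(x)
chain = np.array(chain[2000:])  # discard burn-in

# IS-type correction: weight each retained state by target / approximation.
logw = log_target(chain) - log_approx(chain)
w = np.exp(logw - logw.max())
est = np.sum(w * chain ** 2) / np.sum(w)  # self-normalized estimate of E[x^2]
```

Because the weight ratio here lies between 1 and 1.5, the correction is benign; in the DA alternative the same ratio would instead appear in a second accept/reject stage.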
Shrinkage and spectral filtering of correlation matrices: A comparison via the Kullback-Leibler distance
2007
The problem of filtering information from large correlation matrices is of great importance in many applications. We have recently proposed the use of the Kullback-Leibler distance to measure the performance of filtering algorithms in recovering the underlying correlation matrix when the variables are described by a multivariate Gaussian distribution. Here we use the Kullback-Leibler distance to investigate the performance of filtering methods based on Random Matrix Theory and on the shrinkage technique. We also present some results on the application of the Kullback-Leibler distance to multivariate data which are non-Gaussian distributed.
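The evaluation criterion in this abstract, the Kullback-Leibler distance between zero-mean Gaussians, has a closed form, and linear shrinkage toward the identity is one line. The sketch below assumes a simple one-factor-like true correlation matrix and a fixed shrinkage intensity; both are illustrative choices, not the paper's setup.

```python
import numpy as np

# KL divergence between n-variate zero-mean Gaussians N(0, S1) and N(0, S2):
#   KL = 0.5 * [ tr(S2^{-1} S1) - n + ln(det S2 / det S1) ]
def gaussian_kl(S1, S2):
    n = S1.shape[0]
    _, ld1 = np.linalg.slogdet(S1)
    _, ld2 = np.linalg.slogdet(S2)
    return 0.5 * (np.trace(np.linalg.inv(S2) @ S1) - n + ld2 - ld1)

rng = np.random.default_rng(1)
n, T = 10, 40
true_corr = 0.3 * np.ones((n, n)) + 0.7 * np.eye(n)  # assumed "true" matrix
X = rng.multivariate_normal(np.zeros(n), true_corr, size=T)
C = np.corrcoef(X, rowvar=False)                     # noisy sample correlation

# Linear shrinkage toward the identity with (assumed) intensity alpha.
alpha = 0.5
C_shrunk = (1 - alpha) * C + alpha * np.eye(n)

# Shrinkage typically moves the filtered matrix closer, in KL distance, to
# the truth when T is small relative to n.
kl_raw = gaussian_kl(true_corr, C)
kl_shrunk = gaussian_kl(true_corr, C_shrunk)
```

The same `gaussian_kl` scoring applies unchanged to spectral (Random Matrix Theory) filters: replace `C_shrunk` with the eigenvalue-cleaned matrix.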
Appendix C. Posterior distributions of the CR-SEM parameters (conditional on the covariates being in the model).
2016
Posterior distributions of the CR-SEM parameters (conditional on the covariates being in the model).
Particle Group Metropolis Methods for Tracking the Leaf Area Index
2020
Monte Carlo (MC) algorithms are widely used for Bayesian inference in statistics, signal processing, and machine learning. In this work, we introduce a Markov Chain Monte Carlo (MCMC) technique driven by a particle filter. The resulting scheme is a generalization of the so-called Particle Metropolis-Hastings (PMH) method, where a suitable Markov chain of sets of weighted samples is generated. We also introduce a marginal version for jointly inferring dynamic and static variables. The proposed algorithms outperform the corresponding standard PMH schemes, as shown by numerical experiments.
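The building block of any PMH-style sampler is a particle filter that returns an unbiased log-likelihood estimate, which then replaces the intractable exact likelihood in the Metropolis acceptance ratio. A minimal bootstrap filter for an assumed linear-Gaussian toy model is sketched below; the model, noise variances, and prior for the initial particles are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

# Bootstrap particle filter for the toy state-space model
#   x_t = a * x_{t-1} + v_t,  v_t ~ N(0, q)
#   y_t = x_t + e_t,          e_t ~ N(0, r)
# returning an unbiased estimate of log p(y | a).
def pf_loglik(y, a, q=1.0, r=1.0, N=500):
    x = rng.standard_normal(N)  # particles from an assumed N(0, 1) prior
    ll = 0.0
    for yt in y:
        x = a * x + np.sqrt(q) * rng.standard_normal(N)  # propagate
        logw = -0.5 * (yt - x) ** 2 / r                  # weight by likelihood
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean()) - 0.5 * np.log(2 * np.pi * r)
        x = x[rng.choice(N, size=N, p=w / w.sum())]      # multinomial resampling
    return ll

# Simulate data with a = 0.8, then compare likelihood estimates at two
# parameter values, as a PMH acceptance step would.
a_true, T = 0.8, 100
x, y = 0.0, []
for _ in range(T):
    x = a_true * x + rng.standard_normal()
    y.append(x + rng.standard_normal())
ll_good = pf_loglik(np.array(y), 0.8)
ll_bad = pf_loglik(np.array(y), 0.1)
```

In a full PMH sampler, `ll_good - ll_bad` (plus prior and proposal terms) would enter the log acceptance probability; the group/marginal variants in the abstract modify how the weighted particle sets are carried along the chain.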
A Bayesian analysis of classical hypothesis testing
1980
The procedure of maximizing the missing information is applied to derive reference posterior probabilities for null hypotheses. The results shed further light on Lindley’s paradox and suggest that a Bayesian interpretation of classical hypothesis testing is possible by providing a one-to-one approximate relationship between significance levels and posterior probabilities.
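Lindley's paradox, mentioned in the abstract, is easy to reproduce numerically: hold the data exactly at the classical 5% rejection boundary and watch the posterior probability of the null climb toward one as the sample size grows. The equal prior odds and the N(0, 1) prior under the alternative are standard textbook assumptions for the illustration, not the paper's reference prior.

```python
import numpy as np

def phi(z):
    """Standard normal density."""
    return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)

# x_1..x_n ~ N(mu, 1); test H0: mu = 0 against H1: mu != 0.
# Keep the sample mean at the 5% boundary, xbar = 1.96 / sqrt(n), so the
# classical two-sided test "just rejects" H0 at every sample size.
# Prior: P(H0) = 1/2, and under H1 take mu ~ N(0, 1).
posts = []
for n in [10, 100, 10_000, 1_000_000]:
    xbar = 1.96 / np.sqrt(n)
    m0 = np.sqrt(n) * phi(np.sqrt(n) * xbar)      # xbar | H0 ~ N(0, 1/n)
    s = np.sqrt(1.0 + 1.0 / n)                    # xbar | H1 ~ N(0, 1 + 1/n)
    m1 = phi(xbar / s) / s
    posts.append(m0 / (m0 + m1))                  # posterior P(H0 | data)
```

Although the p-value is pinned at 0.05 throughout, `posts` increases with n and approaches 1, which is the tension between significance levels and posterior probabilities that the paper's approximate one-to-one relationship addresses.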
What Bayesians Expect of Each Other
1991
Abstract Our goal is to study general properties of one Bayesian's subjective beliefs about the behavior of another Bayesian's subjective beliefs. We consider two Bayesians, A and B, who have different subjective distributions for a parameter θ, and study Bayesian A's expectation of Bayesian B's posterior distribution for θ given some data Y. We show that when θ can take only two values, Bayesian A always expects Bayesian B's posterior distribution to lie between the prior distributions of A and B. Conditions are given under which a similar result holds for an arbitrary real-valued parameter θ. For a vector parameter θ we present useful expressions for the mean vector and covariance matrix …
An introduction to Bayesian reference analysis: inference on the ratio of multinomial parameters
1998
This paper offers an introduction to Bayesian reference analysis, often described as the most successful method to produce non-subjective, model-based, posterior distributions. The ideas are illustrated in detail with an interesting problem, the ratio of multinomial parameters, for which no model-based Bayesian analysis has been proposed. Signposts are provided to the huge related literature.
Statistical inference and Monte Carlo algorithms
1996
This review article looks at a small part of the picture of the interrelationship between statistical theory and computational algorithms, especially the Gibbs sampler and the Accept-Reject algorithm. We pay particular attention to how the methodologies affect and complement each other.