Search results for "Statistics::Computation"
Showing 8 of 48 documents
Selecting the tuning parameter in penalized Gaussian graphical models
2019
Penalized inference of Gaussian graphical models is a way to assess the conditional independence structure in multivariate problems. In this setting, the conditional independence structure, corresponding to a graph, is related to the choice of the tuning parameter, which determines the model complexity or degrees of freedom. There has been little research on the degrees of freedom for penalized Gaussian graphical models. In this paper, we propose an estimator of the degrees of freedom in $\ell_1$-penalized Gaussian graphical models. Specifically, we derive an estimator inspired by the generalized information criterion and propose to use this estimator as the bias term for two informatio…
Design-based estimation for geometric quantiles with application to outlier detection
2010
Geometric quantiles are investigated using data collected from a complex survey. Geometric quantiles are an extension of univariate quantiles in a multivariate set-up that uses the geometry of multivariate data clouds. A very important application of geometric quantiles is the detection of outliers in multivariate data by means of quantile contours. A design-based estimator of geometric quantiles is constructed and used to compute quantile contours in order to detect outliers in both multivariate data and survey sampling set-ups. An algorithm for computing geometric quantile estimates is also developed. Under broad assumptions, the asymptotic variance of the quantile estimator is derived an…
Nonlinear parametric quantile models
2020
Quantile regression is widely used to estimate conditional quantiles of an outcome variable of interest given covariates. This method can estimate one quantile at a time without imposing any constraints on the quantile process other than the linear combination of covariates and parameters specified by the regression model. While this is a flexible modeling tool, it generally yields erratic estimates of conditional quantiles and regression coefficients. Recently, parametric models for the regression coefficients have been proposed that can help balance bias and sampling variability. So far, however, only models that are linear in the parameters and covariates have been explored. This paper …
Dataset 3 from Organic residue analysis shows sub-regional patterns in the use of pottery by Northern European hunter–gatherers
2020
Result of the Bayesian mixing model (FRUITS)
On the Ambiguous Consequences of Omitting Variables
2015
This paper studies what happens when we move from a short regression to a long regression (or vice versa), when the long regression is shorter than the data-generation process. In the special case where the long regression equals the data-generation process, the least-squares estimators have smaller bias (in fact zero bias) but larger variances in the long regression than in the short regression. But if the long regression is also misspecified, the bias may not be smaller. We provide bias and mean squared error comparisons and study the dependence of the differences on the misspecification parameter.
Dynamic copula models for the spark spread
2011
We propose a non-symmetric copula to model the evolution of electricity and gas prices by a bivariate non-Gaussian autoregressive process. We identify the marginal dynamics as driven by normal inverse Gaussian processes, estimating them from a series of observed UK electricity and gas spot data. We estimate the copula by modeling the difference between the empirical copula and the independent copula. We then simulate the joint process and price options written on the spark spread. We find that option prices are significantly influenced by the copula and the marginal distributions, along with the seasonality of the underlying prices.
Theoretical and methodological aspects of MCMC computations with noisy likelihoods
2018
Approximate Bayesian computation (ABC) [11, 42] is a popular method for Bayesian inference involving an intractable, or expensive to evaluate, likelihood function but where simulation from the model is easy. The method consists of defining an alternative likelihood function which is also in general intractable but naturally lends itself to pseudo-marginal computations [5], hence making the approach of practical interest. The aim of this chapter is to show the connections of ABC Markov chain Monte Carlo with pseudo-marginal algorithms, review their existing theoretical results, and discuss how these can inform practice and hopefully lead to fruitful methodological developments.