Search results for "Statistics::Methodology"

Showing 10 of 71 documents

Weighted-Average Least Squares (WALS): Confidence and Prediction Intervals

2022

We extend the results of De Luca et al. (2021) to inference for linear regression models based on weighted-average least squares (WALS), a frequentist model averaging approach with a Bayesian flavor. We concentrate on inference about a single focus parameter, interpreted as the causal effect of a policy or intervention, in the presence of a potentially large number of auxiliary parameters representing the nuisance component of the model. In our Monte Carlo simulations we compare the performance of WALS with that of several competing estimators, including the unrestricted least-squares estimator (with all auxiliary regressors) and the restricted least-squares estimator (with no auxiliary reg…

Subjects: Shrinkage estimator; Statistics::Theory; Statistics::Methodology; Linear model; WALS; confidence intervals; prediction intervals; Monte Carlo simulations; Prediction interval; Estimator; Lasso (statistics); Frequentist inference; Bayesian information criterion; Akaike information criterion; Jackknife resampling; Economics, Econometrics and Finance (miscellaneous); Computer Science Applications; Settore SECS-P/05 - Econometria; Statistics; Mathematics
researchProduct

A Bayesian analysis of classical hypothesis testing

1980

The procedure of maximizing the missing information is applied to derive reference posterior probabilities for null hypotheses. The results shed further light on Lindley’s paradox and suggest that a Bayesian interpretation of classical hypothesis testing is possible by providing a one-to-one approximate relationship between significance levels and posterior probabilities.
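The tension the abstract describes is Lindley's paradox, and it can be reproduced numerically. The sketch below is a standard textbook illustration, not the paper's reference-posterior construction: it fixes the z-statistic at the 5% significance level and lets the sample size n grow, against an assumed N(0, τ²) alternative for the normal mean.

```python
import math

def bf01(z, n, tau2=1.0, sigma2=1.0):
    """Bayes factor for H0: mu = 0 vs H1: mu ~ N(0, tau2), given a
    z-statistic computed from n observations with known variance sigma2."""
    r = n * tau2 / sigma2
    return math.sqrt(1.0 + r) * math.exp(-0.5 * z * z * r / (1.0 + r))

# Fixed z = 1.96 (two-sided p ~ 0.05) while n grows: the Bayes factor
# increasingly favours H0, so the posterior probability of H0 climbs
# toward 1 even though the p-value stays at 0.05.
for n in (10, 1000, 100000):
    b = bf01(1.96, n)
    print(n, b / (1.0 + b))  # posterior P(H0) under equal prior odds
```

The growing √(1 + nτ²/σ²) factor is exactly the term that drives the divergence between significance levels and posterior probabilities.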

Subjects: Statistics and Probability; Bayes factor; Bayesian inference; Bayesian statistics; Bayesian experimental design; Bayesian linear regression; Bayesian average; Lindley's paradox; Statistical hypothesis testing; Econometrics; Statistics::Computation; Statistics::Methodology; Statistics, Probability and Uncertainty; Statistics; Mathematics
Source: Trabajos de Estadistica Y de Investigacion Operativa
researchProduct

Intensity estimation for inhomogeneous Gibbs point process with covariates-dependent chemical activity

2014

Recent development of intensity estimation for inhomogeneous spatial point processes with covariates suggests that kerneling in the covariate space is a competitive intensity estimation method for inhomogeneous Poisson processes. It is not known whether this advantageous performance is still valid when the points interact. In the simplest common case, this happens, for example, when the objects presented as points have a spatial dimension. In this paper, kerneling in the covariate space is extended to Gibbs processes with covariates-dependent chemical activity and inhibitive interactions, and the performance of the approach is studied through extensive simulation experiments. It is demonstr…

Subjects: Statistics and Probability; Dimensionality reduction; Nonparametric statistics; Parametric statistics; Poisson distribution; Point process; Dimension (vector space); Covariate; Econometrics; Statistical physics; Smoothing; Statistics::Methodology; Statistics, Probability and Uncertainty; Mathematics
Source: Statistica Neerlandica
researchProduct

Multivariate nonparametric estimation of the Pickands dependence function using Bernstein polynomials

2017

Abstract Many applications in risk analysis require the estimation of the dependence among multivariate maxima, especially in environmental sciences. Such dependence can be described by the Pickands dependence function of the underlying extreme-value copula. Here, a nonparametric estimator is constructed as the sample equivalent of a multivariate extension of the madogram. Shape constraints on the family of Pickands dependence functions are taken into account by means of a representation in terms of Bernstein polynomials. The large-sample theory of the estimator is developed and its finite-sample performance is evaluated with a simulation study. The approach is illustrated with a dataset of…

Subjects: Statistics and Probability; Applied Mathematics; Multivariate statistics; Nonparametric statistics; Nonparametric estimation; Estimator; Copula (probability theory); Extreme-value copula; Extremal dependence; Multivariate max-stable distribution; Pickands dependence function; Dependence function; Bernstein polynomials; Maxima; Heavy rainfall; Statistics - Methodology (stat.ME); FOS: Computer and information sciences; [SDU.ENVI] Sciences of the Universe [physics]/Continental interfaces, environment; [SDU.OCEAN] Sciences of the Universe [physics]/Ocean, Atmosphere; Settore SECS-S/01 - Statistica; Statistics::Methodology; Statistics, Probability and Uncertainty; Statistics; Mathematics
researchProduct

Extended differential geometric LARS for high-dimensional GLMs with general dispersion parameter

2018

A large class of modeling and prediction problems involves outcomes that belong to an exponential family distribution. Generalized linear models (GLMs) are a standard way of dealing with such situations and can be extended to high-dimensional feature spaces. Penalized inference approaches, such as the $\ell_1$ penalty or SCAD, or extensions of least angle regression, such as dgLARS, have been proposed to deal with GLMs with high-dimensional feature spaces. Although the theory underlying these methods is in principle generic, the implementation has remained restricted to dispersion-free models, such as the Poisson and logistic regression models. The aim of this…

Subjects: Statistics and Probability; Mathematical optimization; Generalized linear models; Predictor-corrector algorithm; Poisson distribution; Dantzig selector; Cross-validation; High-dimensional inference; Exponential family; Least-angle regression; Linear models; Variable selection; Efficient estimator; Shrinkage; Dispersion parameter; Applied mathematics; Probability and statistics; Theoretical Computer Science; Computational Theory and Mathematics; Statistics::Methodology; Statistics, Probability and Uncertainty; Settore SECS-S/01 - Statistica; Mathematics
Source: Statistics and Computing
researchProduct

Robust estimation and inference for bivariate line-fitting in allometry.

2011

In allometry, bivariate techniques related to principal component analysis are often used in place of linear regression, and primary interest is in making inferences about the slope. We demonstrate that the current inferential methods are not robust to bivariate contamination, and consider four robust alternatives to the current methods: a novel sandwich estimator approach, using robust covariance matrices derived via an influence function approach, Huber's M-estimator and the fast-and-robust bootstrap. Simulations demonstrate that Huber's M-estimators are highly efficient and robust against bivariate contamination, and when combined with the fast-and-robust bootstrap, we can make accurat…
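Of the alternatives listed, Huber's M-estimator is the easiest to sketch. The following is a generic iteratively reweighted least-squares line fit with Huber weights, assuming the usual tuning constant c = 1.345 and a MAD-based scale estimate; it is not the paper's allometry-specific procedure, which targets PCA-type slopes rather than ordinary regression.

```python
def huber_line(x, y, c=1.345, n_iter=50):
    """Fit y ~ a + b*x by Huber M-estimation via iteratively reweighted
    least squares (generic sketch; scale re-estimated by MAD each pass)."""
    n = len(x)
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        r = [yi - a - b * xi for xi, yi in zip(x, y)]
        s = sorted(abs(ri) for ri in r)[n // 2] / 0.6745 or 1.0  # MAD scale
        # Huber weights: 1 inside the c*s band, downweighted outside it
        w = [1.0 if abs(ri) <= c * s else c * s / abs(ri) for ri in r]
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swy = sum(wi * yi for wi, yi in zip(w, y))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * swxx - swx * swx
        a = (swxx * swy - swx * swxy) / det
        b = (sw * swxy - swx * swy) / det
    return a, b

# toy data: y = 1 + 2x with one gross outlier at x = 2;
# the slope estimate stays near 2 despite the contamination
a_hat, b_hat = huber_line([0, 1, 2, 3, 4, 5], [1, 3, 100, 7, 9, 11])
```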

Subjects: Statistics and Probability; Heteroscedasticity; Analysis of variance; Covariance matrix; Covariance; Robust statistics; Estimator; Efficient estimator; Bivariate analysis; Principal component analysis; Biostatistics; Body size; General Medicine; Econometrics; Statistics::Computation; Statistics::Methodology; Statistics, Probability and Uncertainty; Statistics; Probability; Mathematics
Source: Biometrical Journal. Biometrische Zeitschrift
researchProduct

Prior-based Bayesian information criterion

2019

We present a new approach to model selection and Bayes factor determination, based on Laplace expansions (as in BIC), which we call Prior-based Bayes Information Criterion (PBIC). In this approach, the Laplace expansion is only done with the likelihood function, and then a suitable prior distribution is chosen to allow exact computation of the (approximate) marginal likelihood arising from the Laplace approximation and the prior. The result is a closed-form expression similar to BIC, but now involves a term arising from the prior distribution (which BIC ignores) and also incorporates the idea that different parameters can have different effective sample sizes (whereas BIC only allows one ov…
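For orientation, the baseline that PBIC refines is the standard BIC, k·log(n) − 2·log-likelihood. The toy comparison below uses that standard formula only (not the paper's PBIC expression, which adds an explicit prior term and per-parameter effective sample sizes); the data and models are made up for illustration.

```python
import math

def bic(loglik, k, n):
    """Standard BIC = k*log(n) - 2*loglik: the Laplace-expansion
    baseline that PBIC refines with an explicit prior term."""
    return k * math.log(n) - 2.0 * loglik

def normal_loglik(xs, mu, s2):
    """Gaussian log-likelihood with mean mu and variance s2."""
    return sum(-0.5 * math.log(2 * math.pi * s2) - (x - mu) ** 2 / (2 * s2)
               for x in xs)

# toy data, clearly off-centre from zero
data = [0.8, -0.3, 1.2, 0.5, 0.1, 0.9, -0.2, 0.7]
n = len(data)
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / n    # MLE variance, mu free

# model 0: mu fixed at 0 (k = 1, variance only), variance MLE under mu = 0
var0 = sum(x ** 2 for x in data) / n
bic0 = bic(normal_loglik(data, 0.0, var0), 1, n)
# model 1: mu free (k = 2); the extra parameter pays off here
bic1 = bic(normal_loglik(data, mean, var), 2, n)
```

BIC's single log(n) penalty per parameter is exactly the place where PBIC substitutes parameter-specific effective sample sizes.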

Subjects: Statistics and Probability; Laplace expansion; Laplace's method; Bayes factor; Marginal likelihood; Bayesian information criterion; Prior probability; Likelihood function; Fisher information; Applied Mathematics; Statistics::Computation; Statistics::Methodology; Computational Theory and Mathematics; Statistics, Probability and Uncertainty; Analysis; Mathematics
researchProduct

Posterior moments and quantiles for the normal location model with Laplace prior

2021

We derive explicit expressions for arbitrary moments and quantiles of the posterior distribution of the location parameter η in the normal location model with Laplace prior, and use the results to approximate the posterior distribution of sums of independent copies of η.
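The closed-form expressions are in the paper; as a rough numerical check, the posterior moments can also be obtained by brute-force quadrature on a grid. The sketch below assumes unit noise variance (σ = 1) and Laplace scale b = 1 as illustrative defaults.

```python
import math

def posterior_moments(x, sigma=1.0, b=1.0, n_grid=20001, span=15.0):
    """Quadrature check of posterior mean/variance of eta in the normal
    location model x | eta ~ N(eta, sigma^2) with Laplace(0, b) prior.
    A numerical sketch -- the paper gives the explicit expressions."""
    lo, hi = x - span, x + span
    step = (hi - lo) / (n_grid - 1)
    etas = [lo + i * step for i in range(n_grid)]
    logp = [-0.5 * ((x - e) / sigma) ** 2 - abs(e) / b for e in etas]
    m = max(logp)
    w = [math.exp(v - m) for v in logp]   # unnormalised posterior on grid
    z = sum(w)
    mean = sum(e * wi for e, wi in zip(etas, w)) / z
    var = sum((e - mean) ** 2 * wi for e, wi in zip(etas, w)) / z
    return mean, var

# the Laplace prior shrinks the posterior mean from x toward zero
mean, var = posterior_moments(2.0)
```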

Subjects: Statistics and Probability; Laplace prior; Location parameter; Location model; Normal location model; Posterior probability; posterior moments and cumulants; posterior quantiles; Quantile; reflected generalized gamma prior; Cornish–Fisher approximation; Laplace transform; Mathematical analysis; Statistics::Computation; Statistics::Methodology; Settore SECS-P/05 - Econometria; Settore SECS-S/01 - Statistica; Mathematics
Source: Communications in Statistics - Theory and Methods
researchProduct

Componentwise adaptation for high dimensional MCMC

2005

We introduce a new adaptive MCMC algorithm, based on the traditional single-component Metropolis-Hastings algorithm and on our earlier adaptive Metropolis algorithm (AM). In the new algorithm the adaptation is performed component by component. The chain is no longer Markovian, but it remains ergodic. The algorithm is demonstrated to work well in varying test cases up to 1000 dimensions.
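A minimal sketch of the componentwise idea, assuming the standard adaptive-Metropolis recipe of scaling each one-dimensional proposal by the chain's empirical standard deviation in that coordinate (2.4 is the usual one-dimensional scaling constant); this is illustrative, not the authors' exact implementation.

```python
import math
import random

def cw_adaptive_mh(log_target, x0, n_iter=5000, adapt_start=100, eps=1e-6):
    """Componentwise random-walk Metropolis with per-coordinate adaptive
    proposal scales: each 1-D proposal std is set from the running
    empirical variance of that coordinate (Welford updates)."""
    d = len(x0)
    x = list(x0)
    lp = log_target(x)
    mean, m2, count = list(x0), [0.0] * d, 1
    scale = [1.0] * d
    chain = []
    for it in range(n_iter):
        for j in range(d):                      # one sweep, component by component
            prop = list(x)
            prop[j] = x[j] + random.gauss(0.0, scale[j])
            lp_prop = log_target(prop)
            if math.log(random.random()) < lp_prop - lp:
                x, lp = prop, lp_prop
        count += 1                              # update running moments
        for j in range(d):
            delta = x[j] - mean[j]
            mean[j] += delta / count
            m2[j] += delta * (x[j] - mean[j])
        if it >= adapt_start:                   # adapt per-coordinate scales
            for j in range(d):
                scale[j] = 2.4 * math.sqrt(m2[j] / (count - 1) + eps)
        chain.append(list(x))
    return chain

# target: independent N(0, 1) and N(0, 100); the two proposal
# scales should adapt to very different values
logt = lambda v: -0.5 * (v[0] ** 2 + v[1] ** 2 / 100.0)
chain = cw_adaptive_mh(logt, [0.0, 0.0])
```

Because the proposal scales keep depending on the whole history, the resulting chain is exactly the kind of non-Markovian but ergodic process the abstract refers to.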

Subjects: Statistics and Probability; Mathematical optimization; Monte Carlo method; Markov process; Markov chain Monte Carlo; Metropolis–Hastings algorithm; Ergodic theory; Test case; Computational Mathematics; Statistics::Computation; Statistics::Methodology; Statistics, Probability and Uncertainty; Mathematics
Source: Computational Statistics
researchProduct

Rejoinder: Bayesian Checking of the Second Levels of Hierarchical Models

2008

Rejoinder: Bayesian Checking of the Second Levels of Hierarchical Models [arXiv:0802.0743]

Subjects: Statistics and Probability; Model checking; Bayesian probability; Test statistic; Probability and statistics; Machine learning; Artificial intelligence; Computer science; General Mathematics; Statistics - Methodology (stat.ME); FOS: Computer and information sciences; Statistics::Theory; Statistics::Computation; Statistics::Methodology; Statistics, Probability and Uncertainty
researchProduct