Search results for "A* algorithm"

Showing 10 of 2,538 documents

dglars: An R Package to Estimate Sparse Generalized Linear Models

2014

dglars is a publicly available R package that implements the method proposed in Augugliaro, Mineo, and Wit (2013), developed to study the sparse structure of a generalized linear model. This method, called dgLARS, is based on a differential geometrical extension of the least angle regression method proposed in Efron, Hastie, Johnstone, and Tibshirani (2004). The core of the dglars package consists of two algorithms implemented in Fortran 90 to efficiently compute the solution curve: a predictor-corrector algorithm, proposed in Augugliaro et al. (2013), and a cyclic coordinate descent algorithm, proposed in Augugliaro, Mineo, and Wit (2012). The latter algorithm, as shown here, is significan…
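The coordinate-wise updates that such solution-path algorithms repeat can be illustrated with a small sketch. The Python snippet below is not the dgLARS geometry and does not reproduce the dglars package; it is a generic cyclic coordinate descent for an $\ell_1$-penalized logistic regression, with illustrative helper names (`soft_threshold`, `cd_logistic_lasso`) that are not taken from the package.

```python
# Illustrative sketch only: generic cyclic coordinate descent for an
# l1-penalized logistic regression, NOT the dgLARS algorithm or the dglars
# package.  It shows the kind of coordinate-wise update a solution-path
# algorithm repeats along a grid of penalty values.
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used in lasso-type updates."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_logistic_lasso(X, y, lam, n_sweeps=200, tol=1e-8):
    """Cyclic coordinate descent on an IRLS-style quadratic surrogate."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_sweeps):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))            # fitted probabilities
        w = np.clip(mu * (1.0 - mu), 1e-5, None)   # IRLS weights
        z = eta + (y - mu) / w                     # working response
        beta_old = beta.copy()
        for j in range(p):                         # one cyclic sweep
            r_j = z - X @ beta + X[:, j] * beta[j]     # partial residual
            num = np.sum(w * X[:, j] * r_j)
            den = np.sum(w * X[:, j] ** 2)
            beta[j] = soft_threshold(num, n * lam) / den
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    true_beta = np.array([1.5, -2.0] + [0.0] * 8)
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
    print(cd_logistic_lasso(X, y, lam=0.05))   # the two true signals should dominate the output
```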

Keywords: differential geometry; generalized linear models; dgLARS; predictor-corrector algorithm; cyclic coordinate descent algorithm; sparse models; variable selection; feature selection; least angle regression; Dantzig selector; shrinkage; Fortran

Extended differential geometric LARS for high-dimensional GLMs with general dispersion parameter

2018

A large class of modeling and prediction problems involves outcomes that belong to an exponential family distribution. Generalized linear models (GLMs) are a standard way of dealing with such situations, and they can be extended to high-dimensional feature spaces. Penalized inference approaches, such as the $\ell_1$ penalty or SCAD, or extensions of least angle regression, such as dgLARS, have been proposed to deal with GLMs in high-dimensional feature spaces. Although the theory underlying these methods is in principle generic, the implementation has remained restricted to dispersion-free models, such as the Poisson and logistic regression models. The aim of this…
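For orientation (this is standard GLM background, not text taken from the paper), the dispersion parameter in question is the $\phi$ term of the exponential dispersion family,

$$ f(y;\theta,\phi) = \exp\!\left\{ \frac{y\theta - b(\theta)}{a(\phi)} + c(y,\phi) \right\}, \qquad \mathrm{E}(Y) = b'(\theta), \qquad \mathrm{Var}(Y) = a(\phi)\, b''(\theta), $$

so the Poisson and logistic models mentioned above are the dispersion-free case $a(\phi)=1$, whereas Gaussian, Gamma and inverse Gaussian GLMs require $\phi$ to be handled along the solution curve.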

Keywords: generalized linear models; dispersion parameter; predictor-corrector algorithm; least angle regression; high-dimensional inference; cross-validation; exponential family; variable selection; shrinkage; Dantzig selector. Source: Statistics and Computing

Differential geometric least angle regression: a differential geometric approach to sparse generalized linear models

2013

Sparsity is an essential feature of many contemporary data problems. Remote sensing, various forms of automated screening and other high-throughput measurement devices collect a large amount of information, typically about a few independent statistical subjects or units. In certain cases it is reasonable to assume that the underlying process generating the data is itself sparse, in the sense that only a few of the measured variables are involved in the process. We propose an explicit method of monotonically decreasing sparsity for outcomes that can be modelled by an exponential family. In our approach we generalize the equiangular condition in a generalized linear model. Although the …
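Paraphrasing the construction (not quoting the paper), the generalization replaces the equal-correlation condition of least angle regression with an equal Rao score condition: with score $\partial_j \ell(\beta)$ and Fisher information $\mathcal{I}_{jj}(\beta)$, define

$$ r_j(\beta) = \frac{\partial_j \ell(\beta)}{\sqrt{\mathcal{I}_{jj}(\beta)}}, $$

and follow the curve $\beta(\gamma)$ on which all active predictors satisfy $|r_a(\beta(\gamma))| = \gamma$ while the inactive ones satisfy $|r_k(\beta(\gamma))| \le \gamma$; a variable joins the active set when its statistic reaches the common level $\gamma$.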

Keywords: generalized linear models; sparse models; variable selection; least angle regression; path-following algorithm; differential geometry; information geometry; exponential family; Fisher information; lasso; Dantzig selector; coordinate descent; generalized degrees of freedom; covariance penalty theory; shrinkage

A differential-geometric approach to generalized linear models with grouped predictors

2016

We propose an extension of the differential-geometric least angle regression method to perform sparse group inference in a generalized linear model. An efficient algorithm is proposed to compute the solution curve. The proposed group differential-geometric least angle regression method has important properties that distinguish it from the group lasso. First, its solution curve is based on the invariance properties of a generalized linear model. Second, it adds groups of variables based on a group equiangularity condition, which is shown to be related to score statistics. An adaptive version, which includes weights based on the Kullback-Leibler divergence, improves its variable selection fea…
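For contrast with the group lasso mentioned above, the familiar block soft-thresholding step that drives group-lasso selection is sketched below in Python; the group differential-geometric least angle regression method of the paper instead uses a group equiangularity condition and is not reproduced here (the function name `group_soft_threshold` is illustrative).

```python
# Illustrative sketch only: the block soft-thresholding (proximal) step behind
# group lasso selection, shown for contrast with the group dgLARS method.
import numpy as np

def group_soft_threshold(v, t):
    """Shrink the coefficient block v as a unit; drop the whole group if ||v|| <= t."""
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1.0 - t / norm) * v

# A three-coefficient group is kept or removed together rather than coordinate-wise.
print(group_soft_threshold(np.array([0.8, -0.3, 0.1]), t=0.5))
print(group_soft_threshold(np.array([0.2, -0.1, 0.1]), t=0.5))
```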

Keywords: generalized linear models; differential-geometric least angle regression; group selection; group lasso; score statistic; sparsity; consistency; oracle properties; path algorithm; variance function; differential geometry

Multiple smoothing parameters selection in additive regression quantiles

2021

We propose an iterative algorithm to select the smoothing parameters in additive quantile regression, wherein the functional forms of the covariate effects are unspecified and expressed via B-spline bases with difference penalties on the spline coefficients. The proposed algorithm relies on viewing the penalized coefficients as random effects from the symmetric Laplace distribution, and it turns out to be very efficient and particularly attractive with multiple smooth terms. Through simulations we compare our proposal with some alternative approaches, including the traditional ones based on minimization of the Schwarz Information Criterion. A real-data analysis is presented to illustrate t…
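A minimal sketch of the model being tuned may help fix ideas. The Python code below fits a single smooth term by penalized quantile regression with a fixed smoothing parameter, using a B-spline basis with a second-order difference penalty and a smoothed pinball loss; it does not implement the paper's selection algorithm (treating the penalized coefficients as Laplace random effects), and the helper names are illustrative.

```python
# Illustrative sketch only: one P-spline quantile fit at a FIXED smoothing
# parameter lam.  The paper's contribution (automatic selection of multiple
# smoothing parameters) is not reproduced here.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

def bspline_basis(x, n_basis=20, degree=3):
    """Cubic B-spline design matrix on an equally spaced knot grid."""
    xl, xr = x.min(), x.max()
    knots = np.r_[[xl] * degree, np.linspace(xl, xr, n_basis - degree + 1), [xr] * degree]
    return BSpline(knots, np.eye(n_basis), degree)(x)

def fit_pspline_quantile(x, y, tau=0.5, lam=1.0, eps=0.01):
    B = bspline_basis(x)
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)        # 2nd-order difference penalty
    def objective(a):
        r = y - B @ a
        # smooth approximation of the pinball (check) loss
        loss = np.sum(np.where(r >= 0, tau, tau - 1.0) * r
                      + eps * np.log1p(np.exp(-np.abs(r) / eps)))
        return loss + 0.5 * lam * np.sum((D @ a) ** 2)
    res = minimize(objective, np.zeros(B.shape[1]), method="L-BFGS-B")
    return B @ res.x                                     # fitted tau-th quantile curve

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 300))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(300)
fit90 = fit_pspline_quantile(x, y, tau=0.9, lam=10.0)   # estimated 90th percentile curve
```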

Keywords: additive quantile regression; semiparametric quantile regression; P-splines; smoothing parameter selection; Schall algorithm; flexible modelling; iterative methods. Source: Statistical Modelling

MCMC methods to approximate conditional predictive distributions

2006

Sampling from conditional distributions is a problem often encountered in statistics when inferences are based on conditional distributions that are not available in closed form. Several Markov chain Monte Carlo (MCMC) algorithms to simulate from them are proposed. Potential problems are pointed out and some suitable modifications are suggested. Approximations based on conditioning sets are also explored. The issues are illustrated within a specific statistical tool for Bayesian model checking, and compared in an example. An example in frequentist conditional testing is also given.
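The common starting point for such samplers is a Metropolis step that only needs the conditional density up to its normalizing constant. A minimal random-walk Metropolis sketch in Python is shown below; the conditioning-set approximations and modifications discussed in the abstract are not reproduced, and the target used in the example is purely hypothetical.

```python
# Illustrative sketch only: random-walk Metropolis for a target known up to a
# normalizing constant; the basic building block behind sampling from awkward
# conditional distributions.
import numpy as np

def rw_metropolis(log_target, x0, n_iter=5000, step=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    logp = log_target(x)
    chain = np.empty((n_iter, x.size))
    for i in range(n_iter):
        prop = x + step * rng.standard_normal(x.size)   # symmetric proposal
        logp_prop = log_target(prop)
        if np.log(rng.uniform()) < logp_prop - logp:    # Metropolis accept/reject
            x, logp = prop, logp_prop
        chain[i] = x
    return chain

# Hypothetical example: an unnormalized conditional density pi(x) proportional
# to exp(-x^4 / 4) restricted to x > 0.
chain = rw_metropolis(lambda x: -0.25 * x[0] ** 4 if x[0] > 0 else -np.inf, x0=1.0)
```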

Keywords: Markov chain Monte Carlo; Metropolis-Hastings algorithm; Gibbs sampling; conditional probability distributions; Bayesian inference; frequentist inference; sampling distribution. Source: Computational Statistics & Data Analysis

Componentwise adaptation for high dimensional MCMC

2005

We introduce a new adaptive MCMC algorithm, based on the traditional single-component Metropolis-Hastings algorithm and on our earlier adaptive Metropolis algorithm (AM). In the new algorithm the adaptation is performed component by component. The chain is no longer Markovian, but it remains ergodic. The algorithm is demonstrated to work well in varying test cases up to 1000 dimensions.
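A rough sketch of componentwise adaptation is given below. It uses a generic Robbins-Monro scale adaptation per coordinate rather than the paper's exact update (which adapts each proposal from the empirical variance of that component's history), so it only conveys the overall structure: one Metropolis step per component, with diminishing adaptation.

```python
# Illustrative sketch only: single-component random-walk Metropolis with a
# per-coordinate proposal scale tuned towards a target acceptance rate.
# This is a generic stand-in, not the exact algorithm of the paper.
import numpy as np

def componentwise_adaptive_metropolis(log_target, x0, n_iter=2000, target_acc=0.44, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    log_scale = np.zeros(d)                  # per-component log proposal std. dev.
    logp = log_target(x)
    chain = np.empty((n_iter, d))
    for i in range(n_iter):
        for j in range(d):                   # update one component at a time
            prop = x.copy()
            prop[j] += np.exp(log_scale[j]) * rng.standard_normal()
            logp_prop = log_target(prop)
            accept = np.log(rng.uniform()) < logp_prop - logp
            if accept:
                x, logp = prop, logp_prop
            # diminishing adaptation of this component's proposal scale
            log_scale[j] += (float(accept) - target_acc) / np.sqrt(i + 1.0)
        chain[i] = x
    return chain

# Hypothetical test target: a badly scaled 50-dimensional Gaussian.
scales = np.linspace(0.1, 10.0, 50)
chain = componentwise_adaptive_metropolis(lambda z: -0.5 * np.sum((z / scales) ** 2), np.zeros(50))
```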

Keywords: Markov chain Monte Carlo; adaptive MCMC; Metropolis-Hastings algorithm; Monte Carlo methods; Markov processes; ergodicity. Source: Computational Statistics

Criteria for Bayesian model choice with application to variable selection

2012

In objective Bayesian model selection, no single criterion has emerged as dominant in defining objective prior distributions. Indeed, many criteria have been separately proposed and utilized, each leading to differing prior choices. We first formalize the most general and compelling of the various criteria that have been suggested, together with a new criterion. We then illustrate the potential of these criteria in determining objective model selection priors by considering their application to the problem of variable selection in normal linear models. This results in a new model selection objective prior with a number of compelling properties.
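As one concrete instance of the kind of closed-form structure such criteria exploit (standard g-prior background, not a formula quoted from this paper): under Zellner's $g$-prior for a normal linear model, the Bayes factor of a model $M_\gamma$ with $p_\gamma$ covariates against the intercept-only model $M_0$ is

$$ \mathrm{BF}(M_\gamma : M_0) = (1+g)^{(n - p_\gamma - 1)/2}\, \bigl[\, 1 + g\,(1 - R_\gamma^2) \,\bigr]^{-(n-1)/2}, $$

where $R_\gamma^2$ is the coefficient of determination of $M_\gamma$; the criteria discussed above constrain how such priors (and the choice of $g$) should behave across models.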

Keywords: objective Bayes; model selection; variable selection; g-prior; prior probability; Bayesian inference; linear models; MSC 62C10, 62J05, 62J15. Source: The Annals of Statistics

A gradient-based deletion diagnostic measure for generalized linear mixed models

2016

A gradient-statistic-based diagnostic measure is developed in the context of generalized linear mixed models. Its performance is assessed by some real examples and simulation studies, in terms of its ability to detect influential data structures and its concordance with the most commonly used influence measures.
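For reference (standard background, not quoted from the paper): the gradient statistic on which such measures are built is, for a hypothesis $H_0:\theta=\theta_0$ in a regular model,

$$ T_G = U(\tilde{\theta})^{\top} (\hat{\theta} - \tilde{\theta}), $$

where $U$ is the score function and $\tilde{\theta}$, $\hat{\theta}$ are the restricted and unrestricted estimates; a gradient-based deletion diagnostic contrasts the full-data and case-deleted fits through quantities of this form. The exact case-deletion expression used in the paper is not reproduced here.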

Keywords: generalized linear mixed models (GLMM); influence; outliers; deletion diagnostics; gradient statistic. Source: Communications in Statistics - Theory and Methods

Adaptive Metropolis algorithm using variational Bayesian adaptive Kalman filter

2013

Markov chain Monte Carlo (MCMC) methods are powerful computational tools for analysis of complex statistical problems. However, their computational efficiency is highly dependent on the chosen proposal distribution, which is generally difficult to find. One way to solve this problem is to use adaptive MCMC algorithms which automatically tune the statistics of a proposal distribution during the MCMC run. A new adaptive MCMC algorithm, called the variational Bayesian adaptive Metropolis (VBAM) algorithm, is developed. The VBAM algorithm updates the proposal covariance matrix using the variational Bayesian adaptive Kalman filter (VB-AKF). A strong law of large numbers for the VBAM algorithm is…
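For contrast, the ordinary adaptive Metropolis (AM) algorithm that VBAM modifies keeps a running empirical covariance of the chain and uses a scaled version of it as the proposal covariance. The Python sketch below shows only that standard recursive update (per Haario et al., 2001); the VB-AKF replacement described in the abstract is not reproduced, and the function name is illustrative.

```python
# Illustrative sketch only: the running mean/covariance recursion of the
# classic adaptive Metropolis (AM) algorithm, which VBAM replaces with a
# variational Bayesian adaptive Kalman filter update.
import numpy as np

def am_covariance_update(mean, cov, x_new, n, eps=1e-8):
    """Update the running mean/covariance of the first n samples with sample n+1 (n >= 1)."""
    delta = x_new - mean
    new_mean = mean + delta / (n + 1)
    # sample covariance recursion (divisor n); for n == 1 the old cov is ignored
    new_cov = ((n - 1) / n) * cov + np.outer(delta, delta) / (n + 1)
    return new_mean, new_cov + eps * np.eye(mean.size)   # small jitter keeps it positive definite

# The AM proposal covariance is then roughly (2.4**2 / d) * cov for a d-dimensional target.
```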

Keywords: Markov chain Monte Carlo; adaptive Metropolis; Metropolis-Hastings algorithm; Kalman filter; covariance matrix; Bayesian methods; rejection sampling. Source: Computational Statistics & Data Analysis