Search results for "Methodology"

Showing 10 of 852 documents

Identifying Causal Effects with the R Package causaleffect

2017

Do-calculus is concerned with estimating the interventional distribution of an action from the observed joint probability distribution of the variables in a given causal structure. All identifiable causal effects can be derived using the rules of do-calculus, but the rules themselves do not give any direct indication whether the effect in question is identifiable or not. Shpitser and Pearl constructed an algorithm for identifying joint interventional distributions in causal models, which contain unobserved variables and induce directed acyclic graphs. This algorithm can be seen as a repeated application of the rules of do-calculus and known properties of probabilities, and it ultimately eit…
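The simplest identifiable case the abstract alludes to is back-door adjustment, P(y | do(x)) = Σ_z P(y | x, z) P(z). A minimal numerical sketch on a made-up probability table (this is not the causaleffect package's R API; the variables and numbers are hypothetical):

```python
import numpy as np

# Joint distribution over (Z, X, Y), each binary, for a DAG with Z -> X,
# Z -> Y, X -> Y.  p[z, x, y] is a hypothetical observational joint table.
p = np.array([[[0.20, 0.05], [0.05, 0.10]],
              [[0.05, 0.10], [0.10, 0.35]]])
assert np.isclose(p.sum(), 1.0)

def do_x(p, x):
    """Back-door adjustment: P(Y | do(X=x)) = sum_z P(Y | x, z) P(z)."""
    pz = p.sum(axis=(1, 2))                                     # P(z)
    pyxz = p[:, x, :] / p[:, x, :].sum(axis=1, keepdims=True)   # P(y | x, z)
    return (pz[:, None] * pyxz).sum(axis=0)                     # marginalize z

q = do_x(p, 1)   # interventional distribution P(Y | do(X=1))
```

The identification algorithms the abstract describes decide, for arbitrary graphs with unobserved variables, whether such an adjustment-style formula exists at all.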

Keywords: DAG; do-calculus; causality; causal model; identifiability; graph; C-component; hedge; d-separation; joint probability distribution; Statistics - Methodology (stat.ME)

Blind source separation for non-stationary random fields

2022

Regional data analysis is concerned with the analysis and modeling of measurements that are spatially separated, by specifically accounting for typical features of such data. Namely, measurements in close proximity tend to be more similar than those further apart. This might also hold true for cross-dependencies when multivariate spatial data are considered. Often, scientists are interested in linear transformations of such data which are easy to interpret and might be used for dimension reduction. Recently, for that purpose spatial blind source separation (SBSS) was introduced, which assumes that the observed data are formed by a linear mixture of uncorrelated, weakly stationary random …
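A toy two-scatter version of the SBSS idea on a 1-D transect (simulated data; the actual method jointly diagonalizes several local covariance matrices built from spatial kernels, whereas this sketch whitens and then diagonalizes a single lagged covariance):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: p=3 latent fields on a 1-D transect of n sites, each
# smoothed at a different spatial scale, then linearly mixed.
n, p = 500, 3
latent = np.stack([np.convolve(rng.standard_normal(n + 20),
                               np.ones(w) / w, "same")[:n]
                   for w in (1, 5, 15)])
latent = (latent - latent.mean(1, keepdims=True)) / latent.std(1, keepdims=True)
A = rng.standard_normal((p, p))   # unknown mixing matrix
x = A @ latent                    # observed p x n field

def sbss(x, lag=1):
    """Simplified SBSS: whiten with the sample covariance, then diagonalize
    one spatially lagged covariance (a two-scatter estimator)."""
    xc = x - x.mean(1, keepdims=True)
    cov = xc @ xc.T / xc.shape[1]
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # symmetric whitener
    y = W @ xc
    m = y[:, :-lag] @ y[:, lag:].T / (y.shape[1] - lag)
    m = (m + m.T) / 2                              # symmetrized lagged cov
    _, u = np.linalg.eigh(m)
    return u.T @ y, u.T @ W                        # sources, unmixing matrix

s, unmix = sbss(x)
```

By construction the recovered components are exactly uncorrelated in-sample and their symmetrized lag-1 covariance is diagonal, which is the separation criterion this two-scatter variant optimizes.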

Keywords: linear latent variable model; spatial statistics; spatial data analysis; linear models; multivariate methods; Statistics - Methodology (stat.ME)

Confidence bands for Horvitz-Thompson estimators using sampled noisy functional data

2013

When collections of functional data are too large to be exhaustively observed, survey sampling techniques provide an effective way to estimate global quantities such as the population mean function. Assuming functional data are collected from a finite population according to a probabilistic sampling scheme, with the measurements being discrete in time and noisy, we propose to first smooth the sampled trajectories with local polynomials and then estimate the mean function with a Horvitz-Thompson estimator. Under mild conditions on the population size, observation times, regularity of the trajectories, sampling scheme, and smoothing bandwidth, we prove a Central Limit theorem in the space of …
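The Horvitz-Thompson step of the pipeline can be sketched on simulated curves (hypothetical population and sampling design; the local-polynomial smoothing step is skipped here, so the curves are taken as exactly observed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite population of N curves on a common time grid.
N, T = 1000, 50
t = np.linspace(0, 1, T)
pop = np.sin(2 * np.pi * t) + rng.standard_normal((N, 1)) * 0.5  # N x T

# Unequal-probability Poisson sampling: inclusion probability pi_i
# proportional to an auxiliary size measure, expected sample size ~200.
size = rng.uniform(1, 3, N)
pi = 200 * size / size.sum()
sampled = rng.random(N) < pi

def horvitz_thompson_mean(curves, pi, N):
    """HT estimator of the population mean function:
    (1/N) * sum over the sample of y_i(t) / pi_i."""
    return (curves / pi[:, None]).sum(axis=0) / N

ht = horvitz_thompson_mean(pop[sampled], pi[sampled], N)
true_mean = pop.mean(axis=0)
```

Weighting each sampled trajectory by 1/pi_i makes the estimator design-unbiased for the finite-population mean function; the paper's contribution is the functional central limit theorem and confidence bands around it.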

Keywords: survey sampling; functional data; central limit theorem (CLT); covariance function; maximal inequalities; weighted cross-validation; suprema of Gaussian processes; local polynomial smoothing; space of continuous functions; Statistics - Methodology (stat.ME)

Bayesian models for data missing not at random in health examination surveys

2018

In epidemiological surveys, data missing not at random (MNAR) due to survey nonresponse may potentially lead to a bias in the risk factor estimates. We propose an approach based on Bayesian data augmentation and survival modelling to reduce the nonresponse bias. The approach requires additional information based on follow-up data. We present a case study of smoking prevalence using FINRISK data collected between 1972 and 2007 with a follow-up to the end of 2012 and compare it to other commonly applied missing at random (MAR) imputation approaches. A simulation experiment is carried out to study the validity of the approaches. Our approach appears to reduce the nonresponse bias substantially…
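The bias mechanism the abstract targets is easy to reproduce in simulation. The sketch below is not the paper's Bayesian data-augmentation and survival-modelling approach: it is a minimal inverse-probability illustration, with oracle response probabilities standing in for the information that follow-up data would provide, of why outcome-dependent nonresponse biases complete-case estimates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical survey: smokers respond less often, so missingness is MNAR.
n = 100_000
smoker = rng.random(n) < 0.30              # true prevalence 30%
p_respond = np.where(smoker, 0.5, 0.8)     # response depends on the outcome
responded = rng.random(n) < p_respond

true_prev = smoker.mean()
cc_prev = smoker[responded].mean()         # complete-case estimate: biased low

# With a correct response model (here known by construction, in practice
# informed by follow-up data), inverse-probability weighting removes the bias.
w = 1.0 / p_respond[responded]
ipw_prev = np.average(smoker[responded], weights=w)
```

The complete-case estimate converges to 0.15/0.71, roughly 21%, well below the true 30%, while the reweighted estimate recovers the truth; MAR-based imputation fails here for the same reason the complete-case analysis does.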

Keywords: multiple imputation; Bayesian estimation; data augmentation; survival analysis; follow-up data; health examination surveys; missing not at random; epidemiology; Statistics - Methodology (stat.ME); Statistics - Applications (stat.AP)

Extended differential geometric LARS for high-dimensional GLMs with general dispersion parameter

2018

A large class of modeling and prediction problems involves outcomes that belong to an exponential family distribution, and generalized linear models (GLMs) are the standard way of dealing with such situations, even in high-dimensional feature spaces. Penalized inference approaches, such as the $\ell_1$ penalty or SCAD, or extensions of least angle regression, such as dgLARS, have been proposed for GLMs with high-dimensional feature spaces. Although the theory underlying these methods is in principle generic, their implementation has remained restricted to dispersion-free models, such as the Poisson and logistic regression models. The aim of this…
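For concreteness, here is a generic $\ell_1$-penalized GLM solver of the family the abstract refers to (proximal gradient / ISTA for logistic regression, not dgLARS itself; the data and tuning constants are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical high-dimensional logistic regression, p > n, sparse truth.
n, p = 100, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

def lasso_logistic(X, y, lam=0.05, lr=0.01, iters=2000):
    """l1-penalized logistic regression by proximal gradient descent:
    a gradient step on the mean log-loss followed by soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ b))
        g = X.T @ (mu - y) / n                      # log-loss gradient
        b = b - lr * g
        b = np.sign(b) * np.maximum(np.abs(b) - lr * lam, 0.0)
    return b

b = lasso_logistic(X, y)
```

Even with p twice n, the penalty keeps the problem well-posed and the strong coefficients dominate the fit; dgLARS instead follows the differential-geometric solution path, and the paper's point is extending such machinery to models with a free dispersion parameter.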

Keywords: generalized linear models; least angle regression; predictor-corrector algorithm; high-dimensional inference; cross-validation; variable selection; Dantzig selector; shrinkage; dispersion parameter; exponential family

Robust estimation and inference for bivariate line-fitting in allometry.

2011

In allometry, bivariate techniques related to principal component analysis are often used in place of linear regression, and primary interest is in making inferences about the slope. We demonstrate that the current inferential methods are not robust to bivariate contamination, and consider four robust alternatives to the current methods: a novel sandwich estimator approach, using robust covariance matrices derived via an influence function approach, Huber's M-estimator, and the fast-and-robust bootstrap. Simulations demonstrate that Huber's M-estimators are highly efficient and robust against bivariate contamination, and when combined with the fast-and-robust bootstrap, we can make accurat…
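The sensitivity being addressed is easy to demonstrate with the standardized major axis slope, a PCA-related line-fitting estimator common in allometry. The sketch below is not one of the paper's four estimators: it robustifies the slope by swapping the standard deviation for the MAD, a simpler stand-in, on hypothetical contaminated data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical allometric data: log-trait vs log-size with slope 0.75,
# plus a 5% cluster of leverage outliers in x.
n = 200
x = rng.normal(0, 1, n)
y = 0.75 * x + rng.normal(0, 0.2, n)
x[:10] += 6.0                                # contamination

def sma_slope(x, y, scale=np.std):
    """Standardized major axis slope: sign(corr) * scale(y) / scale(x).
    Passing a robust scale (e.g. the MAD) robustifies the estimator."""
    sgn = np.sign(np.corrcoef(x, y)[0, 1])
    return sgn * scale(y) / scale(x)

mad = lambda v: np.median(np.abs(v - np.median(v)))
classical = sma_slope(x, y)                  # dragged toward zero by outliers
robust = sma_slope(x, y, scale=mad)          # close to the true 0.75
```

The classical slope collapses because the outliers inflate the scale of x; a robust scale ignores them. The paper's M-estimator and fast-and-robust bootstrap additionally deliver valid inference (standard errors and intervals) under such contamination.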

Keywords: robust statistics; bivariate analysis; covariance matrix; principal component analysis; heteroscedasticity; analysis of variance; biostatistics. Published in: Biometrical Journal (Biometrische Zeitschrift).

Local bandwidth selection for kernel density estimation in a bifurcating Markov chain model

2020

We propose an adaptive estimator for the stationary distribution of a bifurcating Markov chain on R^d. Bifurcating Markov chains (BMCs for short) are a class of stochastic processes indexed by regular binary trees. A kernel estimator is proposed whose bandwidths are selected by a method inspired by the work of Goldenshluger and Lepski ((2011), 'Bandwidth Selection in Kernel Density Estimation: Oracle Inequalities and Adaptive Minimax Optimality', The Annals of Statistics 39(3): 1608-1632). Drawing inspiration from dimension-jump methods for model selection, we also provide an algorithm to select the best constant in the penalty. Finally, we investigate the performance of the…
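A stripped-down, single-point version of the Goldenshluger-Lepski selection rule conveys the idea (i.i.d. data rather than a bifurcating Markov chain; the constant c and the variance penalty V(h) are illustrative choices, and tuning c is exactly the "best constant in the penalty" problem the abstract mentions):

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.standard_normal(2000)   # stand-in i.i.d. sample, true density N(0,1)

def kde(x0, data, h):
    """Gaussian kernel density estimate at a single point x0."""
    u = (x0 - data) / h
    return np.exp(-0.5 * u * u).sum() / (len(data) * h * np.sqrt(2 * np.pi))

def gl_bandwidth(x0, data, grid, c=0.5):
    """Simplified Goldenshluger-Lepski rule at one point: balance a bias
    proxy, A(h) = max over h' <= h of [|f_h - f_h'| - V(h')]_+, against a
    variance penalty V(h) ~ c * sqrt(log n / (n h))."""
    n = len(data)
    V = lambda h: c * np.sqrt(np.log(n) / (n * h))
    est = {h: kde(x0, data, h) for h in grid}
    best, crit_best = None, np.inf
    for h in grid:
        A = max(max(abs(est[h] - est[hp]) - V(hp), 0.0)
                for hp in grid if hp <= h)
        crit = A + V(h)
        if crit < crit_best:
            best, crit_best = h, crit
    return best

grid = np.geomspace(0.02, 1.0, 12)
h_star = gl_bandwidth(0.0, data, grid)
```

Comparing each bandwidth against all smaller ones mimics an oracle bias-variance trade-off without knowing the true smoothness, which is what makes the resulting estimator adaptive.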

Keywords: kernel density estimation; nonparametric kernel estimation; adaptive estimation; bandwidth selection; binary trees; bifurcating autoregressive processes; stationary distribution; Markov chain; model selection; Goldenshluger-Lepski methodology

Prior-based Bayesian information criterion

2019

We present a new approach to model selection and Bayes factor determination, based on Laplace expansions (as in BIC), which we call Prior-based Bayes Information Criterion (PBIC). In this approach, the Laplace expansion is only done with the likelihood function, and then a suitable prior distribution is chosen to allow exact computation of the (approximate) marginal likelihood arising from the Laplace approximation and the prior. The result is a closed-form expression similar to BIC, but now involves a term arising from the prior distribution (which BIC ignores) and also incorporates the idea that different parameters can have different effective sample sizes (whereas BIC only allows one ov…
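For reference, the classical BIC baseline that PBIC refines, computed for a Gaussian linear model (PBIC's prior-dependent term and effective sample sizes are not reproduced here; the data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical model choice: does a slope term improve on intercept-only?
n = 300
x = rng.uniform(-1, 1, n)
y = 1.0 + 0.8 * x + rng.normal(0, 1, n)

def gaussian_bic(y, X):
    """Classical BIC = k * ln(n) - 2 * max log-likelihood for a Gaussian
    linear model, i.e. the Laplace-expansion approximation to -2 times the
    log marginal likelihood that drops all prior-dependent terms."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return (k + 1) * np.log(n) - 2 * loglik   # +1 for the variance parameter

X0 = np.ones((n, 1))                  # intercept only
X1 = np.column_stack([np.ones(n), x]) # intercept + slope
```

BIC correctly prefers the model with the slope here; PBIC's closed form keeps the dropped prior term and lets each parameter carry its own effective sample size instead of the single n above.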

Keywords: Bayesian information criterion; Bayes factor; marginal likelihood; Laplace expansion; Laplace's method; prior probability; likelihood function; Fisher information

Posterior moments and quantiles for the normal location model with Laplace prior

2021

We derive explicit expressions for arbitrary moments and quantiles of the posterior distribution of the location parameter η in the normal location model with Laplace prior, and use the results to approximate the posterior distribution of sums of independent copies of η.
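The model can be checked numerically by quadrature rather than the paper's closed forms (scale b and observation x are illustrative):

```python
import numpy as np

def posterior_moments(x, b=1.0):
    """Posterior moments of eta for x | eta ~ N(eta, 1) with a Laplace(0, b)
    prior on eta, by brute-force trapezoid-free Riemann integration on a
    fine grid (a numerical check, not the paper's explicit expressions)."""
    eta = np.linspace(x - 12, x + 12, 20001)
    d = eta[1] - eta[0]
    w = np.exp(-0.5 * (x - eta) ** 2 - np.abs(eta) / b)  # unnormalized posterior
    w /= w.sum() * d                                     # normalize density
    m1 = (eta * w).sum() * d
    m2 = (eta ** 2 * w).sum() * d
    return m1, m2

m1, m2 = posterior_moments(2.0)
```

Two qualitative features of the model show up immediately: the posterior mean shrinks the observation toward zero, and since the Laplace prior is log-concave, the posterior variance cannot exceed the likelihood variance of 1.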

Keywords: normal location model; Laplace prior; location parameter; reflected generalized gamma prior; posterior moments and cumulants; posterior quantiles; Cornish-Fisher approximation. Published in: Communications in Statistics - Theory and Methods.

Componentwise adaptation for high dimensional MCMC

2005

We introduce a new adaptive MCMC algorithm, based on the traditional single-component Metropolis-Hastings algorithm and on our earlier adaptive Metropolis algorithm (AM). In the new algorithm the adaptation is performed component by component. The chain is no longer Markovian, but it remains ergodic. The algorithm is demonstrated to work well in varying test cases up to 1000 dimensions.
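A simplified componentwise adapter conveys the structure (this sketch adapts per-coordinate proposal scales toward a target acceptance rate with a diminishing step, rather than the paper's AM-style adaptation using the chain's empirical component variances; the target density is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

def scam(logpost, x0, iters=5000, target=0.44):
    """Single-component Metropolis with per-coordinate scale adaptation.
    Each sweep updates one coordinate at a time; each scale is nudged so
    its running acceptance rate approaches `target`, with a step that
    shrinks like 1/sqrt(t) so the adaptation dies out (ergodicity)."""
    d = len(x0)
    x = np.array(x0, float)
    scales = np.ones(d)
    acc = np.zeros(d)
    chain = np.empty((iters, d))
    lp = logpost(x)
    for t in range(iters):
        for i in range(d):
            prop = x.copy()
            prop[i] += scales[i] * rng.standard_normal()
            lp_prop = logpost(prop)
            if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept
                x, lp = prop, lp_prop
                acc[i] += 1
            rate = acc[i] / (t + 1)                    # running accept rate
            scales[i] *= np.exp((rate - target) / np.sqrt(t + 1))
        chain[t] = x
    return chain, scales

# Target with very different coordinate scales: N(0,1) x N(0,25).
logpost = lambda v: -0.5 * (v[0] ** 2 + (v[1] / 5.0) ** 2)
chain, scales = scam(logpost, [0.0, 0.0])
```

The point of componentwise adaptation shows up in the learned scales: the wide coordinate ends up with a much larger proposal scale than the narrow one, which a single global proposal cannot achieve.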

Keywords: Markov chain Monte Carlo; Metropolis-Hastings algorithm; Monte Carlo method; Markov process; ergodic theory. Published in: Computational Statistics.