Search results for "Inference"

Showing 10 of 478 documents

Olley–Pakes productivity decomposition: computation and inference

2016

Summary We show how a moment-based estimation procedure can be used to compute point estimates and standard errors for the two components of the widely used Olley–Pakes decomposition of aggregate (weighted average) productivity. When applied to business-level microdata, the procedure allows for autocovariance- and heteroscedasticity-robust inference and hypothesis testing about, for example, the coevolution of the productivity components in different groups of firms. We provide an application to Finnish firm-level data and find that formal statistical inference casts doubt on the conclusions that one might draw from a visual inspection of the components of the decomposition.
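The identity behind the decomposition is easy to verify numerically: share-weighted aggregate productivity splits into an unweighted mean plus a share-productivity covariance term. A minimal sketch with invented numbers (the paper's moment-based standard errors are not reproduced here):

```python
import numpy as np

def op_decomposition(shares, productivity):
    """Split share-weighted aggregate productivity into an unweighted mean
    and a share-productivity covariance term (Olley-Pakes identity)."""
    s = np.asarray(shares, dtype=float)
    phi = np.asarray(productivity, dtype=float)
    aggregate = float(np.sum(s * phi))     # weighted average productivity
    mean_term = float(phi.mean())          # unweighted mean component
    cov_term = float(np.sum((s - s.mean()) * (phi - phi.mean())))
    return aggregate, mean_term, cov_term

# hypothetical market shares and productivity levels of three firms
agg, mean_t, cov_t = op_decomposition([0.5, 0.3, 0.2], [1.0, 2.0, 3.0])
# the identity: agg == mean_t + cov_t
```

The covariance term measures how strongly market shares and productivity move together, which is exactly the quantity the paper builds inference around.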

Statistics and Probability; Economics and Econometrics; Heteroscedasticity; productivity; Inference; Frequentist inference; Statistical inference; Econometrics; Point estimation; Statistical hypothesis testing; generalized method of moments; Autocovariance; weighted average; Fiducial inference; Statistics, Probability and Uncertainty; Social Sciences (miscellaneous); Journal of the Royal Statistical Society, Series A: Statistics in Society

Bayesian hierarchical Poisson models with a hidden Markov structure for the detection of influenza epidemic outbreaks

2015

Considerable effort has been devoted to the development of statistical algorithms for the automated monitoring of influenza surveillance data. In this article, we introduce a framework of models for the early detection of the onset of an influenza epidemic which is applicable to different kinds of surveillance data. In particular, the process of observed cases is modelled via a Bayesian hierarchical Poisson model in which the intensity parameter is a function of the incidence rate. The key point is to model this incidence rate as normally distributed, with both parameters (mean and variance) modelled differently depending on whether the system is in an epidemic or non-epide…
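As a rough illustration of the detection mechanism (not the paper's model, which is fully Bayesian and estimates all parameters), a two-state hidden Markov model with Poisson emissions can be filtered with the standard forward recursion. All numbers below are invented for the sketch:

```python
import numpy as np
from math import exp, lgamma, log

rates = [2.0, 10.0]                    # Poisson intensity per state (assumed known)
trans = np.array([[0.95, 0.05],        # non-epidemic -> {non-epidemic, epidemic}
                  [0.10, 0.90]])       # epidemic -> {non-epidemic, epidemic}
counts = [1, 3, 2, 9, 12, 11]          # weekly observed case counts

def poisson_pmf(k, lam):
    return exp(k * log(lam) - lam - lgamma(k + 1))

belief = np.array([0.99, 0.01])        # prior: almost surely non-epidemic
for k in counts:
    belief = belief @ trans                                         # predict
    belief = belief * np.array([poisson_pmf(k, r) for r in rates])  # update
    belief = belief / belief.sum()                                  # normalise
# belief[1]: filtered probability of currently being in the epidemic state
```

Once the jump in counts appears, the filtered epidemic probability rises sharply, which is the signal an outbreak-detection system would act on.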

Statistics and Probability; Epidemiology; Computer science; Bayesian probability; Biostatistics; Poisson distribution; Bayesian inference; Disease Outbreaks; Normal distribution; Health Information Management; Influenza, Human; Statistics; Econometrics; Humans; Poisson regression; Epidemics; Hidden Markov model; Probability; Internet; Models, Statistical; Incidence; Bayes Theorem; Markov Chains; Search Engine; Moment (mathematics); Autoregressive model; Spain; Monte Carlo Method; Sentinel Surveillance

Bayesian Markov switching models for the early detection of influenza epidemics

2008

The early detection of outbreaks of diseases is one of the most challenging objectives of epidemiological surveillance systems. In this paper, a Markov switching model is introduced to determine the epidemic and non-epidemic periods from influenza surveillance data: the process of differenced incidence rates is modelled either with a first-order autoregressive process or with a Gaussian white-noise process depending on whether the system is in an epidemic or in a non-epidemic phase. The transition between phases of the disease is modelled as a Markovian process. Bayesian inference is carried out on the former model to detect influenza epidemics at the very moment of their onset. Moreover, t…
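The phase-detection step can be sketched with a two-state filter in which the differenced incidence rates are scored against a Gaussian white-noise likelihood (non-epidemic) or an AR(1) likelihood (epidemic). All parameter values are invented; the paper estimates them by Bayesian inference:

```python
import numpy as np

rho, sigma_epi, sigma_non = 0.8, 1.0, 0.3       # assumed, not estimated
trans = np.array([[0.95, 0.05],                 # phase transition probabilities
                  [0.10, 0.90]])
diffed = [0.1, -0.2, 0.1, 1.5, 1.9, 2.2]        # differenced incidence rates

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

belief = np.array([0.99, 0.01])                 # start in the non-epidemic phase
prev = 0.0
for y in diffed:
    belief = belief @ trans                                   # predict
    lik = np.array([normal_pdf(y, 0.0, sigma_non),            # white noise
                    normal_pdf(y, rho * prev, sigma_epi)])    # AR(1)
    belief = belief * lik / np.sum(belief * lik)              # update, normalise
    prev = y
# belief[1]: filtered probability of the epidemic phase
```

Because the white-noise phase has a small variance, a run of large differenced rates quickly shifts the posterior towards the epidemic phase, detecting the onset.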

Statistics and Probability; Epidemiology; Computer science; Bayesian probability; Markov process; Bayesian inference; Disease Outbreaks; Bayes' theorem; Statistics; Influenza, Human; Econometrics; Humans; Hidden Markov model; Models, Statistical; Markov chain; Incidence; Bayes Theorem; Markov Chains; Moment (mathematics); Autoregressive model; Spain; Space-Time Clustering; Regression Analysis; Sentinel Surveillance

The conditional censored graphical lasso estimator

2020

© 2020, Springer Science+Business Media, LLC, part of Springer Nature. In many applied fields, such as genomics, different types of data are collected on the same system, and it is not uncommon for some of these datasets to be subject to censoring as a result of the measurement technologies used, such as data generated by polymerase chain reactions and flow cytometry. When the overall objective is network inference, at possibly different levels of a system, information coming from different sources and/or different steps of the analysis can be integrated into one model with the use of conditional graphical models. In this paper, we develop a doubly penalized inferential procedure for…
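Why censoring matters for network inference can be seen in a toy example: naively substituting the detection limit for censored values biases the sample covariance that a standard graphical lasso would take as input. This sketch uses invented data and only illustrates the attenuation, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=20000)

limit = -0.5                              # lower limit of detection
x_censored = np.clip(x, limit, None)      # naive substitution at the limit

true_cov = float(np.cov(x, rowvar=False)[0, 1])
naive_cov = float(np.cov(x_censored, rowvar=False)[0, 1])
# naive_cov understates the dependence a graphical model should recover
```

A censoring-aware likelihood, as developed in the paper, avoids this downward bias in the estimated conditional dependence structure.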

Statistics and Probability; FOS: Computer and information sciences; Computer science; Gaussian; Inference; Data type; Theoretical Computer Science; high-dimensional setting; Database normalization; Methodology (stat.ME); Lasso (statistics); Graphical model; conditional Gaussian graphical model; censored graphical lasso; Statistics - Methodology; sparsity; Estimator; Censoring (statistics); Computational Theory and Mathematics; Censored data; Statistics, Probability and Uncertainty; Settore SECS-S/01 - Statistica; Algorithm

Bayesian survival analysis with BUGS

2020

Survival analysis is one of the most important fields of statistics in medicine and the biological sciences. The computational advances of recent decades have favored the use of Bayesian methods in this context, providing a flexible and powerful alternative to the traditional frequentist approach. The objective of this article is to summarize some of the most popular Bayesian survival models, such as accelerated failure time, proportional hazards, mixture cure, competing risks, multi-state, frailty, and joint models of longitudinal and survival data. Moreover, an implementation of each presented model is provided using a BUGS syntax that can be run with JAGS from the R programmin…
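The article's implementations use BUGS syntax run through JAGS; as a language-agnostic sketch of the same Bayesian logic, the following fits the simplest possible survival model (an exponential hazard with right censoring and a Gamma(1, 1) prior on the rate) by random-walk Metropolis. All data and priors are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
times = np.array([2.1, 0.7, 1.3, 3.0, 0.4, 1.8])   # follow-up times
event = np.array([1, 1, 0, 1, 1, 0])               # 1 = event, 0 = censored

def log_post(lam):
    """Exponential likelihood with right censoring plus a Gamma(1, 1) prior."""
    if lam <= 0:
        return -np.inf
    return event.sum() * np.log(lam) - lam * times.sum() - lam

lam, draws = 1.0, []
for _ in range(5000):
    prop = lam + 0.3 * rng.standard_normal()        # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(lam):
        lam = prop
    draws.append(lam)
posterior_mean = float(np.mean(draws[1000:]))       # discard burn-in
```

For this conjugate toy case the exact posterior is Gamma(1 + number of events, 1 + total time at risk), so the sampler can be checked against a closed form; the models surveyed in the article require the general-purpose machinery of BUGS/JAGS.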

Statistics and Probability; FOS: Computer and information sciences; Epidemiology; Computer science; Bayesian probability; Accelerated failure time model; Machine learning; Bayesian inference; Statistics - Applications; Frequentist inference; Humans; Applications (stat.AP); Models, Statistical; Syntax (programming languages); R programming language; Bayes Theorem; Survival Analysis; Medical statistics; Artificial intelligence

Assessing uncertainty of voter transitions estimated from aggregated data. Application to the 2017 French presidential election

2020

Inferring individual electoral behaviour from aggregated data is a very active research area, with ramifications in sociology and political science. A new approach based on linear programming is proposed to estimate voter transitions among parties (or candidates) between two elections. Compared to other linear and quadratic programming models previously published, our approach presents two important innovations. First, it explicitly deals with new entries to and exits from the election census without resorting to unrealistic assumptions, enabling a reasonable estimation of the voting behaviour of young electors voting for the first time. Second, by exploiting the information contained in the model…
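A stripped-down version of the ecological-inference task shows its shape: recover a transition matrix from district-level aggregates. The paper's linear program, entry/exit handling, and uncertainty assessment are not reproduced; this is a noiseless 2×2 toy solved by least squares:

```python
import numpy as np

# district-level shares voting for party A / party B at election 1
X = np.array([[0.60, 0.40],
              [0.45, 0.55],
              [0.30, 0.70],
              [0.75, 0.25]])
p_true = np.array([0.80, 0.20])   # P(A -> A), P(B -> A): unknown in practice
y = X @ p_true                    # observed district shares for A at election 2

p_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
p_hat = np.clip(p_hat, 0.0, 1.0)  # transition probabilities must lie in [0, 1]
transitions = np.array([[p_hat[0], 1 - p_hat[0]],   # A voters: stay / leave
                        [p_hat[1], 1 - p_hat[1]]])  # B voters: switch / stay
```

With real data the constraints (probabilities in [0, 1], rows summing to one, census entries and exits) must enter the optimisation itself, which is what motivates the linear-programming formulation.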

Statistics and Probability; French elections; Presidential election; Linear programming; Statistics and Operations Research; Data application; Ecological inference; R x C contingency tables; Voter transitions; Econometrics; V WCDANM 2018: Advances in Computational Data Analysis; Statistics, Probability and Uncertainty

Improvements and Modifications of Tarone's Multiple Test Procedure for Discrete Data

1998

Tarone (1990, Biometrics 46, 515-522) proposed a multiple test procedure for discrete test statistics that improves on the usual Bonferroni procedure. However, Tarone's procedure is not monotone in the predetermined multiple level α. Roth (1998, Journal of Statistical Planning and Inference, in press) developed a monotone version of Tarone's procedure. We present a similar procedure that is both monotone and an improvement of Tarone's proposal. Based on this extension, we derive a step-down procedure that is a corresponding improvement of Holm's (1979, Scandinavian Journal of Statistics 6, 65-70) sequentially rejective procedure. It is shown how adjusted p-values can be computed for the …
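Tarone's original idea can be sketched compactly: a discrete test whose minimum attainable p-value exceeds α/K can never be rejected at level α/K, so it need not count toward the Bonferroni correction. A sketch of that computation (the paper's monotone refinement and step-down extension are not implemented):

```python
def tarone_alpha(min_attainable_p, alpha=0.05):
    """Smallest K with #(tests able to attain p <= alpha/K) <= K; Bonferroni
    then only corrects for the K 'testable' hypotheses."""
    m = len(min_attainable_p)
    for k in range(1, m + 1):
        testable = sum(1 for p in min_attainable_p if p <= alpha / k)
        if testable <= k:
            return alpha / k, k
    return alpha / m, m            # plain Bonferroni fallback

# sparse discrete tests (e.g. Fisher's exact test on small tables) often have
# large minimum attainable p-values and drop out of the multiplicity count
min_p = [0.001, 0.02, 0.30, 0.45, 0.60]
level, k = tarone_alpha(min_p)     # level 0.025 with K = 2, vs Bonferroni 0.01
```

Here three of the five tests can never reach significance, so each remaining test is run at α/2 rather than α/5, which is the power gain over plain Bonferroni.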

Statistics and Probability; General Immunology and Microbiology; Biometrics; Computer science; Test procedures; Applied Mathematics; Inference; General Medicine; General Biochemistry, Genetics and Molecular Biology; Bonferroni correction; General Agricultural and Biological Sciences; Algorithm; Statistical hypothesis testing

Explaining German outward FDI in the EU: a reassessment using Bayesian model averaging and GLM estimators

2021

The last decades have seen increasing interest in FDI and the process of production fragmentation. This has been particularly important for Germany as the core of the European Union (EU) production hub. This paper attempts to provide a deeper understanding of the drivers of German outward FDI in the EU for the period 1996–2012 by tackling the two main challenges faced in modelling FDI, namely the variable-selection problem and the choice of estimation method. For that purpose, we first extend the previous BMA analysis developed by Camarero et al. (Econ Model 83:326–345, 2019) by including country-pair fixed effects to select the appropriate set of variables. Second, we compare…
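The BMA step can be illustrated on toy data: enumerate regressor subsets, score each fit by BIC, and convert scores into approximate posterior model weights and inclusion probabilities. This sketch uses plain OLS with invented variable names, not the paper's GLM estimators or fixed-effects setup:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, names = 200, ["gdp", "distance", "noise"]
X = rng.standard_normal((n, 3))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_normal(n)   # "noise" is irrelevant

def bic(idx):
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in idx])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((y - Xs @ beta) ** 2))
    return n * np.log(rss / n) + Xs.shape[1] * np.log(n)

models = [m for r in range(4) for m in itertools.combinations(range(3), r)]
scores = np.array([bic(list(m)) for m in models])
weights = np.exp(-0.5 * (scores - scores.min()))
weights = weights / weights.sum()                # approximate model posteriors
pip = {names[j]: float(sum(w for w, m in zip(weights, models) if j in m))
       for j in range(3)}                        # posterior inclusion probabilities
```

Variables with high posterior inclusion probability survive the model-averaging step; the paper applies the same logic to gravity-model determinants of FDI.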

Statistics and Probability; Generalized linear model; FDI determinants; Economics and Econometrics; gravity models; Foreign direct investment; Bayesian inference; Germany; Mathematics (miscellaneous); Economics; Econometrics; C13; C33; European Union; Estimation; Estimator; UNESCO: Economic Sciences; Investment (macroeconomics); outward FDI; F21; F23; GLM; Social Sciences (miscellaneous)

Extended differential geometric LARS for high-dimensional GLMs with general dispersion parameter

2018

A large class of modeling and prediction problems involves outcomes that belong to an exponential family distribution. Generalized linear models (GLMs) are a standard way of dealing with such situations and can be extended to high-dimensional feature spaces. Penalized inference approaches, such as the ℓ1 or SCAD penalties, or extensions of least angle regression, such as dgLARS, have been proposed for GLMs with high-dimensional feature spaces. Although the theory underlying these methods is in principle generic, their implementation has remained restricted to dispersion-free models, such as the Poisson and logistic regression models. The aim of this…

Statistics and Probability; Generalized linear models; Predictor–corrector algorithm; Mathematical optimization; Poisson distribution; Dantzig selector; Cross-validation; High-dimensional inference; Theoretical Computer Science; Exponential family; Least angle regression; Applied mathematics; Linear models; Probability and statistics; Variable selection; Efficient estimator; Computational Theory and Mathematics; Dispersion parameter; Shrinkage; Statistics, Probability and Uncertainty; Settore SECS-S/01 - Statistica; Statistics and Computing

A weighted combined effect measure for the analysis of a composite time-to-first-event endpoint with components of different clinical relevance

2018

Composite endpoints combine several events within a single variable, which increases the number of expected events and is thereby meant to increase the power. However, the interpretation of results can be difficult as the observed effect for the composite does not necessarily reflect the effects for the components, which may be of different magnitude or even point in adverse directions. Moreover, in clinical applications, the event types are often of different clinical relevance, which also complicates the interpretation of the composite effect. The common effect measure for composite endpoints is the all-cause hazard ratio, which gives equal weight to all events irrespective of their type …
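The core point, that equal weighting can misrepresent a treatment effect when components differ in clinical relevance, can be shown with toy counts. The weights and numbers below are invented, and the paper's actual estimator and its inference are considerably more involved:

```python
weights = {"death": 1.0, "hospitalisation": 0.5}   # assumed relevance weights
control = {"death": 30, "hospitalisation": 70}     # first events per component
treated = {"death": 28, "hospitalisation": 40}

def weighted_count(counts):
    return sum(weights[k] * v for k, v in counts.items())

naive_ratio = sum(treated.values()) / sum(control.values())          # 68 / 100
weighted_ratio = weighted_count(treated) / weighted_count(control)   # 48 / 65
```

The unweighted ratio is driven by the frequent but less severe hospitalisations; down-weighting them yields a less favourable estimate, illustrating why the choice of weights matters for the composite effect.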

Statistics and Probability; Epidemiology; Endpoint Determination; Measure (mathematics); Win ratio; Resampling; Statistics; time-to-event; Humans; Computer Simulation; relevance weighting; Parametric statistics; Event (probability theory); Proportional Hazards Models; clinical trials; Hazard ratio; composite endpoint; Weighting; Prioritized outcomes; Trials; Data Interpretation, Statistical; Multistate models; Inference; Null hypothesis; Monte Carlo Method; Statistics in Medicine