Search results for "estimation"

Showing 10 of 562 documents

Olley–Pakes productivity decomposition: computation and inference

2016

We show how a moment-based estimation procedure can be used to compute point estimates and standard errors for the two components of the widely used Olley–Pakes decomposition of aggregate (weighted-average) productivity. When applied to business-level microdata, the procedure allows for autocovariance- and heteroscedasticity-robust inference and hypothesis testing about, for example, the coevolution of the productivity components in different groups of firms. We provide an application to Finnish firm-level data and find that formal statistical inference casts doubt on conclusions that one might draw from a visual inspection of the components of the decomposition.
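
For context, the decomposition splits aggregate (share-weighted) productivity into an unweighted firm-level mean plus a covariance-type term between shares and productivity. A minimal sketch of that arithmetic in Python (the function name and interface are illustrative; the paper's actual contribution, moment-based standard errors for the two components, is not reproduced here):

```python
import numpy as np

def olley_pakes_decomposition(phi, s):
    """Split share-weighted aggregate productivity into an unweighted mean
    and a share-productivity covariance term (illustrative sketch only)."""
    phi = np.asarray(phi, dtype=float)   # firm-level (log) productivity
    s = np.asarray(s, dtype=float)       # firm-level shares, summing to one
    aggregate = np.sum(s * phi)          # weighted-average productivity
    unweighted_mean = phi.mean()
    covariance_term = np.sum((s - s.mean()) * (phi - phi.mean()))
    # identity: aggregate == unweighted_mean + covariance_term
    assert np.isclose(aggregate, unweighted_mean + covariance_term)
    return unweighted_mean, covariance_term
```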

Keywords: statistical inference; point estimation; hypothesis testing; heteroscedasticity; autocovariance; generalized method of moments; productivity; weighted average; economics and econometrics. Published in: Journal of the Royal Statistical Society Series A: Statistics in Society
researchProduct

Weighted samples, kernel density estimators and convergence

2003

This note extends the standard kernel density estimator to the case of weighted samples in several ways. First, I consider the obvious extension of replacing the simple sum in the definition of the estimator with a weighted sum; I also consider other ways of introducing weights, based on adaptive kernel density estimators, in which the weights are treated as indicators of the informational content of the observations and hence as signals of the local density of the data. All these ideas are illustrated using the Penn World Table in the context of macroeconomic convergence.
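
The first extension mentioned above amounts to normalising the weights and replacing the simple sum with a weighted one. A minimal Python sketch of that estimator with a Gaussian kernel (names are illustrative; the adaptive-kernel variants discussed in the note are not shown):

```python
import numpy as np

def weighted_kde(x_grid, data, weights, bandwidth):
    """Gaussian kernel density estimate in which each observation contributes
    with a normalised weight instead of the usual 1/n."""
    data = np.asarray(data, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                    # normalise weights to one
    u = (np.asarray(x_grid, dtype=float)[:, None] - data[None, :]) / bandwidth
    kernel = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return (kernel * w[None, :]).sum(axis=1) / bandwidth
```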

Keywords: kernel density estimation; multivariate kernel density estimation; variable kernel density estimation; estimators; Penn World Table; economics and econometrics. Published in: Empirical Economics
researchProduct

Maximum likelihood estimation for the exponential power function parameters

1995

This paper addresses the problem of obtaining maximum likelihood estimates for the three parameters of the exponential power function; the information matrix is derived and the covariance matrix presented, and the regularity conditions that ensure asymptotic normality and efficiency are examined. A numerical investigation explores the bias and variance of the maximum likelihood estimates and their dependence on sample size and the shape parameter.
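
A rough numerical analogue of this estimation problem can be set up with SciPy, whose generalized normal distribution (`gennorm`) is an exponential power family; note that its parameterisation may differ from the paper's, and `fit` uses generic numerical optimisation rather than the paper's analysis:

```python
import numpy as np
from scipy.stats import gennorm

rng = np.random.default_rng(0)
# shape 1.5, location 0, scale 2 (the shape plays the role of the paper's shape parameter)
sample = gennorm.rvs(1.5, loc=0.0, scale=2.0, size=500, random_state=rng)

# maximum likelihood estimates of (shape, location, scale) by numerical optimisation
shape_hat, loc_hat, scale_hat = gennorm.fit(sample)
print(shape_hat, loc_hat, scale_hat)
```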

Keywords: estimation theory; maximum likelihood estimation; likelihood function; Fisher information; estimation of covariance matrices; expectation–maximization algorithm. Published in: Communications in Statistics - Simulation and Computation
researchProduct

Model-Assisted Estimation Through Random Forests in Finite Population Sampling

2021

In surveys, the interest lies in estimating finite population parameters such as population totals and means. In most surveys, some auxiliary information is available at the estimation stage. This information may be incorporated in the estimation procedures to increase their precision. In this article, we use random forests (RFs) to estimate the functional relationship between the survey variable and the auxiliary variables. In recent years, RFs have become attractive as National Statistical Offices now have access to a variety of data sources, potentially exhibiting a large number of observations on a large number of variables. We establish the theoretical properties of model-assisted proc…
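
A common form of such a model-assisted estimator of a population total adds a design-weighted correction of sample residuals to model predictions summed over the frame. A hedged Python sketch under that assumption (function and variable names are illustrative; the article's exact construction, e.g. its use of out-of-bag predictions or its variance estimator, may differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def model_assisted_total(x_sample, y_sample, pi_sample, x_population):
    """Model-assisted estimate of a population total with a random-forest
    working model: predictions over the whole frame plus a design-weighted
    (inverse inclusion probability) sum of sample residuals."""
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(x_sample, y_sample)
    synthetic_part = rf.predict(x_population).sum()       # model component
    residuals = y_sample - rf.predict(x_sample)
    correction = np.sum(residuals / pi_sample)            # design-based correction
    return synthetic_part + correction
```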

Keywords: finite population sampling; model-assisted estimation; random forests; nonparametric regression; variance estimation; survey data collection
researchProduct

A Software Tool for the Exponential Power Distribution: The normalp Package

2005

In this paper we present the normalp package, a package for the statistical environment R that provides a set of tools for dealing with the exponential power distribution. The package contains functions to compute the density function, the distribution function and the quantiles of an exponential power distribution, and to generate pseudo-random numbers from it. Moreover, methods for estimating the distribution parameters are described and implemented. It is also possible to estimate linear regression models when the random errors are assumed to follow an exponential power distribution. A set of functions is designed to perform simulation studi…
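
The package itself is written for R, so its functions are not shown here; a rough Python analogue of the density/distribution/quantile/random-number tools it describes is available through SciPy's `gennorm` (an exponential power family, possibly under a different parameterisation than normalp uses):

```python
from scipy.stats import gennorm

p = 1.5                             # shape ("order") of the exponential power distribution
density  = gennorm.pdf(0.3, p)      # density function
cdf      = gennorm.cdf(0.3, p)      # distribution function
quantile = gennorm.ppf(0.95, p)     # quantile function
draws    = gennorm.rvs(p, size=10)  # pseudo-random numbers
```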

Keywords: exponential power distribution; exponential family; Laplace distribution; distribution fitting; estimation; linear regression; R; software. Published in: Journal of Statistical Software
researchProduct

A computationally fast alternative to cross-validation in penalized Gaussian graphical models

2015

We study the problem of selecting the regularization parameter in penalized Gaussian graphical models. When the goal is to obtain a model with good predictive power, cross-validation is the gold standard. We present a new estimator of the Kullback-Leibler loss in a Gaussian graphical model which provides a computationally fast alternative to cross-validation. The estimator is obtained by approximating leave-one-out cross-validation. Our approach is demonstrated on simulated data sets for various types of graphs. The proposed formula exhibits superior performance, especially in the typical small-sample-size scenario, compared to other available alternatives to cross-validation, such as Akaike's i…
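
The quantity being approximated is, in essence, the cross-validated Gaussian log-likelihood over a grid of penalty values. A minimal sketch of that baseline in Python using scikit-learn's GraphicalLasso (illustrative only; the paper's closed-form approximation, which avoids the repeated refitting below, is not reproduced):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.model_selection import KFold

def select_alpha_by_cv(X, alphas, n_splits=5):
    """Pick the graphical-lasso penalty by K-fold cross-validated Gaussian
    log-likelihood; this is the expensive procedure the paper approximates."""
    p = X.shape[1]
    scores = []
    for alpha in alphas:
        fold_ll = []
        for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
            model = GraphicalLasso(alpha=alpha).fit(X[train])
            theta = model.precision_
            Xc = X[test] - model.location_
            quad = np.einsum('ij,jk,ik->i', Xc, theta, Xc).mean()
            # average held-out log-likelihood under the fitted Gaussian model
            fold_ll.append(0.5 * (np.linalg.slogdet(theta)[1] - quad
                                  - p * np.log(2.0 * np.pi)))
        scores.append(np.mean(fold_ll))
    return alphas[int(np.argmax(scores))]
```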

Keywords: Gaussian graphical models; penalized estimation; cross-validation; Kullback-Leibler loss; information criteria; Akaike information criterion; Bayesian information criterion; sample size determination
researchProduct

Multivariate nonparametric estimation of the Pickands dependence function using Bernstein polynomials

2017

Many applications in risk analysis require the estimation of the dependence among multivariate maxima, especially in environmental sciences. Such dependence can be described by the Pickands dependence function of the underlying extreme-value copula. Here, a nonparametric estimator is constructed as the sample equivalent of a multivariate extension of the madogram. Shape constraints on the family of Pickands dependence functions are taken into account by means of a representation in terms of Bernstein polynomials. The large-sample theory of the estimator is developed and its finite-sample performance is evaluated with a simulation study. The approach is illustrated with a dataset of…
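
For reference, the Pickands dependence function A characterises an extreme-value copula through the standard representation

C(u_1, \dots, u_d) = \exp\Big\{ \Big( \sum_{j=1}^{d} \log u_j \Big)\, A\Big( \tfrac{\log u_1}{\sum_{j} \log u_j}, \dots, \tfrac{\log u_d}{\sum_{j} \log u_j} \Big) \Big\}, \qquad \max(w_1, \dots, w_d) \le A(w) \le 1,

with A defined on the unit simplex; A(w) = 1 corresponds to independence and A(w) = \max_j w_j to complete dependence among the maxima.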

Keywords: nonparametric estimation; Pickands dependence function; Bernstein polynomials; extreme-value copula; extremal dependence; multivariate max-stable distributions; multivariate maxima; heavy rainfall
researchProduct

Bayesian models for data missing not at random in health examination surveys

2018

In epidemiological surveys, data missing not at random (MNAR) due to survey nonresponse may lead to bias in the risk-factor estimates. We propose an approach based on Bayesian data augmentation and survival modelling to reduce the nonresponse bias. The approach requires additional information based on follow-up data. We present a case study of smoking prevalence using FINRISK data collected between 1972 and 2007 with follow-up to the end of 2012, and compare it with other commonly applied missing-at-random (MAR) imputation approaches. A simulation experiment is carried out to study the validity of the approaches. Our approach appears to reduce the nonresponse bias substantially…

Keywords: missing not at random; multiple imputation; Bayesian estimation; data augmentation; survival analysis; follow-up data; health examination surveys; epidemiology
researchProduct

Intrinsic credible regions: An objective Bayesian approach to interval estimation

2005

This paper defines intrinsic credible regions, a method to produce objective Bayesian credible regions which depends only on the assumed model and the available data. Lowest posterior loss (LPL) regions are defined as Bayesian credible regions which contain values of minimum posterior expected loss: they depend both on the loss function and on the prior specification. An invariant, information-theory-based loss function, the intrinsic discrepancy, is argued to be appropriate for scientific communication. Intrinsic credible regions are the lowest posterior loss regions with respect to the intrinsic discrepancy loss and the appropriate reference prior. The proposed procedure is completely general…
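
In Bernardo's formulation, the intrinsic discrepancy between two densities is the smaller of the two directed Kullback-Leibler divergences,

\delta\{p_1, p_2\} = \min\Big\{ \int p_1(x) \log \tfrac{p_1(x)}{p_2(x)}\, dx,\; \int p_2(x) \log \tfrac{p_2(x)}{p_1(x)}\, dx \Big\},

and an LPL q-credible region collects parameter values whose posterior expected loss is no larger than that of any value outside the region.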

Keywords: interval estimation; credible interval; confidence interval; Bayesian probability; frequentist inference; point estimation; Fisher information; expected loss. Published in: TEST
researchProduct

Clustering of spatial point patterns

2006

Spatial point patterns arise as the natural sampling information in many problems. An ophthalmological application motivated the problem of detecting clusters of point patterns. A set of human corneal endothelium images is given, and each image is described by a point pattern, the cell centroids. The main problem is to find groups of images corresponding to groups of spatial point patterns. This is interesting both from a descriptive point of view and for clinical purposes: a new image can be compared with the prototypes of each group and finally evaluated by the physician. Usual descriptors of spatial point patterns, such as the empty-space function, the nearest-neighbour distribution function or Ripley's K-…
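
One plausible pipeline along these lines, sketched below under stated assumptions (it is not the paper's actual methodology): summarise each pattern by a naive, edge-correction-free Ripley's K curve on a common grid of radii and group those summary curves with k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def ripley_k(points, radii, area):
    """Naive Ripley's K estimate (no edge correction) for one point pattern,
    given an (n, 2) array of point coordinates and the window area."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dist = dist[~np.eye(n, dtype=bool)]               # drop self-distances
    return np.array([area * np.sum(dist <= r) / (n * (n - 1)) for r in radii])

def cluster_point_patterns(patterns, radii, area, n_groups):
    """Describe each pattern by its K curve, then group the curves with k-means."""
    features = np.vstack([ripley_k(p, radii, area) for p in patterns])
    return KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(features)
```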

Keywords: spatial point processes; K-function; cluster analysis; pattern recognition; centroids; survival function. Published in: Computational Statistics & Data Analysis
researchProduct