Search results for "estimation"
Showing 10 of 562 documents
Olley–Pakes productivity decomposition: computation and inference
2016
We show how a moment-based estimation procedure can be used to compute point estimates and standard errors for the two components of the widely used Olley–Pakes decomposition of aggregate (weighted average) productivity. When applied to business-level microdata, the procedure allows for autocovariance- and heteroscedasticity-robust inference and hypothesis testing about, for example, the coevolution of the productivity components in different groups of firms. We provide an application to Finnish firm-level data and find that formal statistical inference casts doubt on the conclusions that one might draw from a visual inspection of the components of the decomposition.
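The decomposition itself (leaving aside the paper's moment-based inference machinery) is a one-line identity: share-weighted aggregate productivity equals the unweighted mean plus a covariance-like term between market shares and productivity. A minimal numpy sketch, with made-up firm data:

```python
import numpy as np

def op_decomposition(phi, s):
    """Olley-Pakes decomposition of aggregate productivity.

    Splits Phi = sum_i s_i * phi_i (shares s_i summing to one) into the
    unweighted mean productivity plus a covariance-like term between
    market shares and productivity levels.
    """
    phi = np.asarray(phi, dtype=float)
    s = np.asarray(s, dtype=float)
    phi_bar = phi.mean()
    cov_term = np.sum((s - s.mean()) * (phi - phi_bar))
    return phi_bar, cov_term

# toy data: three firms with productivity levels and market shares
phi = [1.0, 2.0, 3.0]
s = [0.2, 0.3, 0.5]
phi_bar, cov = op_decomposition(phi, s)
print(phi_bar + cov)   # equals the weighted average sum(s_i * phi_i) = 2.3
```

A positive covariance term indicates that market shares are concentrated in the more productive firms; comparing the two components across groups is exactly the kind of hypothesis the paper's inference procedure addresses.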
Weighted samples, kernel density estimators and convergence
2003
This note extends the standard kernel density estimator to weighted samples in several ways. First, I consider the obvious extension of replacing the simple sum in the definition of the estimator with a weighted sum. I also consider other ways of introducing weights, based on adaptive kernel density estimators, treating the weights as indicators of the informational content of the observations and, in this sense, as signals of the local density of the data. All these ideas are illustrated using the Penn World Table in the context of the macroeconomic convergence issue.
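The first extension described above can be sketched directly: replace the equal 1/n weights of the usual Gaussian kernel estimator with normalised sample weights. A minimal numpy illustration (the data, weights and bandwidth here are arbitrary):

```python
import numpy as np

def weighted_kde(x_grid, data, weights, bandwidth):
    """Gaussian kernel density estimate with a weighted sum over observations:
    f_hat(x) = sum_i w_i * K((x - X_i) / h) / h, with the w_i summing to one,
    instead of the usual equal weights w_i = 1/n."""
    data = np.asarray(data, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    u = (x_grid[:, None] - data[None, :]) / bandwidth
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return (kernel * w[None, :]).sum(axis=1) / bandwidth

grid = np.linspace(-4.0, 4.0, 201)
dens = weighted_kde(grid, data=[-1.0, 0.0, 1.0],
                    weights=[1.0, 2.0, 1.0], bandwidth=0.5)
print((dens * (grid[1] - grid[0])).sum())   # ≈ 1: the estimate is a density
```

Because the weights are normalised before use, the estimate integrates to one regardless of the scale on which the weights are supplied.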
Maximum likelihood estimation for the exponential power function parameters
1995
This paper addresses the problem of obtaining maximum likelihood estimates for the three parameters of the exponential power function. The information matrix is derived, the covariance matrix is presented, and the regularity conditions which ensure asymptotic normality and efficiency are examined. A numerical investigation explores the bias and variance of the maximum likelihood estimates and their dependence on sample size and shape parameter.
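For fixed location and shape, the scale estimate actually has a closed form; the harder part addressed by the paper is joint estimation of all three parameters and its asymptotics. A small sketch of the fixed-shape case, using the common generalized-normal parameterization (which may differ from the paper's):

```python
import math

def ep_scale_mle(x, mu, beta):
    """MLE of the scale alpha of the exponential power density
        f(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x - mu|/alpha)**beta)
    with location mu and shape beta held fixed; setting the score to zero gives
        alpha_hat**beta = (beta / n) * sum_i |x_i - mu|**beta."""
    n = len(x)
    return (beta * sum(abs(xi - mu) ** beta for xi in x) / n) ** (1.0 / beta)

# beta = 2 is the normal case N(mu, alpha**2 / 2): alpha_hat**2 is then
# twice the plug-in variance about the known mean
x = [-1.0, 0.0, 1.0]
print(ep_scale_mle(x, mu=0.0, beta=2.0))   # sqrt(4/3)
print(ep_scale_mle(x, mu=0.0, beta=1.0))   # Laplace case: mean absolute deviation = 2/3
```

When mu and beta must also be estimated, this closed form can serve as the inner step of a profile-likelihood search over (mu, beta).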
Model-Assisted Estimation Through Random Forests in Finite Population Sampling
2021
In surveys, the interest lies in estimating finite population parameters such as population totals and means. In most surveys, some auxiliary information is available at the estimation stage; this information may be incorporated in the estimation procedures to increase their precision. In this article, we use random forests (RFs) to estimate the functional relationship between the survey variable and the auxiliary variables. In recent years, RFs have become attractive as National Statistical Offices now have access to a variety of data sources, potentially exhibiting a large number of observations on a large number of variables. We establish the theoretical properties of model-assisted proc…
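The generic model-assisted estimator of a total is t_hat = sum over the population of fitted values plus a Horvitz–Thompson correction of the sample residuals; it is agnostic to the working model, so a plain least-squares fit can stand in for the paper's random forest in a sketch. All names below (fit_ols, predict_ols, the simulated data) are illustrative only:

```python
import numpy as np

def model_assisted_total(y_s, X_s, X_U, pi_s, fit, predict):
    """Model-assisted estimator of a population total:
        t_hat = sum_{k in U} m_hat(x_k) + sum_{k in s} (y_k - m_hat(x_k)) / pi_k
    where m_hat is any working model fitted on the sample (a random forest
    in the paper) and pi_k are the inclusion probabilities."""
    model = fit(X_s, y_s)
    resid = y_s - predict(model, X_s)
    return predict(model, X_U).sum() + (resid / pi_s).sum()

# stand-in working model: least squares with an intercept
def fit_ols(X, y):
    Z = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef

def predict_ols(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

rng = np.random.default_rng(0)
X_U = rng.normal(size=(100, 1))                  # auxiliary data, whole population
y_U = 3.0 + 2.0 * X_U[:, 0] + rng.normal(scale=0.1, size=100)
idx = rng.choice(100, size=30, replace=False)    # simple random sample
pi = np.full(30, 30 / 100)                       # inclusion probabilities
t_hat = model_assisted_total(y_U[idx], X_U[idx], X_U, pi, fit_ols, predict_ols)
print(t_hat, y_U.sum())   # estimate vs. the true (here observable) total
```

The better the working model predicts y from x, the smaller the residual correction and the more precise the estimator, which is the motivation for plugging in a flexible learner such as a random forest.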
A Software Tool for the Exponential Power Distribution: The normalp Package
2005
In this paper we present the normalp package, a package for the statistical environment R that provides a set of tools for dealing with the exponential power distribution. The package includes functions to compute the density function, the distribution function and the quantiles of an exponential power distribution, and to generate pseudo-random numbers from the same distribution. Moreover, methods for estimating the distribution parameters are described and implemented. It is also possible to estimate linear regression models when the random errors are assumed to follow an exponential power distribution. A set of functions is designed to perform simulation studi…
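normalp itself is an R package; as a rough illustration of what its density and random-number routines compute, here is a stdlib-Python analogue for the same distribution (the function names below are made up for this sketch, not the package's):

```python
import math
import random

def ep_pdf(x, mu=0.0, alpha=1.0, beta=2.0):
    """Exponential power density
    f(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x - mu|/alpha)**beta);
    beta = 1 gives the Laplace density, beta = 2 a normal with sd alpha/sqrt(2)."""
    z = abs(x - mu) / alpha
    return beta / (2.0 * alpha * math.gamma(1.0 / beta)) * math.exp(-(z ** beta))

def ep_rvs(n, mu=0.0, alpha=1.0, beta=2.0):
    """Draw n variates using the fact that |X - mu|**beta is
    Gamma(shape=1/beta, scale=alpha**beta), attached to a random sign."""
    draws = []
    for _ in range(n):
        g = random.gammavariate(1.0 / beta, alpha ** beta)
        draws.append(mu + random.choice([-1.0, 1.0]) * g ** (1.0 / beta))
    return draws

random.seed(42)
print(ep_pdf(0.0, alpha=math.sqrt(2.0)))   # 1/sqrt(2*pi): the standard normal mode
sample = ep_rvs(4000, beta=1.5)
print(sum(sample) / len(sample))           # close to mu = 0
```

The shape parameter beta interpolates between heavy-tailed Laplace-like errors and nearly uniform ones, which is what makes the family useful for the robust regression models the paper describes.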
A computationally fast alternative to cross-validation in penalized Gaussian graphical models
2015
We study the problem of selecting the regularization parameter in penalized Gaussian graphical models. When the goal is to obtain a model with good predictive power, cross-validation is the gold standard. We present a new estimator of the Kullback–Leibler loss in the Gaussian graphical model which provides a computationally fast alternative to cross-validation. The estimator is obtained by approximating leave-one-out cross-validation. Our approach is demonstrated on simulated data sets for various types of graphs. The proposed formula exhibits superior performance, especially in the typical small-sample-size scenario, compared to other available alternatives to cross-validation, such as Akaike's i…
Multivariate nonparametric estimation of the Pickands dependence function using Bernstein polynomials
2017
Many applications in risk analysis require the estimation of the dependence among multivariate maxima, especially in environmental sciences. Such dependence can be described by the Pickands dependence function of the underlying extreme-value copula. Here, a nonparametric estimator is constructed as the sample equivalent of a multivariate extension of the madogram. Shape constraints on the family of Pickands dependence functions are taken into account by means of a representation in terms of Bernstein polynomials. The large-sample theory of the estimator is developed and its finite-sample performance is evaluated with a simulation study. The approach is illustrated with a dataset of…
Bayesian models for data missing not at random in health examination surveys
2018
In epidemiological surveys, data missing not at random (MNAR) due to survey nonresponse may lead to bias in the risk factor estimates. We propose an approach based on Bayesian data augmentation and survival modelling to reduce the nonresponse bias. The approach requires additional information based on follow-up data. We present a case study of smoking prevalence using FINRISK data collected between 1972 and 2007, with follow-up to the end of 2012, and compare it to other commonly applied missing at random (MAR) imputation approaches. A simulation experiment is carried out to study the validity of the approaches. Our approach appears to reduce the nonresponse bias substantially…
Intrinsic credible regions: An objective Bayesian approach to interval estimation
2005
This paper defines intrinsic credible regions, a method to produce objective Bayesian credible regions which depend only on the assumed model and the available data. Lowest posterior loss (LPL) regions are defined as Bayesian credible regions which contain values of minimum posterior expected loss: they depend both on the loss function and on the prior specification. An invariant, information-theory-based loss function, the intrinsic discrepancy, is argued to be appropriate for scientific communication. Intrinsic credible regions are the lowest posterior loss regions with respect to the intrinsic discrepancy loss and the appropriate reference prior. The proposed procedure is completely general…
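The lowest-posterior-loss idea can be sketched mechanically on posterior draws: score each draw by its posterior expected loss and keep the required fraction of best-scoring draws. Here squared-error loss stands in for the paper's intrinsic discrepancy, purely for illustration:

```python
import numpy as np

def lpl_interval(draws, loss, credibility=0.95):
    """Approximate lowest posterior loss credible interval from posterior draws.
    Each draw is scored by its posterior expected loss against all draws; the
    (credibility)-fraction of draws with smallest expected loss form the region.
    With squared-error loss this is centred near the posterior mean; the paper
    uses the intrinsic discrepancy loss and a reference prior instead."""
    draws = np.asarray(draws, dtype=float)
    scores = np.array([loss(d, draws).mean() for d in draws])
    keep = draws[np.argsort(scores)[: int(credibility * len(draws))]]
    return keep.min(), keep.max()

rng = np.random.default_rng(1)
draws = rng.normal(loc=2.0, scale=1.0, size=2000)   # stand-in N(2, 1) posterior
lo, hi = lpl_interval(draws, lambda d, others: (d - others) ** 2)
print(lo, hi)   # roughly 2 +/- 2 for this posterior
```

Swapping the loss function changes the region: the intrinsic discrepancy is invariant under reparameterization, which squared error is not, and that invariance is the paper's argument for using it.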
Clustering of spatial point patterns
2006
Spatial point patterns arise as the natural sampling information in many problems. An ophthalmologic application gave rise to the problem of detecting clusters of point patterns. A set of human corneal endothelium images is given; each image is described by a point pattern, the cell centroids. The main problem is to find groups of images corresponding to groups of spatial point patterns. This is interesting both from a descriptive point of view and for clinical purposes: a new image can be compared with prototypes of each group and finally evaluated by the physician. Usual descriptors of spatial point patterns such as the empty-space function, the nearest distribution function or Ripley's K-…