Search results for "ESTIMATOR"
Showing 10 of 313 documents
Conditionally heteroscedastic intensity-dependent marking of log Gaussian Cox processes
2009
Spatial marked point processes are models for systems of points that are randomly distributed in space and provided with measured quantities called marks. This study deals with marking, that is, methods of constructing marked point processes from unmarked ones. The focus is density-dependent marking, where the local point intensity affects the mark distribution. This study develops new markings for log Gaussian Cox processes. In these markings, both the mean and variance of the mark distribution depend on the local intensity. The mean, variance and mark correlation properties are presented for the new markings, and a Bayesian estimation procedure is suggested for statistical inference. The p…
Poisson Regression with Change-Point Prior in the Modelling of Disease Risk around a Point Source
2003
Bayesian estimation of the risk of a disease around a known point source of exposure is considered. The minimal requirements for data are that cases and populations at risk are known for a fixed set of concentric annuli around the point source, and each annulus has a uniquely defined distance from the source. The conventional Poisson likelihood is assumed for the counts of disease cases in each annular zone with zone-specific relative risk parameters and, conditional on the risks, the counts are considered to be independent. The prior for the relative risk parameters is assumed to be piecewise constant in distance, with a known number of components. This prior is the well-known cha…
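The likelihood in this abstract is easy to sketch. The following is a minimal illustration only, not the paper's Bayesian procedure: `changepoint_risk`, `tau`, and all toy numbers are hypothetical, and the change-point prior and its posterior computation are omitted.

```python
import numpy as np

def poisson_loglik(cases, expected, rel_risk):
    """Poisson log-likelihood for annular case counts, up to an additive constant.

    cases[k]    -- observed cases in annulus k
    expected[k] -- expected cases in annulus k under reference rates
    rel_risk[k] -- zone-specific relative risk theta_k
    """
    mu = expected * rel_risk
    return np.sum(cases * np.log(mu) - mu)

def changepoint_risk(distances, tau, theta_near, theta_far):
    """Piecewise-constant relative risk in distance: theta_near up to tau, theta_far beyond."""
    return np.where(distances <= tau, theta_near, theta_far)

# hypothetical toy data: five annuli at increasing distance from the source
dist = np.array([0.5, 1.5, 2.5, 3.5, 4.5])
cases = np.array([12, 9, 7, 5, 5])
expected = np.array([5.0, 6.0, 6.5, 5.0, 5.0])
rr = changepoint_risk(dist, tau=2.0, theta_near=1.8, theta_far=1.0)
print(poisson_loglik(cases, expected, rr))
```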
A Log-Rank Test for Equivalence of Two Survivor Functions
1993
We consider a hypothesis testing problem in which the alternative states that the vertical distance between the underlying survivor functions nowhere exceeds some prespecified bound delta > 0. Under the assumption of proportional hazards, this hypothesis is shown to be (logically) equivalent to the statement |beta| <= log(1 + epsilon), where beta denotes the regression coefficient associated with the treatment group indicator, and epsilon is a simple strictly increasing function of delta. The testing procedure proposed consists of carrying out, in terms of the estimate of beta (i.e., the standard Cox likelihood estimator of beta), the uniformly most powerful level alpha test for a suitable interval hypothesis about…
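Under the reformulation above, a simpler and more conservative stand-in for the paper's UMP interval test is the familiar two one-sided tests (TOST) rule: conclude equivalence when the (1 - 2*alpha) confidence interval for beta lies inside (-log(1+epsilon), log(1+epsilon)). A sketch under that substitution, with all numbers hypothetical:

```python
import math
from statistics import NormalDist

def tost_equivalence_reject(beta_hat, se, eps, alpha=0.05):
    """TOST-style equivalence decision for a Cox regression coefficient.

    Rejects 'nonequivalence' (i.e., concludes equivalence) when the
    (1 - 2*alpha) normal-approximation CI for beta lies inside
    (-log(1 + eps), log(1 + eps)). Not the paper's UMP test.
    """
    z = NormalDist().inv_cdf(1 - alpha)   # one-sided normal quantile
    margin = math.log(1 + eps)            # equivalence bound on |beta|
    return abs(beta_hat) + z * se < margin

print(tost_equivalence_reject(0.02, 0.05, 0.5))   # small |beta_hat|: equivalence
print(tost_equivalence_reject(0.50, 0.05, 0.5))   # large |beta_hat|: no rejection
```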
A fast and recursive algorithm for clustering large datasets with k-medians
2012
Clustering large samples of high-dimensional data with fast algorithms is an important challenge in computational statistics. Borrowing ideas from MacQueen (1967), who introduced a sequential version of the $k$-means algorithm, a new class of recursive stochastic gradient algorithms designed for the $k$-medians loss criterion is proposed. By their recursive nature, these algorithms are very fast and are well adapted to deal with large samples of data that are allowed to arrive sequentially. It is proved that the stochastic gradient algorithm converges almost surely to the set of stationary points of the underlying loss criterion. Particular attention is paid to the averaged versions, which…
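The recursion is compact enough to sketch. This is a hedged illustration of a MacQueen-style update for the k-medians criterion, in which each arriving point moves its nearest center a decreasing step along the unit direction toward the point (the gradient of the plain, unsquared L2 distance); the paper's step-size choices and averaged variants are not reproduced.

```python
import numpy as np

def recursive_kmedians(X, k, seed=0, c=0.5):
    """One-pass recursive stochastic gradient sketch for k-medians.

    Each observation updates only its nearest center, with step c / count,
    so the pass is O(n * k * d) in time and O(k * d) in memory.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    counts = np.ones(k)
    for x in X:
        j = np.argmin(np.linalg.norm(centers - x, axis=1))  # nearest center
        diff = x - centers[j]
        norm = np.linalg.norm(diff)
        if norm > 0:
            step = c / counts[j]                 # decreasing step size
            centers[j] += step * diff / norm     # unit step toward x
        counts[j] += 1
    return centers

centers = recursive_kmedians(np.random.default_rng(1).normal(size=(200, 2)), k=3)
print(centers)
```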
Robust estimation and regression with parametric quantile functions
2022
A new, broad family of quantile-based estimators is described, and theoretical and empirical evidence is provided for their robustness to outliers in the response. The proposed method can be used to estimate all types of parameters, including location, scale, rate and shape parameters, extremes, regression coefficients and hazard ratios, and can be extended to censored and truncated data. The described estimator can be utilized to construct robust versions of common parametric and semiparametric methods, such as linear (Normal) regression, generalized linear models, and proportional hazards models. A variety of significant results and applications is presented to show the flexibility of the…
A new mathematical approach for the estimation of the AUC and its variability under different experimental designs in preclinical studies
2011
The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied in different preclinical experimental designs and is amenable to implementation in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results do not show statistical differences between the AUC values obtained by the two procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of 95% confi…
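For reference, the conventional point estimate both tools compute is the linear trapezoidal AUC; the paper's contribution concerns the standard error of the AUC, which this sketch does not reproduce.

```python
def trapezoid_auc(times, conc):
    """Linear trapezoidal AUC over sampling times.

    times -- increasing sampling times
    conc  -- concentration measured at each time
    """
    auc = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times, conc), zip(times[1:], conc[1:])):
        auc += 0.5 * (c0 + c1) * (t1 - t0)  # area of one trapezoid
    return auc

# hypothetical concentration-time profile
print(trapezoid_auc([0, 1, 2, 4], [0.0, 10.0, 6.0, 2.0]))  # -> 21.0
```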
Blind Source Separation Based on Joint Diagonalization in R: The Packages JADE and BSSasymp
2017
Blind source separation (BSS) is a well-known signal processing tool which is used to solve practical data analysis problems in various fields of science. In BSS, we assume that the observed data consists of linear mixtures of latent variables. The mixing system and the distributions of the latent variables are unknown. The aim is to find an estimate of an unmixing matrix which then transforms the observed data back to latent sources. In this paper we present the R packages JADE and BSSasymp. The package JADE offers several BSS methods which are based on joint diagonalization. Package BSSasymp contains functions for computing the asymptotic covariance matrices as well as their data-based es…
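JADE and BSSasymp are R packages; as a language-neutral illustration of the BSS model itself, here is a numpy sketch of the mixing model x = A s and the whitening step from which joint-diagonalization methods such as JADE start. The rotation step that completes the unmixing is omitted, and the mixing matrix and source distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# two latent sources with different non-Gaussian distributions
s = np.vstack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
x = A @ s                                 # observed linear mixtures

# whitening: decorrelate and rescale the observations
cov = np.cov(x)
vals, vecs = np.linalg.eigh(cov)
W_white = vecs @ np.diag(vals ** -0.5) @ vecs.T   # symmetric inverse sqrt of cov
z = W_white @ x
print(np.round(np.cov(z), 2))   # identity up to numerical error
```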
Fast Estimation of the Median Covariation Matrix with Application to Online Robust Principal Components Analysis
2017
The geometric median covariation matrix is a robust multivariate indicator of dispersion which can be extended without any difficulty to functional data. We define estimators, based on recursive algorithms, that can be simply updated at each new observation and are able to deal rapidly with large samples of high dimensional data without being obliged to store all the data in memory. Asymptotic convergence properties of the recursive algorithms are studied under weak conditions. The computation of the principal components can also be performed online and this approach can be useful for online outlier detection. A simulation study clearly shows that this robust indicat…
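The flavour of these recursive estimators can be conveyed with the simpler geometric median; the matrix case follows the same averaged stochastic-gradient pattern. A hedged one-pass sketch, with the step constant `c` chosen arbitrarily:

```python
import numpy as np

def online_geometric_median(X, c=1.0):
    """Recursive (Robbins-Monro) estimate of the geometric median, with averaging.

    One pass over the data, O(d) memory: each observation moves the iterate
    a decreasing step along the unit direction toward the point, and the
    averaged iterate is reported as the estimate.
    """
    m = X[0].astype(float)   # current iterate
    m_bar = m.copy()         # averaged iterate (the reported estimate)
    for n, x in enumerate(X[1:], start=1):
        diff = x - m
        norm = np.linalg.norm(diff)
        if norm > 0:
            m = m + (c / (n + 1)) * diff / norm   # unit step toward x
        m_bar = m_bar + (m - m_bar) / (n + 1)     # running average
    return m_bar

# bulk of the data near the origin, then a burst of gross outliers
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(500, 2)), np.full((20, 2), 50.0)])
print(online_geometric_median(X))   # stays near the origin, unlike the mean
```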
A Distribution-Free Two-Sample Equivalence Test Allowing for Tied Observations
1999
A new testing procedure is derived which makes it possible to assess the equivalence of two arbitrary noncontinuous distribution functions from which unrelated samples are taken as the data to be analyzed. The equivalence region is defined to consist of all pairs (F, G) of distribution functions such that, for independent X ∼ F, Y ∼ G, the conditional probability of {X > Y} given {X ≠ Y} lies in some short interval around 1/2. The test rejects the null hypothesis of nonequivalence if and only if the standardized distance between the U-statistics estimator of P[X > Y | X ≠ Y] and the center of the equivalence interval (1/2 - epsilon_1, 1/2 + epsilon_2) does not exceed a critical upper bound which has to be comput…
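The U-statistics estimator referred to above is straightforward to compute; a brute-force O(mn) sketch on hypothetical toy data (the standardization and the critical-bound computation are omitted):

```python
def conditional_winning_prob(x, y):
    """U-statistics estimate of P(X > Y | X != Y).

    Tied pairs are excluded from both numerator and denominator,
    which is what conditioning on {X != Y} amounts to.
    """
    gt = sum(1 for xi in x for yj in y if xi > yj)   # pairs with X > Y
    ne = sum(1 for xi in x for yj in y if xi != yj)  # non-tied pairs
    return gt / ne

# 4 of the 9 non-tied pairs have X > Y
print(conditional_winning_prob([1, 2, 2, 3], [1, 2, 4]))  # -> 4/9
```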
Properties of Design-Based Functional Principal Components Analysis
2010
This work aims at performing Functional Principal Components Analysis (FPCA) with Horvitz-Thompson estimators when the observations are curves collected with survey sampling techniques. One important motivation for this study is that FPCA is a dimension reduction tool which is the first step to develop model assisted approaches that can take auxiliary information into account. FPCA relies on the estimation of the eigenelements of the covariance operator which can be seen as nonlinear functionals. Adapting to our functional context the linearization technique based on the influence function developed by Deville (1999), we prove that these estimators are asymptotically design unbiased and con…