Search results for "ESTIMATOR"

Showing 10 of 313 documents

Conditionally heteroscedastic intensity-dependent marking of log Gaussian Cox processes

2009

Spatial marked point processes are models for systems of points which are randomly distributed in space and provided with measured quantities called marks. This study deals with marking, that is, with methods of constructing marked point processes from unmarked ones. The focus is on intensity-dependent marking, where the local point intensity affects the mark distribution. This study develops new markings for log Gaussian Cox processes. In these markings, both the mean and the variance of the mark distribution depend on the local intensity. The mean, variance and mark correlation properties are presented for the new markings, and a Bayesian estimation procedure is suggested for statistical inference. The p…

Keywords: Statistics and Probability; Bayes estimator; Heteroscedasticity; Gaussian; Variance; Point process; Statistical inference; Statistics, Probability and Uncertainty; Mathematics
Journal: Statistica Neerlandica

Poisson Regression with Change-Point Prior in the Modelling of Disease Risk around a Point Source

2003

Bayesian estimation of the risk of a disease around a known point source of exposure is considered. The minimal requirements for data are that cases and populations at risk are known for a fixed set of concentric annuli around the point source, and each annulus has a uniquely defined distance from the source. The conventional Poisson likelihood is assumed for the counts of disease cases in each annular zone, with zone-specific relative risk parameters, and, conditional on the risks, the counts are considered to be independent. The prior for the relative risk parameters is assumed to be piecewise constant in distance, with a known number of components. This prior is the well-known cha…
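The annular Poisson likelihood described above can be sketched as follows. The function name and the data are illustrative, and the change-point structure is reduced to a single known split (the paper places a prior over the number and location of components):

```python
import math

def annulus_loglik(cases, expected, risks, change_zone):
    """Poisson log-likelihood for case counts in concentric annuli.

    cases[i]    -- observed cases in annulus i
    expected[i] -- expected cases under reference rates
    risks       -- (inner, outer) piecewise-constant relative risks
    change_zone -- index of the first annulus using the outer risk
    All names and numbers are illustrative, not from the paper.
    """
    ll = 0.0
    for i, (o, e) in enumerate(zip(cases, expected)):
        theta = risks[0] if i < change_zone else risks[1]
        mu = theta * e                       # zone-specific Poisson mean
        ll += o * math.log(mu) - mu - math.lgamma(o + 1)
    return ll

# elevated risk near the source, baseline further out
cases = [12, 9, 5, 4, 6]
expected = [5.0, 6.0, 5.5, 4.0, 6.0]
ll_elevated = annulus_loglik(cases, expected, (2.0, 1.0), change_zone=2)
ll_flat = annulus_loglik(cases, expected, (1.0, 1.0), change_zone=2)
```

With these counts the elevated-risk model attains a higher log-likelihood than the flat-risk model, which is the kind of comparison the posterior computation rests on.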

Keywords: Statistics and Probability; Bayes estimator; Point source; Posterior probability; General Medicine; Conditional probability distribution; Poisson distribution; Prior probability; Poisson regression; Statistics, Probability and Uncertainty; Gibbs sampling; Mathematics
Journal: Biometrical Journal

A Log-Rank Test for Equivalence of Two Survivor Functions

1993

We consider a hypothesis testing problem in which the alternative states that the vertical distance between the underlying survivor functions nowhere exceeds some prespecified bound delta_0 > 0. Under the assumption of proportional hazards, this hypothesis is shown to be (logically) equivalent to the statement |beta| <= log(1 + epsilon), where beta denotes the regression coefficient associated with the treatment group indicator, and epsilon is a simple strictly increasing function of delta_0. The testing procedure proposed consists of carrying out, in terms of beta_hat (i.e., the standard Cox likelihood estimator of beta), the uniformly most powerful level alpha test for a suitable interval hypothesis about…
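Under asymptotic normality of the Cox estimate, the interval hypothesis |beta| <= log(1 + epsilon) can be checked with a crude two one-sided-tests sketch. This is a stand-in for, not a reproduction of, the paper's uniformly most powerful interval test, and all numbers are illustrative:

```python
import math

def equivalence_test(beta_hat, se, eps):
    """Two one-sided 5%-level normal tests of |beta| <= log(1 + eps).

    Rejects nonequivalence when the estimate is far enough inside the
    interval (-log(1+eps), log(1+eps)). A crude normal-approximation
    stand-in for the paper's UMP interval test; numbers illustrative.
    """
    bound = math.log(1.0 + eps)
    z = 1.645                                  # one-sided 5% normal quantile
    reject_upper = (beta_hat - bound) / se < -z   # H0: beta >= bound
    reject_lower = (beta_hat + bound) / se > z    # H0: beta <= -bound
    return reject_upper and reject_lower

# hazard ratio close to 1 supports equivalence; an estimate near the
# bound does not
near_null = equivalence_test(0.05, 0.1, 0.5)
near_bound = equivalence_test(0.50, 0.1, 0.5)
```

The two one-sided tests are conservative relative to the UMP test mentioned in the abstract, but they make the logic of the interval hypothesis concrete.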

Keywords: Statistics and Probability; Biometry; Gaussian; General Biochemistry, Genetics and Molecular Biology; Combinatorics; Neoplasms; Linear regression; Statistics; Chi-square test; Humans; Computer Simulation; Cerebellar Neoplasms; Child; Equivalence; Proportional Hazards Models; Statistical hypothesis testing; Mathematics; Clinical Trials as Topic; General Immunology and Microbiology; Applied Mathematics; Estimator; General Medicine; Survival Analysis; Log-rank test; Linear Models; General Agricultural and Biological Sciences; Medulloblastoma; Quantile
Journal: Biometrics

A fast and recursive algorithm for clustering large datasets with k-medians

2012

Clustering large samples of high-dimensional data with fast algorithms is an important challenge in computational statistics. Borrowing ideas from MacQueen (1967), who introduced a sequential version of the k-means algorithm, a new class of recursive stochastic gradient algorithms designed for the k-medians loss criterion is proposed. By their recursive nature, these algorithms are very fast and are well adapted to deal with large samples of data that are allowed to arrive sequentially. It is proved that the stochastic gradient algorithm converges almost surely to the set of stationary points of the underlying loss criterion. Particular attention is paid to the averaged versions, which…
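The recursive idea, one stochastic-gradient step per arriving point, can be sketched as below. The step-size constants and initialization are illustrative, not the paper's:

```python
import math
import random

def online_kmedians(stream, centers, c=1.0):
    """Recursive stochastic-gradient k-medians (a sketch of the general
    idea, not the paper's exact algorithm).

    Each arriving point pulls its nearest center a small step along the
    unit vector toward the point -- the negative gradient of the
    Euclidean-norm (k-medians) loss for that observation.
    """
    counts = [1] * len(centers)
    for x in stream:
        j = min(range(len(centers)), key=lambda k: math.dist(x, centers[k]))
        d = math.dist(x, centers[j])
        if d == 0:
            continue
        step = c / counts[j]                  # Robbins-Monro step size
        centers[j] = tuple(cj + step * (xi - cj) / d
                           for cj, xi in zip(centers[j], x))
        counts[j] += 1
    return centers

# two well-separated clusters, points arriving in shuffled order;
# for simplicity the centers are initialized near the clusters
random.seed(0)
stream = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(500)]
          + [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(500)])
random.shuffle(stream)
centers = online_kmedians(stream, [(0.0, 0.0), (10.0, 10.0)])
```

Because each update touches only one center and one point, the pass is O(nk) and nothing beyond the centers needs to be kept in memory, which is the appeal of the recursive formulation.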

Keywords: Statistics and Probability; Clustering high-dimensional data; Mathematical optimization; Machine Learning; Stochastic approximation; k-medoids; Computational statistics; recursive estimators; Almost surely; Cluster analysis; Mathematics; averaging; Robbins-Monro; Applied Mathematics; Estimator; stochastic gradient; Medoid; Computational Mathematics; Computational Theory and Mathematics; online clustering; partitioning around medoids; Algorithm

Robust estimation and regression with parametric quantile functions

2022

A new, broad family of quantile-based estimators is described, and theoretical and empirical evidence is provided for their robustness to outliers in the response. The proposed method can be used to estimate all types of parameters, including location, scale, rate and shape parameters, extremes, regression coefficients and hazard ratios, and can be extended to censored and truncated data. The described estimator can be utilized to construct robust versions of common parametric and semiparametric methods, such as linear (Normal) regression, generalized linear models, and proportional hazards models. A variety of significant results and applications is presented to show the flexibility of the…
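As a toy illustration of the quantile-based idea, a Normal model can be fitted by matching sample quantiles. This simple rule is neither the paper's estimator nor the Qest implementation, but it shows why outliers in the response have little effect:

```python
import random
import statistics

def normal_quantile_fit(sample):
    """Fit a Normal by quantile matching: the median estimates the mean,
    and the interquartile range divided by 1.349 (= 2 * Phi^{-1}(0.75)
    for the standard Normal) estimates the standard deviation.
    A toy quantile-based estimator, not the paper's q-estimator.
    """
    q1, q2, q3 = statistics.quantiles(sample, n=4)
    return q2, (q3 - q1) / 1.349

random.seed(1)
sample = [random.gauss(5.0, 2.0) for _ in range(2000)] + [1000.0] * 5
mu, sigma = normal_quantile_fit(sample)   # barely moved by the outliers
```

A handful of extreme responses shifts the sample mean and standard deviation dramatically, but leaves the quartiles, and hence these estimates, nearly unchanged.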

Keywords: Statistics and Probability; Computational Mathematics; Robust Cox model; Computational Theory and Mathematics; q-estimators; R package Qest; Applied Mathematics; Quantile-based estimation; Robust linear model

A new mathematical approach for the estimation of the AUC and its variability under different experimental designs in preclinical studies

2011

The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied in different preclinical experimental designs and is amenable to implementation in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results do not show statistical differences between the AUC values obtained by the two procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of 95% confi…
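For orientation, the standard linear trapezoidal AUC, together with a Bailer-type variance for designs where each time point is sampled in an independent group of animals, can be sketched as below. This is not the paper's new estimator, and all values are illustrative:

```python
def trapezoid_auc(times, conc):
    """Linear trapezoidal AUC from mean concentrations (the standard
    rule, shown for orientation; not the paper's new estimator)."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

def bailer_variance(times, sd, n):
    """Bailer-type variance of the trapezoidal AUC when each time point
    is measured in an independent group of n animals (illustrative)."""
    w = [(times[1] - times[0]) / 2.0]
    w += [(times[i + 1] - times[i - 1]) / 2.0 for i in range(1, len(times) - 1)]
    w += [(times[-1] - times[-2]) / 2.0]
    return sum(wi ** 2 * s ** 2 / n for wi, s in zip(w, sd))

times = [0, 1, 2, 4, 8]                 # sampling times (h), illustrative
conc = [0.0, 4.0, 3.0, 1.5, 0.5]        # mean concentrations (mg/L)
auc = trapezoid_auc(times, conc)        # -> 14.0
var = bailer_variance(times, [0.0, 1.0, 1.0, 0.5, 0.2], n=4)
```

Both functions are plain weighted sums, which is what makes this kind of calculation easy to carry in a spreadsheet, the setting the paper targets.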

Keywords: Statistics and Probability; Computer science; Drug Evaluation, Preclinical; Administration, Oral; Software; Ciprofloxacin; Area under curve; Variance estimation; Animals; Pharmacology (medical); Rats, Wistar; Pharmacology; Models, Statistical; Design of experiments; Estimator; Models, Theoretical; Confidence interval; Rats; Standard error; Research Design; Data mining
Journal: Pharmaceutical Statistics

Blind Source Separation Based on Joint Diagonalization in R: The Packages JADE and BSSasymp

2017

Blind source separation (BSS) is a well-known signal processing tool which is used to solve practical data analysis problems in various fields of science. In BSS, we assume that the observed data consists of linear mixtures of latent variables. The mixing system and the distributions of the latent variables are unknown. The aim is to find an estimate of an unmixing matrix which then transforms the observed data back to latent sources. In this paper we present the R packages JADE and BSSasymp. The package JADE offers several BSS methods which are based on joint diagonalization. Package BSSasymp contains functions for computing the asymptotic covariance matrices as well as their data-based es…

Keywords: Statistics and Probability; Computer science; JADE; Latent variable; Machine learning; Blind signal separation; Matrix; nonstationary source separation; Mixing; second order source separation; Signal processing; Mathematics; multivariate time series; Estimator; independent component analysis; performance indices; Artificial intelligence; Statistics, Probability and Uncertainty; Algorithm; Software
Journal: Journal of Statistical Software

Fast Estimation of the Median Covariation Matrix with Application to Online Robust Principal Components Analysis

2017

The geometric median covariation matrix is a robust multivariate indicator of dispersion which can be extended without any difficulty to functional data. We define estimators, based on recursive algorithms, that can be simply updated at each new observation and are able to deal rapidly with large samples of high dimensional data without being obliged to store all the data in memory. Asymptotic convergence properties of the recursive algorithms are studied under weak conditions. The computation of the principal components can also be performed online and this approach can be useful for online outlier detection. A simulation study clearly shows that this robust indicat…
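The L1-median building block behind the median covariation matrix admits a simple recursive, averaged stochastic-gradient sketch. Step sizes and the constant c are illustrative, not the paper's:

```python
import math
import random

def recursive_geometric_median(stream, c=1.0):
    """Averaged stochastic-gradient estimate of the geometric median
    (L1-median): the iterate m is nudged toward each new point along
    the unit direction, and the running average of the iterates is
    returned. A sketch with illustrative step sizes.
    """
    it = iter(stream)
    m = list(next(it))
    avg = m[:]
    for n, x in enumerate(it, start=2):
        d = math.dist(x, m)
        if d > 0:
            gamma = c / math.sqrt(n)              # decreasing step size
            m = [mi + gamma * (xi - mi) / d for mi, xi in zip(m, x)]
        avg = [a + (mi - a) / n for a, mi in zip(avg, m)]
    return avg

random.seed(3)
stream = ([(random.gauss(2.0, 1.0), random.gauss(-1.0, 1.0)) for _ in range(3000)]
          + [(20.0, 20.0)] * 30)          # a late burst of outliers
med = recursive_geometric_median(stream)  # stays near (2, -1)
```

Each update costs O(dimension) and needs only the current iterate and its average, which is exactly the no-full-storage property the abstract emphasizes.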

Keywords: Statistics and Probability; Computer science; Statistics Theory; Matrix; Dimension; Geometric median; Stochastic gradient; L1-median; Estimator; Covariance; Functional data; Principal component analysis; Projection pursuit; Anomaly detection; Recursive robust estimation; Statistics, Probability and Uncertainty; Algorithm; MSC 62G05, 62L20

A Distribution-Free Two-Sample Equivalence Test Allowing for Tied Observations

1999

A new testing procedure is derived which makes it possible to assess the equivalence of two arbitrary noncontinuous distribution functions from which unrelated samples are taken as the data to be analyzed. The equivalence region is defined to consist of all pairs (F, G) of distribution functions such that for independent X ∼ F, Y ∼ G the conditional probability of {X > Y} given {X ≠ Y} lies in some short interval around 1/2. The test rejects the null hypothesis of nonequivalence if and only if the standardized distance between the U-statistics estimator of P[X > Y | X ≠ Y] and the center of the equivalence interval (1/2 - e_1, 1/2 + e_2) does not exceed a critical upper bound which has to be comput…
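The U-statistic point estimator of P[X > Y | X ≠ Y] with ties is straightforward to sketch; the critical bound of the actual test requires the studentized statistic described in the paper:

```python
def win_probability(x, y):
    """U-statistic estimate of P[X > Y | X != Y] from two independent
    samples with ties allowed: strictly-greater pairs divided by
    unequal pairs. (The test itself compares a studentized version of
    this estimate with the equivalence interval around 1/2.)
    """
    gt = sum(1 for xi in x for yj in y if xi > yj)
    ne = sum(1 for xi in x for yj in y if xi != yj)
    return gt / ne

x = [1, 2, 2, 3, 4]          # illustrative tied data
y = [1, 2, 3, 3, 5]
p = win_probability(x, y)    # -> 0.4, to be compared with 1/2
```

Conditioning on {X ≠ Y} is what lets the procedure handle tied observations: the 5 tied pairs here simply drop out of the denominator.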

Keywords: Statistics and Probability; Conditional probability; Estimator; General Medicine; Upper and lower bounds; Combinatorics; Delta method; Distribution function; Sampling distribution; Statistics, Probability and Uncertainty; Equivalence; Mathematics; Noncentrality parameter
Journal: Biometrical Journal

Properties of Design-Based Functional Principal Components Analysis.

2010

This work aims at performing Functional Principal Components Analysis (FPCA) with Horvitz-Thompson estimators when the observations are curves collected with survey sampling techniques. One important motivation for this study is that FPCA is a dimension reduction tool which is the first step to develop model assisted approaches that can take auxiliary information into account. FPCA relies on the estimation of the eigenelements of the covariance operator which can be seen as nonlinear functionals. Adapting to our functional context the linearization technique based on the influence function developed by Deville (1999), we prove that these estimators are asymptotically design unbiased and con…
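The Horvitz-Thompson building block referred to above weights each sampled value by the inverse of its inclusion probability. A minimal sketch with illustrative numbers:

```python
def horvitz_thompson_total(y, pi):
    """Horvitz-Thompson estimator of a population total: each sampled
    value is weighted by 1/pi_i, the inverse of its first-order
    inclusion probability. This is the survey-sampling primitive the
    FPCA estimators build on; the numbers below are illustrative.
    """
    return sum(yi / p for yi, p in zip(y, pi))

y = [3.2, 1.8, 2.4]          # observed values in the sample
pi = [0.2, 0.5, 0.1]         # first-order inclusion probabilities
total = horvitz_thompson_total(y, pi)
```

Units that were unlikely to be sampled receive large weights, which makes the estimator design unbiased for the population total; the paper applies the same weighting to the covariance operator rather than to a scalar total.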

Keywords: Statistics and Probability; Statistics Theory; Perturbation theory; Variance estimation; Horvitz-Thompson estimator; Survey sampling; Linearization; Statistics; Consistent estimator; von Mises expansion; Applied mathematics; Mathematics; Eigenfunctions; Influence function; Applied Mathematics; Mathematical statistics; Estimator; Covariance operator; Covariance; Delta method; Model-assisted estimation; Statistics, Probability and Uncertainty