Search results for "Statistics & Probability"

Showing 10 of 436 documents

A fast and recursive algorithm for clustering large datasets with k-medians

2012

Clustering large samples of high-dimensional data with fast algorithms is an important challenge in computational statistics. Borrowing ideas from MacQueen (1967), who introduced a sequential version of the $k$-means algorithm, a new class of recursive stochastic gradient algorithms designed for the $k$-medians loss criterion is proposed. By their recursive nature, these algorithms are very fast and are well adapted to deal with large samples of data that are allowed to arrive sequentially. It is proved that the stochastic gradient algorithm converges almost surely to the set of stationary points of the underlying loss criterion. Particular attention is paid to the averaged versions, which…
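The recursive scheme this abstract describes can be illustrated in a few lines. The sketch below is a minimal, hypothetical version (not the authors' exact algorithm): each arriving point moves its nearest centre one Robbins-Monro step along the unit vector pointing at the point, i.e. down the gradient of the $k$-medians loss, with step sizes decaying as `c0 / n**alpha`.

```python
import math


def online_k_medians(points, k, c0=0.5, alpha=0.75):
    """Sketch of a recursive (MacQueen-style) k-medians pass.

    Each arriving point updates its nearest centre by a stochastic
    gradient step of the geometric-median loss: a step of length
    gamma along the unit vector towards the point.  Illustrative
    sketch only; names and step-size constants are hypothetical.
    """
    it = iter(points)
    centres = [list(next(it)) for _ in range(k)]  # seed with first k points
    counts = [1] * k
    for x in it:
        # assign the point to its nearest centre
        j = min(range(k), key=lambda i: math.dist(centres[i], x))
        counts[j] += 1
        gamma = c0 / counts[j] ** alpha  # Robbins-Monro step size
        d = math.dist(centres[j], x)
        if d > 0:
            for t in range(len(x)):
                centres[j][t] += gamma * (x[t] - centres[j][t]) / d
    return centres
```

Because each point is touched once and then discarded, the pass runs in a single sweep over a stream, which is the point of the recursive formulation.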

Keywords: statistics and probability; clustering high-dimensional data; mathematical optimization; machine learning; stochastic approximation; computational statistics; k-medoids; partitioning around medoids; recursive estimators; averaging; Robbins-Monro; stochastic gradient; online clustering; cluster analysis; almost-sure convergence

Online Principal Component Analysis in High Dimension: Which Algorithm to Choose?

2017

Principal component analysis (PCA) is a method of choice for dimension reduction. In the current context of data explosion, online techniques that do not require storing all data in memory are indispensable to perform the PCA of streaming data and/or massive data. Despite the wide availability of recursive algorithms that can efficiently update the PCA when new data are observed, the literature offers little guidance on how to select a suitable algorithm for a given application. This paper reviews the main approaches to online PCA, namely, perturbation techniques, incremental methods and stochastic optimisation, and compares the most widely employed techniques in terms of statistical a…
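As a flavour of the stochastic-optimisation family reviewed here, the following is a minimal sketch of Oja's rule, a classical online estimator of the first principal component. All names and constants are hypothetical; it is not one of the paper's benchmarked implementations.

```python
import math
import random


def oja_first_pc(stream, dim, gamma0=0.05, seed=0):
    """Online PCA sketch via Oja's rule for the first principal
    component of centred streaming data.  Each sample nudges the
    weight vector along (w.x) * x and renormalises, so no data
    matrix is ever stored.  Illustrative sketch only.
    """
    rng = random.Random(seed)
    w = [rng.gauss(0, 1) for _ in range(dim)]  # random start direction
    n = 0
    for x in stream:
        n += 1
        gamma = gamma0 / math.sqrt(n)          # decaying step size
        y = sum(wi * xi for wi, xi in zip(w, x))  # projection w.x
        w = [wi + gamma * y * xi for wi, xi in zip(w, x)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        w = [wi / norm for wi in w]            # keep w on the unit sphere
    return w
```

The memory footprint is a single vector of length `dim`, which is what makes this family attractive for the streaming setting the paper studies.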

Keywords: statistics and probability; principal component analysis; dimensionality reduction; incremental methods; streaming data; data explosion; missing data; eigendecomposition of a matrix
Journal: International Statistical Review

Blind Source Separation Based on Joint Diagonalization in R: The Packages JADE and BSSasymp

2017

Blind source separation (BSS) is a well-known signal processing tool which is used to solve practical data analysis problems in various fields of science. In BSS, we assume that the observed data consists of linear mixtures of latent variables. The mixing system and the distributions of the latent variables are unknown. The aim is to find an estimate of an unmixing matrix which then transforms the observed data back to latent sources. In this paper we present the R packages JADE and BSSasymp. The package JADE offers several BSS methods which are based on joint diagonalization. Package BSSasymp contains functions for computing the asymptotic covariance matrices as well as their data-based es…
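The joint-diagonalization idea can be illustrated, for two channels and second-order statistics, by an AMUSE-style procedure: whiten the mixtures, then diagonalise a symmetrised lagged covariance of the whitened data. The sketch below is a hypothetical Python illustration of that idea, not the packages' implementation (which is in R).

```python
import math


def eig2x2_sym(a, b, c):
    """Rotation angle whose rotation diagonalises [[a, b], [b, c]]."""
    return 0.5 * math.atan2(2 * b, a - c)


def amuse_2x2(x1, x2, lag=1):
    """Second-order BSS sketch for two mixed channels (AMUSE-style):
    whiten, then rotate by the eigenbasis of a symmetrised lag-`lag`
    covariance.  Sources are recovered up to sign, scale and order.
    Illustrative sketch only."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    x1 = [v - m1 for v in x1]
    x2 = [v - m2 for v in x2]
    # sample covariance of the mixtures
    c11 = sum(v * v for v in x1) / n
    c22 = sum(v * v for v in x2) / n
    c12 = sum(u * v for u, v in zip(x1, x2)) / n
    # whitening: project on the covariance eigenbasis, scale to unit variance
    t = eig2x2_sym(c11, c12, c22)
    ct, st = math.cos(t), math.sin(t)
    e1 = c11 * ct * ct + 2 * c12 * ct * st + c22 * st * st
    e2 = c11 * st * st - 2 * c12 * ct * st + c22 * ct * ct
    z1 = [(ct * u + st * v) / math.sqrt(e1) for u, v in zip(x1, x2)]
    z2 = [(-st * u + ct * v) / math.sqrt(e2) for u, v in zip(x1, x2)]
    # symmetrised lagged covariance of the whitened data
    m = n - lag
    l11 = sum(z1[i] * z1[i + lag] for i in range(m)) / m
    l22 = sum(z2[i] * z2[i + lag] for i in range(m)) / m
    l12 = 0.5 * (sum(z1[i] * z2[i + lag] for i in range(m))
                 + sum(z2[i] * z1[i + lag] for i in range(m))) / m
    r = eig2x2_sym(l11, l12, l22)
    cr, sr = math.cos(r), math.sin(r)
    s1 = [cr * u + sr * v for u, v in zip(z1, z2)]
    s2 = [-sr * u + cr * v for u, v in zip(z1, z2)]
    return s1, s2
```

With more than two channels or several lags, one joint-diagonalizes a whole set of matrices, which is exactly the job the JADE package's routines do.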

Keywords: statistics and probability; blind signal separation; joint diagonalization; independent component analysis; multivariate time series; nonstationary source separation; second-order source separation; performance indices; signal processing; latent variables; R; software
Journal: Journal of Statistical Software

Fast Estimation of the Median Covariation Matrix with Application to Online Robust Principal Components Analysis

2017

The geometric median covariation matrix is a robust multivariate indicator of dispersion which can be extended without any difficulty to functional data. We define estimators, based on recursive algorithms, that can be simply updated at each new observation and are able to deal rapidly with large samples of high dimensional data without being obliged to store all the data in memory. Asymptotic convergence properties of the recursive algorithms are studied under weak conditions. The computation of the principal components can also be performed online and this approach can be useful for online outlier detection. A simulation study clearly shows that this robust indicat…
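The recursive estimators described here build on the averaged stochastic-gradient estimate of the geometric median. A minimal sketch of that building block, for point data rather than the covariation matrix itself (names and constants hypothetical):

```python
import math


def recursive_geometric_median(points, gamma0=1.0, alpha=0.66):
    """Averaged stochastic-gradient sketch of an online geometric
    median: each observation moves the iterate one step of length
    gamma along the unit vector towards it, and the Polyak average
    of the iterates is returned.  One pass, O(dim) memory.
    Illustrative sketch only."""
    it = iter(points)
    m = list(next(it))       # current iterate
    avg = list(m)            # running Polyak average
    n = 1
    for x in it:
        n += 1
        d = math.dist(m, x)
        if d > 0:
            gamma = gamma0 / n ** alpha
            m = [mi + gamma * (xi - mi) / d for mi, xi in zip(m, x)]
        avg = [ai + (mi - ai) / n for ai, mi in zip(avg, m)]
    return avg
```

Because each step has bounded length regardless of how far the observation lies, a small fraction of gross outliers barely moves the estimate, which is the robustness property the abstract exploits.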

Keywords: statistics and probability; geometric median; L1-median; median covariation matrix; recursive robust estimation; stochastic gradient; functional data; principal component analysis; projection pursuit; anomaly detection; covariance; estimators; MSC 62G05, 62L20

Anthropometry: An R Package for Analysis of Anthropometric Data

2017

The development of powerful new 3D scanning techniques has enabled the generation of large up-to-date anthropometric databases which provide highly valued data to improve the ergonomic design of products adapted to the user population. As a consequence, Ergonomics and Anthropometry are two increasingly quantitative fields, so advanced statistical methodologies and modern software tools are required to get the maximum benefit from anthropometric data. This paper presents a new R package, called Anthropometry, which is available on the Comprehensive R Archive Network. It brings together some statistical methodologies concerning clustering, statistical shape analysis, statistical archetypal an…

Keywords: statistics and probability; anthropometric data; anthropometry; clustering; statistical shape analysis; archetypal analysis; data depth; human factors and ergonomics; R; software
Journal: Journal of Statistical Software

Sequential Monte Carlo methods in Bayesian joint models for longitudinal and time-to-event data

2020

The statistical analysis of the information generated by medical follow-up is a very important challenge in the field of personalized medicine. As the evolutionary course of a patient's disease progresses, his/her medical follow-up generates more and more information that should be processed immediately in order to review and update his/her prognosis and treatment. Hence, we focus on this update process through sequential inference methods for joint models of longitudinal and time-to-event data from a Bayesian perspective. More specifically, we propose the use of sequential Monte Carlo (SMC) methods for static parameter joint models with the intention of reducing computational time in each…
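The propagate/reweight/resample cycle at the heart of sequential Monte Carlo can be shown on a toy state-space model. The sketch below is a hypothetical bootstrap particle filter for a Gaussian random walk observed with noise; the paper's static-parameter algorithm for joint models is considerably more involved.

```python
import math
import random


def bootstrap_filter(obs, n_particles=500, state_sd=0.3, obs_sd=0.5, seed=0):
    """SMC sketch: bootstrap particle filter for
        x_t = x_{t-1} + N(0, state_sd^2),   y_t = x_t + N(0, obs_sd^2).
    At each step, particles are propagated through the state model,
    reweighted by the observation likelihood, and resampled.
    Returns the sequence of filtered posterior means.
    Illustrative sketch only."""
    rng = random.Random(seed)
    parts = [rng.gauss(obs[0], obs_sd) for _ in range(n_particles)]
    means = []
    for y in obs:
        # propagate through the state transition
        parts = [p + rng.gauss(0, state_sd) for p in parts]
        # reweight by the Gaussian observation likelihood
        w = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(wi * p for wi, p in zip(w, parts)))
        # multinomial resampling
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means
```

Each new observation costs O(number of particles), so the posterior is updated sequentially rather than refitted from scratch, which is the computational saving the abstract targets.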

Keywords: statistics and probability; Bayesian probability; sequential Monte Carlo methods; joint models; event data; statistical analysis; machine learning; personalized medicine
Journal: Statistical Modelling

Modeling accident risk at the road level through zero-inflated negative binomial models: A case study of multiple road networks

2021

This paper presents a case study carried out in multiple cities of the Valencian Community (Spain) to determine the effect of sociodemographic and road characteristics on traffic accident risk. The analyses are performed at the road segment level, considering the linear network representing the road structure of each city as a spatial lattice. The number of accidents observed in each road segment from 2010 to 2019 is taken as the response variable, and a zero-inflated modeling approach is considered. Count overdispersion and spatial dependence are also accounted for. Despite the complexity and sparsity of the data, the fitted models performed considerably well, with few exceptions.…
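The zero-inflated negative binomial model used for such count data mixes a structural-zero component with an overdispersed count component. A minimal, hypothetical sampler makes the mechanism concrete (a simulation of the model, not the paper's fitting procedure):

```python
import math
import random


def sample_poisson(lam, rng):
    """Knuth's Poisson sampler (fine for the small means used here)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1


def sample_zinb(pi_zero, mu, r, n, seed=0):
    """Zero-inflated negative binomial sketch: with probability
    pi_zero emit a structural zero, otherwise draw NB(mean=mu,
    dispersion=r) via its Gamma-Poisson mixture representation.
    Illustrative of the count model, not the fitted model."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        if rng.random() < pi_zero:
            out.append(0)           # structural zero
        else:
            lam = rng.gammavariate(r, mu / r)  # Gamma(shape=r, scale=mu/r)
            out.append(sample_poisson(lam, rng))
    return out
```

The zero probability is pi_zero + (1 - pi_zero) * (r / (r + mu))**r, strictly larger than the negative binomial's own zero probability, which is exactly the excess-zero feature of sparse accident counts.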

Keywords: statistics and probability; traffic accidents; accident risk; zero-inflated models; overdispersion; covariates; spatial dependence; linear networks; lattice structure; integrated nested Laplace approximation

Properties of Design-Based Functional Principal Components Analysis.

2010

This work aims at performing Functional Principal Components Analysis (FPCA) with Horvitz-Thompson estimators when the observations are curves collected with survey sampling techniques. One important motivation for this study is that FPCA is a dimension reduction tool which is the first step to develop model assisted approaches that can take auxiliary information into account. FPCA relies on the estimation of the eigenelements of the covariance operator, which can be seen as nonlinear functionals. Adapting the linearization technique based on the influence function, developed by Deville (1999), to our functional context, we prove that these estimators are asymptotically design unbiased and con…
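The Horvitz-Thompson building block itself is simple: each sampled value is weighted by the inverse of its inclusion probability, which makes the estimator design unbiased. A minimal, hypothetical sketch for a scalar total under Poisson sampling (the paper applies the same weighting to curves):

```python
import random


def horvitz_thompson_total(sample):
    """Horvitz-Thompson estimator of a population total from a list
    of (value, inclusion_probability) pairs for the sampled units."""
    return sum(y / pi for y, pi in sample)


def poisson_sample(ys, pis, rng):
    """Poisson sampling design: unit i enters the sample
    independently with probability pis[i]."""
    return [(y, pi) for y, pi in zip(ys, pis) if rng.random() < pi]
```

Averaging the estimator over repeated draws of the design recovers the true total, a Monte Carlo check of the design-unbiasedness property the abstract proves asymptotically for the eigenelements.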

Keywords: statistics and probability; survey sampling; Horvitz-Thompson estimator; functional principal components analysis; covariance operator; eigenfunctions; linearization; influence function; von Mises expansion; delta method; perturbation theory; variance estimation; model-assisted estimation; consistent estimators; mathematical statistics

Spanish electoral archive. SEA database

2021

This paper introduces the SEA database (acronym for Spanish Electoral Archive). SEA brings together the most complete public repository available to date on Spanish election outcomes. SEA holds all the results recorded from the electoral processes of General (1979–2019), Regional (1989–2021), Local (1979–2019) and European Parliamentary (1987–2019) elections held in Spain since the restoration of democracy in the late 1970s, in addition to other data sets with electoral content. The data are offered for free and are presented in a homogeneous and friendly format. Most of the databases are available for download with data from various electoral levels, including from the ballot box level. This…

Keywords: statistics and probability; data descriptor; elections; voting; turnout; ballot; politics; government; democracy; society; inference; metadata; economics, statistical methods
Journal: Scientific Data

Ergodicity for a stochastic Hodgkin–Huxley model driven by Ornstein–Uhlenbeck type input

2013

We consider a model describing a neuron and the input it receives from its dendritic tree when this input is a random perturbation of a periodic deterministic signal, driven by an Ornstein-Uhlenbeck process. The neuron itself is modeled by a variant of the classical Hodgkin-Huxley model. Using the existence of an accessible point where the weak Hörmander condition holds and the fact that the coefficients of the system are analytic, we show that the system is non-degenerate. The existence of a Lyapunov function allows us to deduce the existence of (at most a finite number of) extremal invariant measures for the process. As a consequence, the complexity of the system is drastically reduced in c…
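The stochastic input process can be simulated directly. The sketch below is a hypothetical Euler-Maruyama discretisation of the Ornstein-Uhlenbeck input dX_t = theta (mu(t) - X_t) dt + sigma dW_t, where `drive` may supply a periodic deterministic signal mu(t) as in the model above; it simulates the input only, not the full Hodgkin-Huxley system.

```python
import math
import random


def simulate_ou(theta, mu, sigma, x0, dt, n, seed=0, drive=None):
    """Euler-Maruyama sketch of the Ornstein-Uhlenbeck process
        dX_t = theta * (mu(t) - X_t) dt + sigma dW_t.
    `drive`, if given, is a callable t -> mu(t) (e.g. a periodic
    signal); otherwise the constant mean `mu` is used, in which
    case the stationary variance is sigma**2 / (2 * theta).
    Illustrative sketch only."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    sq = math.sqrt(dt)
    for i in range(n):
        m = mu if drive is None else drive(i * dt)
        # drift pulls towards the (possibly time-varying) mean;
        # diffusion adds Gaussian noise scaled by sqrt(dt)
        x += theta * (m - x) * dt + sigma * sq * rng.gauss(0, 1)
        path.append(x)
    return path
```

With a periodic `drive`, the simulated trajectory is a noisy oscillation around the deterministic signal, the regime whose long-run (periodically ergodic) behaviour the paper analyses.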

Keywords: statistics and probability; stochastic Hodgkin–Huxley model; Ornstein–Uhlenbeck process; ergodicity; periodic ergodicity; degenerate diffusion processes; time-inhomogeneous diffusion processes; weak Hörmander condition; probability; MSC 60H07, 60J25, 60J60