Search results for "Probability"

Showing 10 of 2176 documents

Centrality measures for networks with community structure

2016

Understanding network structure and identifying influential nodes are challenging problems in large networks. Identifying the most influential nodes can be useful in many applications, such as immunizing nodes against epidemic spreading or protecting against intentional attacks on complex networks. A great deal of research has been devoted to devising centrality measures that can efficiently identify the most influential nodes in a network. There are two major approaches to the problem: on the one hand, deterministic strategies exploit knowledge of the overall network topology in order to find the influential nodes, while on the other, random strategies are completely agnostic ab…
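As a minimal illustration of a deterministic, topology-based strategy, normalized degree centrality can be computed directly from an adjacency structure (the toy network below is illustrative, not from the paper):

```python
def degree_centrality(adj):
    """Normalized degree centrality: degree / (n - 1)."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

# Toy undirected network: node 0 is the hub of a star with one extra edge.
adj = {
    0: {1, 2, 3, 4},
    1: {0, 2},
    2: {0, 1},
    3: {0},
    4: {0},
}
centrality = degree_centrality(adj)
ranking = sorted(centrality, key=centrality.get, reverse=True)  # hub ranked first
```

More refined measures (betweenness, eigenvector centrality) follow the same pattern: score every node from the topology, then rank.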

Keywords: FOS: Computer and information sciences; FOS: Physical sciences; Statistics and Probability; Physics and Society (physics.soc-ph); Social and Information Networks (cs.SI); Network science; Network theory; Network topology; Machine learning; Artificial intelligence; Data mining; Complex networks; Community structure; Centrality; Immunization strategies; Epidemic dynamics; Condensed Matter Physics

Estimating with kernel smoothers the mean of functional data in a finite population setting. A note on variance estimation in presence of partially o…

2014

In the near future, millions of load curves measuring the electricity consumption of French households on a fine time grid (probably half-hourly) will be available. These collected load curves represent a huge amount of information which could be exploited using survey sampling techniques. In particular, the total consumption of a specific customer group (for example, all the customers of an electricity supplier) could be estimated using unequal-probability random sampling methods. Unfortunately, data collection may undergo technical problems resulting in missing values. In this paper we study a new estimation method for the mean curve in the presence of missing values which consists in…
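Unequal-probability sampling of this kind typically rests on the Horvitz–Thompson estimator; a minimal sketch with toy numbers (not the paper's data):

```python
def horvitz_thompson_total(y, pi):
    """Unbiased estimator of a population total: sampled values weighted
    by the inverse of their inclusion probabilities."""
    return sum(yk / pk for yk, pk in zip(y, pi))

# Two sampled consumption totals with inclusion probabilities 0.5 and 0.25:
# units that were unlikely to be sampled represent many similar units.
estimate = horvitz_thompson_total([10.0, 20.0], [0.5, 0.25])
```

For a mean curve, the same weighting is applied pointwise at each instant of the time grid.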

Keywords: FOS: Computer and information sciences; Statistics and Probability; Methodology (stat.ME); Survey sampling; Population; Ratio estimator; Linearization; Horvitz–Thompson estimator; Hájek estimator; Missing values; Missing data; Functional data; Nonparametric statistics; Nonparametric estimation; Kernel (statistics); Applied mathematics; Statistics, Probability and Uncertainty

Conditional Bias Robust Estimation of the Total of Curve Data by Sampling in a Finite Population: An Illustration on Electricity Load Curves

2020

Abstract For marketing or power grid management purposes, many studies based on the analysis of the total electricity consumption curves of groups of customers are now carried out by electricity companies. Aggregated totals or mean load curves are estimated using individual curves measured at a fine time grid and collected according to some sampling design. Due to the skewness of the distribution of electricity consumption, these samples often contain outlying curves which may have an important impact on the usual estimation procedures. We introduce several robust estimators of the total consumption curve which are not sensitive to such outlying curves. These estimators are based on the conditio…
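The effect of robustifying a total estimator can be illustrated with a simpler device than the paper's conditional-bias approach: capping each expanded contribution so that an outlying curve cannot dominate (a hypothetical stand-in, toy numbers):

```python
def winsorized_total(y, pi, cutoff):
    """Horvitz-Thompson total with each expanded contribution y_k/pi_k
    capped at `cutoff`. A simpler robustification in the same spirit as
    the conditional-bias estimators discussed in the paper."""
    return sum(min(yk / pk, cutoff) for yk, pk in zip(y, pi))

# One ordinary and one outlying consumption value, equal probabilities:
# without the cap the outlier contributes 2000, with it only 100.
robust = winsorized_total([10.0, 1000.0], [0.5, 0.5], cutoff=100.0)
```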

Keywords: FOS: Computer and information sciences; Statistics and Probability; Applications (stat.AP); Methodology (stat.ME); Survey sampling; Sampling (statistics); Population; Conditional bias; Kokic and Bell method; Modified band depth; Wavelets; Functional data; Bootstrap; Asymptotic confidence bands; Spherical principal component analysis; Electricity; Applied Mathematics; Statistics, Probability and Uncertainty; Social Sciences (miscellaneous)

Asymptotic and bootstrap tests for subspace dimension

2022

Most linear dimension reduction methods proposed in the literature can be formulated using an appropriate pair of scatter matrices, see e.g. Ye and Weiss (2003), Tyler et al. (2009), Bura and Yang (2011), Liski et al. (2014) and Luo and Li (2016). The eigen-decomposition of one scatter matrix with respect to another is then often used to determine the dimension of the signal subspace and to separate signal and noise parts of the data. Three popular dimension reduction methods, namely principal component analysis (PCA), fourth order blind identification (FOBI) and sliced inverse regression (SIR) are considered in detail and the first two moments of subsets of the eigenvalues are used to test…
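The eigen-decomposition step can be sketched for the simplest of the three methods, PCA, where the scatter matrix is the sample covariance and a trailing run of (near-)zero or equal eigenvalues points to the noise subspace (illustrative data, not from the paper):

```python
import numpy as np

def scatter_eigenvalues(X):
    """Descending eigenvalues of the sample covariance matrix, the
    scatter matrix used by PCA."""
    S = np.cov(X, rowvar=False)
    return np.sort(np.linalg.eigvalsh(S))[::-1]

# Three variables, but the third is constant: the signal subspace has
# dimension at most 2, visible as a zero trailing eigenvalue.
X = np.array([[1.0, 0.0, 5.0],
              [2.0, 1.0, 5.0],
              [3.0, 0.0, 5.0],
              [4.0, 1.0, 5.0]])
vals = scatter_eigenvalues(X)
```

The tests in the paper formalize this inspection: they compare moments of eigenvalue subsets against what equality of the trailing eigenvalues would imply.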

Keywords: FOS: Computer and information sciences; FOS: Mathematics; Statistics and Probability; Statistics Theory (math.ST); Methodology (stat.ME); Principal component analysis; Sliced inverse regression; Scatter matrix; Eigenvalues and eigenvectors; Dimension (vector space); Dimensionality reduction; Order determination; Signal subspace; Numerical Analysis; Estimation; Independent component analysis; Multivariate methods; Statistics, Probability and Uncertainty

A multi-scale area-interaction model for spatio-temporal point patterns

2018

Models for fitting spatio-temporal point processes should incorporate spatio-temporal inhomogeneity and allow for different types of interaction between points (clustering or regularity). This paper proposes an extension of the spatial multi-scale area-interaction model to a spatio-temporal framework. This model allows for interaction between points at different spatio-temporal scales and the inclusion of covariates. We fit the proposed model to varicella cases registered during 2013 in Valencia, Spain. The fitted model indicates small-scale clustering and regularity at larger spatio-temporal scales.
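The notion of interaction at a given spatio-temporal scale can be illustrated by counting event pairs that are close in both space and time, the kind of summary such models reward (clustering) or penalize (regularity). A hypothetical helper, not the paper's fitting code:

```python
from math import hypot

def close_pairs(events, r, lag):
    """Count pairs of (x, y, t) events within spatial distance r and
    temporal lag `lag` of each other."""
    count = 0
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            (x1, y1, t1), (x2, y2, t2) = events[i], events[j]
            if hypot(x1 - x2, y1 - y2) <= r and abs(t1 - t2) <= lag:
                count += 1
    return count

# Two cases close in space and time, one far away in both.
events = [(0.0, 0.0, 0.0), (0.0, 0.5, 0.5), (5.0, 5.0, 10.0)]
```

Evaluating such counts at several (r, lag) pairs is what makes the model multi-scale.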

Keywords: FOS: Computer and information sciences; Statistics and Probability; Methodology (stat.ME); Spatio-temporal point processes; Gibbs point processes; Multi-scale area-interaction model; Point process; Covariate; Cluster analysis; Varicella; Scale (ratio); Data mining; Management, Monitoring, Policy and Law; MSC: 60D05, 60G55, 62M30

Imputation Procedures in Surveys Using Nonparametric and Machine Learning Methods: An Empirical Comparison

2020

Abstract Nonparametric and machine learning methods are flexible methods for obtaining accurate predictions. Nowadays, data sets with a large number of predictors and complex structures are fairly common. In the presence of item nonresponse, nonparametric and machine learning procedures may thus provide a useful alternative to traditional imputation procedures for deriving a set of imputed values, which are then used to estimate study parameters defined as the solution of a population estimating equation. In this paper, we conduct an extensive empirical investigation that compares a number of imputation procedures in terms of bias and efficiency in a wide variety of settings, including high-dimens…
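Two of the simplest procedures in such a comparison, mean imputation and nearest-neighbour (donor) imputation on an auxiliary variable, can be sketched as follows (toy data; the helpers are hypothetical, not from the paper):

```python
def mean_impute(y):
    """Replace missing values (None) by the mean of the observed ones."""
    obs = [v for v in y if v is not None]
    m = sum(obs) / len(obs)
    return [m if v is None else v for v in y]

def nn_impute(x, y):
    """Fill each missing y_i with the y of the donor whose auxiliary
    variable x is closest to x_i."""
    donors = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    return [min(donors, key=lambda d: abs(d[0] - xi))[1] if yi is None else yi
            for xi, yi in zip(x, y)]
```

The machine-learning procedures in the paper generalize the donor idea: predict the missing value from the auxiliary information with a flexible model.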

Keywords: FOS: Computer and information sciences; Statistics and Probability; Methodology (stat.ME); Computation (stat.CO); Nonparametric statistics; Machine learning; Artificial intelligence; Imputation (statistics); Empirical comparison; Applied Mathematics; Statistics, Probability and Uncertainty; Social Sciences (miscellaneous); Journal of Survey Statistics and Methodology

An ensemble approach to short-term forecast of COVID-19 intensive care occupancy in Italian Regions

2020

Abstract The availability of intensive care beds during the COVID‐19 epidemic is crucial to guarantee the best possible treatment to severely affected patients. In this work we show a simple strategy for short‐term prediction of COVID‐19 intensive care unit (ICU) beds, that has proved very effective during the Italian outbreak in February to May 2020. Our approach is based on an optimal ensemble of two simple methods: a generalized linear mixed regression model, which pools information over different areas, and an area‐specific nonstationary integer autoregressive methodology. Optimal weights are estimated using a leave‐last‐out rationale. The approach has been set up and validated during t…
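For two forecasters, the leave-last-out rationale reduces to choosing the convex weight that would have best explained the last held-out observation. A schematic sketch (not the authors' code):

```python
def leave_last_out_weight(f1, f2, y):
    """Weight w in [0, 1] such that w*f1 + (1-w)*f2 matches the
    held-out observation y as closely as possible."""
    if f1 == f2:
        return 0.5
    w = (y - f2) / (f1 - f2)
    return min(1.0, max(0.0, w))

def ensemble(f1, f2, w):
    """Weighted combination of the two component forecasts."""
    return w * f1 + (1 - w) * f2
```

The fitted weight is then applied to the two components' forecasts for the next period.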

Keywords: FOS: Computer and information sciences; Statistics and Probability; Methodology (stat.ME); COVID-19; SARS-CoV-2; Pandemics; Intensive care units; Italy; Generalized linear mixed model; Integer autoregressive model; Weighted ensemble; Clustered data; Panel data; Forecasting; Nonlinear Dynamics; Reproducibility of Results; Time Factors; General Medicine; Settore SECS-S/01 - Statistica

A New Nonparametric Estimate of the Risk-Neutral Density with Applications to Variance Swaps

2021

We develop a new nonparametric approach for estimating the risk-neutral density of asset prices and reformulate its estimation into a double-constrained optimization problem. We evaluate our approach using S&P 500 market option prices from 1996 to 2015. A comprehensive cross-validation study shows that our approach outperforms the existing nonparametric quartic B-spline and cubic spline methods, as well as the parametric method based on the normal inverse Gaussian distribution. As an application, we use the proposed density estimator to price long-term variance swaps, and the model-implied prices match reasonably well with those of the variance futures downloaded from the CBOE websi…
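Any such estimator ultimately rests on the Breeden–Litzenberger relation, q(K) = e^{rT} ∂²C/∂K², linking the risk-neutral density to the curvature of call prices in strike. A finite-difference sketch on an equally spaced strike grid (toy prices, not the paper's estimator):

```python
from math import exp

def risk_neutral_density(strikes, calls, r, T):
    """Breeden-Litzenberger: discounted second central difference of
    call prices with respect to strike, on an equally spaced grid."""
    h = strikes[1] - strikes[0]
    disc = exp(r * T)
    return [disc * (calls[i - 1] - 2 * calls[i] + calls[i + 1]) / h ** 2
            for i in range(1, len(calls) - 1)]

# Call prices quadratic in strike => constant curvature, hence a flat density.
density = risk_neutral_density([0, 1, 2, 3, 4], [9.0, 4.0, 1.0, 0.0, 1.0], 0.0, 1.0)
```

The paper's contribution is to estimate the call curve under shape constraints before differentiating, which stabilizes this second derivative.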

Keywords: FOS: Computer and information sciences; FOS: Economics and business; Statistics and Probability; Applications (stat.AP); Pricing of Securities (q-fin.PR); Risk-neutral density; Variance swap; Double-constrained optimization; Optimization problem; Normal inverse Gaussian distribution; Pricing; Estimator; Nonparametric statistics; Econometrics; Applied Mathematics

KFAS : Exponential Family State Space Models in R

2017

State space modelling is an efficient and flexible method for statistical inference of a broad class of time series and other data. This paper describes an R package KFAS for state space modelling with the observations from an exponential family, namely Gaussian, Poisson, binomial, negative binomial and gamma distributions. After introducing the basic theory behind Gaussian and non-Gaussian state space models, an illustrative example of Poisson time series forecasting is provided. Finally, a comparison to alternative R packages suitable for non-Gaussian time series modelling is presented.
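KFAS itself is an R package; the Gaussian special case it handles, the local level model y_t = μ_t + ε_t, μ_{t+1} = μ_t + η_t, can be filtered in a few lines. A language-neutral sketch, written here in Python rather than R:

```python
def local_level_filter(y, sigma_eps2, sigma_eta2, a1=0.0, p1=1e7):
    """Kalman filter for the Gaussian local level model, with a
    diffuse-like initialization (large p1)."""
    a, p = a1, p1
    filtered = []
    for yt in y:
        f = p + sigma_eps2            # prediction variance of y_t
        k = p / f                     # Kalman gain
        a = a + k * (yt - a)          # filtered state estimate
        p = p * (1 - k) + sigma_eta2  # next one-step prediction variance
        filtered.append(a)
    return filtered
```

For non-Gaussian observations (Poisson, binomial, negative binomial, gamma), KFAS replaces this exact recursion with approximation and importance sampling; the Gaussian recursion above remains the core building block.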

Keywords: FOS: Computer and information sciences; Statistics and Probability; Methodology (stat.ME); Computation (stat.CO); State space models; Exponential family; Dynamic linear models; Time series; Forecasting; Statistical inference; Gaussian; Poisson distribution; Negative binomial distribution; Gamma distribution; R; Software; Probability Theory and Statistics

Community characterization of heterogeneous complex systems

2011

We introduce an analytical statistical method to characterize the communities detected in heterogeneous complex systems. By posing a suitable null hypothesis, our method makes use of the hypergeometric distribution to assess the probability that a given property is over-expressed in the elements of a community with respect to all the elements of the investigated set. We apply our method to two specific complex networks, namely a network of world movies and a network of physics preprints. The characterization of the elements and of the communities is done in terms of languages and countries for the movie network and of journals and subject categories for papers. We find that our method is ab…
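The over-expression test described here reduces to an upper-tail hypergeometric probability: with N elements in the system, K carrying the property, and a community of size n containing k of them, one computes P(X ≥ k). A direct sketch of that computation:

```python
from math import comb

def overexpression_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the chance that a
    random community of size n contains at least k of the K elements
    carrying the property."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total
```

A small p-value means the property is over-expressed in the community relative to the null hypothesis of random assignment.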

Keywords: FOS: Computer and information sciences; FOS: Physical sciences; Statistics and Probability; Physics and Society (physics.soc-ph); Social and Information Networks (cs.SI); Data Analysis, Statistics and Probability (physics.data-an); Random graphs; Statistical inference; Socio-economic networks; Complex systems; Complex networks; Hypergeometric distribution; Null hypothesis; Statistical and Nonlinear Physics; Theoretical computer science; Journal of Statistical Mechanics: Theory and Experiment