Search results for "Methodology"

Showing 10 of 852 documents

Isotonic regression for metallic microstructure data: estimation and testing under order restrictions

2021

Investigating the main determinants of the mechanical performance of metals is not a simple task. Already-known, physically inspired qualitative relations between 2D microstructure characteristics and 3D mechanical properties can serve as the starting point of the investigation. Isotonic regression makes it possible to take ordering relations into account, and it leads to more efficient and accurate results when the underlying assumptions actually hold. The main goal of this paper is to test order relations in a model inspired by a materials science application. The statistical estimation procedure is described for three different scenarios according to the knowledge of the variances: known variance ra…
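A minimal sketch of the kind of order-restricted fit the abstract refers to, on illustrative synthetic data (this is not the paper's procedure, just the basic isotonic-regression idea):

```python
# Hedged sketch: isotonic (order-restricted) regression on synthetic data.
# The variable names and data are illustrative only.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)                 # e.g. a microstructure characteristic
y = 2.0 * x + rng.normal(scale=0.3, size=50)  # noisy, monotonically increasing response

iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)

# The fitted values respect the order restriction: they are nondecreasing.
assert np.all(np.diff(y_fit) >= 0)
```

When the monotonicity assumption actually holds, this constrained fit is typically less variable than an unconstrained one, which is the efficiency gain the abstract mentions.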

Subjects: FOS: Computer and information sciences; Statistics and Probability; Statistics - Applications (stat.AP); Statistics - Methodology (stat.ME); Mathematical optimization; Isotonic regression; Order restrictions; Bootstrap; Likelihood-ratio test; Microstructure; Alternating iterative method; Geometrically necessary dislocations; MSC: 62F30, 62F03, 97K80

Bayesian Checking of the Second Levels of Hierarchical Models

2007

Hierarchical models are increasingly used in many applications. Along with this increased use comes a desire to investigate whether the model is compatible with the observed data. Bayesian methods are well suited to eliminate the many (nuisance) parameters in these complicated models; in this paper we investigate Bayesian methods for model checking. Since we contemplate model checking as a preliminary, exploratory analysis, we concentrate on objective Bayesian methods in which careful specification of an informative prior distribution is avoided. Numerous examples are given and different proposals are investigated and critically compared.
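One common check in this family is the posterior predictive p-value. A hedged sketch for a deliberately simple normal model (the paper's checks for the second level of a hierarchy are more involved; data and model here are illustrative):

```python
# Hedged sketch: a posterior predictive p-value for a simple normal model
# with known sigma = 1 and a flat prior on the mean. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=0.0, scale=1.0, size=30)   # observed data
n = y.size

# Under a flat prior with known sigma = 1: mu | y ~ Normal(ybar, 1/n).
mu_post = rng.normal(loc=y.mean(), scale=1.0 / np.sqrt(n), size=5000)

# Discrepancy measure: the maximum observation.
t_obs = y.max()
y_rep = rng.normal(loc=mu_post[:, None], scale=1.0, size=(5000, n))
t_rep = y_rep.max(axis=1)

p_value = np.mean(t_rep >= t_obs)  # values near 0 or 1 flag misfit
```

The abstract's point about nuisance parameters is visible even here: `mu` is integrated out by simulation rather than plugged in.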

Subjects: FOS: Computer and information sciences; Statistics and Probability; Statistics - Methodology (stat.ME); Model checking; Model criticism; Conflict; General Mathematics; Bayesian probability; Machine learning; Partial posterior predictive; Posterior predictive; Prior probability; Probability and statistics; Exploratory analysis; Objective Bayesian methods; Empirical Bayes; p-values; Artificial intelligence; Statistics, Probability and Uncertainty

Bootstrap validation of links of a minimum spanning tree

2018

We describe two different bootstrap methods applied to the detection of a minimum spanning tree obtained from a set of multivariate variables. We show that two different bootstrap procedures provide partly distinct information that can be highly informative about the investigated complex system. Our case study, based on the investigation of daily returns of a portfolio of stocks traded in the US equity markets, shows the degree of robustness and completeness of the information extracted with popular information filtering methods such as the minimum spanning tree and the planar maximally filtered graph. The first method performs a "row bootstrap" whereas the second method performs a "pair bo…
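The "row bootstrap" mentioned above can be sketched as follows: resample days with replacement, recompute the MST of the correlation-based distance, and count how often each original link reappears (synthetic data; the paper's pair bootstrap and its financial application are not reproduced here):

```python
# Hedged sketch of a row bootstrap for MST link validation. Illustrative only.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edges(returns):
    corr = np.corrcoef(returns, rowvar=False)
    dist = np.sqrt(2.0 * (1.0 - corr))           # standard correlation distance
    tree = minimum_spanning_tree(dist).toarray()
    return {tuple(sorted(e)) for e in zip(*np.nonzero(tree))}

rng = np.random.default_rng(2)
X = rng.normal(size=(250, 6))                    # 250 days, 6 stocks (synthetic)
original = mst_edges(X)

counts = {e: 0 for e in original}
for _ in range(100):
    Xb = X[rng.integers(0, X.shape[0], X.shape[0])]  # resample rows (days)
    for e in mst_edges(Xb) & original:
        counts[e] += 1   # bootstrap support for each original link
```

Links with high bootstrap counts are the robust part of the filtered network; low counts flag links that are sensitive to the sample.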

Subjects: FOS: Computer and information sciences; Statistics and Probability; Statistics - Methodology (stat.ME); Multivariate statistics; Correlation coefficient; Covariance matrix; Complex system; Minimum spanning tree; Bootstrap; Planar maximally filtered graph; Information filtering; Proximity-based networks; Random matrix theory; Condensed Matter Physics; Settore FIS/07 - Fisica Applicata (Beni Culturali, Ambientali, Biol. e Medicin.); Mathematics

Panel Data Analysis via Mechanistic Models

2018

Panel data, also known as longitudinal data, consist of a collection of time series. Each time series, which could itself be multivariate, comprises a sequence of measurements taken on a distinct unit. Mechanistic modeling involves writing down scientifically motivated equations describing the collection of dynamic systems giving rise to the observations on each unit. A defining characteristic of panel systems is that the dynamic interaction between units should be negligible. Panel models therefore consist of a collection of independent stochastic processes, generally linked through shared parameters while also having unit-specific parameters. To give the scientist flexibility in model spe…

Subjects: FOS: Computer and information sciences; Statistics and Probability; Statistics - Methodology (stat.ME); Multivariate statistics; Series (mathematics); Longitudinal data; Panel data; Nonlinear system; Particle filter; Algorithm; Sequence (medicine); Econometrics; Statistics, Probability and Uncertainty; Journal of the American Statistical Association

Estimating with kernel smoothers the mean of functional data in a finite population setting. A note on variance estimation in presence of partially o…

2014

In the near future, millions of load curves measuring the electricity consumption of French households on small time grids (probably half hours) will be available. All these collected load curves represent a huge amount of information which could be exploited using survey sampling techniques. In particular, the total consumption of a specific customer group (for example all the customers of an electricity supplier) could be estimated using unequal probability random sampling methods. Unfortunately, data collection may undergo technical problems resulting in missing values. In this paper we study a new estimation method for the mean curve in the presence of missing values which consists in…
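The unequal-probability survey-sampling setting can be sketched with a Horvitz-Thompson estimator of the mean curve (synthetic curves and Poisson sampling for simplicity; the paper's kernel-smoothing treatment of missing values is not reproduced here):

```python
# Hedged sketch: Horvitz-Thompson estimation of a population mean curve
# under unequal-probability sampling. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
N, n, T = 1000, 100, 48                        # population, expected sample, half-hour grid
curves = rng.gamma(shape=2.0, scale=1.0, size=(N, T))  # synthetic load curves

# Inclusion probabilities proportional to each unit's total consumption.
pi = n * curves.sum(axis=1) / curves.sum()
pi = np.clip(pi, None, 1.0)

sample = rng.random(N) < pi                    # Poisson sampling for simplicity
# Horvitz-Thompson estimator of the population mean curve, pointwise in t:
mu_ht = (curves[sample] / pi[sample, None]).sum(axis=0) / N
```

Each sampled curve is weighted by the inverse of its inclusion probability, which is what makes the estimator design-unbiased despite the unequal probabilities.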

Subjects: FOS: Computer and information sciences; Statistics and Probability; Statistics - Methodology (stat.ME); [STAT.ME] Statistics [stat]/Methodology [stat.ME]; Population; Ratio estimator; Linearization; Survey sampling; Horvitz-Thompson estimator; Hájek estimator; Missing values; Missing data; Applied mathematics; Nonparametric statistics; Nonparametric estimation; Estimator; Kernel (statistics); Functional data; Pointwise; Statistics, Probability and Uncertainty; Mathematics

Conditional Bias Robust Estimation of the Total of Curve Data by Sampling in a Finite Population: An Illustration on Electricity Load Curves

2020

For marketing or power grid management purposes, many studies based on the analysis of total electricity consumption curves of groups of customers are now carried out by electricity companies. Aggregated totals or mean load curves are estimated using individual curves measured on a fine time grid and collected according to some sampling design. Due to the skewness of the distribution of electricity consumptions, these samples often contain outlying curves which may have an important impact on the usual estimation procedures. We introduce several robust estimators of the total consumption curve which are not sensitive to such outlying curves. These estimators are based on the conditio…

Subjects: FOS: Computer and information sciences; Statistics and Probability; Statistics - Applications (stat.AP); Statistics - Methodology (stat.ME); [MATH] Mathematics [math]; Population; Wavelets; Survey sampling; Kokic and Bell method; Conditional bias; Modified band depth; Applied Mathematics; Sampling (statistics); Functional data; Bootstrap; Electricity; Asymptotic confidence bands; Spherical principal component analysis; Estimation; Statistics, Probability and Uncertainty; Social Sciences (miscellaneous); Mathematics

Asymptotic and bootstrap tests for subspace dimension

2022

Most linear dimension reduction methods proposed in the literature can be formulated using an appropriate pair of scatter matrices, see e.g. Ye and Weiss (2003), Tyler et al. (2009), Bura and Yang (2011), Liski et al. (2014) and Luo and Li (2016). The eigen-decomposition of one scatter matrix with respect to another is then often used to determine the dimension of the signal subspace and to separate signal and noise parts of the data. Three popular dimension reduction methods, namely principal component analysis (PCA), fourth order blind identification (FOBI) and sliced inverse regression (SIR) are considered in detail and the first two moments of subsets of the eigenvalues are used to test…
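The eigen-decomposition of one scatter matrix with respect to another can be sketched with a FOBI-style pairing of the covariance and a fourth-order scatter (illustrative only; the paper's asymptotic and bootstrap tests on the eigenvalues are not reproduced):

```python
# Hedged sketch: generalized eigendecomposition of two scatter matrices
# (covariance vs. a fourth-order scatter, as in FOBI). Illustrative only.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 5))
Xc = X - X.mean(axis=0)

cov = np.cov(Xc, rowvar=False)
# Fourth-order scatter: average of r_i^2 * x_i x_i^T with Mahalanobis radii r_i.
inv_cov = np.linalg.inv(cov)
r2 = np.einsum("ij,jk,ik->i", Xc, inv_cov, Xc)
cov4 = (Xc * r2[:, None]).T @ Xc / X.shape[0]

# Eigenvalues of cov4 relative to cov; the pattern of these eigenvalues
# is what subspace-dimension tests examine.
evals = eigh(cov4, cov, eigvals_only=True)
```

For pure Gaussian noise the relative eigenvalues cluster around a common value, so departures from that cluster indicate the signal subspace and its dimension.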

Subjects: FOS: Computer and information sciences; FOS: Mathematics; Statistics and Probability; Mathematics - Statistics Theory (math.ST); Statistics - Methodology (stat.ME); Principal component analysis; Dimension (vector space); Dimensionality reduction; Order determination; Scatter matrix; Sliced inverse regression; Eigenvalues and eigenvectors; Numerical Analysis; Signal subspace; Subspace topology; Applied mathematics; Estimation; Independent component analysis; Multivariate methods; Statistics, Probability and Uncertainty

A multi-scale area-interaction model for spatio-temporal point patterns

2018

Models for fitting spatio-temporal point processes should incorporate spatio-temporal inhomogeneity and allow for different types of interaction between points (clustering or regularity). This paper proposes an extension of the spatial multi-scale area-interaction model to a spatio-temporal framework. This model allows for interaction between points at different spatio-temporal scales and the inclusion of covariates. We fit the proposed model to varicella cases registered during 2013 in Valencia, Spain. The fitted model indicates small scale clustering and regularity for higher spatio-temporal scales.

Subjects: FOS: Computer and information sciences; Statistics and Probability; Statistics - Methodology (stat.ME); Multi-scale area-interaction model; Spatio-temporal point processes; Gibbs point processes; Point process; Varicella; Covariate; Cluster analysis; Scale (ratio); Point (geometry); Extension (predicate logic); Interaction model; Pattern recognition; Data mining; Management, Monitoring, Policy and Law; Computers in Earth Sciences; MSC: 60D05, 60G55, 62M30

Imputation Procedures in Surveys Using Nonparametric and Machine Learning Methods: An Empirical Comparison

2020

Nonparametric and machine learning methods are flexible methods for obtaining accurate predictions. Nowadays, data sets with a large number of predictors and complex structures are fairly common. In the presence of item nonresponse, nonparametric and machine learning procedures may thus provide a useful alternative to traditional imputation procedures for deriving a set of imputed values, which are then used for the estimation of study parameters defined as the solution of a population estimating equation. In this paper, we conduct an extensive empirical investigation that compares a number of imputation procedures in terms of bias and efficiency in a wide variety of settings, including high-dimens…
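The shape of such an empirical comparison can be sketched in a few lines: impute item nonresponse with a nonparametric method (here kNN) versus mean imputation and compare the resulting estimates (synthetic data; the paper's study covers many more procedures and settings):

```python
# Hedged sketch of comparing imputation procedures for a population mean.
# Synthetic data; illustrative only.
import numpy as np
from sklearn.impute import KNNImputer, SimpleImputer

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))
X[:, 2] += 0.8 * X[:, 0]                      # study variable related to a predictor
true_mean = X[:, 2].mean()

X_miss = X.copy()
X_miss[rng.random(200) < 0.3, 2] = np.nan     # 30% item nonresponse

for name, imp in [("mean", SimpleImputer()), ("kNN", KNNImputer(n_neighbors=5))]:
    est = imp.fit_transform(X_miss)[:, 2].mean()
    print(name, round(abs(est - true_mean), 4))  # error of the imputed estimate
```

A full comparison would repeat this over many simulated samples and missingness mechanisms to estimate bias and variance, which is what the paper does at scale.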

Subjects: FOS: Computer and information sciences; Statistics and Probability; Statistics - Methodology (stat.ME); Statistics - Computation (stat.CO); Empirical comparison; Nonparametric statistics; Machine learning; Imputation (statistics); Variety (cybernetics); Set (abstract data type); Artificial intelligence; Applied Mathematics; Statistics, Probability and Uncertainty; Social Sciences (miscellaneous); Journal of Survey Statistics and Methodology

An ensemble approach to short-term forecast of COVID-19 intensive care occupancy in Italian Regions

2020

The availability of intensive care beds during the COVID‐19 epidemic is crucial to guarantee the best possible treatment to severely affected patients. In this work we show a simple strategy for short‐term prediction of COVID‐19 intensive care unit (ICU) beds, that has proved very effective during the Italian outbreak in February to May 2020. Our approach is based on an optimal ensemble of two simple methods: a generalized linear mixed regression model, which pools information over different areas, and an area‐specific nonstationary integer autoregressive methodology. Optimal weights are estimated using a leave‐last‐out rationale. The approach has been set up and validated during t…
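The leave-last-out weighting can be sketched with toy numbers: pick the convex combination of the two forecasts that would have best predicted the most recent observation, then apply it to the next forecasts (illustrative only; the paper combines a GLMM and an integer autoregressive model on real ICU data):

```python
# Hedged sketch: two-model ensemble with leave-last-out weight selection.
# Toy series and forecasts; illustrative only.
import numpy as np

y  = np.array([10., 14., 19., 23., 30., 34.])   # toy ICU occupancy series
f1 = np.array([11., 13., 20., 22., 31., 33.])   # forecasts from model 1
f2 = np.array([ 8., 15., 17., 25., 28., 37.])   # forecasts from model 2

# Leave-last-out: the weight that would have best predicted the most
# recent observation is reused for the next forecast.
grid = np.linspace(0.0, 1.0, 101)
errs = [abs(w * f1[-1] + (1 - w) * f2[-1] - y[-1]) for w in grid]
w_opt = grid[int(np.argmin(errs))]              # here: 0.75

ensemble_next = w_opt * 35.0 + (1 - w_opt) * 36.0  # combine the new forecasts
```

The appeal of the rationale is that the weight adapts over time: as one model's recent accuracy degrades, its weight shrinks at the next step.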

Subjects: FOS: Computer and information sciences; Statistics and Probability; Statistics - Methodology (stat.ME); COVID-19; SARS-CoV-2; Coronavirus disease 2019 (COVID-19); Pandemics; Humans; Italy; Intensive Care Units; Intensive care; Occupancy; Time factors; Reproducibility of results; Nonlinear dynamics; Generalized linear mixed model; Integer autoregressive model; Weighted ensemble; Clustered data; Panel data; Autoregressive model; Econometrics; Forecasting; General Medicine; Statistics, Probability and Uncertainty; Settore SECS-S/01 - Statistica