Search results for "Weighting"

Showing 10 of 117 documents

Soft computing-based aggregation methods for human resource management

2008

We are interested in the personnel selection problem. We have developed a flexible decision support system to help managers in their decision-making functions. This DSS simulates experts’ evaluations using ordered weighted average (OWA) aggregation operators, which assign different weights to different selection criteria. Moreover, we show an aggregation model based on efficiency analysis to put the candidates into an order.
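
The OWA operator mentioned in the abstract is a standard construction: the weights attach to rank positions rather than to fixed criteria. A minimal sketch — the candidate scores and the "optimistic" weight vector below are made up for illustration:

```python
def owa(scores, weights):
    """Ordered weighted average: weights[0] applies to the largest score,
    weights[1] to the second largest, and so on. Weights must sum to 1."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * b for w, b in zip(weights, sorted(scores, reverse=True)))

# Rank three hypothetical candidates by their OWA-aggregated criterion scores.
candidates = {"A": [0.9, 0.4, 0.6], "B": [0.7, 0.7, 0.7], "C": [0.5, 0.8, 0.3]}
w = [0.5, 0.3, 0.2]          # "optimistic": a candidate's best criterion counts most
ranking = sorted(candidates, key=lambda c: owa(candidates[c], w), reverse=True)
```

Changing `w` moves the aggregation anywhere between the max (`[1, 0, 0]`) and the min (`[0, 0, 1]`), which is the kind of flexibility a DSS of this sort can exploit.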

Keywords: Soft computing; Decision support system; Information Systems and Management; General Computer Science; Fuzzy set; Staff management; Personnel selection; Management Science and Operations Research; Industrial and Manufacturing Engineering; Weighting; Complete information; Modeling and Simulation; Data mining; Selection (genetic algorithm); Mathematics. Journal: European Journal of Operational Research

Importance sampling correction versus standard averages of reversible MCMCs in terms of the asymptotic variance

2017

We establish an ordering criterion for the asymptotic variances of two consistent Markov chain Monte Carlo (MCMC) estimators: an importance sampling (IS) estimator, based on an approximate reversible chain and subsequent IS weighting, and a standard MCMC estimator, based on an exact reversible chain. Essentially, we relax the criterion of the Peskun type covariance ordering by considering two different invariant probabilities, and obtain, in place of a strict ordering of asymptotic variances, a bound of the asymptotic variance of IS by that of the direct MCMC. Simple examples show that IS can have arbitrarily better or worse asymptotic variance than Metropolis-Hastings and delayed-acceptanc…
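
The comparison being made is between averaging along a chain that targets an approximation q and correcting those same samples toward the exact target π with importance weights. A toy sketch — the two densities, sample size, and seed are invented here to illustrate the two estimators, not the paper's ordering result:

```python
import math
import random

random.seed(1)

def log_pi(x):   # exact target: standard normal, unnormalised
    return -0.5 * x * x

def log_q(x):    # approximate target driving the chain: N(0, 1.2**2)
    return -0.5 * (x / 1.2) ** 2

# Random-walk Metropolis-Hastings, reversible with respect to q
x, chain = 0.0, []
for _ in range(50_000):
    prop = x + random.gauss(0.0, 1.0)
    if random.random() < math.exp(min(0.0, log_q(prop) - log_q(x))):
        x = prop
    chain.append(x)

f = lambda v: v * v
# Standard MCMC average: consistent for E_q[f] = 1.44, not for E_pi[f]
plain = sum(f(v) for v in chain) / len(chain)
# Self-normalised IS correction with weights pi/q: consistent for E_pi[f] = 1
w = [math.exp(log_pi(v) - log_q(v)) for v in chain]
is_est = sum(wi * f(vi) for wi, vi in zip(w, chain)) / sum(w)
```

The weights here are bounded, which keeps the IS correction well-behaved; the paper's contribution is a general bound relating the asymptotic variances of the two estimators.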

Keywords: Statistics and Probability; FOS: Computer and information sciences; delayed acceptance; Markov chains; Statistics - Computation (stat.CO); asymptotic variance; MSC 60J22, 65C05; unbiased estimator; FOS: Mathematics; Applied mathematics; stochastic processes; estimation; numerical methods; pseudo-marginal algorithm; Applied Mathematics; Probability (math.PR); Estimator; Markov chain Monte Carlo; Covariance; Infimum and supremum; Weighting; Monte Carlo methods; Delta method; Importance sampling; Modeling and Simulation; Bounded function; Mathematics

A weighted combined effect measure for the analysis of a composite time-to-first-event endpoint with components of different clinical relevance

2018

Composite endpoints combine several events within a single variable, which increases the number of expected events and is thereby meant to increase the power. However, the interpretation of results can be difficult as the observed effect for the composite does not necessarily reflect the effects for the components, which may be of different magnitude or even point in adverse directions. Moreover, in clinical applications, the event types are often of different clinical relevance, which also complicates the interpretation of the composite effect. The common effect measure for composite endpoints is the all-cause hazard ratio, which gives equal weight to all events irrespective of their type …
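
One way to see what "different weights for different event types" changes relative to the all-cause hazard ratio is a relevance-weighted average of cause-specific log hazard ratios. This is only an illustration with invented numbers, not the estimator proposed in the paper:

```python
import math

# Illustrative only: event types of a composite endpoint with made-up
# cause-specific hazard ratios and clinical relevance weights.
components = {           # event type -> (hazard ratio, relevance weight)
    "death":             (0.85, 0.6),
    "hospitalisation":   (0.95, 0.3),
    "symptom worsening": (1.10, 0.1),
}
total_w = sum(w for _, w in components.values())
# Weighted average on the log scale, then back-transform
log_combined = sum(w * math.log(hr) for hr, w in components.values()) / total_w
combined_hr = math.exp(log_combined)
```

With equal weights this collapses toward the all-cause view in which every event type counts the same; the relevance weights let the most serious component dominate the combined measure.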

Keywords: Statistics and Probability; Hazard (logic); Epidemiology; Endpoint Determination; Measure (mathematics); win ratio; Resampling; Statistics; time-to-event; Humans; Computer Simulation; relevance weighting; Parametric statistics; Event (probability theory); Mathematics; Proportional Hazards Models; clinical trials; Hazard ratio; composite endpoint; Weighting; prioritized outcomes; trials; Data Interpretation, Statistical; multistate models; inference; Null hypothesis; Monte Carlo Method. Journal: Statistics in Medicine

Weighting Elementary Prices in Consumer Price Index Construction Using Spatial Autocorrelation

2013

Consumer Price Indexes (CPIs) are used in current economic systems to measure inflation. When constructing CPIs, however, official institutions have systematically overlooked the spatial dimension of elementary prices. Ignoring the fact that prices are collected at geographical locations implicitly amounts to treating prices as spatially independent, when in fact they are not. To solve this problem, this article proposes to weight basic price data by taking into account the spatial correlation they display. The weighted geometric and arithmetic means suggested generalize and improve the simple geometric and arithmetic means currently in use.
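
A generic spatially weighted mean can be sketched as follows; the inverse-closeness weighting rule and the toy outlet data are assumptions for illustration, not the authors' exact formula:

```python
import math

# Toy price quotes collected at outlet coordinates: three outlets are
# clustered together, one is spatially isolated.
prices = [2.0, 2.1, 1.9, 2.5]
coords = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]

def weight(i):
    """Down-weight clustered outlets: weight is the inverse of the summed
    closeness (1 / (1 + distance)) to all other outlets."""
    closeness = sum(
        1.0 / (1.0 + math.dist(coords[i], coords[j]))
        for j in range(len(coords)) if j != i
    )
    return 1.0 / closeness

w = [weight(i) for i in range(len(prices))]
s = sum(w)
geo = math.exp(sum(wi / s * math.log(p) for wi, p in zip(w, prices)))
arith = sum(wi / s * p for wi, p in zip(w, prices))
```

The isolated outlet receives the largest weight, so its price is not drowned out by near-duplicate quotes from the cluster; the weighted geometric mean never exceeds the weighted arithmetic mean.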

Keywords: Statistics and Probability; Inflation; Computer Science - Computer Science and Game Theory; Spatial correlation; Weighting; Price index; Statistics; Econometrics; Consumer price index; Dimension (data warehouse); Spatial analysis; Arithmetic mean; Mathematics. Journal: Communications in Statistics - Theory and Methods

Optimal Reporting of Predictions

1989

Consider a problem in which you and a group of other experts must report your individual predictive distributions for an observable random variable X to some decision maker. Suppose that the report of each expert is assigned a prior weight by the decision maker and that these weights are then updated based on the observed value of X. In this situation you will try to maximize your updated, or posterior, weight by appropriately choosing the distribution that you report, rather than necessarily simply reporting your honest predictive distribution. We study optimal reporting strategies under various conditions regarding your knowledge and beliefs about X and the reports of the other e…
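
The weight-updating mechanism described — posterior weight proportional to prior weight times the density a report assigns to the observed X — can be sketched as follows; the normal reports and the 50/50 prior are invented for illustration:

```python
import math

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Each expert reports a normal predictive distribution (mean, sd) for X.
experts = {"you": (0.0, 1.0), "rival": (2.0, 1.0)}
prior_w = {"you": 0.5, "rival": 0.5}

def posterior_weights(x_obs):
    """Bayes-style update: posterior weight is proportional to the prior
    weight times the predictive density each report gave to x_obs."""
    unnorm = {e: prior_w[e] * normal_pdf(x_obs, *experts[e]) for e in experts}
    z = sum(unnorm.values())
    return {e: u / z for e, u in unnorm.items()}

w = posterior_weights(0.5)   # observed value falls closer to "you"'s report
```

Under this rule the expert whose report put more density on the realised value gains weight, which is exactly why honest reporting need not maximise posterior weight.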

Keywords: Statistics and Probability; Mathematical optimization; Expert opinion; Statistics; Gaining weight; Statistics, Probability and Uncertainty; Decision maker; Bayesian inference; Finite set; Random variable; Value (mathematics); Weighting; Mathematics. Journal: Journal of the American Statistical Association

Point process diagnostics based on weighted second-order statistics and their asymptotic properties

2008

A new approach for point process diagnostics is presented. The method is based on extending second-order statistics for point processes by weighting each point by the inverse of the conditional intensity function at the point’s location. The result is generalized versions of the spectral density, R/S statistic, correlation integral and K-function, which can be used to test the fit of a complex point process model with an arbitrary conditional intensity function, rather than a stationary Poisson model. Asymptotic properties of these generalized second-order statistics are derived, using an approach based on martingale theory.
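
The core device — weighting each point by the inverse of the conditional intensity at its location — can be sketched for the K-function. The toy inhomogeneous pattern below and the omission of edge correction are simplifications for illustration:

```python
import math
import random

random.seed(0)

# Toy pattern on the unit square, thinned so the point density grows
# roughly linearly in x; lam is the assumed (approximate) intensity.
lam = lambda x, y: 200.0 * x
points = []
while len(points) < 100:
    x, y = random.random(), random.random()
    if random.random() < x:           # keep right-hand points more often
        points.append((x, y))

def weighted_k(pts, intensity, r):
    """Weighted K-function estimate: each ordered pair of distinct points
    within distance r contributes the inverse product of the intensities
    at the two locations (no edge correction, unit observation window)."""
    total = 0.0
    for i, p in enumerate(pts):
        for j, q in enumerate(pts):
            if i != j and math.dist(p, q) <= r:
                total += 1.0 / (intensity(*p) * intensity(*q))
    return total

k_small, k_large = weighted_k(points, lam, 0.05), weighted_k(points, lam, 0.1)
```

When the fitted conditional intensity is correct, the inverse-intensity weighting flattens the inhomogeneity out, so departures of the weighted statistic from its Poisson benchmark indicate lack of fit.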

Keywords: Statistics and Probability; Mathematical optimization; Spectral density; Inverse; Residual analysis; Point process; Second-order analysis; Conditional intensity function; Residual; Weighting; Correlation integral; Applied mathematics; Point (geometry); Settore SECS-S/01 - Statistica; Statistic; Mathematics. Journal: Annals of the Institute of Statistical Mathematics

Tests for Differentiation in Gene Expression Using a Data-Driven Order or Weights for Hypotheses

2005

In the analysis of gene expression by microarrays there are usually few subjects but high-dimensional data. By means of techniques such as the theory of spherical tests or suitable permutation tests, it is possible to sort the endpoints or to assign them weights according to specific criteria determined by the data while controlling the multiple type I error rate. The procedures developed so far are based on a sequential analysis of weighted p-values (corresponding to the endpoints), including the most extreme situation of weighting leading to a complete order of p-values. When the data for the endpoints have approximately equal variances, these procedures show good power properties…
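
The idea of spending more of the error budget on endpoints with larger weights can be illustrated with a generic weighted Bonferroni rule — not the spherical-test or permutation procedures the abstract refers to, and with invented numbers:

```python
def weighted_bonferroni(pvals, weights, alpha=0.05):
    """Reject H_i when p_i <= alpha * w_i / sum(w): a larger weight buys
    a hypothesis a larger share of the overall alpha."""
    s = sum(weights)
    return [p <= alpha * w / s for p, w in zip(pvals, weights)]

# Endpoint 1 is up-weighted threefold relative to the other two.
rej = weighted_bonferroni([0.001, 0.02, 0.3], [3.0, 1.0, 1.0])
```

With pre-specified weights the per-hypothesis thresholds sum to alpha, so the familywise error rate is controlled; choosing the weights or the order from the data itself, as the paper does, requires the more careful constructions it describes.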

Keywords: Statistics and Probability; Models, Statistical; Models, Genetic; Biometrics; Gene Expression Profiling; Word error rate; Familywise error rate; General Medicine; Data-driven; Weighting; Data Interpretation, Statistical; sort; Computer Simulation; p-value; Statistics, Probability and Uncertainty; Algorithms; Oligonucleotide Array Sequence Analysis; Mathematics; Type I and type II errors. Journal: Biometrical Journal

Adaptive Modifications of Hypotheses After an Interim Analysis

2001

It is investigated how one can modify hypotheses in a trial after an interim analysis such that the type I error rate is controlled. If only a global statement is desired, a solution was given by Bauer (1989). For a general multiple testing problem, Kieser, Bauer and Lehmacher (1999) and Bauer and Kieser (1999) gave solutions by means of which the initial set of hypotheses can be reduced after the interim analysis. The same techniques can be applied to obtain more flexible strategies, such as changing the weights of hypotheses, changing an a priori order, or even including new hypotheses. It is emphasized that the application of these methods requires very careful planning of a trial as well as a c…
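
A two-stage combination test in the spirit of this literature can be sketched with Fisher's product criterion: because the stage-wise p-values are combined rather than pooled, the hypothesis tested at stage two can be modified at the interim analysis without inflating the overall type I error rate. This is a sketch of the combination principle only, not a full adaptive design:

```python
import math

CHI2_4_95 = 9.488   # 0.95 quantile of the chi-square distribution with 4 df

def fisher_combination_reject(p1, p2):
    """Two-stage Fisher combination test at overall level 0.05: reject
    when -2 * log(p1 * p2) exceeds the chi-square(4 df) critical value.
    p1 and p2 are the independent stage-wise p-values."""
    return -2.0 * math.log(p1 * p2) >= CHI2_4_95

# e.g. fisher_combination_reject(0.10, 0.01) -> True  (product 0.001)
#      fisher_combination_reject(0.30, 0.20) -> False (product 0.06)
```

Note that a non-significant first stage cannot simply be discarded: both stage-wise p-values enter the product, which is part of why the careful pre-planning the abstract emphasizes is needed.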

Keywords: Statistics and Probability; Statement (computer science); Mathematical optimization; General Medicine; Interim analysis; Weighting; Multiple comparisons problem; A priori and a posteriori; Statistics, Probability and Uncertainty; Set (psychology); Algorithm; Statistical hypothesis testing; Type I and type II errors; Mathematics. Journal: Biometrical Journal

Systematic handling of missing data in complex study designs : experiences from the Health 2000 and 2011 Surveys

2016

We present a systematic approach to the practical and comprehensive handling of missing data motivated by our experiences of analyzing longitudinal survey data. We consider the Health 2000 and 2011 Surveys (BRIF8901) where increased non-response and non-participation from 2000 to 2011 was a major issue. The model assumptions involved in the complex sampling design, repeated measurements design, non-participation mechanisms and associations are presented graphically using methodology previously defined as a causal model with design, i.e. a functional causal model extended with the study design. This tool forces the statistician to make the study design and the missing-data mechanism explicit…
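
Of the methods listed in the keywords, inverse probability weighting is the easiest to sketch: each respondent is up-weighted by the reciprocal of their estimated response probability, so groups with heavy non-response are not under-represented. The records below are toy data invented for illustration:

```python
# (covariate group, responded?, outcome if responded) -- toy survey data
records = [
    ("young", True, 1.0), ("young", True, 3.0),
    ("young", False, None), ("young", False, None),
    ("old", True, 5.0), ("old", True, 7.0),
    ("old", True, 6.0), ("old", False, None),
]

# Estimate the response probability within each covariate group
groups = {g for g, _, _ in records}
p_resp = {
    g: sum(1 for gg, r, _ in records if gg == g and r)
       / sum(1 for gg, _, _ in records if gg == g)
    for g in groups
}

# Naive complete-case mean vs. IPW estimate over the full sample
naive = sum(y for _, r, y in records if r) / sum(1 for _, r, _ in records if r)
est = sum(y / p_resp[g] for g, r, y in records if r) / len(records)
```

Here the complete-case mean is 4.4 while the IPW estimate is 4.0: the young group, with 50% non-response and lower outcomes, is restored to its proper share of the sample.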

Keywords: Statistics and Probability; multiple imputation; Computer science; non-response; Sampling design; Causal model; Clinical study design; Inverse probability weighting; Sampling (statistics); non-participation; Missing data; Data science; doubly robust methods; Survey data collection; Data mining; Statistics, Probability and Uncertainty; Statistician; causal model with design. Journal: Journal of Applied Statistics

MuLiMs-MCoMPAs: A Novel Multiplatform Framework to Compute Tensor Algebra-Based Three-Dimensional Protein Descriptors

2019

This report introduces the MuLiMs-MCoMPAs software (acronym for Multi-Linear Maps based on N-Metric and Contact Matrices of 3D Protein and Amino-acid weightings), designed to compute tensor-based 3D protein structural descriptors by applying two- and three-linear algebraic forms. Moreover, these descriptors contemplate generalizing components such as novel 3D protein structural representations, (dis)similarity metrics, and multimetrics to extract geometrical related information between two and three amino acids, weighting schemes based on amino acid properties, matrix normalization procedures that consider simple-stochastic and mutual probability transformations, topological and geometrical…
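
The basic building block — a bilinear (two-linear) algebraic form over a contact matrix, with amino-acid property values as the weighting vector — can be sketched as follows. The 4-residue contact matrix is a toy, and the property values are Kyte-Doolittle-style hydrophobicities used purely as an example weighting scheme; this is not MuLiMs-MCoMPAs output:

```python
# Toy 4-residue contact matrix (1 = residues in spatial contact)
M = [
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
]
# Hydrophobicity-style weights per residue (illustrative values)
props = [1.8, -3.5, 2.5, -0.4]

# Bilinear form props^T M props: one scalar descriptor mixing the
# contact topology with the chosen amino-acid weighting scheme.
descriptor = sum(
    props[i] * M[i][j] * props[j]
    for i in range(len(M)) for j in range(len(M))
)
```

Swapping the property vector (charge, polarizability, etc.) or the matrix transformation yields a family of such descriptors, which is the "generalizing components" idea the abstract describes.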

Keywords: Theoretical computer science; General Chemical Engineering; Computation; General Chemistry; Tensor algebra; Library and Information Sciences; Computer Science Applications; Weighting; Matrix (mathematics); Software; Principal component analysis; Data preprocessing; User interface. Journal: Journal of Chemical Information and Modeling