Search results for "interpretatio"

Showing 10 of 1,068 documents

Testing for homogeneity in meta-analysis I. The one-parameter case: standardized mean difference.

2010

Meta-analysis seeks to combine the results of several experiments in order to improve the accuracy of decisions. It is common to use a test for homogeneity to determine if the results of the several experiments are sufficiently similar to warrant their combination into an overall result. Cochran's Q statistic is frequently used for this homogeneity test. It is often assumed that Q follows a chi-square distribution under the null hypothesis of homogeneity, but it has long been known that this asymptotic distribution for Q is not accurate for moderate sample sizes. Here, we present an expansion for the mean of Q under the null hypothesis that is valid when the effect and the weight for each s…
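
Cochran's Q itself is simple to compute from study-level effects and inverse-variance weights. Below is a minimal Python sketch (NumPy/SciPy) using hypothetical standardized mean differences and variances; it applies the conventional chi-square reference distribution for Q, not the corrected mean expansion this paper derives:

    import numpy as np
    from scipy import stats

    # Hypothetical study-level estimates: standardized mean differences
    # and their estimated variances (one entry per study).
    effects = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
    variances = np.array([0.05, 0.08, 0.04, 0.10, 0.06])

    weights = 1.0 / variances                             # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)  # fixed-effect pooled estimate

    # Cochran's Q: weighted squared deviations from the pooled estimate.
    Q = np.sum(weights * (effects - pooled) ** 2)

    # Conventional asymptotic test: Q ~ chi-square with k-1 df under homogeneity.
    k = len(effects)
    p_value = stats.chi2.sf(Q, df=k - 1)
    print(f"Q = {Q:.3f}, p = {p_value:.3f}")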

Statistics and Probability · Biometry · Models, Statistical · General Immunology and Microbiology · Applied Mathematics · Homogeneity (statistics) · Pearson's chi-squared test · Asymptotic distribution · General Medicine · General Biochemistry, Genetics and Molecular Biology · F-test · Meta-Analysis as Topic · Data Interpretation, Statistical · Statistics · Test statistic · Null distribution · Chi-square test · Z-test · Computer Simulation · General Agricultural and Biological Sciences · Epidemiologic Methods · Algorithms · Mathematics · Biometrics

Cluster-Localized Sparse Logistic Regression for SNP Data

2012

The task of analyzing high-dimensional single nucleotide polymorphism (SNP) data in a case-control design using multivariable techniques has only recently been tackled. While many available approaches investigate only main effects in a high-dimensional setting, we propose a more flexible technique, cluster-localized regression (CLR), based on localized logistic regression models, which allows different SNPs to have an effect for different groups of individuals. Separate multivariable regression models are fitted for the different groups of individuals by incorporating weights into componentwise boosting, which provides simultaneous variable selection and hence sparse fits. For model fitting, th…
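
As a rough illustration of the cluster-localized idea, the sketch below fits a separate sparse logistic model per cluster of individuals, so different SNPs may be selected for different groups. It substitutes scikit-learn's L1-penalized logistic regression for the componentwise boosting actually used in the paper, and all data are hypothetical placeholders:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.integers(0, 3, size=(200, 500)).astype(float)  # placeholder SNP matrix (0/1/2)
    y = rng.integers(0, 2, size=200)                       # placeholder case/control labels

    # Group individuals into clusters, then fit one sparse model per cluster.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    models = {}
    for c in np.unique(clusters):
        idx = clusters == c
        m = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        m.fit(X[idx], y[idx])
        models[c] = m
        print(f"cluster {c}: {np.sum(m.coef_ != 0)} SNPs selected")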

Statistics and Probability · Boosting (machine learning) · Computer science · Multivariable calculus · Computational Biology · High-Throughput Nucleotide Sequencing · Feature selection · Regression analysis · Models, Theoretical · Logistic regression · Polymorphism, Single Nucleotide · Regression · Computational Mathematics · Logistic Models · Data Interpretation, Statistical · Genetics · Cluster Analysis · Humans · Data mining · Cluster analysis · Molecular Biology · Unit-weighted regression · Genome-Wide Association Study · Statistical Applications in Genetics and Molecular Biology

Multiple testing in candidate gene situations: a comparison of classical, discrete, and resampling-based procedures.

2011

In candidate gene association studies, usually several elementary hypotheses are tested simultaneously using one particular set of data. The data normally consist of partly correlated SNP information. Every SNP can be tested for association with the disease, e.g., using the Cochran-Armitage test for trend. To account for the multiplicity of the test situation, different types of multiple testing procedures have been proposed. The question arises whether procedures taking into account the discreteness of the situation show a benefit, especially in the case of correlated data. We empirically evaluate several different multiple testing procedures via simulation studies using simulated correlated SN…
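
A minimal sketch of the classical end of this comparison: a large-sample Cochran-Armitage trend test per SNP followed by a Holm step-down adjustment. The discrete and resampling-based procedures evaluated in the paper are not reproduced, and the genotype tables are hypothetical:

    import numpy as np
    from scipy import stats

    def cochran_armitage(cases, controls, scores=(0.0, 1.0, 2.0)):
        # Large-sample Cochran-Armitage trend test for a 2x3 genotype table
        # (cases/controls counted per genotype AA, Aa, aa).
        t = np.asarray(scores, dtype=float)
        r = np.asarray(cases, dtype=float)          # cases per genotype
        n = r + np.asarray(controls, dtype=float)   # column totals
        N, R = n.sum(), r.sum()
        p = R / N
        u = np.sum(t * (r - n * p))                 # score statistic
        var = p * (1 - p) * (np.sum(n * t**2) - np.sum(n * t) ** 2 / N)
        z = u / np.sqrt(var)
        return 2 * stats.norm.sf(abs(z))            # two-sided p-value

    # Hypothetical genotype counts for a few SNPs: (cases, controls).
    tables = [([30, 50, 20], [40, 45, 15]),
              ([25, 45, 30], [45, 40, 15]),
              ([33, 47, 20], [35, 46, 19])]
    p = np.array([cochran_armitage(c, ctrl) for c, ctrl in tables])

    # Holm step-down adjustment to control the familywise error rate.
    order = np.argsort(p)
    m = len(p)
    adj = np.maximum.accumulate((m - np.arange(m)) * p[order]).clip(max=1.0)
    for i, a in zip(order, adj):
        print(f"SNP {i}: adjusted p = {a:.4f}")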

Statistics and Probability · Candidate gene · Contrast (statistics) · Polymorphism, Single Nucleotide · Set (abstract data type) · Computational Mathematics · Sample size determination · Resampling · Data Interpretation, Statistical · Sample Size · Statistics · Multiple comparisons problem · Genetics · Cochran–Armitage test for trend · Range (statistics) · Humans · Computer Simulation · Disease · Data mining · Molecular Biology · Genetic Association Studies · Mathematics · Statistical Applications in Genetics and Molecular Biology

The multichoice consistent value

2000

We consider multichoice NTU games, i.e., cooperative NTU games in which players can participate in the game with several levels of activity. For these games, we define and characterize axiomatically the multichoice consistent value, which is a generalization of the consistent NTU value for NTU games and of the multichoice value for multichoice TU games. Moreover, we show that this value coincides with the consistent NTU value of a replicated NTU game and we provide a probabilistic interpretation.

Statistics and Probability · Economics and Econometrics · Mathematics (miscellaneous) · Generalization · Probabilistic logic · NTU games · consistent NTU value · multichoice value · Statistics, Probability and Uncertainty · Value (mathematics) · Mathematical economics · Social Sciences (miscellaneous) · Mathematics · Interpretation (model theory)

A weighted combined effect measure for the analysis of a composite time-to-first-event endpoint with components of different clinical relevance

2018

Composite endpoints combine several events within a single variable, which increases the number of expected events and is thereby meant to increase the power. However, the interpretation of results can be difficult as the observed effect for the composite does not necessarily reflect the effects for the components, which may be of different magnitude or even point in adverse directions. Moreover, in clinical applications, the event types are often of different clinical relevance, which also complicates the interpretation of the composite effect. The common effect measure for composite endpoints is the all-cause hazard ratio, which gives equal weight to all events irrespective of their type …
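
The common, unweighted measure the abstract refers to can be illustrated directly: pool the component events into a time-to-first-event variable and fit a Cox model for the all-cause hazard ratio. A sketch with lifelines on simulated placeholder data; the paper's relevance-weighted effect measure is not reproduced here:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 400
    group = rng.integers(0, 2, n)                    # hypothetical treatment indicator
    # Hypothetical latent times for two components (e.g. death, hospitalization).
    t_death = rng.exponential(10.0 / np.exp(0.3 * group))
    t_hosp = rng.exponential(5.0 / np.exp(0.1 * group))
    censor = rng.uniform(0, 12, n)

    # Composite time-to-first-event: whichever component (or censoring) comes first.
    t_first = np.minimum(np.minimum(t_death, t_hosp), censor)
    event = (t_first < censor).astype(int)           # 1 if any component event observed

    df = pd.DataFrame({"time": t_first, "event": event, "treatment": group})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(cph.summary[["coef", "exp(coef)"]])        # exp(coef) = all-cause hazard ratio

Note how this measure gives every event the same weight regardless of type, which is exactly the limitation that motivates the paper's weighted alternative.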

Statistics and Probability · Hazard (logic) · Epidemiology · Endpoint Determination · Measure (mathematics) · win ratio · Resampling · Statistics · time-to-event · Humans · Computer Simulation · relevance weighting · Parametric statistics · Event (probability theory) · Mathematics · Proportional Hazards Models · clinical trials · Hazard ratio · composite endpoint · Weighting · prioritized outcomes · trials · Data Interpretation, Statistical · multistate models · inference · Null hypothesis · Monte Carlo Method · Statistics in Medicine

On the convenience of heteroscedasticity in highly multivariate disease mapping

2019

Highly multivariate disease mapping has recently been proposed as an enhancement of traditional multivariate studies, making it possible to perform the joint analysis of a large number of diseases. This line of research has important potential, since it integrates the information of many diseases into a single model, yielding richer and more accurate risk maps. In this paper we show how some of the proposals already put forward in this area display particular problems when applied to small regions of study. Specifically, the homoscedasticity of these proposals may produce evident misfits and distorted risk maps. We therefore propose two new models to deal with the variance-adaptiv…
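
The heteroscedasticity point can be sketched as a small hierarchical Poisson model in which each disease gets its own log-risk standard deviation instead of one variance shared by all diseases. This PyMC sketch deliberately omits the spatial (Gaussian Markov random field) structure of the actual proposals and runs on placeholder counts:

    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(3)
    I, J = 50, 8                                  # hypothetical regions x diseases
    E = rng.uniform(5, 50, size=(I, J))           # expected counts
    y = rng.poisson(E)                            # observed counts (placeholder data)

    with pm.Model():
        # Heteroscedastic: one log-risk standard deviation per disease,
        # rather than a single variance shared by all diseases.
        sigma = pm.HalfNormal("sigma", 1.0, shape=J)
        theta = pm.Normal("theta", 0.0, sigma=sigma, shape=(I, J))  # log relative risks
        pm.Poisson("y", mu=E * pm.math.exp(theta), observed=y)
        idata = pm.sample(1000, tune=1000, chains=2)

    # Per-disease posterior standard deviations of the log risks.
    print(idata.posterior["sigma"].mean(("chain", "draw")).values)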

Statistics and Probability · Heteroscedasticity · Multivariate statistics · Computer science · Disease · Joint analysis · Machine learning · Bayesian statistics · Gaussian Markov random fields · Homoscedasticity · Multivariate disease mapping · Spatial analysis · Mortality studies · Interpretation (logic) · Spatial statistics · Artificial intelligence · Statistics, Probability and Uncertainty

Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

2013

For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivat…
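
A minimal sketch of coupling Cox regression with variable selection, using an L1-penalized Cox fit in lifelines as a stand-in for the stepwise and componentwise-boosting techniques the abstract names; the data are simulated placeholders, and since lifelines' penalty is a smooth approximation, "selection" here means thresholding near-zero coefficients:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(4)
    n, p = 300, 20
    X = rng.normal(size=(n, p))
    # Hypothetical outcome: only the first two covariates carry signal.
    hazard = np.exp(0.8 * X[:, 0] - 0.6 * X[:, 1])
    time = rng.exponential(1.0 / hazard)
    event = (rng.uniform(size=n) < 0.8).astype(int)   # ~20% random censoring

    df = pd.DataFrame(X, columns=[f"x{j}" for j in range(p)])
    df["time"], df["event"] = time, event

    # L1-penalized Cox regression: the penalty shrinks most coefficients
    # toward zero, yielding a sparse, manageable covariate set.
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
    cph.fit(df, duration_col="time", event_col="event")
    selected = cph.params_[cph.params_.abs() > 0.05]
    print(selected)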

Statistics and Probability · Male · Niacinamide · Boosting (machine learning) · Carcinoma, Hepatocellular · Epidemiology · Computer science · Score · Feature selection · Antineoplastic Agents · Decision Support Techniques · Neoplasms · Covariate · Humans · Registries · Aged · Proportional Hazards Models · Phenylurea Compounds · Liver Neoplasms · Regression analysis · Confounding Factors, Epidemiologic · Middle Aged · Sorafenib · Prognosis · Regression · Cancer registry · Data Interpretation, Statistical · Data mining · Statistics in Medicine

Assessing covariate imbalance in meta-analysis studies.

2010

The main goal of meta-analysis is to combine data across studies or data sets to obtain summary estimates. The novelty of this paper is a statistical tool for assessing possible covariate imbalance in baseline variables, in order to investigate the similarity of trials. We detected covariate imbalance, first, through graphical comparison of the empirical cumulative distribution functions (ECDFs), which are built by pooling arms or trials according to some risk factor, and second, through non-parametric tests such as the Kolmogorov–Smirnov and Anderson–Darling tests. To overcome the large number of ties, we conducted the statistical tests on perturbe…
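
Both two-sample tests named in the abstract are available directly in SciPy. A sketch including a tie-breaking step, assuming a tiny uniform jitter as one plausible perturbation scheme (the paper's exact scheme is not described in the excerpt), on hypothetical pooled baseline data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Hypothetical baseline covariate (e.g. age) pooled by arm across trials.
    arm_a = rng.integers(40, 80, size=150).astype(float)   # coarse values -> many ties
    arm_b = rng.integers(42, 82, size=170).astype(float)

    def jitter(x):
        # Tiny uniform perturbation to break ties before applying the tests.
        return x + rng.uniform(-0.5, 0.5, size=x.shape) * 1e-3

    a, b = jitter(arm_a), jitter(arm_b)

    ks = stats.ks_2samp(a, b)                     # Kolmogorov-Smirnov two-sample test
    ad = stats.anderson_ksamp([a, b])             # k-sample Anderson-Darling test
    print(f"KS: D = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
    print(f"AD: A2 = {ad.statistic:.3f}, approx. p = {ad.significance_level:.3f}")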

Statistics and Probability · Male · perturbation · Epidemiology · Computer science · Pooling · Hypercholesterolemia · Alpha interferon · Meta-Analysis as Topic · Covariate · Statistics · Econometrics · Humans · Settore SECS-S/05 - Statistica Sociale · ECDF · non-parametric test · Statistical hypothesis testing · Randomized Controlled Trials as Topic · Cumulative distribution function · Nonparametric statistics · Novelty · Interferon-alpha · combinability · Hepatitis C, Chronic · Meta-analysis · Data Interpretation, Statistical · Female · Hydroxymethylglutaryl-CoA Reductase Inhibitors · Statistics in Medicine

Assessment of the probabilities for evolutionary structural changes in protein folds.

2007

Motivation: The evolution of protein sequences can be described by a stepwise process, where each step involves changes of a few amino acids. In a similar manner, the evolution of protein folds can be at least partially described by an analogous process, where each step involves comparatively simple changes affecting few secondary structure elements. A number of such evolution steps, justified by biologically confirmed examples, have previously been proposed by other researchers. However, unlike the situation with sequences, as far as we know there have been no attempts to estimate the comparative probabilities for different kinds of such structural changes. Results: We have tried …

Statistics and Probability · Models, Molecular · Protein Folding · Protein domain · Structural alignment · Biology · Biochemistry · Set (abstract data type) · Evolution, Molecular · Protein structure · Similarity (network science) · Sequence Analysis, Protein · Computer Simulation · Molecular Biology · Protein secondary structure · Conserved Sequence · Sequence · Models, Genetic · Sequence Homology, Amino Acid · Proteins · Structural Classification of Proteins database · Computer Science Applications · Computational Mathematics · Computational Theory and Mathematics · Models, Chemical · Data Interpretation, Statistical · Algorithm · Sequence Alignment · Bioinformatics (Oxford, England)

Tests for Differentiation in Gene Expression Using a Data-Driven Order or Weights for Hypotheses

2005

In the analysis of gene expression by microarrays there are usually few subjects but high-dimensional data. By means of techniques such as the theory of spherical tests or suitable permutation tests, it is possible to sort the endpoints or to assign them weights according to specific criteria determined by the data, while controlling the multiple type I error rate. The procedures developed so far are based on a sequential analysis of weighted p-values (corresponding to the endpoints), including the most extreme situation in which the weighting leads to a complete ordering of the p-values. When the data for the endpoints have approximately equal variances, these procedures show good power properties…
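
The weighting idea can be illustrated with the weighted Bonferroni rule: reject H_i when p_i <= w_i * alpha, with weights summing to one. In the paper's setting the weights or ordering would be derived from the data itself (e.g. via spherical or permutation tests); the sketch below uses fixed placeholder weights:

    import numpy as np

    alpha = 0.05
    # Hypothetical p-values for five endpoints (genes).
    p = np.array([0.001, 0.012, 0.030, 0.040, 0.200])
    # Hypothetical importance weights (must sum to 1); with w_i = 1/m this
    # reduces to the ordinary Bonferroni correction.
    w = np.array([0.4, 0.3, 0.15, 0.1, 0.05])

    # Weighted Bonferroni: reject H_i if p_i <= w_i * alpha.
    reject = p <= w * alpha
    print(reject)   # FWER is controlled at alpha for any fixed weights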

Statistics and Probability · Models, Statistical · Models, Genetic · Biometrics · Gene Expression Profiling · Word error rate · Familywise error rate · General Medicine · Data-driven · Weighting · Data Interpretation, Statistical · sort · Computer Simulation · p-value · Statistics, Probability and Uncertainty · Algorithm · Algorithms · Oligonucleotide Array Sequence Analysis · Mathematics · Type I and type II errors · Biometrical Journal