Search results for "Models"

Showing 10 of 8,211 documents

Elasticity as a measure for online determination of remission points in ongoing epidemics.

2020

The correct identification of change-points during ongoing outbreak investigations of infectious diseases is a matter of paramount importance in epidemiology, with major implications for the management of health care resources, public health and, as the COVID-19 pandemic has shown, social life. Onsets, peaks, and inflexion points are examples of such change-points. An onset is the moment when the epidemic starts. A "peak" is a moment at which the incorporated values, both before and after, are lower: a maximum. Inflexion points identify moments at which the growth rate of the incorporation of new cases changes intensity. In this study, after interpreting the concept of elasticity of a random va…
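As a rough sketch of the idea in this abstract (not the paper's exact estimator), the elasticity of a cumulative case curve can be approximated numerically as the ratio of relative changes, E(t) ≈ (ΔC/C)/(Δt/t); a sustained drop of E below 1 is one plausible marker of a remission point. The curve and threshold below are hypothetical.

```python
def elasticity(times, cum_cases):
    """Numerical elasticity of cumulative cases with respect to time:
    E(t) ~ (dC/C) / (dt/t). Hypothetical illustration, assuming
    strictly positive times and case counts."""
    es = []
    for i in range(1, len(times)):
        rel_c = (cum_cases[i] - cum_cases[i - 1]) / cum_cases[i - 1]
        rel_t = (times[i] - times[i - 1]) / times[i - 1]
        es.append(rel_c / rel_t)
    return es

# Synthetic logistic-like outbreak curve (hypothetical numbers).
times = list(range(1, 11))
cases = [5, 12, 30, 70, 140, 220, 280, 310, 322, 326]
es = elasticity(times, cases)
# Elasticity shrinks as the epidemic saturates; values below 1
# late in the series are candidate remission points.
```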

Keywords: Statistics and probability; Epidemiology; COVID-19 (2019-20 coronavirus outbreak); Pandemics; Epidemics; Remission induction; Health care; Econometrics; Computer simulation; Elasticity; Proportional hazards models; Epidemiologic methods; Random variable; Rate of growth
Source: Statistics in Medicine

Disorder relevance for the random walk pinning model in dimension 3

2011

We study the continuous time version of the random walk pinning model, where conditioned on a continuous time random walk Y on Z^d with jump rate \rho>0, which plays the role of disorder, the law up to time t of a second independent random walk X with jump rate 1 is Gibbs transformed with weight e^{\beta L_t(X,Y)}, where L_t(X,Y) is the collision local time between X and Y up to time t. As the inverse temperature \beta varies, the model undergoes a localization-delocalization transition at some critical \beta_c>=0. A natural question is whether or not there is disorder relevance, namely whether or not \beta_c differs from the critical point \beta_c^{ann} for the annealed model. In Birkner a…
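A toy, discrete-time analogue of the collision local time L_t(X, Y) from this abstract can be simulated with two independent simple random walks on Z^d; this is only a Monte Carlo sketch of the quantity being Gibbs-weighted, not the continuous-time model itself. Step counts and seed are arbitrary.

```python
import random

def collision_local_time(n_steps, dim=1, seed=0):
    """Count how often two independent simple random walks on Z^d
    occupy the same site up to n_steps (discrete-time stand-in for
    L_t(X, Y)). Each walk moves one unit in a random coordinate per
    step. Toy sketch only."""
    rng = random.Random(seed)
    x, y = [0] * dim, [0] * dim
    local_time = 1  # both walks start at the origin
    for _ in range(n_steps):
        for walk in (x, y):
            i = rng.randrange(dim)
            walk[i] += rng.choice((-1, 1))
        if x == y:
            local_time += 1
    return local_time

L1d = collision_local_time(2000, dim=1)
L3d = collision_local_time(2000, dim=3)
# In d=3 the difference walk is transient, so collisions are
# typically much rarer than in d=1.
```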

Keywords: Probability (math.PR); Random walks; Random media; Disordered pinning models; Marginal disorder; Fractional moment method; Collision local time; Local limit theorem; Renewal processes with infinite mean; MSC 60K35, 82B44
Source: Annales de l'Institut Henri Poincaré, Probabilités et Statistiques

A Log-Rank Test for Equivalence of Two Survivor Functions

1993

We consider a hypothesis testing problem in which the alternative states that the vertical distance between the underlying survivor functions nowhere exceeds some prespecified bound delta > 0. Under the assumption of proportional hazards, this hypothesis is shown to be (logically) equivalent to the statement |beta| < log(1 + epsilon), where beta denotes the regression coefficient associated with the treatment group indicator, and epsilon is a simple strictly increasing function of delta. The testing procedure proposed consists of carrying out, in terms of beta-hat (i.e., the standard Cox likelihood estimator of beta), the uniformly most powerful level-alpha test for a suitable interval hypothesis about…
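A hedged sketch of the equivalence idea: given a Cox estimate beta-hat and its standard error, one can check |beta| < log(1 + epsilon) with two one-sided tests (TOST). This is a conservative stand-in for the uniformly most powerful interval test the abstract describes, and the numbers are hypothetical.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def equivalence_test(beta_hat, se, eps, alpha=0.05):
    """TOST-style approximation: declare the survivor functions
    equivalent if both one-sided tests of |beta| < log(1 + eps)
    reject at level alpha. Sketch only, not the paper's UMP test."""
    bound = math.log(1.0 + eps)
    p_lower = 1.0 - phi((beta_hat + bound) / se)  # H0: beta <= -bound
    p_upper = phi((beta_hat - bound) / se)        # H0: beta >= +bound
    return max(p_lower, p_upper) < alpha

# Hypothetical Cox fit: beta_hat near zero with a tight SE.
ok = equivalence_test(beta_hat=0.02, se=0.05, eps=0.30)
```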

Keywords: Biometry; Statistical hypothesis testing; Equivalence testing; Log-rank test; Proportional hazards models; Survival analysis; Clinical trials; Medulloblastoma; Cerebellar neoplasms
Source: Biometrics

Automatic variable selection for exposure-driven propensity score matching with unmeasured confounders.

2020

Multivariable model building for propensity score modeling approaches is challenging. A common propensity score approach is exposure-driven propensity score matching, where the best model selection strategy is still unclear. In particular, the situation may require variable selection, while it remains unclear whether variables included in the propensity score should be associated with both the exposure and the outcome, with either the exposure or the outcome, with at least the exposure, or with at least the outcome. Unmeasured confounders, complex correlation structures, and non-normal covariate distributions further complicate matters. We consider the performance of different modeling strategies in …
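To make the matching step concrete, here is a minimal sketch of greedy 1:1 nearest-neighbour matching on propensity scores with a caliper. The scores are assumed to have been estimated already (e.g., by a logistic model of the exposure); the IDs, scores, and caliper are hypothetical, and real implementations also handle ties and matching order more carefully.

```python
def greedy_match(treated, controls, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.
    treated/controls map unit id -> propensity score. Each control
    is used at most once; pairs farther apart than the caliper are
    discarded. Illustration only."""
    available = dict(controls)
    pairs = []
    # Match treated units in descending score order (a common heuristic).
    for tid, ts in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        cid, cs = min(available.items(), key=lambda kv: abs(kv[1] - ts))
        if abs(cs - ts) <= caliper:
            pairs.append((tid, cid))
            del available[cid]
    return pairs

# Hypothetical pre-estimated propensity scores.
treated = {"t1": 0.80, "t2": 0.55}
controls = {"c1": 0.78, "c2": 0.50, "c3": 0.20}
pairs = greedy_match(treated, controls)
```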

Keywords: Biometry; Propensity score matching; Model selection; Variable selection; Unmeasured confounders; Multivariate analysis; Covariates; Automation
Source: Biometrical Journal (Biometrische Zeitschrift)

Testing for homogeneity in meta-analysis I. The one-parameter case: standardized mean difference.

2010

Meta-analysis seeks to combine the results of several experiments in order to improve the accuracy of decisions. It is common to use a test for homogeneity to determine if the results of the several experiments are sufficiently similar to warrant their combination into an overall result. Cochran's Q statistic is frequently used for this homogeneity test. It is often assumed that Q follows a chi-square distribution under the null hypothesis of homogeneity, but it has long been known that this asymptotic distribution for Q is not accurate for moderate sample sizes. Here, we present an expansion for the mean of Q under the null hypothesis that is valid when the effect and the weight for each s…
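The Q statistic discussed here is straightforward to compute: with inverse-variance weights, Q is the weighted sum of squared deviations of the study effects from their weighted mean, referred to a chi-square on k-1 degrees of freedom (the asymptotic reference the abstract says is inaccurate for moderate samples). The effect sizes and variances below are hypothetical.

```python
def cochran_q(effects, variances):
    """Cochran's Q for homogeneity of k study effects.
    weights w_i = 1/v_i; Q = sum w_i (theta_i - theta_bar)^2,
    compared under H0 with chi-square on k-1 df."""
    weights = [1.0 / v for v in variances]
    theta_bar = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - theta_bar) ** 2 for w, e in zip(weights, effects))
    return q, len(effects) - 1

# Three hypothetical standardized mean differences and their variances.
q, df = cochran_q([0.30, 0.45, 0.10], [0.04, 0.05, 0.03])
```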

Keywords: Biometry; Meta-analysis; Homogeneity testing; Cochran's Q; Chi-square test; Asymptotic distribution; Null distribution; Test statistic; Epidemiologic methods; Algorithms
Source: Biometrics

Marginal hazard ratio estimates in joint frailty models for heart failure trials

2019

Abstract This work is motivated by clinical trials in chronic heart failure disease, where treatment has effects both on morbidity (assessed as recurrent non‐fatal hospitalisations) and on mortality (assessed as cardiovascular death, CV death). Recently, a joint frailty proportional hazards model has been proposed for this kind of efficacy outcome to account for a potential association between the risk rates for hospital admissions and CV death. However, clinical trial results are more often presented as treatment effect estimates derived from marginal proportional hazards models, that is, a Cox model for mortality and an Andersen–Gill model for recurrent hospitalisations. …

Keywords: Biometry; Joint frailty model; Heart failure trials; Recurrent events; Hazard ratio; Least false parameter; Unexplained heterogeneity; Cardiovascular death; Proportional hazards models; Clinical trials; Risk assessment
Source: Biometrical Journal (Biometrische Zeitschrift)

Cluster-Localized Sparse Logistic Regression for SNP Data

2012

The task of analyzing high-dimensional single nucleotide polymorphism (SNP) data in a case-control design using multivariable techniques has only recently been tackled. While many available approaches investigate only main effects in a high-dimensional setting, we propose a more flexible technique, cluster-localized regression (CLR), based on localized logistic regression models, that allows different SNPs to have an effect for different groups of individuals. Separate multivariable regression models are fitted for the different groups of individuals by incorporating weights into componentwise boosting, which provides simultaneous variable selection, hence sparse fits. For model fitting, th…

Keywords: Boosting; Logistic regression; Single nucleotide polymorphism (SNP); Variable selection; Cluster analysis; Regression analysis; Genome-wide association study; Computational biology
Source: Statistical Applications in Genetics and Molecular Biology

Morphology changes induced by intercellular gap junction blocking: A reaction-diffusion mechanism.

2021

Complex anatomical form is regulated in part by endogenous physiological communication between cells; however, the dynamics by which gap junctional (GJ) states across tissues regulate morphology are still poorly understood. We employed a biophysical modeling approach combining different signaling molecules (morphogens) to qualitatively describe the anteroposterior and lateral morphology changes in model multicellular systems due to intercellular GJ blockade. The model is based on two assumptions for blocking-induced patterning: (i) the local concentrations of two small antagonistic morphogens diffusing through the GJs along the axial direction, together with that of an independent, uncouple…
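A toy version of the mechanism sketched in this abstract: treat the effective gap-junction permeability as a diffusion coefficient for a morphogen on a 1-D line of cells, so that GJ blockade (smaller D) shortens the gradient's range. This is an illustrative finite-difference sketch under those assumptions, not the paper's actual multi-morphogen model; all parameter values are hypothetical.

```python
def diffuse(conc, D, steps, dt=0.1, dx=1.0):
    """Explicit finite-difference diffusion on a 1-D line of cells.
    D stands in for effective gap-junction permeability; the two end
    cells are held fixed as source (1.0) and sink (0.0)."""
    c = list(conc)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + D * dt / dx ** 2 * (c[i - 1] - 2 * c[i] + c[i + 1])
        c = new
    return c

# Morphogen sourced at one pole of an 11-cell line (hypothetical).
n = 11
head = [1.0] + [0.0] * (n - 1)
open_gj = diffuse(head, D=1.0, steps=200)
blocked = diffuse(head, D=0.1, steps=200)
# Blocking (smaller D) leaves a steeper, shorter-range gradient,
# so mid-line cells see much less morphogen.
```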

Keywords: Morphogenesis; Gap junctions; Cell signaling; Diffusion; Biophysics; Planarians; Modeling and simulation; Signal transduction; Reprogramming
Source: BioSystems

Opportunities and challenges of combined effect measures based on prioritized outcomes

2013

Many authors have proposed different approaches to combining multiple endpoints into a univariate outcome measure in the literature. In case of binary or time-to-event variables, composite endpoints, which combine several event types within a single event or time-to-first-event analysis, are often used to assess the overall treatment effect. A main drawback of this approach is that the interpretation of the composite effect can be difficult, as a negative effect in one component can be masked by a positive effect in another. Recently, some authors have proposed more general approaches based on a priority ranking of outcomes, which moreover allow outcome variables of different scale levels to be combined. …
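One of the priority-ranking approaches alluded to here is the win-ratio-style pairwise comparison (Pocock-type): every treatment patient is compared with every control patient on the highest-priority outcome first, falling back to lower-priority outcomes on ties. The sketch below assumes two hypothetical outcomes per patient, (survival time, hospitalisation count), and toy data.

```python
def win_ratio(treatment, control):
    """Win-ratio-style comparison on prioritized outcomes.
    Each patient is (death_time, hosp_count): longer survival wins
    first; fewer hospitalisations breaks ties. Pairs equal on both
    outcomes are ties and count in neither total. Sketch only."""
    wins = losses = 0
    for t_death, t_hosp in treatment:
        for c_death, c_hosp in control:
            if t_death != c_death:       # priority 1: mortality
                wins += t_death > c_death
                losses += t_death < c_death
            elif t_hosp != c_hosp:       # priority 2: hospitalisations
                wins += t_hosp < c_hosp
                losses += t_hosp > c_hosp
    return wins / losses if losses else float("inf")

# Hypothetical (death_time_months, hospitalisations) per patient.
treatment = [(24, 1), (18, 0)]
control = [(24, 2), (12, 3)]
wr = win_ratio(treatment, control)
```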

Keywords: Epidemiology; Clinical trials; Composite endpoints; Prioritized outcomes; Treatment outcome; Ranking; Multiple comparisons; Statistical hypothesis testing; Proportional hazards models
Source: Statistics in Medicine

Sample size planning for survival prediction with focus on high-dimensional data

2011

Sample size planning should reflect the primary objective of a trial. If the primary objective is prediction, the sample size determination should focus on prediction accuracy instead of power. We present formulas for the determination of training set sample size for survival prediction. Sample size is chosen to control the difference between optimal and expected prediction error. Prediction is carried out by Cox proportional hazards models. The general approach considers censoring as well as low-dimensional and high-dimensional explanatory variables. For dimension reduction in the high-dimensional setting, a variable selection step is inserted. If not all informative variables are included…
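The core idea of this abstract, choosing the training set size to control the gap between expected and optimal prediction error, can be illustrated with a deliberately simple stand-in: for estimating a mean with noise variance sigma^2, the expected excess prediction error over the optimal model is sigma^2/n. This toy replaces the paper's Cox-model formulas, and the numbers are hypothetical.

```python
def training_size_for_tolerance(sigma2, tol):
    """Smallest training set size n whose expected excess prediction
    error (sigma2 / n in this toy mean-estimation setting) does not
    exceed tol. A stand-in for the survival-prediction formulas."""
    n = 1
    while sigma2 / n > tol:
        n += 1
    return n

# Hypothetical noise variance and tolerated excess error.
n = training_size_for_tolerance(sigma2=4.0, tol=0.05)
```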

Keywords: Clinical trials; Sample size determination; Survival prediction; High-dimensional data; Proportional hazards model; Dimensionality reduction; Variable selection; Kaplan-Meier estimate; Censoring; Prognosis
Source: Statistics in Medicine