Search results for "determination"

Showing 10 of 718 documents

A Comment on the Coefficient of Determination for Binary Responses

1992

Linear logistic or probit regression can be closely approximated by an unweighted least squares analysis of a regression that is linear in the conditional probabilities, provided that the probabilities of success and failure are not too extreme. It is shown how this restriction on the probabilities translates into a restriction on the range of the coefficient of determination R², so that, as a consequence, R² is not suitable for judging the effectiveness of linear regressions with binary responses, even when an important relation is present.
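
As a quick illustration of the restriction described in this abstract, the following sketch (an invented example, not the paper's analysis) fits an unweighted least squares line to binary responses generated from a strong logistic relation; the R² it reports stays well below 1.

```python
# Illustrative sketch: even a strong logistic relation yields a small R^2
# when the response is binary, because 0/1 outcomes cannot lie on the line.
# All settings here are assumptions made for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))      # strong logistic relation
y = rng.binomial(1, p)                       # binary response

# Unweighted least squares (linear probability model)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(f"R^2 = {r2:.3f}")                     # well below 1 despite the strong effect
```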

Statistics and Probability; Coefficient of determination; General Mathematics; Probit model; Linear regression; Statistics; Conditional probability; Multiple correlation; Statistics, Probability and Uncertainty; Linear discriminant analysis; Logistic regression; Regression; Mathematics; The American Statistician

A computationally fast alternative to cross-validation in penalized Gaussian graphical models

2015

We study the problem of selecting the regularization parameter in penalized Gaussian graphical models. When the goal is to obtain a model with good predictive power, cross-validation is the gold standard. We present a new estimator of the Kullback-Leibler loss in the Gaussian graphical model which provides a computationally fast alternative to cross-validation. The estimator is obtained by approximating leave-one-out cross-validation. Our approach is demonstrated on simulated data sets for various types of graphs. The proposed formula exhibits superior performance, especially in the typical small-sample-size scenario, compared to other available alternatives to cross-validation, such as Akaike's i…
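
For context, the following sketch shows the baseline the paper aims to speed up: selecting the graphical-lasso penalty by K-fold cross-validated Gaussian log-likelihood. The data, penalty grid, and fold count are assumptions; the paper's closed-form Kullback-Leibler loss estimator is not reproduced here.

```python
# Cross-validated penalty selection for a penalized Gaussian graphical model,
# the (slow) procedure the paper's estimator approximates.
import numpy as np
from sklearn.covariance import GraphicalLasso, empirical_covariance, log_likelihood
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))                # small n, as in the paper's setting

alphas = np.logspace(-2, 0, 10)
cv_scores = []
for alpha in alphas:
    fold_scores = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = GraphicalLasso(alpha=alpha, max_iter=200).fit(X[train])
        emp_cov_test = empirical_covariance(X[test])
        # Gaussian log-likelihood of held-out data under the fitted precision
        fold_scores.append(log_likelihood(emp_cov_test, model.precision_))
    cv_scores.append(np.mean(fold_scores))

best_alpha = alphas[int(np.argmax(cv_scores))]
print(f"alpha selected by 5-fold CV: {best_alpha:.3f}")
```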

Statistics and Probability; Gaussian; Information criteria; Cross-validation; Statistics - Methodology (stat.ME); Bayesian information criterion; Statistics; Penalized estimation; Generalized approximate cross-validation; Graphical model; Mathematics; Kullback-Leibler loss; Applied Mathematics; Estimator; Gaussian graphical model; Sample size determination; Modeling and Simulation; Statistics, Probability and Uncertainty; Akaike information criterion; Algorithm

A weighted combined effect measure for the analysis of a composite time-to-first-event endpoint with components of different clinical relevance

2018

Composite endpoints combine several events within a single variable, which increases the number of expected events and is thereby meant to increase power. However, the interpretation of results can be difficult, as the observed effect for the composite does not necessarily reflect the effects for the components, which may differ in magnitude or even point in opposite directions. Moreover, in clinical applications the event types are often of different clinical relevance, which further complicates the interpretation of the composite effect. The common effect measure for composite endpoints is the all-cause hazard ratio, which gives equal weight to all events irrespective of their type …
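
The sketch below illustrates, on invented data with hypothetical column names, how a composite time-to-first-event endpoint is assembled from component events and how the triggering component can be recorded for relevance-weighting schemes; it does not implement the paper's weighted effect measure.

```python
# Assembling a composite time-to-first-event endpoint from component events.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 8
df = pd.DataFrame({
    "t_death":  rng.exponential(10, n),      # hard endpoint
    "t_hosp":   rng.exponential(5, n),       # softer endpoint
    "t_censor": np.full(n, 6.0),             # administrative censoring
})

# Composite: whichever component event comes first, if before censoring.
t_event = df[["t_death", "t_hosp"]].min(axis=1)
df["time"] = np.minimum(t_event, df["t_censor"])
df["event"] = (t_event <= df["t_censor"]).astype(int)
# Component that triggered the composite; this is the information a
# relevance-weighted measure uses, unlike the all-cause hazard ratio,
# which weights all event types equally.
df["first_type"] = np.where(df["event"] == 1,
                            df[["t_death", "t_hosp"]].idxmin(axis=1), "censored")
print(df[["time", "event", "first_type"]])
```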

Statistics and Probability; Hazard; Epidemiology; Endpoint determination; Measure (mathematics); Win ratio; Resampling; Statistics; Time-to-event; Humans; Computer simulation; Relevance weighting; Parametric statistics; Event (probability theory); Mathematics; Proportional hazards models; Clinical trials; Hazard ratio; Composite endpoint; Weighting; Prioritized outcomes; Trials; Data Interpretation, Statistical; Multistate models; Inference; Null hypothesis; Monte Carlo method; Statistics in Medicine

Sample-size calculation and reestimation for a semiparametric analysis of recurrent event data taking robust standard errors into account

2014

In some clinical trials, the repeated occurrence of the same type of event is of primary interest, and the Andersen-Gill model has been proposed to analyze recurrent event data. Existing methods to determine the required sample size for an Andersen-Gill analysis rely on the strong assumption that all heterogeneity in the individuals' risk of experiencing events can be explained by known covariates. In practice, however, this assumption might be violated due to unknown or unmeasured covariates affecting the time to events. In these situations, the use of a robust variance estimate in calculating the test statistic is highly recommended to maintain the type I error rate, but this will in turn decr…
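
The paper's exact sample-size correction is not given in this snippet; as a generic illustration of the idea, the sketch below computes a sample size under a simple two-group log-rate comparison and inflates it by an assumed robust-to-model-based variance ratio.

```python
# Generic illustration (not the paper's formula): sample size per group under
# a Poisson-type two-group comparison, inflated by a variance ratio that
# stands in for unexplained heterogeneity. All numbers are assumptions.
from math import ceil
from scipy.stats import norm

alpha, power = 0.05, 0.80
beta_effect = 0.3          # assumed log rate ratio between groups
mu_events = 2.0            # assumed expected events per subject
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

# With 1:1 allocation, Var(log rate ratio) ~ (1/mu + 1/mu) / n per group,
# giving the usual z-formula for n per group:
n_model = (z / beta_effect) ** 2 * (2 / mu_events)

vif = 1.4                  # assumed robust / model-based variance ratio
n_robust = ceil(n_model * vif)
print(f"model-based n = {ceil(n_model)}, robust-adjusted n = {n_robust}")
```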

Statistics and Probability; Inflation; Computer science; Robust statistics; General Medicine; Variance; Sample size determination; Statistics; Covariate; Test statistic; Econometrics; Statistics, Probability and Uncertainty; Type I and type II errors; Event (probability theory); Biometrical Journal

Sample Size Requirements of a Mixture Analysis Method with Applications in Systematic Biology

1999

The available information on sample size requirements of mixture analysis methods is insufficient to permit a precise evaluation of the potential problems facing practical applications of mixture analysis. We use results from Monte Carlo simulation to assess the sample size requirements of a simple mixture analysis method under conditions relevant to biological applications. The mixture model used includes two univariate normal components with equal variances, but the researcher is assumed not to know that the variances are equal. The method relies on the EM algorithm to compute the maximum likelihood estimates of the mixture parameters, and the likelihood r…
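
A minimal sketch of the kind of estimator studied: the EM algorithm for a two-component univariate normal mixture (textbook updates on simulated data, not the authors' code).

```python
# EM for a two-component univariate normal mixture.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(3, 1, 150)])

# Crude initial values
pi, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
for _ in range(200):
    # E-step: posterior probability that each point belongs to component 1
    d1 = pi * norm.pdf(x, mu1, s1)
    d2 = (1 - pi) * norm.pdf(x, mu2, s2)
    w = d1 / (d1 + d2)
    # M-step: weighted maximum-likelihood updates
    pi = w.mean()
    mu1, mu2 = np.average(x, weights=w), np.average(x, weights=1 - w)
    s1 = np.sqrt(np.average((x - mu1) ** 2, weights=w))
    s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - w))

print(f"pi={pi:.2f} mu=({mu1:.2f}, {mu2:.2f}) sd=({s1:.2f}, {s2:.2f})")
```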

Statistics and Probability; Mathematical optimization; General Immunology and Microbiology; Applied Mathematics; Monte Carlo method; Univariate; General Medicine; Mixture model; General Biochemistry, Genetics and Molecular Biology; Sample size determination; Modeling and Simulation; Likelihood-ratio test; Expectation–maximization algorithm; General Agricultural and Biological Sciences; Analysis method; Mathematics; Journal of Theoretical Biology

Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous…

2012

In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedica…
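
The sketch below illustrates the FP1 building block of the multivariable fractional polynomial (MFP) procedure: choosing one power transform from the standard FP set by least squares. Data and the true functional form are invented; MFP itself additionally selects variables and FP degrees.

```python
# FP1 power selection: pick the single best power transform by least squares.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0.1, 5, 300)
y = np.log(x) + rng.normal(0, 0.3, 300)      # true functional form: log

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]     # FP convention: 0 means log(x)

def fp_transform(x, p):
    return np.log(x) if p == 0 else x ** p

def sse(x_t, y):
    # Residual sum of squares of a simple linear fit on the transformed x
    X = np.column_stack([np.ones_like(x_t), x_t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

best_p = min(powers, key=lambda p: sse(fp_transform(x, p), y))
print(f"selected FP1 power: {best_p}")        # expect 0 (i.e., log)
```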

Statistics and Probability; Models, Statistical; Epidemiology; Model selection; Multivariable calculus; Explained variation; Spline (mathematics); Logistic models; Sample size determination; Sample size; Multivariate analysis; Linear regression; Statistics; Covariate; Humans; Computer simulation; Categorical variable; Mathematics; Statistics in Medicine

Power and Type I Error of the Mean and Covariance Structure Analysis Model for Detecting Differential Item Functioning in Graded Response Items.

2016

In this simulation study, we investigate the power and Type I error rate of a procedure based on the mean and covariance structure analysis (MACS) model in detecting differential item functioning (DIF) of graded response items with five response categories. The following factors were manipulated: type of DIF (uniform and non-uniform), DIF magnitude (low, medium and large), equality/inequality of latent trait distributions, sample size (100, 200, 400, and 800) and equality or inequality of the sample sizes across groups. The simulated test was made up of 10 items, of which only 1 contained DIF. One hundred replications were generated for each simulated condition. Results indicate that the MA…
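
The Monte Carlo logic of such a study can be skeletonized as below: simulate data with no DIF, apply a test, and estimate the Type I error as the rejection proportion. The MACS-based DIF test is stubbed with a placeholder two-sample test purely for illustration.

```python
# Monte Carlo skeleton for estimating the Type I error rate of a DIF test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_reps, alpha, n_per_group = 100, 0.05, 400

def dif_test_pvalue(group_a, group_b):
    # Placeholder for the MACS-based DIF test: any valid test of equal
    # item behaviour across groups would slot in here.
    return stats.ttest_ind(group_a, group_b).pvalue

rejections = 0
for _ in range(n_reps):
    a = rng.normal(0, 1, n_per_group)        # both groups drawn from the same
    b = rng.normal(0, 1, n_per_group)        # distribution: the null holds
    rejections += dif_test_pvalue(a, b) < alpha

print(f"estimated Type I error: {rejections / n_reps:.3f}")  # expect ~0.05
```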

Statistics and Probability; Multivariate analysis; Experimental and Cognitive Psychology; General Medicine; Covariance; Differential item functioning; Power; Distribution (mathematics); Arts and Humanities (miscellaneous); Sample size determination; Statistics; Item response theory; Type I and type II errors; Mathematics; Multivariate Behavioral Research

Adaptive designs with correlated test statistics

2009

In clinical trials, the collected observations, such as clustered data or repeated measurements, are often correlated. As a consequence, test statistics in a multistage design are correlated. Adaptive designs were originally developed for independent test statistics. We present a general framework for two-stage adaptive designs with correlated test statistics. We show that the significance level for the Bauer-Köhne design is inflated for positively correlated test statistics from a bivariate normal distribution. The decision boundary for the second stage can be modified so that the type I error is controlled. This general concept extends to other adaptive designs. In order to use these de…
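
The inflation described here is easy to reproduce by simulation. The sketch below applies Fisher's product criterion (the combination rule underlying the Bauer-Köhne design, shown without early-stopping boundaries for simplicity) to correlated stage-wise statistics under the null; all settings are assumptions.

```python
# Type I error of Fisher's product combination test under correlated
# stage-wise z-statistics; positive correlation inflates the rejection rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_sim, alpha = 200_000, 0.025
crit = stats.chi2.ppf(1 - alpha, df=4)       # threshold for -2*ln(p1*p2)

for rho in (0.0, 0.3, 0.6):
    cov = [[1, rho], [rho, 1]]
    z = rng.multivariate_normal([0, 0], cov, size=n_sim)   # null hypothesis
    p = stats.norm.sf(z)                      # one-sided stage-wise p-values
    reject = -2 * np.log(p[:, 0] * p[:, 1]) > crit
    print(f"rho={rho:.1f}: rejection rate {reject.mean():.4f} (nominal {alpha})")
```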

Statistics and Probability; Optimal design; Clinical Trials as Topic; Biometry; Models, Statistical; Epidemiology; Covariance matrix; Multivariate normal distribution; Wald test; Generalized linear mixed model; Exact test; Sample size determination; Statistics; Linear models; Humans; Mathematics; Statistical hypothesis testing; Statistics in Medicine

Statistical inference as a decision problem: the choice of sample size

1997

Statistics and Probability; Predictive inference; Sampling distribution; Frequentist inference; Sample size determination; Statistics; Econometrics; Fiducial inference; Statistical inference; Influence diagram; Statistical theory; Mathematics; Journal of the Royal Statistical Society: Series D (The Statistician)

Performance of adaptive sample size adjustment with respect to stopping criteria and time of interim analysis

2006

The benefit of adjusting the sample size in clinical trials on the basis of treatment effects observed in interim analysis has been the subject of several recent papers. Different conclusions were drawn about the usefulness of this approach for gaining power or saving sample size, because of differences in trial design and setting. We examined the benefit of sample size adjustment in relation to trial design parameters such as 'time of interim analysis' and 'choice of stopping criteria'. We compared the adaptive weighted inverse normal method with classical group sequential methods for the most common and for optimal stopping criteria in early, half-time and late interim analyses. We found …
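
For reference, the sketch below shows the weighted inverse normal combination this abstract refers to, with assumed stage sizes and p-values: prespecified weights keep the combination test valid even when the second-stage sample size is adapted.

```python
# Weighted inverse normal combination of two stage-wise p-values.
import numpy as np
from scipy import stats

alpha = 0.025
n1, n2_planned = 50, 50                      # assumed planned stage sizes
w1 = np.sqrt(n1 / (n1 + n2_planned))         # prespecified weights
w2 = np.sqrt(n2_planned / (n1 + n2_planned)) # (w1**2 + w2**2 == 1)

p1, p2 = 0.10, 0.03                          # assumed stage-wise p-values
z = w1 * stats.norm.isf(p1) + w2 * stats.norm.isf(p2)
print("reject H0" if z > stats.norm.isf(alpha) else "retain H0")
```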

Statistics and Probability; Research design; Clinical Trials as Topic; Epidemiology; Computer science; Interim analysis; Clinical trial; Normal-inverse Gaussian distribution; Sequential method; Sample size determination; Sample size; Interim; Statistics; Econometrics; Humans; Optimal stopping; Statistics in Medicine