
AUTHOR

Katharina Ingel

Showing 3 related works from this author.

Adaptive trial design: a general methodology for censored time to event data.

2008

Adaptive designs allow a clinical trial design to be changed according to interim findings without inflating the type I error. The Inverse Normal method can be viewed as an adaptive generalization of classical group sequential designs. Its use for censored survival data had been demonstrated only for the logrank statistic; the logrank statistic, however, is inefficient in the presence of nuisance covariates affecting survival. We demonstrate how the Inverse Normal method can be applied to Cox regression analysis. The required independence between the test statistics of the different stages of the trial can be obtained by two different approaches. One is using the indepen…
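As a brief illustrative sketch (not the paper's exact formulation), the Inverse Normal method combines independent one-sided stage-wise p-values through standard normal quantiles with pre-specified weights whose squares sum to one. The p-values and weights below are arbitrary example inputs:

```python
from math import sqrt
from statistics import NormalDist

def inverse_normal_combination(p_values, weights):
    """Combine independent one-sided stage-wise p-values via the
    inverse normal method: z = sum_k w_k * Phi^{-1}(1 - p_k),
    where the pre-specified weights satisfy sum_k w_k^2 = 1."""
    nd = NormalDist()
    z = sum(w * nd.inv_cdf(1.0 - p) for p, w in zip(p_values, weights))
    return z, 1.0 - nd.cdf(z)  # combined z statistic and one-sided p-value

# Two equally weighted stages with example stage-wise p-values:
z, p = inverse_normal_combination([0.04, 0.03], [sqrt(0.5), sqrt(0.5)])
```

Because the stage-wise statistics are independent, the combined z is standard normal under the null, so the usual critical values apply regardless of data-dependent design changes between stages.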

Keywords: Clinical Trials as Topic; Proportional hazards models; Normal distribution; Regression analysis; General Medicine; Survival analysis; Research design; Data interpretation, statistical; Statistics; Covariate; Econometrics; Medicine; Humans; Pharmacology (medical); Computer simulation; Independence (probability theory); Statistical hypothesis testing; Type I and type II errors; Randomized Controlled Trials as Topic. Journal: Contemporary Clinical Trials.

Sample size in cluster-randomized trials with time to event as the primary endpoint

2011

In cluster-randomized trials, groups of individuals (clusters) are randomized to the treatments or interventions to be compared. In many of these trials, the primary objective is to compare the time to an event between the randomized groups, and the shared frailty model fits such clustered time-to-event data well. Members of the same cluster tend to be more similar than members of different clusters, which induces within-cluster correlation. As this correlation affects the power of a trial to detect intervention effects, the clustered design has to be accounted for when planning the sample size. In this publication, we derive a sample size formula for clustered time-to-event data with constant marginal baseline hazards…
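As a hedged sketch of the planning logic (a textbook design-effect approximation, not the formula derived in the paper), one can start from Schoenfeld's required number of events for a Cox/log-rank comparison and inflate it by a generic design effect driven by cluster size and an assumed intracluster correlation:

```python
from math import ceil, log
from statistics import NormalDist

def events_schoenfeld(hr, alpha=0.05, power=0.8):
    """Schoenfeld's required number of events for a two-sided
    log-rank / Cox test with 1:1 allocation and hazard ratio hr."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return ceil(4 * z * z / log(hr) ** 2)

def clustered_events(hr, cluster_size, icc, alpha=0.05, power=0.8):
    """Inflate the independent-data event count by the generic design
    effect 1 + (m - 1) * ICC; this is a crude approximation standing
    in for the paper's frailty-based sample size formula."""
    design_effect = 1 + (cluster_size - 1) * icc
    return ceil(events_schoenfeld(hr, alpha, power) * design_effect)

# Example: hazard ratio 0.7, clusters of 10, assumed ICC of 0.05.
required = clustered_events(0.7, cluster_size=10, icc=0.05)
```

The design effect grows with both cluster size and correlation, which is why ignoring clustering at the planning stage yields underpowered trials.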

Keywords: Statistics and Probability; Time factors; Endpoint determination; Substance-related disorders; Epidemiology; Psychological interventions; Biostatistics; Time-to-treatment; Correlation; Random allocation; Randomized controlled trial; Statistics; Clinical endpoint; Econometrics; Cluster analysis; Humans; Poisson distribution; Baseline; Randomized Controlled Trials as Topic; Mathematics; Event (probability theory); Likelihood functions; Models, statistical; Sample size determination; Sample size; Regression analysis; Substance abuse treatment centers. Journal: Statistics in Medicine.

Sample-size calculation and reestimation for a semiparametric analysis of recurrent event data taking robust standard errors into account

2014

In some clinical trials, the repeated occurrence of the same type of event is of primary interest, and the Andersen-Gill model has been proposed to analyze such recurrent event data. Existing methods to determine the required sample size for an Andersen-Gill analysis rely on the strong assumption that all heterogeneity in the individuals' risk of experiencing events can be explained by known covariates. In practice, however, this assumption may be violated due to unknown or unmeasured covariates affecting the time to events. In these situations, the use of a robust variance estimate in calculating the test statistic is highly recommended to maintain the type I error rate, but this will in turn decr…
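As a rough illustrative sketch (the variance ratio and the Schoenfeld-type base count are assumptions standing in for the paper's actual derivation), the required number of recurrent events can be inflated by the ratio of the robust (sandwich) variance to the naive model-based variance:

```python
from math import ceil, log
from statistics import NormalDist

def ag_events(rate_ratio, alpha=0.05, power=0.8, variance_ratio=1.0):
    """Schoenfeld-type required event count for a two-sided test of the
    treatment effect in an Andersen-Gill model with 1:1 allocation,
    inflated by an assumed ratio of the robust (sandwich) variance to
    the model-based variance. variance_ratio = 1 recovers the standard
    count; values > 1 reflect unexplained heterogeneity in event risk."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    base = 4 * z * z / log(rate_ratio) ** 2
    return ceil(base * variance_ratio)

# Example: rate ratio 0.7, robust variance assumed 30% larger than naive.
n_naive = ag_events(0.7)
n_robust = ag_events(0.7, variance_ratio=1.3)
```

In a reestimation setting, the variance ratio would be estimated from accumulating data rather than fixed in advance, which is the motivation for the sample-size reestimation procedure the abstract describes.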

Keywords: Statistics and Probability; Inflation; Computer science; Robust statistics; General Medicine; Variance; Sample size determination; Statistics; Covariate; Test statistic; Econometrics; Statistics, Probability and Uncertainty; Type I and type II errors; Event (probability theory). Journal: Biometrical Journal.