Search results for "Hypothesis"

Showing 10 of 426 documents

Opportunities and challenges of combined effect measures based on prioritized outcomes

2013

Many approaches for combining multiple endpoints into a univariate outcome measure have been proposed in the literature. In the case of binary or time-to-event variables, composite endpoints, which combine several event types within a single event or time-to-first-event analysis, are often used to assess the overall treatment effect. A main drawback of this approach is that the composite effect can be difficult to interpret, as a negative effect in one component can be masked by a positive effect in another. Recently, some authors have proposed more general approaches based on a priority ranking of outcomes, which moreover allow outcome variables of different scale levels to be combined. …
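
Approaches of this kind are exemplified by the win ratio: every treatment patient is compared with every control patient on the highest-priority outcome first, with ties passed down to lower-priority outcomes. The sketch below is a minimal illustration only; the patient tuples, event times, the "later event is better" convention, and the omission of censoring are all assumptions, not the methods proposed in the paper.

```python
from itertools import product

# Hypothetical patients: (time_to_death, time_to_hospitalization), with
# death prioritized over hospitalization. None = event not observed.
treatment = [(None, 5.0), (None, None), (8.0, 3.0)]
control = [(4.0, 2.0), (None, 1.0), (6.0, 4.0)]

def compare(t, c):
    """Compare one treatment/control pair on the higher-priority outcome
    first; fall back to the lower-priority outcome on a tie.
    Returns +1 (treatment wins), -1 (loss), or 0 (tie)."""
    for t_ev, c_ev in zip(t, c):
        if t_ev is None and c_ev is not None:
            return 1          # treatment avoided the event: win
        if t_ev is not None and c_ev is None:
            return -1         # control avoided the event: loss
        if t_ev is not None and c_ev is not None and t_ev != c_ev:
            return 1 if t_ev > c_ev else -1   # later event = better
    return 0                  # tied on every prioritized outcome

pairs = list(product(treatment, control))
wins = sum(compare(t, c) == 1 for t, c in pairs)
losses = sum(compare(t, c) == -1 for t, c in pairs)
win_ratio = wins / losses
print(wins, losses, win_ratio)
```

Note how the priority ranking lets a binary "event avoided" comparison and a time-to-event comparison coexist in one statistic.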

Statistics and Probability; Clinical Trials as Topic; Epidemiology; Univariate; Outcome (game theory); Treatment Outcome; Ranking; Scale (social sciences); Component (UML); Outcome Assessment, Health Care; Multiple comparisons problem; Humans; Computer Simulation; Data mining; Proportional Hazards Models; Mathematics; Statistical hypothesis testing; Event (probability theory); Statistics in Medicine

A Unified Approach to Likelihood Inference on Stochastic Orderings in a Nonparametric Context

1998

For data in a two-way contingency table with ordered margins, we consider various hypotheses of stochastic order among the conditional distributions given by rows and show that each is equivalent to requiring that an invertible transformation of the vectors of conditional row probabilities satisfies an appropriate set of linear inequalities. This leads to the construction of a general algorithm for maximum likelihood estimation under multinomial sampling and provides a simple framework for deriving the asymptotic distribution of log-likelihood ratio tests. The usual stochastic ordering and the so-called uniform and likelihood ratio orderings are considered as special cases. I…
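
For intuition, the usual stochastic ordering between two rows amounts to one conditional CDF lying everywhere below (or equal to) the other. A minimal sketch, with hypothetical probability vectors over ordered categories:

```python
def cdf(p):
    """Cumulative sums of a probability vector."""
    out, total = [], 0.0
    for x in p:
        total += x
        out.append(total)
    return out

def stochastically_larger(p, q, tol=1e-12):
    """True if distribution p is stochastically larger than q over ordered
    categories, i.e. the CDF of p lies below (or equal to) the CDF of q."""
    return all(fp <= fq + tol for fp, fq in zip(cdf(p), cdf(q)))

row1 = [0.1, 0.3, 0.6]   # mass shifted toward higher categories
row2 = [0.3, 0.4, 0.3]
print(stochastically_larger(row1, row2))
print(stochastically_larger(row2, row1))
```

The paper's contribution is to express such order constraints as linear inequalities on a transformation of the row probabilities; this check is only the definitional starting point.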

Statistics and Probability; Combinatorics; Independent and identically distributed random variables; Linear inequality; Transformation (function); Likelihood-ratio test; Asymptotic distribution; Applied mathematics; Conditional probability distribution; Statistics, Probability and Uncertainty; Stochastic ordering; Statistical hypothesis testing; Mathematics; Journal of the American Statistical Association

Test Procedures in Configural Frequency Analysis (CFA) Controlling the Local and Multiple Level

1987

The test statistics used until now in CFA have been developed under the assumption of the overall hypothesis of total independence. Therefore, the multiple test procedures based on these statistics are really only different tests of the overall hypothesis. If one wishes to test a specific cell hypothesis, one should assume only that this hypothesis is true, not the whole overall hypothesis. Such cell tests can then be used as elements of a multiple test procedure. In this paper it is shown that the usual test procedures can be very anticonservative (except in the two-dimensional and, for some procedures, the three-dimensional case), and corrected test procedures are developed. Further…
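
A single-cell ("type") test of the kind discussed can be sketched as a one-sided binomial test of an observed cell count against the probability expected under total independence. The 2x2 table below is hypothetical, and this is only a minimal illustration of the idea, not the corrected procedures the paper develops:

```python
from math import comb

# Hypothetical 2x2 table of configuration counts
table = [[30, 10],
         [10, 50]]
n = sum(sum(row) for row in table)
row_sums = [sum(row) for row in table]
col_sums = [sum(col) for col in zip(*table)]

# Probability of landing in cell (0, 0) expected under total independence
p_exp = (row_sums[0] / n) * (col_sums[0] / n)

# One-sided binomial test: is cell (0, 0) observed more often than expected?
observed = table[0][0]
pval = sum(comb(n, k) * p_exp**k * (1 - p_exp)**(n - k)
           for k in range(observed, n + 1))
print(p_exp, pval)
```

In two dimensions this exact cell test behaves well; the paper's point is that in higher dimensions the usual CFA procedures need correction.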

Statistics and Probability; Contingency table; General Medicine; Test (assessment); Statistics; Portmanteau test; Econometrics; Chi-square test; Test statistic; Statistics, Probability and Uncertainty; Configural frequency analysis; Independence (probability theory); Mathematics; Statistical hypothesis testing; Biometrical Journal

Testing Goodness-of-Fit with the Kernel Density Estimator: GoFKernel

2015

To assess the goodness of fit of a sample to a continuous random distribution, the most popular approach has been to measure, using either the L∞- or the L2-norm, the distance between the null-hypothesis cumulative distribution function and the empirical cumulative distribution function. Indeed, as far as I know, almost all the tests currently available in R related to this issue (ks.test in package stats, ad.test in package ADGofTest, and ad.test, ad2.test, ks.test, v.test and w2.test in package truncgof) use one of these two distances on cumulative distribution functions. This paper (i) proposes dgeometric.test, a new implementation of the test that measures the discrepancy between a s…
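
The L∞ distance mentioned here is the Kolmogorov–Smirnov statistic. A minimal hand-rolled sketch (the sample and the uniform null CDF are hypothetical, and this computes only the statistic, not its p-value):

```python
# Kolmogorov-Smirnov statistic: the L-infinity distance between the
# empirical CDF of a sample and a null CDF, computed by hand.
def ks_statistic(sample, null_cdf):
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = null_cdf(x)
        # The ECDF jumps from i/n to (i+1)/n at x; check both sides
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

# Uniform(0, 1) null CDF and a hypothetical, clearly non-uniform sample
uniform_cdf = lambda x: min(max(x, 0.0), 1.0)
sample = [0.05, 0.1, 0.15, 0.2, 0.25]
print(ks_statistic(sample, uniform_cdf))
```

The paper's dgeometric.test departs from this CDF-distance family by comparing density estimates instead.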

Statistics and Probability; Cumulative distribution function; Kernel density estimation; Probability density function; Kolmogorov–Smirnov test; Empirical distribution function; Goodness of fit; Statistics; Statistics, Probability and Uncertainty; Null hypothesis; Random variable; Software; Mathematics; Journal of Statistical Software

Olley–Pakes productivity decomposition: computation and inference

2016

We show how a moment-based estimation procedure can be used to compute point estimates and standard errors for the two components of the widely used Olley–Pakes decomposition of aggregate (weighted average) productivity. When applied to business-level microdata, the procedure allows for autocovariance- and heteroscedasticity-robust inference and hypothesis testing about, for example, the coevolution of the productivity components in different groups of firms. We provide an application to Finnish firm-level data and find that formal statistical inference casts doubt on the conclusions that one might draw on the basis of a visual inspection of the components of the decomposition.
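
The decomposition itself splits share-weighted aggregate productivity into an unweighted mean plus a covariance-type term between shares and productivity. A sketch with hypothetical firm-level data (the paper's contribution is the inference for these two components, not the identity itself):

```python
# Olley-Pakes decomposition of share-weighted aggregate productivity:
#   sum_i s_i * phi_i = mean(phi) + sum_i (s_i - mean(s)) * (phi_i - mean(phi))
# The identity holds exactly when the shares sum to one.
shares = [0.5, 0.3, 0.2]          # hypothetical market shares
productivity = [2.0, 1.0, 1.5]    # hypothetical firm productivities

n = len(shares)
s_bar = sum(shares) / n
phi_bar = sum(productivity) / n

aggregate = sum(s * p for s, p in zip(shares, productivity))
covariance_term = sum((s - s_bar) * (p - phi_bar)
                      for s, p in zip(shares, productivity))

print(aggregate, phi_bar, covariance_term)
assert abs(aggregate - (phi_bar + covariance_term)) < 1e-12
```

A positive covariance term says that more productive firms hold larger shares, i.e. market shares are allocated "efficiently" in this sense.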

Statistics and Probability; Economics and Econometrics; Heteroscedasticity; Productivity; Inference; Frequentist inference; Statistics; Statistical inference; Econometrics; Point estimation; Mathematics; Statistical hypothesis testing; Generalized method of moments; Autocovariance; Weighted average; Fiducial inference; Statistics, Probability and Uncertainty; Social Sciences (miscellaneous); Journal of the Royal Statistical Society Series A: Statistics in Society

Improvements and Modifications of Tarone's Multiple Test Procedure for Discrete Data

1998

Tarone (1990, Biometrics 46, 515-522) proposed a multiple test procedure for discrete test statistics that improves on the usual Bonferroni procedure. However, Tarone's procedure is not monotone in the predetermined multiple level α. Roth (1998, Journal of Statistical Planning and Inference, in press) developed a monotone version of Tarone's procedure. We present a similar procedure that is both monotone and an improvement of Tarone's proposal. Based on this extension, we derive a step-down procedure that is a corresponding improvement of Holm's (1979, Scandinavian Journal of Statistics 6, 65-70) sequentially rejective procedure. It is shown how adjusted p-values can be computed for the …
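
Holm's sequentially rejective procedure, the baseline that the derived step-down procedure improves on, can be sketched as follows (the p-values are hypothetical; the discreteness refinements of Tarone and of this paper are not shown):

```python
def holm_rejections(p_values, alpha=0.05):
    """Holm's (1979) sequentially rejective step-down procedure.
    Returns the set of indices of rejected hypotheses."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = set()
    for step, i in enumerate(order):
        # Compare the (step+1)-th smallest p-value with alpha / (m - step)
        if p_values[i] <= alpha / (m - step):
            rejected.add(i)
        else:
            break   # stop at the first non-rejection
    return rejected

# Hypothetical p-values for four hypotheses
print(holm_rejections([0.001, 0.04, 0.03, 0.005]))
```

The denominators shrink from m down to 1 as hypotheses are rejected, which is why Holm uniformly improves on plain Bonferroni while still controlling the familywise error rate.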

Statistics and Probability; General Immunology and Microbiology; Biometrics; Computer science; Test procedures; Applied Mathematics; Inference; General Medicine; Extension (predicate logic); General Biochemistry, Genetics and Molecular Biology; Bonferroni correction; Monotone polygon; General Agricultural and Biological Sciences; Algorithm; Statistical hypothesis testing; Biometrics

Global and multiple test procedures using ordered p-values—a review

2004

This paper reviews global and multiple tests for the combination of n hypotheses using the ordered p-values of the n individual tests. In 1987, Röhmel and Streitberg presented a general method to construct global level-α tests based on ordered p-values when there exists no prior knowledge regarding the joint distribution of the corresponding test statistics. In the case of independent test statistics, construction of global tests is available by means of recursive formulae presented by Bicher (1989), Kornatz (1994), and Finner and Roters (1994). Multiple test procedures can be developed by applying the closed test principle using these global tests as building blocks. Liu (1996) proposed represe…
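
One familiar global test built from ordered p-values is Simes' (1986) test, shown here purely to illustrate the idea of a global p-value computed from the order statistics (it is not necessarily one of the constructions this review focuses on):

```python
def simes_global_p(p_values):
    """Simes' (1986) global test: with ordered p-values
    p_(1) <= ... <= p_(n), the global p-value is min over i
    of n * p_(i) / i."""
    p_sorted = sorted(p_values)
    n = len(p_sorted)
    return min(n * p / (i + 1) for i, p in enumerate(p_sorted))

# Hypothetical individual p-values
print(simes_global_p([0.01, 0.02, 0.3]))
```

Plugged into the closed test principle, a global test like this yields a multiple test procedure for the individual hypotheses, which is the construction pattern the review describes.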

Statistics and Probability; General method; Test procedures; Joint probability distribution; Existential quantification; Statistics; Applied mathematics; Statistics, Probability and Uncertainty; Construct (philosophy); Statistical hypothesis testing; Mathematics; Dynamic testing; Test (assessment); Statistical Papers

Extending conventional priors for testing general hypotheses in linear models

2007

We consider that observations come from a general normal linear model and that it is desirable to test a simplifying null hypothesis about the parameters. We approach this problem from an objective Bayesian, model-selection perspective. Crucial ingredients for this approach are 'proper objective priors' to be used for deriving the Bayes factors. Jeffreys-Zellner-Siow priors have good properties for testing null hypotheses defined by specific values of the parameters in full-rank linear models. We extend these priors to deal with general hypotheses in general linear models, not necessarily of full rank. The resulting priors, which we call 'conventional priors', are expressed as a generalizat…
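
As a toy illustration of Bayes factors under a heavy-tailed prior in this spirit, consider a normal mean with known variance, a point null, and a Cauchy prior on the mean. Everything here is an assumption of the sketch, not the paper's conventional priors: the data, the prior scale, the truncation of the integration range, and the trapezoidal rule for the marginal likelihood.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def cauchy_pdf(x, scale):
    return 1.0 / (pi * scale * (1 + (x / scale) ** 2))

def bayes_factor_01(xbar, n, sigma=1.0, grid=20001, half_width=20.0):
    """Toy Bayes factor for H0: mu = 0 vs H1: mu ~ Cauchy(0, sigma),
    for normal data with known sigma; the marginal likelihood under H1
    is approximated by the trapezoidal rule on [-half_width, half_width]."""
    sd = sigma / sqrt(n)            # standard error of the sample mean
    m0 = normal_pdf(xbar, 0.0, sd)  # likelihood of xbar under the null
    h = 2 * half_width / (grid - 1)
    mus = [-half_width + i * h for i in range(grid)]
    vals = [normal_pdf(xbar, mu, sd) * cauchy_pdf(mu, sigma) for mu in mus]
    m1 = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return m0 / m1

# Data near the null favour H0 (BF01 > 1); data far from it favour H1
print(bayes_factor_01(xbar=0.0, n=30))
print(bayes_factor_01(xbar=1.0, n=30))
```

The heavy Cauchy tails are what keep the Bayes factor well behaved as the data move far from the null, the property that motivates priors of the Jeffreys-Zellner-Siow type.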

Statistics and Probability; Generalization; Applied Mathematics; General Mathematics; Model selection; Bayesian probability; Linear model; Bayes factor; Agricultural and Biological Sciences (miscellaneous); Prior probability; Econometrics; Statistics, Probability and Uncertainty; General Agricultural and Biological Sciences; Null hypothesis; Statistical hypothesis testing; Mathematics; Biometrika

A weighted combined effect measure for the analysis of a composite time-to-first-event endpoint with components of different clinical relevance

2018

Composite endpoints combine several events within a single variable, which increases the number of expected events and is thereby meant to increase the power. However, the interpretation of results can be difficult as the observed effect for the composite does not necessarily reflect the effects for the components, which may be of different magnitude or even point in adverse directions. Moreover, in clinical applications, the event types are often of different clinical relevance, which also complicates the interpretation of the composite effect. The common effect measure for composite endpoints is the all-cause hazard ratio, which gives equal weight to all events irrespective of their type …
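
Constructing the composite time-to-first-event from the component event times makes the interpretational issue concrete: the all-cause analysis records only the earliest event, regardless of its type. A sketch with hypothetical patient data (censoring is simplified to "event not observed"):

```python
# Composite time-to-first-event: each patient contributes the earliest of
# their component event times, irrespective of event type. Hypothetical
# tuples: (time_to_death, time_to_hospitalization); inf = not observed.
inf = float('inf')
patients = [(10.0, 4.0), (inf, 7.0), (inf, inf), (6.0, inf)]

composite = [min(times) for times in patients]
event_types = [times.index(min(times)) if min(times) < inf else None
               for times in patients]

print(composite)
print(event_types)
```

Here three of the four observed first events are hospitalizations (type 1), yet the all-cause hazard ratio would weight them exactly like deaths (type 0), which is the motivation for the relevance-weighted effect measure.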

Statistics and Probability; Hazard (logic); Epidemiology; Endpoint Determination; Measure (mathematics); Win ratio; Resampling; Statistics; Time-to-event; Humans; Computer Simulation; Relevance weighting; Parametric statistics; Event (probability theory); Mathematics; Proportional Hazards Models; Clinical trials; Hazard ratio; Composite endpoint; Weighting; Prioritized outcomes; Trials; Data Interpretation, Statistical; Multistate models; Inference; Null hypothesis; Monte Carlo Method; Statistics in Medicine

Generalization of Jeffreys Divergence-Based Priors for Bayesian Hypothesis Testing

2008

We introduce objective proper prior distributions for hypothesis testing and model selection based on measures of divergence between the competing models; we call them divergence-based (DB) priors. DB priors have simple forms and desirable properties, like information (finite-sample) consistency, and are often similar to other existing proposals, like intrinsic priors. Moreover, in normal linear model scenarios, they reproduce the Jeffreys–Zellner–Siow priors exactly. Most importantly, in challenging scenarios such as irregular models and mixture models, DB priors are well defined and very reasonable, whereas alternative proposals are not. We derive approximations to the DB priors as w…

Statistics and Probability; Kullback–Leibler divergence; Markov chain; Markov chain Monte Carlo; Bayes factor; Mixture model; Prior probability; Econometrics; Applied mathematics; Statistics, Probability and Uncertainty; Divergence (statistics); Statistical hypothesis testing; Mathematics; Journal of the Royal Statistical Society Series B: Statistical Methodology