Search results for "hypothesis testing"

Showing 10 of 124 documents

Testing for local structure in spatiotemporal point pattern data

2017

The detection of clustering structure in a point pattern is one of the main focuses of attention in spatiotemporal data mining. Indeed, statistical tools for detecting clusters and identifying the individual events that belong to them are welcome in epidemiology and seismology. Local second-order characteristics provide information on how an event relates to nearby events. In this work, we extend local indicators of spatial association (known as LISA functions) to the spatiotemporal context (where they will then be called LISTA functions). These functions are then used to build local tests of clustering to analyse differences in local spatiotemporal structures. We present a simulation stud…
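The permutation-based local tests rest on a generic resampling idea: compare an observed statistic against its distribution under random relabelling. A minimal sketch of that idea, with a hypothetical group-difference statistic standing in for the LISTA-based one used in the paper:

```python
import random

def permutation_pvalue(statistic, labels, data, n_perm=999, seed=0):
    """Monte Carlo permutation p-value: rank the observed statistic
    within its distribution under random relabelling of the events."""
    rng = random.Random(seed)
    observed = statistic(labels, data)
    extreme = 1  # the observed value counts as one of the permutations
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if statistic(shuffled, data) >= observed:
            extreme += 1
    return extreme / (n_perm + 1)

def mean_diff(labels, data):
    """Illustrative statistic: difference of group means (label 1 vs 0)."""
    g1 = [d for l, d in zip(labels, data) if l == 1]
    g0 = [d for l, d in zip(labels, data) if l == 0]
    return sum(g1) / len(g1) - sum(g0) / len(g0)
```

With strongly separated groups the p-value is small; with constant data every permutation ties the observed statistic and the p-value is 1.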

Settore SECS-S/01 - Statistica; earthquakes; hypothesis testing; local indicators of spatiotemporal association; permutation-based tests; second-order product density function
researchProduct

Bayesian analysis and design for comparison of effect-sizes

2002

Comparison of effect-sizes, or more generally, of non-centrality parameters of non-central t distributions, is a common problem, especially in meta-analysis. The usual simplifying assumptions of either identical or non-related effect-sizes are often too restrictive to be appropriate. In this paper, the effect-sizes are modeled as random effects with t distributions. Bayesian hierarchical models are used both to design and analyze experiments. The main goal is to compare effect-sizes. Sample sizes are chosen so as to make accurate inferences about the difference of effect-sizes and also to settle convincingly the test of equality of effect-sizes, if such is the goal.
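As a rough illustration of comparing two effect-sizes, one can approximate the two posteriors as independent normals and compute the posterior probability that one exceeds the other. The paper itself uses t-distributed random effects in a hierarchical model, so this is only a crude sketch:

```python
from statistics import NormalDist

def prob_first_larger(m1, s1, m2, s2):
    """P(delta1 > delta2) when the two effect-size posteriors are
    approximated as independent normals N(m_i, s_i^2): the difference
    is then N(m1 - m2, s1^2 + s2^2)."""
    diff = NormalDist(m1 - m2, (s1 ** 2 + s2 ** 2) ** 0.5)
    return 1.0 - diff.cdf(0.0)
```

Identical posteriors give probability 0.5; well-separated posteriors push it toward 1.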

Statistics and Probability; Applied Mathematics; Bayesian probability; Posterior probability; Bayes factor; Random effects model; Block design; Sample size determination; Prior probability; Statistics; Statistics Probability and Uncertainty; Algorithm; Statistical hypothesis testing; Mathematics; Journal of Statistical Planning and Inference

The size of Simes’ global test for discrete test statistics

1999

Abstract To increase the power of the Bonferroni–Holm procedure, several modified Bonferroni procedures have been proposed (for example, Hochberg, 1988. Biometrika 75, 800–802; Hommel, 1988. Biometrika 75, 383–386), which are based on Simes’ global test (Simes, 1986. Biometrika 73, 751–754). Several simulation studies, which in particular considered multinormal test statistics, have suggested that the Simes test is a level α test. However, an exact proof exists for only a few situations, one of them assuming independence of the test statistics. We studied the behaviour of Simes’ test for discrete test statistics. Due to discreteness one can expect more conservative decisions, whereas depe…
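Simes' global test itself is short to state in code. The following sketch rejects the overall null when any ordered p-value p_(i) is at most i*alpha/n:

```python
def simes_reject(p_values, alpha=0.05):
    """Simes (1986) global test: reject the intersection null if any
    sorted p-value p_(i) falls at or below i * alpha / n."""
    p = sorted(p_values)
    n = len(p)
    return any(p[i] <= (i + 1) * alpha / n for i in range(n))
```

Note that `[0.02, 0.024, 0.3, 0.9]` is rejected by Simes (0.024 <= 2 * 0.05 / 4) although no single p-value clears the Bonferroni bound alpha/4 = 0.0125, which is precisely the power gain the modified procedures exploit.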

Statistics and Probability; Applied Mathematics; Multivariate normal distribution; Nominal level; Exact test; Bonferroni correction; Statistics; Test statistic; Sign test; Statistics Probability and Uncertainty; Mathematics; Statistical hypothesis testing; Journal of Statistical Planning and Inference

Tests against stationary and explosive alternatives in vector autoregressive models

2008

The article proposes new tests for the number of unit roots in vector autoregressive models based on the eigenvalues of the companion matrix. Both stationary and explosive alternatives are considered. The limiting distributions of the test statistics depend only on the number of unit roots. Size and power are investigated, and it is found that the new test against some stationary alternatives compares favourably with the widely used likelihood ratio test for the cointegrating rank. Power is considerably higher against explosive than against stationary alternatives. Some empirical examples show how to use the new tests with real data.
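The companion-matrix eigenvalues on which the tests are based can be computed directly. A sketch for a VAR(p) in k variables with coefficient matrices A_1, ..., A_p (the usual stacking convention is assumed here): roots of modulus one indicate unit roots, and modulus above one indicates explosive behaviour.

```python
import numpy as np

def companion_eigenvalues(coef_matrices):
    """Eigenvalues of the companion matrix of a VAR(p):
    top block row [A_1 ... A_p], identity blocks on the subdiagonal."""
    p = len(coef_matrices)
    k = coef_matrices[0].shape[0]
    top = np.hstack(coef_matrices)
    if p == 1:
        return np.linalg.eigvals(top)
    bottom = np.hstack([np.eye(k * (p - 1)), np.zeros((k * (p - 1), k))])
    return np.linalg.eigvals(np.vstack([top, bottom]))
```

For example, a bivariate VAR(1) with coefficient matrix diag(1, 0.5) has one unit root, and the scalar AR(2) with coefficients 1.5 and -0.5 factors as (1 - L)(1 - 0.5L), again one unit root.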

Statistics and Probability; Autoregressive model; Explosive material; Rank (linear algebra); Applied Mathematics; Likelihood-ratio test; Companion matrix; Econometrics; Unit root; Statistics Probability and Uncertainty; Eigenvalues and eigenvectors; Mathematics; Statistical hypothesis testing; Journal of Time Series Analysis

A Bayesian analysis of classical hypothesis testing

1980

The procedure of maximizing the missing information is applied to derive reference posterior probabilities for null hypotheses. The results shed further light on Lindley’s paradox and suggest that a Bayesian interpretation of classical hypothesis testing is possible by providing a one-to-one approximate relationship between significance levels and posterior probabilities.
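Lindley's paradox can be reproduced numerically: holding the significance level fixed (say z = 1.96, p ≈ 0.05) while n grows, the Bayes factor in favour of the point null grows without bound. A sketch under a normal sampling model with a N(0, tau^2) prior on the mean under H1 (this toy setup is illustrative, not the paper's reference-posterior derivation):

```python
import math

def bf01(z, n, tau=1.0, sigma=1.0):
    """Bayes factor BF01 for H0: mu = 0 vs H1: mu ~ N(0, tau^2), when
    the sample mean sits exactly z standard errors from zero.
    Derived from the two marginal densities of the sample mean:
    N(0, sigma^2/n) under H0 and N(0, tau^2 + sigma^2/n) under H1."""
    r = n * tau ** 2 / sigma ** 2   # prior-to-sampling variance ratio
    return math.sqrt(1 + r) * math.exp(-0.5 * z ** 2 * r / (1 + r))
```

At z = 1.96 the Bayes factor mildly favours H1 for small n, yet overwhelmingly favours H0 for very large n, even though the classical test rejects at the 5% level in both cases.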

Statistics and Probability; Bayes factor; Bayesian inference; Statistics::Computation; Bayesian statistics; Statistics; Econometrics; Bayesian experimental design; Statistics::Methodology; Statistics Probability and Uncertainty; Bayesian linear regression; Lindley's paradox; Bayesian average; Mathematics; Statistical hypothesis testing; Trabajos de Estadistica Y de Investigacion Operativa

A Log-Rank Test for Equivalence of Two Survivor Functions

1993

We consider a hypothesis testing problem in which the alternative states that the vertical distance between the underlying survivor functions nowhere exceeds some prespecified bound delta > 0. Under the assumption of proportional hazards, this hypothesis is shown to be (logically) equivalent to the statement |beta| < log(1 + epsilon), where beta denotes the regression coefficient associated with the treatment group indicator, and epsilon is a simple strictly increasing function of delta. The testing procedure proposed consists of carrying out, in terms of the standard Cox likelihood estimator of beta, the uniformly most powerful level alpha test for a suitable interval hypothesis about…
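The paper derives the exact uniformly most powerful test; a much cruder confidence-interval (TOST-style) approximation of the same interval hypothesis, taking the equivalence margin epsilon as given rather than deriving it from delta, might look like:

```python
import math
from statistics import NormalDist

def cox_equivalence(beta_hat, se, eps, alpha=0.05):
    """TOST-style sketch: declare equivalence when the (1 - 2*alpha)
    confidence interval for the Cox regression coefficient beta lies
    entirely inside (-log(1 + eps), log(1 + eps))."""
    bound = math.log(1 + eps)
    z = NormalDist().inv_cdf(1 - alpha)
    return (beta_hat - z * se > -bound) and (beta_hat + z * se < bound)
```

A small estimated coefficient with a tight standard error passes; a coefficient whose interval crosses the margin does not.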

Statistics and Probability; Biometry; Gaussian; General Biochemistry Genetics and Molecular Biology; Combinatorics; Neoplasms; Linear regression; Statistics; Chi-square test; Humans; Computer Simulation; Cerebellar Neoplasms; Child; Equivalence (measure theory); Proportional Hazards Models; Statistical hypothesis testing; Mathematics; Clinical Trials as Topic; General Immunology and Microbiology; Applied Mathematics; Estimator; General Medicine; Survival Analysis; Log-rank test; Linear Models; General Agricultural and Biological Sciences; Medulloblastoma; Quantile; Biometrics

Opportunities and challenges of combined effect measures based on prioritized outcomes

2013

Many authors have proposed different approaches in the literature for combining multiple endpoints into a univariate outcome measure. In the case of binary or time-to-event variables, composite endpoints, which combine several event types within a single event or time-to-first-event analysis, are often used to assess the overall treatment effect. A main drawback of this approach is that the composite effect can be difficult to interpret, as a negative effect in one component can be masked by a positive effect in another. Recently, some authors have proposed more general approaches based on a priority ranking of outcomes, which moreover allow outcome variables of different scale levels to be combined. …
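One such priority-based approach, generalized pairwise comparisons in the spirit of the win ratio, can be sketched as follows. Subjects are represented here as hypothetical tuples of outcomes ordered from highest to lowest priority, with larger values taken as better; each treated/control pair is decided on the first outcome at which the two subjects differ:

```python
def pairwise_wins(treated, control):
    """Count wins and losses over all treated x control pairs,
    deciding each pair on the highest-priority non-tied outcome.
    Pairs tied on every outcome contribute to neither count."""
    wins = losses = 0
    for t in treated:
        for c in control:
            for x, y in zip(t, c):
                if x > y:
                    wins += 1
                    break
                if x < y:
                    losses += 1
                    break
    return wins, losses
```

The ratio wins/losses is the win ratio; returning the raw counts avoids division by zero and keeps ties visible.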

Statistics and Probability; Clinical Trials as Topic; Epidemiology; Univariate; Outcome (game theory); Treatment Outcome; Ranking; Scale (social sciences); Component (UML); Outcome Assessment Health Care; Multiple comparisons problem; Humans; Computer Simulation; Data mining; Proportional Hazards Models; Mathematics; Statistical hypothesis testing; Event (probability theory); Statistics in Medicine

A Unified Approach to Likelihood Inference on Stochastic Orderings in a Nonparametric Context

1998

Abstract For data in a two-way contingency table with ordered margins, we consider various hypotheses of stochastic orders among the conditional distributions considered by rows and show that each is equivalent to requiring that an invertible transformation of the vectors of conditional row probabilities satisfies an appropriate set of linear inequalities. This leads to the construction of a general algorithm for maximum likelihood estimation under multinomial sampling and provides a simple framework for deriving the asymptotic distribution of log-likelihood ratio tests. The usual stochastic ordering and the so-called uniform and likelihood ratio orderings are considered as special cases. I…
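The usual stochastic ordering between two samples can be checked empirically by comparing their empirical CDFs pointwise (a naive check, not the paper's likelihood-based machinery). A minimal sketch:

```python
def stochastically_larger(x, y):
    """Usual stochastic order: x >=_st y iff the empirical CDF of x
    lies at or below that of y at every observed point."""
    grid = sorted(set(x) | set(y))
    fx = [sum(v <= t for v in x) / len(x) for t in grid]
    fy = [sum(v <= t for v in y) / len(y) for t in grid]
    return all(a <= b for a, b in zip(fx, fy))
```

Shifting a sample upward makes it stochastically larger; the relation is not symmetric.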

Statistics and Probability; Combinatorics; Independent and identically distributed random variables; Linear inequality; Transformation (function); Likelihood-ratio test; Asymptotic distribution; Applied mathematics; Conditional probability distribution; Statistics Probability and Uncertainty; Stochastic ordering; Statistical hypothesis testing; Mathematics; Journal of the American Statistical Association

Test Procedures in Configural Frequency Analysis (CFA) Controlling the Local and Multiple Level

1987

The test statistics used until now in CFA have been developed under the assumption of the overall hypothesis of total independence. Therefore, the multiple test procedures based on these statistics are really only different tests of the overall hypothesis. If one wishes to test a specific cell hypothesis, one should assume only that this hypothesis is true, not the whole overall hypothesis. Such cell tests can then be used as elements of a multiple test procedure. In this paper it is shown that the usual test procedures can be very anticonservative (except for the two-dimensional and, for some procedures, the three-dimensional case), and corrected test procedures are developed. Further…
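A cell-wise test of the kind described, assuming only the hypothesis for the single cell, can be sketched as an exact one-sided binomial test (a Bonferroni–Holm correction across cells would then control the multiple level; the exact procedures corrected in the paper are more involved):

```python
from math import comb

def cfa_cell_pvalue(observed, n, p0):
    """Exact one-sided binomial test for a single CFA cell ('type'):
    P(X >= observed) when the cell count X is Binomial(n, p0)
    under the cell-wise null."""
    return sum(comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
               for k in range(observed, n + 1))
```

For example, observing 5 hits in 10 trials under p0 = 0.5 gives P(X >= 5) = 638/1024.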

Statistics and Probability; Contingency table; General Medicine; Test (assessment); Statistics; Portmanteau test; Econometrics; Chi-square test; Test statistic; Statistics Probability and Uncertainty; Configural frequency analysis; Independence (probability theory); Mathematics; Statistical hypothesis testing; Biometrical Journal

Olley–Pakes productivity decomposition: computation and inference

2016

Summary We show how a moment-based estimation procedure can be used to compute point estimates and standard errors for the two components of the widely used Olley–Pakes decomposition of aggregate (weighted average) productivity. When applied to business level microdata, the procedure allows for autocovariance and heteroscedasticity robust inference and hypothesis testing about, for example, the coevolution of the productivity components in different groups of firms. We provide an application to Finnish firm level data and find that formal statistical inference casts doubt on the conclusions that one might draw on the basis of a visual inspection of the components of the decomposition.
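The decomposition itself is a one-line identity: share-weighted aggregate productivity equals the unweighted mean plus the sum of cross products of share and productivity deviations. A sketch, with shares assumed to sum to one (the paper's contribution, the moment-based standard errors, is not reproduced here):

```python
def olley_pakes(shares, omega):
    """Olley-Pakes decomposition: sum_i s_i * omega_i
    = mean(omega) + sum_i (s_i - mean(s)) * (omega_i - mean(omega))."""
    n = len(shares)
    mean_o = sum(omega) / n
    mean_s = sum(shares) / n
    cov_term = sum((s - mean_s) * (o - mean_o)
                   for s, o in zip(shares, omega))
    return mean_o, cov_term
```

A positive covariance term indicates that market share is concentrated on the more productive firms.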

Statistics and Probability; Economics and Econometrics; Heteroscedasticity; productivity; Inference; Frequentist inference; Statistics; Statistical inference; Econometrics; Point estimation; Mathematics; Statistical hypothesis testing; generalized method of moments; Autocovariance; weighted average; Fiducial inference; Statistics Probability and Uncertainty; Social Sciences (miscellaneous); Journal of the Royal Statistical Society Series A: Statistics in Society