Search results for "STING"

Showing 10 of 3756 documents

Tests against stationary and explosive alternatives in vector autoregressive models

2008

The article proposes new tests for the number of unit roots in vector autoregressive models based on the eigenvalues of the companion matrix. Both stationary and explosive alternatives are considered. The limiting distributions of the test statistics depend only on the number of unit roots. Size and power are investigated, and it is found that the new test against some stationary alternatives compares favourably with the widely used likelihood ratio test for the cointegrating rank. The powers are markedly higher against explosive than against stationary alternatives. Some empirical examples are provided to show how to use the new tests with real data.
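
The companion-matrix idea the abstract relies on can be illustrated directly. Below is a minimal Python sketch, not the paper's test statistics: for a VAR(p) the lag coefficient matrices are stacked into the companion matrix, and the moduli of its eigenvalues indicate unit roots (near one) or explosive roots (above one). The coefficient values are hypothetical.

```python
# Illustrative only: companion matrix of a VAR(p) and its eigenvalue moduli.
import numpy as np

def companion_matrix(coefs):
    """coefs: list of the p (k x k) lag coefficient matrices A_1, ..., A_p."""
    p, k = len(coefs), coefs[0].shape[0]
    top = np.hstack(coefs)                      # [A_1 A_2 ... A_p], shape (k, k*p)
    bottom = np.eye(k * (p - 1), k * p)         # identity blocks that shift the lags down
    return np.vstack([top, bottom]) if p > 1 else top

# Hypothetical bivariate VAR(1) with one root at 1 (unit root) and one at 0.5
A1 = np.array([[1.0, 0.0],
               [0.0, 0.5]])
eigvals = np.linalg.eigvals(companion_matrix([A1]))
print(np.sort(np.abs(eigvals))[::-1])           # moduli: [1.0, 0.5] -> one unit root
```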

Statistics and Probability; Autoregressive model; Explosive material; Rank (linear algebra); Applied Mathematics; Likelihood-ratio test; Companion matrix; Econometrics; Unit root; Statistics Probability and Uncertainty; Eigenvalues and eigenvectors; Mathematics; Statistical hypothesis testing; Journal of Time Series Analysis

A Bayesian analysis of classical hypothesis testing

1980

The procedure of maximizing the missing information is applied to derive reference posterior probabilities for null hypotheses. The results shed further light on Lindley’s paradox and suggest that a Bayesian interpretation of classical hypothesis testing is possible by providing a one-to-one approximate relationship between significance levels and posterior probabilities.
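
The tension between significance levels and posterior probabilities that Lindley's paradox describes is easy to reproduce numerically. The sketch below uses the textbook normal-mean setup with a conjugate N(0, tau^2) prior rather than the reference-posterior construction of the paper; the data are chosen so that the classical two-sided test always sits exactly at p = 0.05.

```python
# Lindley's paradox in the simplest normal-mean setting: H0: mu = 0 vs H1: mu ~ N(0, tau^2).
import numpy as np
from scipy.stats import norm

sigma, tau, z = 1.0, 1.0, 1.96           # assumed known sd, prior sd, fixed z-statistic
for n in (10, 100, 1_000, 10_000, 100_000):
    xbar = z * sigma / np.sqrt(n)         # data that are "just significant" at the 5% level
    m0 = norm.pdf(xbar, 0, sigma / np.sqrt(n))              # marginal likelihood under H0
    m1 = norm.pdf(xbar, 0, np.sqrt(tau**2 + sigma**2 / n))  # marginal likelihood under H1
    post_h0 = m0 / (m0 + m1)              # posterior P(H0 | data) with prior odds 1:1
    print(f"n={n:>6}  p-value=0.05  P(H0 | data)={post_h0:.3f}")
```

As n grows the posterior probability of H0 approaches one even though the p-value stays fixed at 0.05, which is the divergence the abstract's one-to-one approximate relationship is meant to repair.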

Statistics and Probability; Bayes factor; Bayesian inference; Statistics::Computation; Bayesian statistics; Statistics; Econometrics; Bayesian experimental design; Statistics::Methodology; Statistics Probability and Uncertainty; Bayesian linear regression; Lindley's paradox; Bayesian average; Mathematics; Statistical hypothesis testing; Trabajos de Estadistica Y de Investigacion Operativa

A Log-Rank Test for Equivalence of Two Survivor Functions

1993

We consider a hypothesis testing problem in which the alternative states that the vertical distance between the underlying survivor functions nowhere exceeds some prespecified bound delta > 0. Under the assumption of proportional hazards, this hypothesis is shown to be (logically) equivalent to the statement |beta| ≤ log(1 + epsilon), where beta denotes the regression coefficient associated with the treatment group indicator, and epsilon is a simple strictly increasing function of delta. The testing procedure proposed consists of carrying out, in terms of the estimate of beta (i.e., the standard Cox likelihood estimator of beta), the uniformly most powerful level alpha test for a suitable interval hypothesis about…
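
For illustration, a hedged sketch of the asymptotic form of such an equivalence test is given below: treating the Cox estimate divided by its standard error as approximately Gaussian, non-equivalence is rejected when the squared statistic falls below the alpha-quantile of a noncentral chi-square with one degree of freedom (a Wellek-type interval test). The numerical inputs are hypothetical and the exact procedure of the paper is not reproduced.

```python
# Asymptotic equivalence test on the log hazard-ratio scale (illustrative sketch).
from scipy.stats import ncx2

def logrank_equivalence_test(beta_hat, se, eps, alpha=0.05):
    """eps is the equivalence margin on the log hazard-ratio scale, eps = log(1 + epsilon)."""
    stat = (beta_hat / se) ** 2
    crit = ncx2.ppf(alpha, 1, (eps / se) ** 2)   # noncentral chi-square(1) critical value
    return stat, crit, stat < crit                # True -> conclude equivalence

# Hypothetical numbers: hazard ratio estimate exp(0.05), standard error 0.08, margin log(1.25)
print(logrank_equivalence_test(beta_hat=0.05, se=0.08, eps=0.2231, alpha=0.05))
```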

Statistics and Probability; Biometry; Gaussian; General Biochemistry Genetics and Molecular Biology; Combinatorics; symbols.namesake; Neoplasms; Linear regression; Statistics; Chi-square test; Humans; Computer Simulation; Cerebellar Neoplasms; Child; Equivalence (measure theory); Proportional Hazards Models; Statistical hypothesis testing; Mathematics; Clinical Trials as Topic; General Immunology and Microbiology; Applied Mathematics; Estimator; General Medicine; Survival Analysis; Log-rank test; Linear Models; symbols; General Agricultural and Biological Sciences; Medulloblastoma; Quantile; Biometrics

Cluster-Localized Sparse Logistic Regression for SNP Data

2012

The task of analyzing high-dimensional single nucleotide polymorphism (SNP) data in a case-control design using multivariable techniques has only recently been tackled. While many available approaches investigate only main effects in a high-dimensional setting, we propose a more flexible technique, cluster-localized regression (CLR), based on localized logistic regression models, that allows different SNPs to have an effect for different groups of individuals. Separate multivariable regression models are fitted for the different groups of individuals by incorporating weights into componentwise boosting, which provides simultaneous variable selection, hence sparse fits. For model fitting, th…
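
The core mechanism, weighted componentwise boosting with built-in variable selection, can be sketched in a few lines. The toy implementation below is only illustrative of that general idea, not the CLR algorithm itself: in every iteration each single covariate is fitted to the weighted negative log-likelihood gradient, the best-fitting component receives a small coefficient update, and observation weights confine the fit to one group (cluster) of individuals. All data and parameter values are hypothetical.

```python
# Toy weighted componentwise gradient boosting with logistic loss (illustrative only).
import numpy as np

def componentwise_logit_boost(X, y, weights, n_iter=200, nu=0.1):
    n, p = X.shape
    w = weights / weights.sum()
    beta, intercept = np.zeros(p), 0.0
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-(intercept + X @ beta)))
        u = y - prob                                 # negative gradient of the log-loss
        intercept += nu * np.sum(w * u)              # small offset update
        gamma = (X.T @ (w * u)) / ((X ** 2).T @ w + 1e-12)   # weighted LS fit per component
        sse = (w[:, None] * (u[:, None] - X * gamma) ** 2).sum(axis=0)
        j = int(np.argmin(sse))                      # best single covariate this round
        beta[j] += nu * gamma[j]                     # sparse update: only one coefficient moves
    return intercept, beta

# Hypothetical SNP-like data; weights emphasize one "cluster" of individuals
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = (X[:, 3] - 0.8 * X[:, 7] + rng.standard_normal(200) > 0).astype(float)
wts = np.where(rng.random(200) < 0.5, 1.0, 0.1)
b0, b = componentwise_logit_boost(X, y, wts)
print(np.flatnonzero(np.abs(b) > 0.05))              # typically picks out columns 3 and 7
```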

Statistics and Probability; Boosting (machine learning); Computer science; Multivariable calculus; Computational Biology; High-Throughput Nucleotide Sequencing; Feature selection; Regression analysis; Models Theoretical; Logistic regression; computer.software_genre; Polymorphism Single Nucleotide; Regression; Computational Mathematics; Logistic Models; Data Interpretation Statistical; Genetics; Cluster Analysis; Humans; Data mining; Cluster analysis; Molecular Biology; Unit-weighted regression; computer; Genome-Wide Association Study; Statistical Applications in Genetics and Molecular Biology

Opportunities and challenges of combined effect measures based on prioritized outcomes

2013

In the literature, many authors have proposed different approaches to combining multiple endpoints into a univariate outcome measure. In the case of binary or time-to-event variables, composite endpoints, which combine several event types within a single event or time-to-first-event analysis, are often used to assess the overall treatment effect. A main drawback of this approach is that the interpretation of the composite effect can be difficult, as a negative effect in one component can be masked by a positive effect in another. Recently, some authors proposed more general approaches based on a priority ranking of outcomes, which moreover allow outcome variables of different scale levels to be combined. …
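
A hedged toy sketch of the priority-ranking idea, in the spirit of generalized pairwise comparisons rather than any specific proposal reviewed in the paper, is given below: each treatment-control pair of patients is compared on the highest-priority outcome first, and lower-priority outcomes are consulted only when that comparison is a tie. The data are hypothetical.

```python
# Toy prioritized pairwise comparisons (illustrative of the general idea only).
import numpy as np

def prioritized_wins(treat, control, directions):
    """treat, control: arrays of shape (n_t, k) and (n_c, k), outcomes ordered by priority.
    directions: +1 if larger is better, -1 if smaller is better, per outcome."""
    wins = losses = ties = 0
    for t in treat:
        for c in control:
            result = 0
            for j, d in enumerate(directions):        # walk outcomes in priority order
                diff = d * (t[j] - c[j])
                if diff > 0:
                    result = 1
                    break
                if diff < 0:
                    result = -1
                    break
            wins += result == 1
            losses += result == -1
            ties += result == 0
    return wins, losses, ties

# Hypothetical data: outcome 1 = survival time (larger better), outcome 2 = symptom score (smaller better)
treat = np.array([[24, 2], [18, 1], [30, 3]])
control = np.array([[20, 3], [18, 4], [15, 2]])
print(prioritized_wins(treat, control, directions=(+1, -1)))   # (wins, losses, ties)
```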

Statistics and Probability; Clinical Trials as Topic; Epidemiology; Univariate; computer.software_genre; Outcome (game theory); Treatment Outcome; Ranking; Scale (social sciences); Component (UML); Outcome Assessment Health Care; Multiple comparisons problem; Humans; Computer Simulation; Data mining; computer; Proportional Hazards Models; Mathematics; Statistical hypothesis testing; Event (probability theory); Statistics in Medicine

A Unified Approach to Likelihood Inference on Stochastic Orderings in a Nonparametric Context

1998

For data in a two-way contingency table with ordered margins, we consider various hypotheses of stochastic orders among the conditional distributions considered by rows and show that each is equivalent to requiring that an invertible transformation of the vectors of conditional row probabilities satisfies an appropriate set of linear inequalities. This leads to the construction of a general algorithm for maximum likelihood estimation under multinomial sampling and provides a simple framework for deriving the asymptotic distribution of log-likelihood ratio tests. The usual stochastic ordering and the so-called uniform and likelihood ratio orderings are considered as special cases. I…
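
The linear-inequality representation is easy to see in the simplest special case. The sketch below only checks whether the usual stochastic ordering holds between two given conditional row distributions, which amounts to a set of linear inequalities in the cumulative row probabilities; the paper's general constrained maximum likelihood algorithm is not reproduced, and the table is hypothetical.

```python
# Usual stochastic ordering between two rows as a set of linear inequalities.
import numpy as np

def usual_stochastic_order_holds(p_row1, p_row2):
    """Row 2 is stochastically >= row 1 iff every partial (cumulative) sum of row 2's
    probabilities is <= the corresponding partial sum of row 1's."""
    F1, F2 = np.cumsum(p_row1), np.cumsum(p_row2)
    return bool(np.all(F2[:-1] <= F1[:-1] + 1e-12))      # last sums are both 1

# Hypothetical 2 x 4 table of conditional row probabilities over ordered categories
row1 = np.array([0.40, 0.30, 0.20, 0.10])
row2 = np.array([0.20, 0.30, 0.30, 0.20])
print(usual_stochastic_order_holds(row1, row2))          # True: row 2 puts mass higher up
```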

Statistics and Probability; Combinatorics; Independent and identically distributed random variables; Linear inequality; Transformation (function); Likelihood-ratio test; Asymptotic distribution; Applied mathematics; Conditional probability distribution; Statistics Probability and Uncertainty; Stochastic ordering; Statistical hypothesis testing; Mathematics; Journal of the American Statistical Association

Test Procedures in Configural Frequency Analysis (CFA) Controlling the Local and Multiple Level

1987

The test statistics used until now in the CFA have been developed under the assumption of the overall hypothesis of total independence. Therefore, the multiple test procedures based on these statistics are really only different tests of the overall hypothesis. If one wishes to test a specific cell hypothesis, one should assume only that this hypothesis is true, not the whole overall hypothesis. Such cell tests can then be used as elements of a multiple test procedure. In this paper it is shown that the usual test procedures can be very anticonservative (except in the two-dimensional and, for some procedures, the three-dimensional case), and corrected test procedures are developed. Further…
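
As a point of reference, the "usual" kind of cell test that the paper starts from can be sketched as follows: each cell's observed count is compared with its expected count under total independence by an exact binomial test, with a Bonferroni adjustment across cells. The code illustrates this baseline, not the corrected procedures derived in the paper; the table is hypothetical.

```python
# Baseline CFA-style cell tests under the overall independence hypothesis (illustrative).
import numpy as np
from scipy.stats import binomtest

def cfa_cell_tests(observed, alpha=0.05):
    """observed: 2-D contingency table; returns per-cell two-sided binomial p-values."""
    n = observed.sum()
    row = observed.sum(axis=1) / n
    col = observed.sum(axis=0) / n
    pvals = np.empty_like(observed, dtype=float)
    for i in range(observed.shape[0]):
        for j in range(observed.shape[1]):
            p_ij = row[i] * col[j]                     # cell probability under independence
            pvals[i, j] = binomtest(int(observed[i, j]), int(n), p_ij).pvalue
    m = observed.size
    return pvals, pvals <= alpha / m                   # Bonferroni-adjusted decisions

obs = np.array([[30, 10],
                [12, 28]])                             # hypothetical 2 x 2 table
p, sig = cfa_cell_tests(obs)
print(np.round(p, 4)); print(sig)
```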

Statistics and Probability; Contingency table; General Medicine; Test (assessment); Statistics; Portmanteau test; Econometrics; Chi-square test; Test statistic; Statistics Probability and Uncertainty; Configural frequency analysis; Independence (probability theory); Mathematics; Statistical hypothesis testing; Biometrical Journal

Comments on “Unobservable Selection and Coefficient Stability”

2019

We establish a link between the approaches proposed by Oster (2019) and Pei, Pischke, and Schwandt (2019), which contribute to the development of inferential procedures for causal effects in the challenging and empirically relevant situation where the unknown data-generation process is not included in the set of models considered by the investigator. We use the general misspecification framework recently proposed by De Luca, Magnus, and Peracchi (2018) to analyze and understand the implications of the restrictions imposed by the two approaches.

Statistics and Probability; Economics and Econometrics; Testing; Settore SECS-P/05 - Econometria; OLS; Inconsistency; 01 natural sciences; Unobservable; 010104 statistics & probability; Bias; Stability theory; 0502 economics and business; Econometrics; 0101 mathematics; Selection (genetic algorithm); 050205 econometrics; 05 social sciences; Causal effect; Confounding; Mean squared error (MSE); Misspecification; Statistics Probability and Uncertainty; Psychology; Social Sciences (miscellaneous); Journal of Business and Economic Statistics

Olley–Pakes productivity decomposition: computation and inference

2016

We show how a moment-based estimation procedure can be used to compute point estimates and standard errors for the two components of the widely used Olley–Pakes decomposition of aggregate (weighted average) productivity. When applied to business-level microdata, the procedure allows for autocovariance- and heteroscedasticity-robust inference and hypothesis testing about, for example, the coevolution of the productivity components in different groups of firms. We provide an application to Finnish firm-level data and find that formal statistical inference casts doubt on the conclusions that one might draw on the basis of a visual inspection of the components of the decomposition.
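
The decomposition being estimated is itself a simple identity: aggregate (share-weighted) productivity equals the unweighted mean plus the cross-sectional covariance between shares and productivity. A minimal sketch for a hypothetical cross-section of firms follows; the moment-based standard errors proposed in the paper are not shown.

```python
# Olley-Pakes decomposition of aggregate productivity: aggregate = mean + covariance term.
import numpy as np

def olley_pakes_decomposition(productivity, shares):
    shares = shares / shares.sum()                       # normalize weights to sum to one
    aggregate = np.sum(shares * productivity)            # weighted average productivity
    unweighted_mean = productivity.mean()
    covariance_term = np.sum((shares - shares.mean()) * (productivity - unweighted_mean))
    return aggregate, unweighted_mean, covariance_term   # aggregate == mean + covariance

rng = np.random.default_rng(1)
prod = rng.normal(1.0, 0.3, size=500)                    # hypothetical firm productivities
size = np.exp(0.8 * prod + rng.normal(0, 0.5, 500))      # larger firms tend to be more productive
agg, mean, cov = olley_pakes_decomposition(prod, size)
print(f"aggregate={agg:.3f}  unweighted mean={mean:.3f}  covariance term={cov:.3f}")
```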

Statistics and Probability; Economics and Econometrics; Heteroscedasticity; productivity; tuottavuus (productivity); Inference; Frequentist inference; 0502 economics and business; Statistics; Statistical inference; Econometrics; Point estimation; 050207 economics; 050205 econometrics; Mathematics; Statistical hypothesis testing; päättely (inference); ta112; inference; ta511; 05 social sciences; generalized method of moments; Autocovariance; weighted average; Fiducial inference; Statistics Probability and Uncertainty; Social Sciences (miscellaneous); Journal of the Royal Statistical Society Series A: Statistics in Society

Improvements and Modifications of Tarone's Multiple Test Procedure for Discrete Data

1998

Tarone (1990, Biometrics 46, 515-522) proposed a multiple test procedure for discrete test statistics improving the usual Bonferroni procedure. However, Tarone's procedure is not monotone in the predetermined multiple level alpha. Roth (1998, Journal of Statistical Planning and Inference, in press) developed a monotone version of Tarone's procedure. We present a similar procedure that is both monotone and an improvement of Tarone's proposal. Based on this extension, we derive a step-down procedure that is a corresponding improvement of Holm's (1979, Scandinavian Journal of Statistics 6, 65-70) sequentially rejective procedure. It is shown how adjusted p-values can be computed for the …
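
For context, the original Tarone idea that these modifications build on can be sketched briefly: each discrete test has a smallest attainable p-value, and hypotheses that cannot possibly reach significance at level alpha/K need not be counted in the Bonferroni bound. The example numbers below are hypothetical, and the monotone and step-down improvements derived in the paper are not reproduced.

```python
# Sketch of Tarone's (1990) Bonferroni improvement for discrete test statistics.
import numpy as np

def tarone_rejections(p_values, min_attainable, alpha=0.05):
    p = np.asarray(p_values)
    p_min = np.asarray(min_attainable)
    for K in range(1, len(p) + 1):
        if np.sum(p_min <= alpha / K) <= K:            # smallest K with at most K "eligible" tests
            break
    return p <= alpha / K                              # Bonferroni at level alpha/K, not alpha/m

# Hypothetical example: 6 discrete tests, only two of which can attain small p-values
p_obs = [0.004, 0.20, 0.02, 0.50, 0.66, 0.12]
p_min = [0.001, 0.10, 0.01, 0.30, 0.25, 0.06]
print(tarone_rejections(p_obs, p_min, alpha=0.05))
# Rejects the first and third tests at alpha/2 = 0.025; plain Bonferroni at 0.05/6 would miss the third.
```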

Statistics and Probability; General Immunology and Microbiology; Biometrics; Computer science; Test procedures; Applied Mathematics; Inference; General Medicine; Extension (predicate logic); General Biochemistry Genetics and Molecular Biology; symbols.namesake; Bonferroni correction; Monotone polygon; symbols; General Agricultural and Biological Sciences; Algorithm; Statistical hypothesis testing; Biometrics