Search results for "Null hypothesis"
Showing 10 of 39 documents
Backbone of credit relationships in the Japanese credit market
2016
We detect the backbone of the weighted bipartite network of Japanese credit market relationships. The backbone is detected by adapting a general method used in the investigation of weighted networks. With this approach we detect a backbone that is statistically validated against a null hypothesis of uniform diversification of loans for banks and firms. Our investigation is performed year by year and covers more than thirty years, from 1980 to 2011. We relate some of our findings to economic events that have characterized the Japanese credit market in recent years. The study of the time evolution of the backbone allows us to detect changes that occurred in network size,…
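As a rough illustration of what such a link-level validation involves, the sketch below tests a single bank-firm loan count against a uniform-diversification null with a hypergeometric test; the names and numbers are purely illustrative, and the paper's exact null model and multiple-testing correction may differ.

    # Sketch: validate one bank-firm link against a uniform-diversification null.
    # Under the null, a bank's n_bank loans fall uniformly among the n_total
    # loans in the market, so the count shared with a firm holding n_firm loans
    # follows a hypergeometric distribution.
    from scipy.stats import hypergeom

    def link_pvalue(w, n_bank, n_firm, n_total):
        """P(at least w loans between this bank and this firm under the null)."""
        return hypergeom.sf(w - 1, n_total, n_firm, n_bank)

    # A link enters the backbone if its p-value survives a multiple-testing
    # correction over all tested bank-firm pairs (Bonferroni here).
    p = link_pvalue(w=12, n_bank=150, n_firm=40, n_total=5000)
    keep = p < 0.01 / 6000   # 6000 = hypothetical number of tested links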
Predicting the Significance of Necessity
2019
With Necessary Condition Analysis (NCA), a necessity effect is estimated by calculating the amount of empty space in the upper-left corner of a plot of a predictor X against an outcome Y, and recently a method for testing the statistical significance of the necessity effect through permutation has been proposed. In the present simulation study, this method was found to yield significant results even with a very weak true population necessity effect, i.e., to exhibit high power, unless the sample size is very small. However, in some situations the significance of the necessity effect tends to increase with increased degree of sufficiency, which is paradoxical for a method whose objective is to …
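To make the quantities concrete, here is a minimal sketch of a step-function ceiling estimate of the empty upper-left area together with a permutation p-value; the function names are illustrative and not the NCA package's API, and the published method uses more refined ceiling techniques.

    import numpy as np

    def necessity_effect(x, y):
        """Share of the X-Y scope left empty above a step ceiling: for each x,
        the ceiling is the largest y observed at any smaller-or-equal x."""
        order = np.argsort(x)
        xs, ys = x[order], y[order]
        ceiling = np.maximum.accumulate(ys)        # non-decreasing step function
        widths = np.diff(xs, append=xs[-1])        # rectangle widths (last = 0)
        empty_area = np.sum((ys.max() - ceiling) * widths)
        scope = (xs[-1] - xs[0]) * (ys.max() - ys.min())
        return empty_area / scope if scope > 0 else 0.0

    def permutation_pvalue(x, y, n_perm=1000, seed=0):
        rng = np.random.default_rng(seed)
        d_obs = necessity_effect(x, y)
        d_null = np.array([necessity_effect(x, rng.permutation(y))
                           for _ in range(n_perm)])
        return (1 + np.sum(d_null >= d_obs)) / (n_perm + 1)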
General Statistical Framework for Quantitative Proteomics by Stable Isotope Labeling
2014
Pedro J. Navarro et al.
P-Value, Confidence Intervals, and Statistical Inference: A New Dataset of Misinterpretation
2017
Statistical inference has been essential to science since the twentieth century (Salsburg, 2001). Since its introduction into science, null hypothesis significance testing (NHST), in which the P-value serves as the index of “statistical significance,” has been the most widely used statistical method in psychology (Sterling et al., 1995; Cumming et al., 2007), as well as in other fields (Wasserstein and Lazar, 2016). However, surveys have consistently shown that researchers in psychology may not be able to interpret P-values and related statistical procedures correctly (Oakes, 1986; Haller and Krauss, 2002; Hoekstra et al., 2014; Badenes-Ribera et al., 2016). Even worse, these misinterpretations of P-value …
Commentary: Psychological Science's Aversion to the Null
2017
Tests of Independence Based on Sign and Rank Covariances
2003
In this paper three different concepts of bivariate sign and rank, namely marginal sign and rank, spatial sign and rank, and affine equivariant sign and rank, are considered. The aim is to see whether these different sign and rank covariances can be used to construct tests for the hypothesis of independence. In some cases (spatial sign, affine equivariant sign and rank) an additional assumption on the symmetry of the marginal distributions is needed. Limiting distributions of the test statistics under the null hypothesis, as well as under interesting sequences of contiguous alternatives, are derived. Asymptotic relative efficiencies with respect to the regular correlation test are calculated and compar…
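For the marginal (componentwise) ranks, the test statistic is essentially a rank covariance, i.e. proportional to Spearman's rho; a hedged sketch with a permutation reference distribution follows, whereas the paper instead derives asymptotic null and contiguous-alternative distributions.

    import numpy as np
    from scipy.stats import rankdata

    def rank_covariance(x, y):
        """Covariance of the marginal ranks of x and y."""
        rx, ry = rankdata(x), rankdata(y)
        return np.mean((rx - rx.mean()) * (ry - ry.mean()))

    def independence_pvalue(x, y, n_perm=2000, seed=0):
        rng = np.random.default_rng(seed)
        t_obs = abs(rank_covariance(x, y))
        t_null = np.array([abs(rank_covariance(x, rng.permutation(y)))
                           for _ in range(n_perm)])
        return (1 + np.sum(t_null >= t_obs)) / (n_perm + 1)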
Statistical validation of simulation models of observable systems
2003
In this paper, for validating computer simulation models of real, observable systems, a uniformly most powerful invariant (UMPI) test is developed from the generalized maximum likelihood ratio (GMLR). This test can be considered the result of a new approach to solving the Behrens-Fisher problem when the covariance matrices of two multivariate normal populations (compared with respect to their means) are different and unknown. The test is based on an invariant statistic whose distribution, under the null hypothesis, does not depend on the unknown (nuisance) parameters. The sample size and threshold of the UMPI test are determined from minimization of the weighted sum of the model builder's risk a…
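For orientation, the standard (non-optimal) statistic for this Behrens-Fisher setting is the Wald-type quadratic form comparing the two mean vectors with the covariances estimated separately; the sketch below uses its asymptotic chi-square reference and is not the paper's UMPI construction.

    import numpy as np
    from scipy.stats import chi2

    def behrens_fisher_wald(x1, x2):
        """Wald-type test of equal means for two multivariate samples with
        different, unknown covariance matrices (rows are observations)."""
        n1, n2 = len(x1), len(x2)
        d = x1.mean(axis=0) - x2.mean(axis=0)
        s = np.cov(x1, rowvar=False) / n1 + np.cov(x2, rowvar=False) / n2
        t2 = d @ np.linalg.solve(s, d)        # quadratic form d' S^{-1} d
        return t2, chi2.sf(t2, df=x1.shape[1])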
Testing Goodness-of-Fit with the Kernel Density Estimator: GoFKernel
2015
To assess the goodness-of-fit of a sample to a continuous random distribution, the most popular approach has been based on measuring, using either the L∞- or the L2-norm, the distance between the null hypothesis cumulative distribution function and the empirical cumulative distribution function. Indeed, as far as I know, almost all the tests currently available in R related to this issue (ks.test in package stats, ad.test in package ADGofTest, and ad.test, ad2.test, ks.test, v.test and w2.test in package truncgof) use one of these two distances on cumulative distribution functions. This paper (i) proposes dgeometric.test, a new implementation of the test that measures the discrepancy between a s…
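A hedged sketch of a density-based alternative of the kind the paper proposes: measure the discrepancy between a kernel density estimate of the sample and the null density, and calibrate it by simulating samples under the null. The statistic below (an integrated absolute difference) is only illustrative and need not match dgeometric.test exactly.

    import numpy as np
    from scipy.stats import gaussian_kde, norm

    def kde_discrepancy(sample, null_pdf, grid):
        """Integrated |KDE of the sample - null density| over a grid."""
        f_hat = gaussian_kde(sample)(grid)
        return np.trapz(np.abs(f_hat - null_pdf(grid)), grid)

    def gof_pvalue(sample, null_dist=norm(), n_sim=500, seed=0):
        rng = np.random.default_rng(seed)
        grid = np.linspace(null_dist.ppf(0.001), null_dist.ppf(0.999), 512)
        d_obs = kde_discrepancy(sample, null_dist.pdf, grid)
        d_null = np.array([
            kde_discrepancy(null_dist.rvs(size=len(sample), random_state=rng),
                            null_dist.pdf, grid)
            for _ in range(n_sim)])
        return (1 + np.sum(d_null >= d_obs)) / (n_sim + 1)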
Extending conventional priors for testing general hypotheses in linear models
2007
We consider that observations come from a general normal linear model and that it is desirable to test a simplifying null hypothesis about the parameters. We approach this problem from an objective Bayesian, model-selection perspective. Crucial ingredients for this approach are 'proper objective priors' to be used for deriving the Bayes factors. Jeffreys-Zellner-Siow priors have good properties for testing null hypotheses defined by specific values of the parameters in full-rank linear models. We extend these priors to deal with general hypotheses in general linear models, not necessarily of full rank. The resulting priors, which we call 'conventional priors', are expressed as a generalizat…
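For reference, the quantity these priors feed into is the Bayes factor of the simplifying null model M0 against the full model M1,

    B01 = m0(y) / m1(y),   where   mk(y) = ∫ f(y | θk, Mk) πk(θk) dθk,

so the choice of the 'proper objective priors' πk determines the marginal likelihoods mk(y) and hence the test; this is the standard definition, not anything specific to the paper.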
A weighted combined effect measure for the analysis of a composite time-to-first-event endpoint with components of different clinical relevance
2018
Composite endpoints combine several events within a single variable, which increases the number of expected events and is thereby meant to increase the power. However, the interpretation of results can be difficult as the observed effect for the composite does not necessarily reflect the effects for the components, which may be of different magnitude or even point in adverse directions. Moreover, in clinical applications, the event types are often of different clinical relevance, which also complicates the interpretation of the composite effect. The common effect measure for composite endpoints is the all-cause hazard ratio, which gives equal weight to all events irrespective of their type …
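For context, the conventional analysis the paper contrasts against collapses the composite into the time to whichever component occurs first and fits a single Cox model; a minimal sketch using the lifelines package, with purely illustrative column names, is:

    from lifelines import CoxPHFitter

    def all_cause_hazard_ratio(df):
        """df: one row per patient with time_death/time_hosp (event time if the
        event occurred, otherwise the censoring time), the corresponding 0/1
        indicators event_death/event_hosp, and a 0/1 treatment column."""
        df = df.assign(
            time=df[["time_death", "time_hosp"]].min(axis=1),
            event=df[["event_death", "event_hosp"]].max(axis=1),
        )
        cph = CoxPHFitter()
        cph.fit(df[["time", "event", "treatment"]],
                duration_col="time", event_col="event")
        # One hazard ratio for the composite: every event type counts equally.
        return float(cph.hazard_ratios_["treatment"])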