Search results for "Bayes factor"
Showing 10 of 24 documents
Retract p < 0.005 and propose using JASP, instead
2018
Seeking to address the lack of research reproducibility in science, including psychology and the life sciences, a pragmatic solution has been raised recently: to use a stricter p < 0.005 standard for statistical significance when claiming evidence of new discoveries. Notwithstanding its potential impact, the proposal has motivated many authors to dispute it from different philosophical and methodological angles. This article reflects on the original argument and the consequent counterarguments, and concludes with a simpler and better-suited alternative that the authors of the proposal knew about and, perhaps, should have made from their Jeffreysian perspective: to use a Bayes …
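The abstract argues for reporting a Bayes factor rather than tightening the significance threshold. As an illustration only (not the article's own method), one widely used rough approximation converts BIC values of two competing models into a Bayes factor via BF01 ≈ exp((BIC1 − BIC0)/2); the toy data and the known-variance assumption below are ours:

```python
import numpy as np

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: -2*logL + k*log(n)."""
    return -2.0 * log_likelihood + k * np.log(n)

def bf01_from_bic(bic0, bic1):
    """Approximate Bayes factor for H0 over H1 from the two BICs."""
    return np.exp((bic1 - bic0) / 2.0)

# Toy example (our assumption): H0: mu = 0 vs H1: mu free, known sigma = 1.
rng = np.random.default_rng(0)
x = rng.normal(0.2, 1.0, size=50)
n = x.size
ll0 = -0.5 * np.sum(x**2) - 0.5 * n * np.log(2 * np.pi)          # mu fixed at 0
mu_hat = x.mean()
ll1 = -0.5 * np.sum((x - mu_hat)**2) - 0.5 * n * np.log(2 * np.pi)  # mu estimated
bf01 = bf01_from_bic(bic(ll0, 0, n), bic(ll1, 1, n))
```

A BF01 above 1 favors the null; below 1, the alternative. Tools such as JASP compute exact default Bayes factors rather than this BIC shortcut.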
Two-Stage Bayesian Approach for GWAS With Known Genealogy
2019
Genome-wide association studies (GWAS) aim to assess relationships between single nucleotide polymorphisms (SNPs) and diseases. They are one of the most popular problems in genetics, and have some peculiarities given the large number of SNPs compared to the number of subjects in the study. Individuals might not be independent, especially in animal breeding studies or genetic diseases in isolated populations with highly inbred individuals. We propose a family-based GWAS model in a two-stage approach comprising a dimension reduction and a subsequent model selection. The first stage, in which the genetic relatedness between the subjects is taken into account, selects the promising SNPs. The se…
Prioritizing covariates in the planning of future studies in the meta-analytic framework
2016
Science can be seen as a sequential process where each new study augments evidence to the existing knowledge. To have the best prospects to make an impact in this process, a new study should be designed optimally taking into account the previous studies and other prior information. We propose a formal approach for the covariate prioritization, i.e., the decision about the covariates to be measured in a new study. The decision criteria can be based on conditional power, change of the p-value, change in lower confidence limit, Kullback-Leibler divergence, Bayes factors, Bayesian false discovery rate or difference between prior and posterior expectation. The criteria can be also used for decis…
Rejection odds and rejection ratios: A proposal for statistical practice in testing hypotheses
2016
Much of science is (rightly or wrongly) driven by hypothesis testing. Even in situations where the hypothesis testing paradigm is correct, the common practice of basing inferences solely on p-values has been under intense criticism for over 50 years. We propose, as an alternative, the use of the odds of a correct rejection of the null hypothesis to incorrect rejection. Both pre-experimental versions (involving the power and Type I error) and post-experimental versions (depending on the actual data) are considered. Implementations are provided that range from depending only on the p-value to consideration of full Bayesian analysis. A surprise is that all implementations -- even the full Baye…
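The pre-experimental version described in the abstract depends only on power and the Type I error: the odds of a correct to an incorrect rejection scale as (prior odds) × (power / α). A minimal sketch, with a one-sided z-test power function of our own choosing for illustration:

```python
from scipy.stats import norm

def rejection_ratio(alpha, power, prior_odds=1.0):
    """Pre-experimental odds of a correct vs. incorrect rejection:
    prior_odds * (power / alpha)."""
    return prior_odds * power / alpha

def ztest_power(effect, n, alpha=0.05):
    """Power of a one-sided z-test for a mean shift `effect` (sd = 1).
    Illustrative assumption; any power calculation could be substituted."""
    z_crit = norm.ppf(1 - alpha)
    return 1 - norm.cdf(z_crit - effect * n**0.5)

# At alpha = 0.05 and 80% power, rejections are 16:1 correct to incorrect
# (before seeing data, with even prior odds).
ratio = rejection_ratio(0.05, ztest_power(0.5, 30))
```

The post-experimental versions in the paper replace power and α with quantities that depend on the observed p-value; the sketch above covers only the design-stage calculation.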
Bayesian Methodology in Statistics
2009
Bayesian methods provide a complete paradigm for statistical inference under uncertainty. These may be derived from an axiomatic system and provide a coherent methodology which makes it possible to incorporate relevant initial information, and which solves many of the difficulties that frequentist methods are known to face. If no prior information is to be assumed, the more frequent situation met in scientific reporting, a formal initial prior function, the reference prior, mathematically derived from the assumed model, is used; this leads to objective Bayesian methods, objective in the precise sense that their results, like frequentist results, only depend on the assumed model and the data…
Molecular evolution and complete genome sequences in forensic analysis: Neisseria gonorrhoeae in a transmission case
2019
Molecular epidemiology and phylogenetic analyses are frequently used in the investigation of viral transmission cases in forensic contexts. Here, we present the methods and results of the analysis of a bacterial transmission in an alleged child abuse case using complete genome sequences obtained by high-throughput sequencing (HTS) methods.
Comparing normal means: new methods for an old problem
2007
Comparing the means of two normal populations is an old problem in mathematical statistics, but there is still no consensus about its most appropriate solution. In this paper we treat the problem of comparing two normal means as a Bayesian decision problem with only two alternatives: either to accept the hypothesis that the two means are equal, or to conclude that the observed data are, under the assumed model, incompatible with that hypothesis. The combined use of an information-theory based loss function, the intrinsic discrepancy (Bernardo and Rueda 2002), and an objective prior function, the reference prior (Bernardo 1979; Berger and Bernardo 1992), produces a new solution to this…
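For reference, the intrinsic discrepancy invoked in the abstract is, as defined by Bernardo and Rueda (2002), the minimum of the two directed Kullback–Leibler divergences between the densities being compared:

```latex
\delta\{p_1, p_2\} \;=\; \min\bigl\{\,\kappa(p_2 \mid p_1),\; \kappa(p_1 \mid p_2)\,\bigr\},
\qquad
\kappa(p_2 \mid p_1) \;=\; \int p_1(x)\,\log\frac{p_1(x)}{p_2(x)}\,dx .
```

Taking the minimum makes the discrepancy symmetric and finite whenever either directed divergence is, which is what makes it usable as a loss function for hypothesis testing.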
Simulation-based marginal likelihood for cluster strong lensing cosmology
2015
Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with $\Lambda$CDM cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate approximating the relevant Bayes factor using a marginal likelihood that is based on the following summary statistic: the posterior probability distribution function for the parameters of the scaling relation between Einstein radii and cluster mass, $\alpha$ and $\beta$. We demonstrate, for the first time, a method of estimating the marginal likelihood using the X-ray selected …
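The Bayes factor the abstract advocates is a ratio of marginal likelihoods. As a generic illustration (not the paper's summary-statistic construction), the simplest simulation-based estimate averages the likelihood over draws from the prior; the conjugate normal example below is our own toy setup:

```python
import numpy as np

def log_marginal_likelihood(loglike, prior_sampler, n_draws=100_000, rng=None):
    """Monte Carlo estimate of log Z = log E_prior[ L(theta) ],
    averaging the likelihood over prior draws (log-mean-exp for stability)."""
    rng = rng or np.random.default_rng(0)
    thetas = prior_sampler(rng, n_draws)
    ll = np.array([loglike(t) for t in thetas])
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))

# Toy check (our assumption): one observation x ~ N(mu, 1), prior mu ~ N(0, 1).
# The exact marginal of x is then N(0, 2), so the estimate can be verified.
x = 0.5
loglike = lambda mu: -0.5 * np.log(2 * np.pi) - 0.5 * (x - mu)**2
prior = lambda rng, n: rng.normal(0.0, 1.0, n)
log_z = log_marginal_likelihood(loglike, prior, n_draws=200_000)
```

A Bayes factor between two cosmologies would then be exp(logZ1 − logZ0); prior-sampling estimators like this degrade quickly in high dimensions, which is one reason the paper works with a low-dimensional summary statistic.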
Manipulating the alpha level cannot cure significance testing
2018
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does; but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple in…
2017
The T2K experiment reports an updated analysis of neutrino and antineutrino oscillations in appearance and disappearance channels. A sample of electron neutrino candidates at Super-Kamiokande in which a pion decay has been tagged is added to the four single-ring samples used in previous T2K oscillation analyses. Through combined analyses of these five samples, simultaneous measurements of four oscillation parameters, $|\Delta m^2_{32}|$, $\sin^2\theta_{23}$, $\sin^2\theta_{13}$, and $\delta_{CP}$, and of the mass ordering are made. A set of studies of simulated data indicates that the sensitivity to the oscillation parameters is not limited by neutrino interaction model uncertainty. Multiple oscillation analyses are performed, and frequ…