Search results for "Frequentist"
Showing 10 of 30 documents
Retract p < 0.005 and propose using JASP, instead
2018
Seeking to address the lack of research reproducibility in science, including psychology and the life sciences, a pragmatic solution has recently been proposed: to use a stricter p < 0.005 standard for statistical significance when claiming evidence of new discoveries. Notwithstanding its potential impact, the proposal has prompted many authors to dispute it from different philosophical and methodological angles. This article reflects on the original argument and the consequent counterarguments, and concludes with a simpler and better-suited alternative that the authors of the proposal knew about and, perhaps, should have made from their Jeffreysian perspective: to use a Bayes …
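The p-value-to-Bayes-factor connection the authors invoke can be illustrated with the well-known Sellke–Bayarri–Berger bound, which caps the evidence against the null that any p-value can carry (a sketch, not the article's own analysis; the function name is ours):

```python
import math

def bayes_factor_bound(p):
    """Sellke-Bayarri-Berger upper bound on the Bayes factor against
    the null implied by a p-value: 1 / (-e * p * ln p), valid for
    p < 1/e."""
    if not 0 < p < 1 / math.e:
        raise ValueError("bound requires 0 < p < 1/e")
    return 1.0 / (-math.e * p * math.log(p))

# Even a 'significant' p-value implies only modest maximal evidence:
print(round(bayes_factor_bound(0.05), 2))   # ~ 2.46 : 1 against the null
print(round(bayes_factor_bound(0.005), 2))  # ~ 13.89 : 1 against the null
```

The stricter p < 0.005 threshold thus corresponds, at best, to roughly 14:1 odds against the null, which is close to the "strong evidence" calibration cited by proponents of the stricter standard.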
Rejection odds and rejection ratios: A proposal for statistical practice in testing hypotheses
2016
Much of science is (rightly or wrongly) driven by hypothesis testing. Even in situations where the hypothesis testing paradigm is correct, the common practice of basing inferences solely on p-values has been under intense criticism for over 50 years. We propose, as an alternative, the use of the odds of a correct rejection of the null hypothesis to incorrect rejection. Both pre-experimental versions (involving the power and Type I error) and post-experimental versions (depending on the actual data) are considered. Implementations are provided that range from depending only on the p-value to consideration of full Bayesian analysis. A surprise is that all implementations -- even the full Baye…
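The pre-experimental version described in the abstract, which depends only on the power and the Type I error rate, reduces to a one-line computation (a sketch assuming equal prior odds unless stated otherwise; names are ours):

```python
def rejection_ratio(power, alpha, prior_odds=1.0):
    """Pre-experimental odds of a correct rejection of the null to an
    incorrect one: prior odds times the ratio of power (1 - Type II
    error) to the Type I error rate."""
    return prior_odds * power / alpha

# A conventionally well-powered design at the usual 0.05 level:
print(round(rejection_ratio(power=0.8, alpha=0.05), 1))  # 16.0
```

The post-experimental versions in the paper replace this design-stage ratio with data-dependent quantities, down to a full Bayes factor, which this sketch does not attempt.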
Bayesian Methodology in Statistics
2009
Bayesian methods provide a complete paradigm for statistical inference under uncertainty. These may be derived from an axiomatic system and provide a coherent methodology which makes it possible to incorporate relevant initial information, and which solves many of the difficulties that frequentist methods are known to face. If no prior information is to be assumed, the situation most frequently met in scientific reporting, a formal initial prior function, the reference prior, mathematically derived from the assumed model, is used; this leads to objective Bayesian methods, objective in the precise sense that their results, like frequentist results, only depend on the assumed model and the data…
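As a concrete instance, for a binomial proportion the reference prior coincides with the Jeffreys prior Beta(1/2, 1/2), so the objective posterior is available in closed form (an illustrative sketch; names are ours):

```python
def jeffreys_posterior(successes, n):
    """Posterior Beta(a, b) for a binomial proportion under the
    reference (Jeffreys) prior Beta(1/2, 1/2): a = x + 1/2,
    b = n - x + 1/2. Returns the parameters and the posterior mean."""
    a = successes + 0.5
    b = n - successes + 0.5
    return a, b, a / (a + b)

a, b, mean = jeffreys_posterior(successes=7, n=10)
print(a, b, round(mean, 3))  # 7.5 3.5 0.682
```

Note that the posterior depends only on the assumed binomial model and the data, which is exactly the sense of "objective" used in the abstract.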
Finding Prediction Limits for a Future Number of Failures in the Prescribed Time Interval under Parametric Uncertainty
2012
Computing prediction intervals is an important part of the forecasting process intended to indicate the likely uncertainty in point forecasts. Prediction intervals for future order statistics are widely used for reliability problems and other related problems. In this paper, we present an accurate procedure, called ‘within-sample prediction of order statistics', to obtain prediction limits for the number of failures that will be observed in a future inspection of a sample of units, based only on the results of the first in-service inspection of the same sample. The failure-time of such units is modeled with a two-parameter Weibull distribution indexed by scale and shape parameters β and δ, …
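For orientation, the naive plug-in baseline, which treats β and δ as known and therefore ignores exactly the parametric uncertainty the paper addresses, is a one-line conditional-probability computation (illustrative values and names are ours):

```python
import math

def weibull_cond_fail_prob(t1, t2, scale, shape):
    """P(unit fails in (t1, t2] | it survived to t1) for a
    two-parameter Weibull with scale beta and shape delta, via the
    cumulative hazard H(t) = (t / beta) ** delta. Plug-in only:
    parameter uncertainty is ignored."""
    H = lambda t: (t / scale) ** shape
    return 1.0 - math.exp(-(H(t2) - H(t1)))

# 100 units survived the first inspection at t1 = 1000 h; expected
# number failing by the next inspection at t2 = 2000 h:
p = weibull_cond_fail_prob(1000, 2000, scale=3000, shape=2.0)
print(round(100 * p, 1))  # ~ 28.3 expected failures
```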
Estimation and visualization of confusability matrices from adaptive measurement data
2010
We present a simple but effective method based on Luce’s choice axiom [Luce, R.D. (1959). Individual choice behavior: A theoretical analysis. New York: John Wiley & Sons] for consistent estimation of the pairwise confusabilities of items in a multiple-choice recognition task with arbitrarily chosen choice-sets. The method combines the exact (non-asymptotic) Bayesian way of assessing uncertainty with the unbiasedness emphasized in the classical frequentist approach. We apply the method to data collected using an adaptive computer game designed for prevention of reading disability. A player’s estimated confusability of phonemes (or more accurately, phoneme–grapheme connections) and l…
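A minimal version of the idea, posterior-mean cell estimates under a symmetric Jeffreys-style Dirichlet prior, can be sketched as follows; note this toy version ignores the Luce-model correction for arbitrary choice-sets that the paper develops:

```python
from collections import Counter

def confusion_estimates(trials, items, prior=0.5):
    """Posterior-mean estimates of P(response j | target i) from
    (target, response) pairs, with a symmetric Dirichlet(prior) over
    responses per target row; prior=0.5 is the Jeffreys-style choice.
    Illustration only, not the paper's exact Luce-model estimator."""
    counts = Counter(trials)
    k = len(items)
    est = {}
    for i in items:
        total = sum(counts[(i, j)] for j in items)
        for j in items:
            est[(i, j)] = (counts[(i, j)] + prior) / (total + k * prior)
    return est

# Toy data: target phoneme vs. player's response
trials = [("b", "b"), ("b", "d"), ("b", "b"), ("d", "d")]
est = confusion_estimates(trials, items=["b", "d"])
print(round(est[("b", "d")], 3))  # 0.375
```

Each row of the estimated matrix sums to one, and the prior keeps estimates away from 0 and 1 for sparsely observed cells, which is what makes the uncertainty assessment exact rather than asymptotic.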
Inference for Lorenz curve orderings
1999
In this paper we consider the issue of performing statistical inference for Lorenz curve orderings. This involves testing for an ordered relationship in a multivariate context and making comparisons among more than two population distributions. Our approach is to frame the hypotheses of interest as sets of linear inequality constraints on the vector of Lorenz curve ordinates, and apply order-restricted statistical inference to derive test statistics and their sampling distributions. We go on to relate our results to others which have appeared in recent literature, and use Monte Carlo analysis to highlight their respective properties and comparative performances. Finally, we discuss in gener…
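The Lorenz curve ordinates on which the hypotheses are framed are simply cumulative income shares at the sorted sample points (a sketch; names are ours):

```python
def lorenz_ordinates(incomes):
    """Lorenz curve ordinates L(k/n): the share of total income held
    by the poorest k of n units, for k = 1..n."""
    xs = sorted(incomes)
    total = sum(xs)
    cum, ords = 0.0, []
    for x in xs:
        cum += x
        ords.append(cum / total)
    return ords

print([round(v, 2) for v in lorenz_ordinates([1, 2, 3, 4])])
# [0.1, 0.3, 0.6, 1.0]
```

One distribution Lorenz-dominates another when its ordinates are at least as large at every point; the paper's inequality constraints are stated on exactly these vectors of ordinates.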
Weighted-Average Least Squares (WALS): A Survey
2014
Model averaging has become a popular method of estimation, following increasing evidence that model selection and estimation should be treated as one joint procedure. Weighted-average least squares (WALS) is a recent model-average approach, which takes an intermediate position between frequentist and Bayesian methods, allows a credible treatment of ignorance, and is extremely fast to compute. We review the theory of WALS and discuss extensions and applications.
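WALS itself rests on a semiorthogonal transformation of the auxiliary regressors, but the general flavor of frequentist model averaging can be conveyed with the simpler smoothed-AIC weighting scheme (an analogy only, not the WALS estimator; names are ours):

```python
import math

def average_estimate(estimates, aics):
    """Average one parameter's estimates across candidate models with
    smoothed-AIC weights w_m proportional to exp(-AIC_m / 2).
    WALS instead derives its weights from a Bayesian treatment of the
    auxiliary parameters; this is only an analogous illustration."""
    m = min(aics)
    ws = [math.exp(-(a - m) / 2) for a in aics]
    s = sum(ws)
    ws = [w / s for w in ws]
    return sum(w * e for w, e in zip(ws, estimates)), ws

# Two candidate models with equal AIC split the weight evenly:
est, ws = average_estimate(estimates=[0.0, 1.2], aics=[10.0, 10.0])
print(round(est, 2), [round(w, 2) for w in ws])  # 0.6 [0.5, 0.5]
```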
Sampling properties of the Bayesian posterior mean with an application to WALS estimation
2022
Many statistical and econometric learning methods rely on Bayesian ideas, often applied or reinterpreted in a frequentist setting. Two leading examples are shrinkage estimators and model averaging estimators, such as weighted-average least squares (WALS). In many instances, the accuracy of these learning methods in repeated samples is assessed using the variance of the posterior distribution of the parameters of interest given the data. This may be permissible when the sample size is large because, under the conditions of the Bernstein--von Mises theorem, the posterior variance agrees asymptotically with the frequentist variance. In finite samples, however, things are less clear. In this pa…
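The gap the abstract describes is visible already in the textbook normal-normal model, where the posterior variance and the frequentist sampling variance of the posterior mean differ by a factor of the shrinkage weight (a sketch; names are ours):

```python
def normal_shrinkage(sigma2_over_n, tau2):
    """Normal mean with known sampling variance sigma^2/n and a
    N(0, tau^2) prior. The posterior mean is w * xbar with shrinkage
    weight w = tau^2 / (tau^2 + sigma^2/n); its frequentist sampling
    variance is w^2 * sigma^2/n, while the posterior variance is
    w * sigma^2/n, so the two agree only as w -> 1 (large n)."""
    w = tau2 / (tau2 + sigma2_over_n)
    posterior_var = w * sigma2_over_n
    freq_var_of_post_mean = w ** 2 * sigma2_over_n
    return w, posterior_var, freq_var_of_post_mean

w, pv, fv = normal_shrinkage(sigma2_over_n=1.0, tau2=1.0)
print(w, pv, fv)  # 0.5 0.5 0.25
```

In this small-sample regime the posterior variance overstates the repeated-sampling variance of the estimator by a factor 1/w, which is the kind of finite-sample discrepancy the paper investigates.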
Confidence Interval or P-Value?: Part 4 of a Series on Evaluation of Scientific Publications
2009
An understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts. The uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles. P-values in scientific studies are used to determine whether a null hypothesis formulated before the performance of the study is to be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide informatio…
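Both quantities discussed in the article can be computed side by side for a simple z test of a mean (normal approximation with known standard deviation; the numbers and names are ours):

```python
import math

def z_test_and_ci(xbar, mu0, sd, n, crit=1.96):
    """Two-sided z test of H0: mu = mu0 and the matching 95%
    confidence interval for the mean (sd treated as known).
    The p-value 2 * (1 - Phi(|z|)) equals erfc(|z| / sqrt(2))."""
    se = sd / math.sqrt(n)
    zstat = (xbar - mu0) / se
    p = math.erfc(abs(zstat) / math.sqrt(2))
    ci = (xbar - crit * se, xbar + crit * se)
    return p, ci

p, ci = z_test_and_ci(xbar=5.4, mu0=5.0, sd=1.0, n=25)
print(round(p, 3), tuple(round(c, 2) for c in ci))
```

The two answers are consistent: the 95% interval excludes mu0 exactly when the two-sided p-value falls below 0.05, but the interval additionally shows the magnitude and precision of the estimated effect.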
Improved Frequentist Prediction Intervals for Autoregressive Models by Simulation
2015
It is well known that the so-called plug-in prediction intervals for autoregressive processes with Gaussian disturbances are too narrow, i.e. the coverage probabilities fall below the nominal ones. However, simulation experiments show that the formulas borrowed from ordinary linear regression theory yield one-step prediction intervals whose coverage probabilities are very close to what is claimed. From a Bayesian point of view, the resulting intervals are posterior predictive intervals when uniform priors are assumed for both the autoregressive coefficients and the logarithm of the disturbance variance. This finding opens a path to treating multi-step prediction intervals, which are obtain…
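The coverage shortfall of plug-in intervals is easy to reproduce by simulation for an AR(1) process fitted by least squares (a sketch; the paper's simulation design may differ):

```python
import math
import random

def ar1_plug_in_coverage(phi=0.6, n=30, reps=2000, crit=1.96, seed=1):
    """Monte Carlo check of one-step plug-in prediction intervals for
    a Gaussian AR(1): fit phi by OLS, form yhat +/- crit * sigma_hat,
    and record how often the interval covers the next observation.
    For short series the coverage tends to fall below the nominal 95%."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        y = [rng.gauss(0, 1)]
        for _ in range(n):           # n fitted points plus one future value
            y.append(phi * y[-1] + rng.gauss(0, 1))
        x, yy = y[:n - 1], y[1:n]    # regress y_t on y_{t-1}
        phi_hat = sum(a * b for a, b in zip(x, yy)) / sum(a * a for a in x)
        resid = [b - phi_hat * a for a, b in zip(x, yy)]
        sigma_hat = math.sqrt(sum(r * r for r in resid) / (len(resid) - 1))
        yhat = phi_hat * y[n - 1]
        hits += abs(y[n] - yhat) <= crit * sigma_hat
    return hits / reps

print(round(ar1_plug_in_coverage(), 3))  # below the nominal 0.95
```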