
AUTHOR

James O. Berger

Showing 18 related works from this author.

Rejection odds and rejection ratios: A proposal for statistical practice in testing hypotheses

2016

Much of science is (rightly or wrongly) driven by hypothesis testing. Even in situations where the hypothesis testing paradigm is correct, the common practice of basing inferences solely on p-values has been under intense criticism for over 50 years. We propose, as an alternative, the use of the odds of a correct rejection of the null hypothesis to incorrect rejection. Both pre-experimental versions (involving the power and Type I error) and post-experimental versions (depending on the actual data) are considered. Implementations are provided that range from depending only on the p-value to consideration of full Bayesian analysis. A surprise is that all implementations -- even the full Baye…
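As a rough numerical illustration of these quantities (a minimal sketch only; the paper's exact formulation is truncated above), the pre-experimental rejection odds can be taken as the prior odds of the alternative times power over Type I error, and a widely used calibration bounds the post-experimental odds implied by a p-value p < 1/e by 1/(-e p log p). The function names below are illustrative, not from the paper.

import math

def pre_experimental_rejection_odds(power, alpha, prior_odds=1.0):
    # Odds of a correct to an incorrect rejection before the data are seen:
    # prior odds of H1 versus H0 multiplied by power / Type I error.
    return prior_odds * power / alpha

def post_experimental_odds_bound(p):
    # Upper bound on the data-based odds in favour of H1 implied by a
    # p-value, using the -e*p*log(p) calibration (valid for 0 < p < 1/e).
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("bound requires 0 < p < 1/e")
    return 1.0 / (-math.e * p * math.log(p))

print(pre_experimental_rejection_odds(power=0.80, alpha=0.05))  # 16.0
print(post_experimental_odds_bound(0.05))                       # roughly 2.5

Even at p = 0.05 the bounded odds are modest, which illustrates why inference based solely on the p-value can overstate the evidence against the null.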

Keywords: Bayes' rule; Bayesian probability; Bayes factors; statistical power; odds; p-value; frequentist inference; null hypothesis; Type I and type II errors; statistical hypothesis testing. Journal: Journal of Mathematical Psychology.

Statistical inference and Monte Carlo algorithms

1996

This review article looks at a small part of the picture of the interrelationship between statistical theory and computational algorithms, especially the Gibbs sampler and the Accept-Reject algorithm. We pay particular attention to how the methodologies affect and complement each other.
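For readers less familiar with the second of these algorithms, here is a minimal Accept-Reject sketch (a toy illustration, not code from the article): draws from a proposal density g are accepted with probability f(x)/(M g(x)), assuming f(x) <= M g(x) everywhere.

import random

def accept_reject(target_pdf, proposal_draw, proposal_pdf, M, n):
    # Accept-Reject sampling: propose from proposal_pdf, accept with
    # probability target_pdf(x) / (M * proposal_pdf(x)).
    draws = []
    while len(draws) < n:
        x = proposal_draw()
        if random.random() <= target_pdf(x) / (M * proposal_pdf(x)):
            draws.append(x)
    return draws

# Toy example: sample the Beta(2, 2) density 6x(1-x) using uniform proposals;
# the density is bounded by M = 1.5 on (0, 1).
beta22 = lambda x: 6.0 * x * (1.0 - x)
samples = accept_reject(beta22, random.random, lambda x: 1.0, M=1.5, n=2000)
print(sum(samples) / len(samples))  # should be close to the Beta(2,2) mean, 0.5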

Keywords: decision theory; statistical inference; statistical theory; Monte Carlo methods; Markov chain Monte Carlo; Gibbs sampling; algorithms. Journal: Test.

The Effective Sample Size

2013

Model selection procedures often depend explicitly on the sample size n of the experiment. One example is the Bayesian information criterion (BIC), and another is the use of Zellner–Siow priors in Bayesian model selection. Sample size is well defined for i.i.d. real observations, but it is not well defined for vector observations or in non-i.i.d. settings; extending criteria such as BIC to such settings thus requires a definition of effective sample size that applies in these cases as well. A definition of effective sample size that applies to fairly general linear models is proposed and illustrated in a variety of situations. The definition is also used to propose a suitable ‘sc…
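A minimal sketch of why the choice matters (hypothetical numbers, not from the paper): with BIC = -2 log L̂ + k log n, the only place the ambiguity enters is the n inside the logarithm, yet different but defensible choices of n change the penalty.

import math

def bic(max_loglik, num_params, n):
    # Standard BIC; the effective-sample-size question is entirely about
    # which n to use here for vector or dependent observations.
    return -2.0 * max_loglik + num_params * math.log(n)

# Hypothetical study: 50 subjects, each giving a 4-dimensional response vector.
max_loglik, k = -312.7, 5
print(bic(max_loglik, k, n=50))    # n = number of vector observations
print(bic(max_loglik, k, n=200))   # n = total number of scalar observations

The two penalties differ by k log 4, about 6.9 here, which can easily flip a model comparison; an effective sample size, typically lying between such extremes, is what the paper's definition is meant to supply.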

Keywords: sample size determination; model selection; Bayesian information criterion; deviance information criterion; prior probability; linear models; Bayesian inference. Journal: Econometric Reviews.

Prior-based Bayesian information criterion

2019

We present a new approach to model selection and Bayes factor determination, based on Laplace expansions (as in BIC), which we call Prior-based Bayes Information Criterion (PBIC). In this approach, the Laplace expansion is only done with the likelihood function, and then a suitable prior distribution is chosen to allow exact computation of the (approximate) marginal likelihood arising from the Laplace approximation and the prior. The result is a closed-form expression similar to BIC, but now involves a term arising from the prior distribution (which BIC ignores) and also incorporates the idea that different parameters can have different effective sample sizes (whereas BIC only allows one ov…
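For orientation, the standard Laplace expansion that both BIC and PBIC start from can be written as follows (this is the textbook approximation, not the PBIC formula itself, which is truncated above):

m(y) = \int f(y \mid \theta)\,\pi(\theta)\,d\theta \approx f(y \mid \hat\theta)\,\pi(\hat\theta)\,(2\pi)^{d/2}\,|\hat I|^{-1/2},

where \hat\theta is the maximum likelihood estimate, d the number of parameters, and \hat I the observed information matrix. Since |\hat I| grows like n^d for i.i.d. data, dropping the prior and all O(1) terms gives -2\log m(y) \approx -2\log f(y \mid \hat\theta) + d\log n, which is BIC. PBIC instead applies the expansion only to the likelihood and then chooses \pi so that the remaining integral against the prior has a closed form; this is how the prior term and parameter-specific effective sample sizes enter.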

Keywords: Laplace expansion; Laplace's method; Bayesian information criterion; Bayes factor; marginal likelihood; prior probability; likelihood function; Fisher information.

Applications and Limitations of Robust Bayesian Bounds and Type II MLE

1994

Three applications of robust Bayesian analysis and three examples of its limitations are given. The applications that are reviewed are the development of an automatic Ockham’s Razor, outlier detection, and analysis of weighted distributions. Limitations of robust Bayesian bounds are highlighted through examples that include analysis of a paranormal experiment and a hierarchical model. This last example shows a disturbing difference between actual hierarchical Bayesian analysis and robust Bayesian bounds, a difference which also arises if, instead, a Type II MLE or empirical Bayes analysis is performed.
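As a toy illustration of the Type II MLE idea in the simplest normal-normal setting (my own sketch, not an example from the paper): observe x ~ N(theta, 1) with prior theta ~ N(0, tau2); Type II MLE maximizes the marginal density of x over tau2, while a robust bound instead takes the supremum of a quantity of interest over a whole class of values of tau2.

import math

def marginal_density(x, tau2):
    # Marginal density of x when x | theta ~ N(theta, 1) and theta ~ N(0, tau2).
    v = 1.0 + tau2
    return math.exp(-x * x / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

def type2_mle_tau2(x):
    # Type II MLE (empirical Bayes) estimate of the prior variance:
    # the marginal is maximized at tau2 = x^2 - 1 when x^2 > 1, else at 0.
    return max(x * x - 1.0, 0.0)

x = 2.5
tau2_hat = type2_mle_tau2(x)
print(tau2_hat, marginal_density(x, tau2_hat))

# Crude robust-Bayes style bound: supremum of the marginal over a class of
# priors, here N(0, tau2) with tau2 on a grid covering [0, 100].
print(max(marginal_density(x, 0.1 * j) for j in range(1001)))

The paper's hierarchical example warns that such bounds, and the plugged-in Type II MLE answer, can behave quite differently from a full hierarchical Bayesian analysis.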

Keywords: robust Bayesian analysis; Bayesian robustness; Bayesian probability; prior probability; hierarchical models; outlier detection.

Criteria for Bayesian model choice with application to variable selection

2012

In objective Bayesian model selection, no single criterion has emerged as dominant in defining objective prior distributions. Indeed, many criteria have been separately proposed and utilized to propose differing prior choices. We first formalize the most general and compelling of the various criteria that have been suggested, together with a new criterion. We then illustrate the potential of these criteria in determining objective model selection priors by considering their application to the problem of variable selection in normal linear models. This results in a new model selection objective prior with a number of compelling properties.

Keywords: Bayesian model selection; objective Bayes; variable selection; g-prior; prior probability; linear models; 62C10; 62J05; 62J15. Journal: The Annals of Statistics.

Objective Priors for Discrete Parameter Spaces

2012

This article considers the development of objective prior distributions for discrete parameter spaces. Formal approaches to such development—such as the reference prior approach—often result in a constant prior for a discrete parameter, which is questionable for problems that exhibit certain types of structure. To take advantage of structure, this article proposes embedding the original problem in a continuous problem that preserves the structure, and then using standard reference prior theory to determine the appropriate objective prior. Four different possibilities for this embedding are explored, and applied to a population-size model, the hypergeometric distribution, the multivariate hy…

Keywords: prior probability; hypergeometric distribution; negative hypergeometric distribution; geometric distribution; beta-binomial distribution; binomial distribution; Dirichlet distribution; compound probability distributions. Journal: Journal of the American Statistical Association.

Incorporating Uncertainties into Traffic Simulators

2007

Keywords: posterior probability; errors-in-variables models; hierarchical network models; traffic generation models; telecommunication networks; Bayesian networks; network simulation; network traffic simulation.

Reference Priors in a Variance Components Problem

1992

The ordered group reference prior algorithm of Berger and Bernardo (1989b) is applied to the balanced variance components problem. Besides the intrinsic interest of developing good noninformative priors for the variance components problem, a number of theoretically interesting issues arise in application of the proposed procedure. The algorithm is described (for completeness) in an important special case, with a detailed heuristic motivation.

Keywords: reference priors; prior probability; variance components; Bayesian inference.

A Bayesian analysis of the thermal challenge problem

2008

A major question for the application of computer models is: Does the computer model adequately represent reality? Viewing computer models as potentially biased representations of reality, Bayarri et al. [M. Bayarri, J. Berger, R. Paulo, J. Sacks, J. Cafeo, J. Cavendish, C. Lin, J. Tu, A framework for validation of computer models, Technometrics 49 (2) (2007) 138–154] develop the simulator assessment and validation engine (SAVE) method as a general framework for answering this question. In this paper, we apply the SAVE method to the challenge problem, which involves a thermal computer model designed for certain devices. We develop a statement of confidence that the devices mode…
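The "potentially biased representation" viewpoint is usually expressed, in frameworks of this kind, through a decomposition of the field data into model output, model bias, and measurement error; a schematic version (not a formula quoted from this paper) is

y^F(x) = y^M(x, u) + b(x) + \varepsilon,

where y^F(x) is the field measurement at input x, y^M(x, u) is the computer-model output at x with calibration parameters u, b(x) is the unknown model bias (discrepancy) function, and \varepsilon is measurement error. Gaussian-process priors are typically placed on y^M and b, which is how the Gaussian process enters the analysis.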

Keywords: Bayesian probability; stochastic processes; Gaussian processes; computational mechanics; unbiased estimation; simulation. Journal: Computer Methods in Applied Mechanics and Engineering.

Rejoinder on: Natural Induction: An Objective Bayesian Approach

2009

We certainly agree with Professors Giron and Moreno on the interest in the sensitivity of any Bayesian result to changes in the prior. That said, we also consider it of considerable pragmatic importance to be able to single out a unique, particular prior which may reasonably be proposed as the reference prior for the problem under study, in the sense that the corresponding posterior of the quantity of interest could be routinely used in practice when no useful prior information is available or acceptable. This is precisely what we have tried to do for the twin problems of the rule of succession and the law of natural induction. The discussants consider the limiting binomial versi…

Keywords: rule of succession; problem of induction; Bayesian probability; prior probability; sensitivity analysis; null hypothesis; statistical hypothesis testing.

PROBABILISTIC QUANTIFICATION OF HAZARDS: A METHODOLOGY USING SMALL ENSEMBLES OF PHYSICS-BASED SIMULATIONS AND STATISTICAL SURROGATES

2015

This paper presents a novel approach to assessing the hazard threat to a locale due to a large volcanic avalanche. The methodology combines: (i) mathematical modeling of volcanic mass flows; (ii) field data of avalanche frequency, volume, and runout; (iii) large-scale numerical simulations of flow events; (iv) use of statistical methods to minimize computational costs, and to capture unlikely events; (v) calculation of the probability of a catastrophic flow event over the next T years at a location of interest; and (vi) innovative computational methodology to implement these methods. This unified presentation collects elements that have been separately developed, and incorporates new contri…
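As a minimal sketch of how items (ii)-(v) can be combined into a probability statement for a T-year horizon (my own simplification, not the paper's computation): if flow events arrive roughly as a Poisson process with rate lambda per year, and the surrogate-based analysis estimates the probability q that an event is large enough to reach the location of interest, then the chance of at least one catastrophic event in T years is 1 - exp(-lambda q T).

import math

def prob_catastrophe(rate_per_year, p_reach, horizon_years):
    # P(at least one catastrophic flow in the horizon), treating events as a
    # Poisson process thinned by the probability that a flow reaches the locale.
    return 1.0 - math.exp(-rate_per_year * p_reach * horizon_years)

# Hypothetical numbers: 0.2 flows per year on average, 3% reach the locale,
# 50-year planning horizon.
print(prob_catastrophe(0.2, 0.03, 50.0))  # about 0.26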

Keywords: volcanic hazards; hazard analysis; probabilistic modeling; modeling and simulation; event probabilities; environmental science. Journal: International Journal for Uncertainty Quantification.

Overall Objective Priors

2015

In multi-parameter models, reference priors typically depend on the parameter or quantity of interest, and it is well known that this is necessary to produce objective posterior distributions with optimal properties. There are, however, many situations where one is simultaneously interested in all the parameters of the model or, more realistically, in functions of them that include aspects such as prediction, and it would then be useful to have a single objective prior that could safely be used to produce reasonable posterior inferences for all the quantities of interest. In this paper, we consider three methods for selecting a single objective prior and study, in a variety of problems incl…

Keywords: objective priors; joint reference priors; reference analysis; logarithmic divergence; multinomial model; prior probability. Journal: Bayesian Analysis.

An overview of robust Bayesian analysis

1994

Robust Bayesian analysis is the study of the sensitivity of Bayesian answers to uncertain inputs. This paper seeks to provide an overview of the subject, one that is accessible to statisticians outside the field. Recent developments in the area are also reviewed, though with very uneven emphasis. © 1994 SEIO.

Keywords: robust Bayesian analysis; Bayesian robustness; Bayesian probability; prior probability; sensitivity.

Natural induction: An objective bayesian approach

2009

The statistical analysis of a sample taken from a finite population is a classic problem for which no generally accepted objective Bayesian results seem to exist. Bayesian solutions to this problem may be very sensitive to the choice of the prior, and there is no consensus as to the appropriate prior to use.

Keywords: Bayesian statistics; Bayesian probability; Bayes factor; Bayesian hierarchical modeling; binomial distribution; Jeffreys prior; finite populations; sampling. Journal: Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales. Serie A. Matematicas.

M.J. (Susie) Bayarri

2021

Keywords: Bayesian statistics; statistician. Journal: Wiley StatsRef: Statistics Reference Online.

Using Statistical and Computer Models to Quantify Volcanic Hazards

2009

Risk assessment of rare natural hazards, such as large volcanic block and ash or pyroclastic flows, is addressed. Assessment is approached through a combination of computer modeling, statistical modeling, and extreme-event probability computation. A computer model of the natural hazard is used to provide the needed extrapolation to unseen parts of the hazard space. Statistical modeling of the available data is needed to determine the initializing distribution for exercising the computer model. In dealing with rare events, direct simulations involving the computer model are prohibitively expensive. The solution instead requires a combination of adaptive design of computer model approximation…

Keywords: volcanic hazards; natural hazards; risk analysis; statistical modeling; computation; rare events. Journal: Technometrics.

P Values for Composite Null Models

2000

The problem of investigating the compatibility of an assumed model with the data is considered in the situation where the assumed model has unknown parameters. The most frequently used measures of compatibility are p values, based on statistics T for which large values are deemed to indicate incompatibility between the data and the model. When the null model has unknown parameters, p values are not uniquely defined. Proposals for computing a p value in this situation include the plug-in and similar p values on the frequentist side, and the predictive and posterior predictive p values on the Bayesian side. We propose two alternatives, the conditional predictive p value and the partial…
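To make the first two options concrete, here is a small simulation sketch for a toy null model (normal data with unknown mean, known variance, flat prior on the mean); the conditional and partial predictive p values proposed in the paper are not shown, since their definitions are truncated above. The statistic, data, and function names are illustrative only.

import random
import statistics

def plug_in_p(y_obs, statistic, n_sim=5000):
    # Plug-in p value: replace the unknown mean by its MLE (the sample mean)
    # and simulate the statistic's null distribution from that fitted model.
    n, mu_hat, t_obs = len(y_obs), statistics.mean(y_obs), statistic(y_obs)
    exceed = sum(statistic([random.gauss(mu_hat, 1.0) for _ in range(n)]) >= t_obs
                 for _ in range(n_sim))
    return exceed / n_sim

def posterior_predictive_p(y_obs, statistic, n_sim=5000):
    # Posterior predictive p value: draw the mean from its posterior
    # N(ybar, 1/n) under a flat prior, then simulate replicate data given it.
    n, ybar, t_obs = len(y_obs), statistics.mean(y_obs), statistic(y_obs)
    exceed = 0
    for _ in range(n_sim):
        mu = random.gauss(ybar, (1.0 / n) ** 0.5)
        exceed += statistic([random.gauss(mu, 1.0) for _ in range(n)]) >= t_obs
    return exceed / n_sim

y = [0.3, -1.2, 2.8, 0.7, 1.9, -0.4, 0.1, 3.1]
print(plug_in_p(y, max), posterior_predictive_p(y, max))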

Keywords: p-values; model checking; null models; Bayes factor; frequentist inference; Bayesian probability. Journal: Journal of the American Statistical Association.