Search results for "Prior probability"
Showing 10 of 47 documents
A New Technique of Invariant Statistical Embedding and Averaging in Terms of Pivots for Improvement of Statistical Decisions Under Parametric Uncerta…
2021
In this chapter, a new technique of invariant embedding of sample statistics in a decision criterion (performance index) and averaging this criterion via pivotal quantities (pivots) is proposed for intelligently constructing efficient (optimal, uniformly non-dominated, unbiased, improved) statistical decisions under parametric uncertainty. This technique represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics. Unlike the Bayesian approach, the technique of invariant statistical embedding and averaging in terms of pivotal quantities (ISE&APQ) is independent of the choice of priors and represents …
Reference Priors in a Variance Components Problem
1992
The ordered group reference prior algorithm of Berger and Bernardo (1989b) is applied to the balanced variance components problem. Besides the intrinsic interest of developing good noninformative priors for the variance components problem, a number of theoretically interesting issues arise in application of the proposed procedure. The algorithm is described (for completeness) in an important special case, with a detailed heuristic motivation.
Effective state estimation of stochastic systems
2003
In the present paper, a new technique of invariant embedding of sample statistics in a loss function is proposed for constructing minimum-risk estimators of the state of stochastic systems. This technique represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics. Unlike the Bayesian approach, an invariant embedding technique is independent of the choice of priors. It allows one to eliminate unknown parameters from the problem and to find the best invariant estimator, which has smaller risk than any of the well-known estimators. There exists a class of control systems where observations are not …
Invariant Embedding Technique and Its Applications for Improvement or Optimization of Statistical Decisions
2010
In the present paper, for improvement or optimization of statistical decisions under parametric uncertainty, a new technique of invariant embedding of sample statistics in a performance index is proposed. This technique represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics. Unlike the Bayesian approach, an invariant embedding technique is independent of the choice of priors. It allows one to eliminate unknown parameters from the problem and to find the best invariant decision rule, which has smaller risk than any of the well-known decision rules. To illustrate the proposed technique, applica…
Bayesian joint models for longitudinal and survival data
2020
This paper takes a quick look at Bayesian joint models (BJM) for longitudinal and survival data. A general formulation for BJM is examined in terms of the sampling distribution of the longitudinal and survival processes, the conditional distribution of the random effects, and the prior distribution. Next, a basic BJM defined in terms of a mixed linear model and a Cox survival regression model is discussed, and some extensions and other Bayesian topics are briefly outlined.
Markov Chain Monte Carlo Methods for High Dimensional Inversion in Remote Sensing
2004
Summary We discuss the inversion of the gas profiles (ozone, NO3, NO2, aerosols and neutral density) in the upper atmosphere from the spectral occultation measurements. The data are produced by the ‘Global Ozone Monitoring by Occultation of Stars’ instrument on board the Envisat satellite that was launched in March 2002. The instrument measures the attenuation of light spectra at various horizontal paths from about 100 km down to 10–20 km. The new feature is that these data allow the inversion of the gas concentration height profiles. A short introduction is given to the present operational data management procedure with examples of the first real data inversion. Several solution options for…
What Does Objective Mean in a Dirichlet-multinomial Process?
2017
Summary The Dirichlet-multinomial process can be seen as the generalisation of the binomial model with a beta prior distribution when the number of categories is larger than two. In such a scenario, setting informative prior distributions becomes difficult when the number of categories is large, so the need for an objective approach arises. However, what does objective mean in the Dirichlet-multinomial process? To deal with this question, we study the sensitivity of the posterior distribution to the choice of an objective Dirichlet prior from those presented in the available literature. We illustrate the impact of the selection of the prior distribution in several scenarios and discuss the mo…
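The conjugate update behind the Dirichlet-multinomial model, and the kind of prior sensitivity the abstract describes, can be sketched in a few lines. This is an illustrative example only: the counts are made up, and the symmetric priors shown (uniform, Jeffreys, Perks) are standard objective defaults, not necessarily the exact set compared in the paper.

```python
# Conjugacy: multinomial counts n_1..n_K with a Dirichlet(alpha,...,alpha)
# prior give a Dirichlet(alpha + n_1, ..., alpha + n_K) posterior.

def dirichlet_posterior_mean(counts, alpha):
    """Posterior mean of category probabilities under a symmetric
    Dirichlet(alpha, ..., alpha) prior (conjugate update)."""
    k = len(counts)
    total = sum(counts) + k * alpha
    return [(n + alpha) / total for n in counts]

counts = [50, 3, 0, 0, 0]  # hypothetical sparse data over 5 categories
for name, alpha in [("uniform (alpha=1)", 1.0),
                    ("Jeffreys (alpha=1/2)", 0.5),
                    ("Perks (alpha=1/K)", 1.0 / len(counts))]:
    post = dirichlet_posterior_mean(counts, alpha)
    print(name, [round(p, 3) for p in post])
```

With sparse counts, the posterior mass assigned to the empty categories varies noticeably across these "objective" choices, which is exactly the sensitivity the paper studies.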
A Bayesian Sequential Look at u-Control Charts
2005
We extend the usual implementation of u-control charts (uCCs) in two ways. First, we overcome the restrictive (and often inadequate) assumptions of the Poisson model; next, we eliminate the need for the questionable base period by using a sequential procedure. We use empirical Bayes (EB) and Bayes methods and compare them with the traditional frequentist implementation. EB methods are relatively easy to implement, and they deal nicely with extra-Poisson variability (and, at the same time, informally check the adequacy of the Poisson assumption). However, they still need the base period. The sequential, full Bayes approach, on the other hand, also avoids this drawback of traditional u-charts. T…
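A gamma-Poisson model is the standard empirical-Bayes device for extra-Poisson variability in count data of this kind. The sketch below is a minimal illustration of that idea, not the paper's estimators: the data are invented, and the moment-based prior fit is a crude textbook version.

```python
# Empirical-Bayes shrinkage for unit rates: c_i ~ Poisson(n_i * u_i),
# u_i ~ Gamma(shape=a, rate=b). Posterior mean of u_i is (c_i + a)/(n_i + b).

def eb_gamma_poisson_rates(counts, exposures):
    """Shrink raw rates c/n toward a gamma prior fitted by moments."""
    rates = [c / n for c, n in zip(counts, exposures)]
    m = sum(rates) / len(rates)
    v = sum((r - m) ** 2 for r in rates) / (len(rates) - 1)
    # crude moment correction: remove the average Poisson sampling variance
    v_between = max(v - m * sum(1.0 / n for n in exposures) / len(exposures),
                    1e-8)
    b = m / v_between          # gamma rate parameter
    a = m * b                  # gamma shape parameter
    return [(c + a) / (n + b) for c, n in zip(counts, exposures)]

counts = [4, 9, 2, 7, 12]                     # hypothetical defect counts
exposures = [10.0, 10.0, 10.0, 10.0, 10.0]    # units inspected per period
print([round(u, 3) for u in eb_gamma_poisson_rates(counts, exposures)])
```

Each EB estimate is a convex combination of the raw rate and the overall mean, so noisy periods are pulled toward the center; if the fitted between-period variance is essentially zero, the data are consistent with a plain Poisson model.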
Bayesian analysis and design for comparison of effect-sizes
2002
Comparison of effect-sizes, or more generally, of non-centrality parameters of non-central t distributions, is a common problem, especially in meta-analysis. The usual simplifying assumptions of either identical or unrelated effect-sizes are often too restrictive to be appropriate. In this paper, the effect-sizes are modeled as random effects with t distributions. Bayesian hierarchical models are used both to design and to analyze experiments. The main goal is to compare effect-sizes. Sample sizes are chosen so as to make accurate inferences about the difference of effect-sizes and to resolve convincingly the test of equality of effect-sizes, if such is the goal.
Poisson Regression with Change-Point Prior in the Modelling of Disease Risk around a Point Source
2003
Bayesian estimation of the risk of a disease around a known point source of exposure is considered. The minimal requirements for the data are that cases and populations at risk are known for a fixed set of concentric annuli around the point source, and that each annulus has a uniquely defined distance from the source. The conventional Poisson likelihood is assumed for the counts of disease cases in each annular zone with zone-specific relative risk parameters, and, conditional on the risks, the counts are considered to be independent. The prior for the relative risk parameters is assumed to be piecewise constant in distance, with a known number of components. This prior is the well-known cha…
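The likelihood layer described in this abstract can be sketched directly: independent Poisson counts per annulus whose mean is the expected count times a relative risk that is a step function of distance. This is an assumed toy setup (one change point, made-up distances, expected counts, and cases), not the paper's full model, which places a change-point prior over the risk surface.

```python
import math

def piecewise_risk(distance, change_point, risk_near, risk_far):
    """Relative risk as a step function of distance from the source."""
    return risk_near if distance < change_point else risk_far

def log_likelihood(counts, expected, distances,
                   change_point, risk_near, risk_far):
    """Poisson log-likelihood: y_i ~ Poisson(E_i * theta(d_i)),
    with counts independent given the risks."""
    ll = 0.0
    for y, e, d in zip(counts, expected, distances):
        mu = e * piecewise_risk(d, change_point, risk_near, risk_far)
        ll += y * math.log(mu) - mu - math.lgamma(y + 1)
    return ll

distances = [0.5, 1.5, 2.5, 3.5, 4.5]      # annulus midpoints (km)
expected  = [2.0, 5.0, 9.0, 14.0, 20.0]    # expected counts from population
counts    = [6, 11, 10, 15, 19]            # observed disease cases
print(log_likelihood(counts, expected, distances,
                     change_point=2.0, risk_near=2.0, risk_far=1.0))
```

In a Bayesian fit this log-likelihood would be combined with the change-point prior over (change_point, risk_near, risk_far) and explored by MCMC; here it only illustrates that elevated risk near the source fits these toy data better than a flat risk of one.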