Search results for "Uncertainty"
Showing 10 of 1010 documents
Temporal aggregation in chain graph models
2005
The dependence structure of an observed process induced by temporal aggregation of a time-evolving hidden spatial phenomenon is addressed. Data are described by means of chain graph models, and an algorithm is provided to compute the chain graph resulting from the temporal aggregation of a directed acyclic graph. This chain graph is the best graph, within the chain graph class, that covers the independencies of the resulting process. A sufficient condition that produces a memory loss of the observed process with respect to its hidden origin is analyzed. Some examples illustrate the algorithms and results.
Time-dependent weak rate of convergence for functions of generalized bounded variation
2016
Let $W$ denote the Brownian motion. For any exponentially bounded Borel function $g$, the function $u$ defined by $u(t,x)= \mathbb{E}[g(x{+}\sigma W_{T-t})]$ is the stochastic solution of the backward heat equation with terminal condition $g$. Let $u^n(t,x)$ denote the corresponding approximation generated by a simple symmetric random walk with time steps $2T/n$ and space steps $\pm \sigma \sqrt{T/n}$, where $\sigma > 0$. For quite irregular terminal conditions $g$ (of bounded variation on compact intervals, or locally H\"older continuous), the rate of convergence of $u^n(t,x)$ to $u(t,x)$ is considered, as well as the behavior of the error $u^n(t,x)-u(t,x)$ as $t$ tends to $T$.
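As a concrete illustration of the construction in this abstract, the scheme can be checked numerically for the simplest terminal condition of bounded variation, the indicator $g(y)=\mathbf{1}[y \ge 0]$, for which $u$ has a closed form. The sketch below uses the standard scaling (time step $T/n$, space step $\pm\sigma\sqrt{T/n}$) rather than the paper's exact parametrization, so it illustrates only the random-walk approximation itself, not the paper's convergence rates.

```python
import math

def u_exact(t, x, T=1.0, sigma=1.0):
    # Closed form for g = 1[y >= 0]:
    # u(t, x) = P(x + sigma*W_{T-t} >= 0) = Phi(x / (sigma * sqrt(T - t)))
    z = x / (sigma * math.sqrt(T - t))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def u_walk(t, x, n, T=1.0, sigma=1.0):
    # Random-walk counterpart u^n(t, x): m remaining steps of length T/n in
    # time and +/- sigma*sqrt(T/n) in space; the expectation over the walk
    # is computed exactly via the binomial distribution of up-steps.
    m = round(n * (T - t) / T)           # steps remaining until time T
    h = sigma * math.sqrt(T / n)         # space step
    hits = 0
    for k in range(m + 1):               # k = number of up-steps
        if x + h * (2 * k - m) >= 0:     # terminal condition g = 1[y >= 0]
            hits += math.comb(m, k)
    return hits / 2.0 ** m
```

For instance, `u_walk(0.0, 0.5, 500)` agrees with `u_exact(0.0, 0.5)` to roughly two decimal places; near $t = T$ the discrete grid resolves the jump of $g$ poorly, which is exactly the error behavior the paper studies.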
A penalized approach for the bivariate ordered logistic model with applications to social and medical data
2018
Bivariate ordered logistic models (BOLMs) are appealing for jointly modelling the marginal distributions of two ordered responses and their association, given a set of covariates. When the number of categories of the responses increases, the number of global odds ratios to be estimated also increases, and estimation becomes problematic. In this work we propose a non-parametric approach for the maximum likelihood (ML) estimation of a BOLM, in which penalties are applied to the differences between adjacent row and column effects. Our proposal is then compared to the Goodman and Dale models. Some simulation results, as well as analyses of two real data sets, are presented and discussed.
On Independent Component Analysis with Stochastic Volatility Models
2017
Consider a multivariate time series where each component series is assumed to be a linear mixture of latent, mutually independent stationary time series. Classical independent component analysis (ICA) tools, such as fastICA, are often used to extract the latent series, but they do not utilize any information on temporal dependence. In addition, financial time series often have periods of low and high volatility. In such settings, second-order source separation methods, such as SOBI, fail. We review some classical methods used for time series with stochastic volatility and suggest modifications of them by proposing a family of vSOBI estimators. These estimators use different nonlinearity functions to…
Tests against stationary and explosive alternatives in vector autoregressive models
2008
The article proposes new tests for the number of unit roots in vector autoregressive models based on the eigenvalues of the companion matrix. Both stationary and explosive alternatives are considered. The limiting distributions of the test statistics depend only on the number of unit roots. Size and power are investigated, and it is found that the new test against some stationary alternatives compares favourably with the widely used likelihood ratio test for the cointegrating rank. The powers are markedly higher against explosive than against stationary alternatives. Some empirical examples show how to use the new tests with real data.
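The companion-matrix construction underlying these tests is standard and easy to sketch: a VAR($p$) is rewritten as a VAR(1) whose coefficient matrix's eigenvalues govern stationarity (modulus below one), unit roots (on the unit circle), or explosiveness (above one). The sketch below builds that matrix for illustrative coefficient matrices; it is not the paper's test statistic, only the object its tests are based on.

```python
import numpy as np

def companion(A_list):
    # Stack VAR(p) coefficient matrices A_1..A_p (each k x k) into the
    # (kp x kp) companion matrix [[A_1 ... A_p], [I 0 ... 0], ...].
    k = A_list[0].shape[0]
    p = len(A_list)
    C = np.zeros((k * p, k * p))
    C[:k, :] = np.hstack(A_list)
    C[k:, :-k] = np.eye(k * (p - 1))   # identity blocks on the sub-diagonal
    return C

# Illustrative VAR(1): one eigenvalue on the unit circle (a unit root),
# one inside it (a stationary direction).
A1 = np.array([[1.0, 0.0],
               [0.0, 0.5]])
moduli = np.abs(np.linalg.eigvals(companion([A1])))
# moduli near 1 indicate unit roots; > 1 explosive; < 1 stationary roots
```

Counting how many moduli sit at (or above) one is what distinguishes the stationary, unit-root, and explosive alternatives that the article's tests discriminate between.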
Correcting for non-ignorable missingness in smoking trends
2015
Data missing not at random (MNAR) is a major challenge in survey sampling. We propose an approach based on registry data to deal with non-ignorable missingness in health examination surveys. The approach relies on follow-up data available from administrative registers several years after the survey. For illustration, we use data on smoking prevalence in the Finnish National FINRISK study conducted in 1972–1997. The data consist of measured survey information including missingness indicators, register-based background information and register-based time-to-disease survival data. The parameters of the missingness mechanism are estimable with these data although the original survey data are MNAR. The u…
Conditionally heteroscedastic intensity-dependent marking of log Gaussian Cox processes
2009
Spatial marked point processes are models for systems of points that are randomly distributed in space and provided with measured quantities called marks. This study deals with marking, that is, methods of constructing marked point processes from unmarked ones. The focus is on density-dependent marking, where the local point intensity affects the mark distribution. This study develops new markings for log Gaussian Cox processes in which both the mean and variance of the mark distribution depend on the local intensity. The mean, variance and mark correlation properties are presented for the new markings, and a Bayesian estimation procedure is suggested for statistical inference. The p…
Poisson Regression with Change-Point Prior in the Modelling of Disease Risk around a Point Source
2003
Bayesian estimation of the risk of a disease around a known point source of exposure is considered. The minimal requirements for the data are that cases and populations at risk are known for a fixed set of concentric annuli around the point source, and that each annulus has a uniquely defined distance from the source. The conventional Poisson likelihood is assumed for the counts of disease cases in each annular zone, with zone-specific relative risk parameters, and, conditional on the risks, the counts are considered to be independent. The prior for the relative risk parameters is assumed to be piecewise constant in distance, with a known number of components. This prior is the well-known cha…
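The likelihood ingredient of this model can be sketched concretely. The code below uses hypothetical annular data (all numbers illustrative) and a piecewise-constant relative risk with a single change point; it profiles the Poisson likelihood over candidate change points rather than placing the paper's change-point prior, so it shows the likelihood structure only, not the Bayesian analysis.

```python
import math

# Hypothetical annular data: distance (km) of each annulus, observed cases y,
# and expected counts E under the reference rates (illustrative numbers).
dist = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0]
y    = [9, 12, 10, 8, 15, 14, 13]
E    = [4.0, 7.0, 8.0, 8.0, 15.0, 14.5, 13.5]

def loglik(theta_near, theta_far, cut):
    # Poisson log-likelihood with piecewise-constant relative risk:
    # risk = theta_near for annuli within `cut`, theta_far beyond it.
    ll = 0.0
    for d, yj, Ej in zip(dist, y, E):
        mu = Ej * (theta_near if d <= cut else theta_far)
        ll += yj * math.log(mu) - mu - math.lgamma(yj + 1)
    return ll

def profile_cut(cut):
    # Conditional on the change point, the ML relative risks are the
    # observed/expected ratios within each segment.
    near = [(yj, Ej) for d, yj, Ej in zip(dist, y, E) if d <= cut]
    far  = [(yj, Ej) for d, yj, Ej in zip(dist, y, E) if d > cut]
    t1 = sum(yj for yj, _ in near) / sum(Ej for _, Ej in near)
    t2 = sum(yj for yj, _ in far) / sum(Ej for _, Ej in far)
    return t1, t2, loglik(t1, t2, cut)

# Pick the change point with the highest profiled likelihood.
best_cut = max([1.0, 2.0, 3.0, 4.0], key=lambda c: profile_cut(c)[2])
```

In the Bayesian formulation of the abstract, the change point and the zone risks instead receive a prior and are summarized through the posterior, but the Poisson likelihood evaluated here is the same.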
A Bayesian analysis of classical hypothesis testing
1980
The procedure of maximizing the missing information is applied to derive reference posterior probabilities for null hypotheses. The results shed further light on Lindley’s paradox and suggest that a Bayesian interpretation of classical hypothesis testing is possible, by providing an approximate one-to-one relationship between significance levels and posterior probabilities.
Bayesian subcohort selection for longitudinal covariate measurements in follow‐up studies
2022
We propose an approach for the planning of longitudinal covariate measurements in follow-up studies where covariates are time-varying. We assume that the entire cohort cannot be selected for longitudinal measurements due to financial limitations, and study how a subset of the cohort should be selected optimally, in order to obtain precise estimates of covariate effects in a survival model. In our approach, the study will be designed sequentially utilizing the data collected in previous measurements of the individuals as prior information. We propose using a Bayesian optimality criterion in the subcohort selections, which is compared with simple random sampling using simulated and real follo…