Search results for "normal distribution"
Showing 10 of 57 documents
Color image quality assessment measure using multivariate generalized Gaussian distribution
2014
This paper deals with color image quality assessment in the reduced-reference framework based on natural scene statistics. In this context, we propose to model the statistics of the steerable pyramid coefficients by a Multivariate Generalized Gaussian distribution (MGGD). This model takes into account the high correlation between the components of the RGB color space. For each selected scale and orientation, we extract a parameter matrix from the three color-component subbands. To quantify the visual degradation, we use a closed form of the Kullback-Leibler Divergence (KLD) between two MGGDs. Using the "TID2008" benchmark, the proposed measure has been compared with the most i…
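The closed-form KLD the abstract relies on reduces, for the Gaussian member of the MGGD family (shape parameter 1) with zero mean, to the standard multivariate Gaussian expression. The sketch below illustrates only that special case, not the paper's full MGGD formula:

```python
import numpy as np

def kl_gaussian_zero_mean(S0, S1):
    """KL(N(0, S0) || N(0, S1)) between zero-mean multivariate Gaussians,
    the shape-parameter-1 special case of the MGGD family."""
    d = S0.shape[0]
    return 0.5 * (np.trace(np.linalg.inv(S1) @ S0) - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Identical distributions have zero divergence; inflating the second
# covariance makes the divergence strictly positive.
S = np.eye(2)
same = kl_gaussian_zero_mean(S, S)          # exactly 0
wider = kl_gaussian_zero_mean(S, 2.0 * S)   # 0.5 * (1 - 2 + ln 4)
```

Note the asymmetry: the KLD is a divergence, not a metric, which is why reduced-reference measures fix the direction (reference vs. distorted) once and for all.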
Unsupervised Anomaly and Change Detection With Multivariate Gaussianization
2022
Anomaly detection (AD) is a field of intense research in remote sensing (RS) image processing. Identifying low-probability events in RS images is a challenging problem given the high dimensionality of the data, especially when no (or little) information about the anomaly is available a priori. While plenty of methods are available, the vast majority of them do not scale well to large datasets and require the choice of some (very often critical) hyperparameters. Therefore, unsupervised and computationally efficient detection methods become strictly necessary, especially now with the data deluge problem. In this article, we propose an unsupervised method for detecting anomalies and changes …
Randomized Rx For Target Detection
2018
This work tackles the target detection problem through the well-known global RX method. The RX method models the clutter as a multivariate Gaussian distribution, and has been extended to nonlinear distributions using kernel methods. While the kernel RX can cope with complex clutter, it requires a considerable amount of computational resources as the number of clutter pixels grows. Here we propose random Fourier features to approximate the Gaussian kernel in kernel RX; consequently, our development keeps the accuracy of the nonlinearity while reducing the computational cost, which is now controlled by a hyperparameter. Results over both synthetic and real-world image target detection…
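The random Fourier feature approximation the abstract refers to is the Rahimi-Recht construction: draw frequencies from the kernel's spectral density and approximate the Gaussian kernel by an explicit inner product. A minimal sketch (the dimensions and bandwidth are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 3, 4000, 1.0   # input dim, number of random features, bandwidth

# For the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)),
# draw W ~ N(0, I / sigma^2) and b ~ Uniform(0, 2 pi); then
# phi(x) . phi(y) approximates k(x, y), with error O(1 / sqrt(D)).
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
approx = phi(x) @ phi(y)
```

The number of features D plays the role of the cost-controlling hyperparameter mentioned in the abstract: the detector then works on D-dimensional explicit features instead of an n-by-n kernel matrix over all clutter pixels.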
Using SMAA-2 method with dependent uncertainties for strategic forest planning
2006
Abstract Uncertainty included in forest variables is normally ignored in forest management planning. When the uncertainty is accounted for, it is typically assumed to be independently distributed for the criteria measurements of different alternatives. In forest management planning, the factors introducing the uncertainty can be classified into three main sources: the errors in the basic forestry data, the uncertainty of the (relative) future prices of timber, and the uncertainty in predicting the forest development. Due to the nature of these error sources, most of the involved uncertainties can be assumed to be positively correlated across the alternative management plans and/or criteria.…
SIMULATION EXPERIMENTS WITH MULTIPLE GROUP LINEAR AND QUADRATIC DISCRIMINANT ANALYSIS
1973
Summary A simulation program is described which can be performed to obtain estimates of the different types of misclassification probabilities for multiple group linear and quadratic discriminant analysis. The program can be used to study how these errors depend on sample sizes and the different parameters of the multivariate normal distribution. Examples for several simulation experiments are given and possible conclusions are discussed.
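As a toy version of such a simulation, the sketch below Monte Carlo estimates the misclassification probability of the nearest-mean linear rule for two equal-covariance multivariate normal groups, where the known-parameter error Phi(-delta/2) gives a check. All settings here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two equally likely groups N(mu0, I) and N(mu1, I) in 2-D, means delta apart.
delta = 2.0
mu0, mu1 = np.zeros(2), np.array([delta, 0.0])

# With known, equal covariances the optimal linear rule assigns each point to
# the nearer mean; its theoretical error is Phi(-delta / 2), here about 0.1587.
n = 200_000
x = rng.normal(size=(n, 2)) + mu0          # draws from group 0
misclassified = np.sum((x - mu1) ** 2, axis=1) < np.sum((x - mu0) ** 2, axis=1)
error_rate = misclassified.mean()
```

In a study like the one summarized above, the means and covariances would instead be estimated from training samples of varying sizes, so the simulated error exceeds the known-parameter benchmark.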
Bayesian modeling of the evolution of male height in 18th century Finland from incomplete data.
2012
Abstract Data on army recruits’ height are frequently available and can be used to analyze the economics and welfare of the population in different periods of history. However, such data are not a random sample from the whole population at the time of interest, but are instead skewed, since shorter men were less likely to be recruited. In statistical terms this means that the data are left-truncated. Although truncation is well understood in statistics, a further complication is that the truncation threshold is not known, may vary from time to time, and auxiliary information on the threshold is not at our disposal. The advantage of the fully Bayesian approach presented here is that both the …
How to simulate normal data sets with the desired correlation structure
2010
The Cholesky decomposition is a widely used method to draw samples from a multivariate normal distribution with a non-singular covariance matrix. In this work we introduce a simple method using singular value decomposition (SVD) to simulate multivariate normal data even if the covariance matrix is singular, which is often the case in chemometric problems. The covariance matrix can be specified by the user or can be generated by specifying a subset of the eigenvalues. The latter can be an advantage for simulating data sets with a particular latent structure. This can be useful for testing the performance of chemometric methods with data sets matching the theoretical conditions for their app…
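The SVD route the abstract describes can be sketched in a few lines: factor the symmetric PSD covariance as U diag(s) U^T and push standard normal draws through U sqrt(diag(s)). The rank-2 covariance below is an illustrative stand-in for the singular case that Cholesky cannot handle:

```python
import numpy as np

rng = np.random.default_rng(2)

# A rank-2 (hence singular) 3x3 covariance; Cholesky requires positive
# definiteness, so it cannot factor this matrix.
A = rng.normal(size=(3, 2))
cov = A @ A.T

# SVD of a symmetric PSD matrix: cov = U diag(s) U^T, so L = U sqrt(diag(s))
# satisfies L L^T = cov and maps standard normal draws to N(0, cov).
U, s, _ = np.linalg.svd(cov)
L = U * np.sqrt(s)                       # scale columns by sqrt singular values

n = 100_000
samples = L @ rng.normal(size=(3, n))    # each column ~ N(0, cov)
emp_cov = samples @ samples.T / n        # empirical covariance, close to cov
```

Specifying a subset of the eigenvalues, as the abstract suggests, amounts to choosing s directly (zeroing the rest), which fixes the latent dimensionality of the simulated data.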
Statistical validation of rival models for observable stochastic process and its identification
2011
In this paper, for statistical validation of rival (analytical or simulation) models collected for modeling an observable process in a stochastic system (say, a transportation or service system), a uniformly most powerful invariant (UMPI) test is developed from the generalized maximum likelihood ratio (GMLR). This test can be considered a result of a new approach to solving the Behrens-Fisher problem when the covariance matrices of multivariate normal populations (compared with respect to their means) are different and unknown. The test makes use of an invariant statistic whose distribution, under the null hypothesis, does not depend on the unknown (nuisance) parameters. The sample size and thresho…
Adaptive Gaussian particle method for the solution of the Fokker-Planck equation
2012
The Fokker-Planck equation describes the evolution of the probability density for a stochastic ordinary differential equation (SODE). A solution strategy for this partial differential equation (PDE) up to a relatively large number of dimensions is based on particle methods using Gaussians as basis functions. An initial probability density is decomposed into a sum of multivariate normal distributions and these are propagated according to the SODE. The decomposition as well as the propagation is subject to possibly large numeric errors due to the difficulty of controlling the spatial residual over the whole domain. In this paper a new particle method is derived, which allows a deterministic error…
A Highly Flexible Trajectory Model Based on the Primitives of Brownian Fields—Part II: Analysis of the Statistical Properties
2016
In the first part of our paper, we have proposed a highly flexible trajectory model based on the primitives of Brownian fields (BFs). In this second part, we study the statistical properties of that trajectory model in depth. These properties include the autocorrelation function (ACF), mean, and the variance of the path along each axis. We also derive the distribution of the angle-of-motion (AOM) process, the incremental traveling length process, and the overall traveling length. It is shown that the path process is in general non-stationary. We show that the AOM and the incremental traveling length processes can be modeled by the phase and the envelope of a complex Gaussian process with no…
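The abstract's claim that the AOM and incremental traveling length can be modeled by the phase and envelope of a complex Gaussian process rests on a classical fact: for a zero-mean complex Gaussian sample, the envelope is Rayleigh distributed and the phase is uniform. A numerical sketch of that fact (not the paper's trajectory model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Zero-mean complex Gaussian samples: independent N(0, sigma^2) real and
# imaginary parts.
sigma, n = 1.0, 100_000
z = rng.normal(0.0, sigma, n) + 1j * rng.normal(0.0, sigma, n)

envelope = np.abs(z)    # Rayleigh(sigma): mean is sigma * sqrt(pi / 2)
phase = np.angle(z)     # uniform on (-pi, pi]: mean is 0
```

A nonzero mean in z would instead give a Rician envelope, the standard generalization when a deterministic drift is superimposed on the Gaussian fluctuation.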