Search results for "computer.software_genre"
showing 10 items of 3858 documents
Trading leads to scale-free self-organization
2009
Financial markets display scale-free behavior in many different aspects. The power-law behavior of part of the distribution of individual wealth was recognized by Pareto as early as the nineteenth century. Heavy-tailed and scale-free behavior of the distribution of returns of different financial assets has been confirmed in a series of works. The existence of a Pareto-like distribution of the wealth of market participants has been connected with the scale-free distribution of trading volumes and price returns. The origin of the Pareto-like wealth distribution, however, has remained obscure. Here we show that it is the process of trading itself that, under two mild assumptions, spontaneously…
S36.4: Control of false discovery rate in adaptive designs
2004
Methods and Tools for Bayesian Variable Selection and Model Averaging in Normal Linear Regression
2018
In this paper, we briefly review the main methodological aspects concerned with the application of the Bayesian approach to model choice and model averaging in the context of variable selection in regression models. This includes prior elicitation, summaries of the posterior distribution and computational strategies. We then examine and compare various publicly available R-packages, summarizing and explaining the differences between packages and giving recommendations for applied users. We find that all packages reviewed (can) lead to very similar results, but there are potentially important differences in flexibility and efficiency of the packages.
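The core computation behind Bayesian variable selection and model averaging can be sketched compactly. The toy below (synthetic data, invented coefficients, not drawn from the paper or any of the R packages it reviews) enumerates all subsets of predictors, scores each with the BIC approximation to the marginal likelihood under a uniform model prior, and reports posterior inclusion probabilities:

```python
# Sketch: Bayesian model averaging over all subsets of predictors,
# using the BIC approximation to the marginal likelihood.
# Data, coefficients, and sample sizes here are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, 0.0, -2.0])   # the middle predictor is irrelevant
y = X @ beta_true + rng.normal(size=n)

def bic(y, Xm):
    """BIC of an OLS fit with intercept; handles the empty model."""
    n = len(y)
    Z = np.column_stack([np.ones(n), Xm]) if Xm.shape[1] else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return n * np.log(rss / n) + Z.shape[1] * np.log(n)

models = [m for r in range(p + 1) for m in itertools.combinations(range(p), r)]
scores = np.array([bic(y, X[:, list(m)]) for m in models])
# Posterior model probabilities ~ exp(-BIC/2) under a uniform model prior.
w = np.exp(-(scores - scores.min()) / 2)
w /= w.sum()

# Posterior inclusion probability of each predictor: total weight of
# the models that contain it.
incl = np.array([sum(w[i] for i, m in enumerate(models) if j in m)
                 for j in range(p)])
print(np.round(incl, 3))
```

Real packages differ mainly in the prior on coefficients and in how they search the model space when full enumeration is infeasible, which is the flexibility/efficiency trade-off the review discusses.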
Using Statistical and Computer Models to Quantify Volcanic Hazards
2009
Risk assessment of rare natural hazards, such as large volcanic block and ash or pyroclastic flows, is addressed. Assessment is approached through a combination of computer modeling, statistical modeling, and extreme-event probability computation. A computer model of the natural hazard is used to provide the needed extrapolation to unseen parts of the hazard space. Statistical modeling of the available data is needed to determine the initializing distribution for exercising the computer model. In dealing with rare events, direct simulations involving the computer model are prohibitively expensive. The solution instead requires a combination of adaptive design of computer model approximation…
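The "extrapolation to unseen parts of the hazard space" is typically obtained from a cheap statistical surrogate of the expensive simulator. A minimal sketch of that idea, with a stand-in analytic function playing the role of the computer model and an arbitrarily chosen RBF kernel and length scale, is:

```python
# Minimal sketch of a statistical surrogate (Gaussian-process emulator)
# for an expensive simulator. The "simulator" and kernel settings below
# are illustrative stand-ins, not the paper's actual model.
import numpy as np

def simulator(x):              # stand-in for the expensive computer model
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Train the emulator on a small design of simulator runs.
X = np.linspace(0, 2, 8)
y = simulator(X)
K = rbf(X, X) + 1e-8 * np.eye(len(X))    # jitter for numerical stability
alpha = np.linalg.solve(K, y)

def emulate(xs):
    """GP posterior mean: a cheap prediction in place of new simulator runs."""
    return rbf(xs, X) @ alpha

xs = np.linspace(0, 2, 200)
err = np.max(np.abs(emulate(xs) - simulator(xs)))
print(f"max emulation error: {err:.3f}")
```

Once fitted, the emulator can be evaluated millions of times in a rare-event probability computation at negligible cost, which is exactly why direct simulation with the full computer model can be avoided.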
PROBABILISTIC QUANTIFICATION OF HAZARDS: A METHODOLOGY USING SMALL ENSEMBLES OF PHYSICS-BASED SIMULATIONS AND STATISTICAL SURROGATES
2015
This paper presents a novel approach to assessing the hazard threat to a locale due to a large volcanic avalanche. The methodology combines: (i) mathematical modeling of volcanic mass flows; (ii) field data of avalanche frequency, volume, and runout; (iii) large-scale numerical simulations of flow events; (iv) use of statistical methods to minimize computational costs, and to capture unlikely events; (v) calculation of the probability of a catastrophic flow event over the next T years at a location of interest; and (vi) innovative computational methodology to implement these methods. This unified presentation collects elements that have been separately developed, and incorporates new contri…
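Step (v), the probability of a catastrophic flow over the next T years, has a simple closed form once an event frequency and a per-event exceedance probability are in hand. The numbers below are made up for illustration; in the methodology they would come from field data and the statistical surrogate:

```python
# Illustrative sketch of step (v): probability of at least one catastrophic
# flow in the next T years, assuming Poisson event arrivals with rate lam
# and probability q that any given event is catastrophic at the site.
# lam, q, and T below are invented numbers, not field data.
import math

lam = 0.2     # events per year (assumed frequency)
q = 0.1       # P(event exceeds the critical runout), e.g. from a surrogate
T = 50

# Thinning a Poisson process: catastrophic events arrive at rate lam * q,
# so P(at least one in T years) = 1 - exp(-lam * q * T).
p_cat = 1 - math.exp(-lam * q * T)
print(f"P(catastrophe within {T} yr) = {p_cat:.3f}")   # → 0.632
```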
On the convenience of heteroscedasticity in highly multivariate disease mapping
2019
Highly multivariate disease mapping has recently been proposed as an enhancement of traditional multivariate studies, making it possible to analyse a large number of diseases jointly. This line of research has great potential, since it integrates the information from many diseases into a single model, yielding richer and more accurate risk maps. In this paper we show how some of the proposals already put forward in this area exhibit particular problems when applied to small regions of study. Specifically, the homoscedasticity of these proposals may produce evident misfits and distorted risk maps. We therefore propose two new models to deal with the variance-adaptiv…
Bayesian assessment of times to diagnosis in breast cancer screening
2008
Breast cancer is one of the diseases with the most profound impact on health in developed countries, and mammography is the most popular method for detecting breast cancer at a very early stage. This paper focuses on the waiting period from a positive mammogram until a confirmatory diagnosis is carried out in hospital. Generalized linear mixed models are used to perform the statistical analysis, entirely within the Bayesian framework. Markov chain Monte Carlo algorithms, run in the free software WinBUGS, are applied for estimation by simulating the posterior distribution of the parameters and hyperparameters of the model.
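The MCMC machinery behind such an analysis can be illustrated with a much smaller toy than the paper's GLMM: a random-walk Metropolis sampler for the posterior of an exponential rate governing synthetic waiting times. Everything here (data, prior, step size) is an assumption for illustration, not the paper's model:

```python
# Toy sketch of MCMC estimation: random-walk Metropolis on the log-rate
# of exponentially distributed waiting times, with a flat prior on the
# log-rate (an assumption). Synthetic data; not the paper's GLMM.
import numpy as np

rng = np.random.default_rng(1)
waits = rng.exponential(scale=30.0, size=200)   # synthetic waits (days)

def log_post(log_rate):
    rate = np.exp(log_rate)
    # Exponential log-likelihood plus a flat prior on log_rate.
    return len(waits) * log_rate - rate * waits.sum()

samples, cur = [], np.log(1 / waits.mean())     # start at the MLE
lp = log_post(cur)
for _ in range(20000):
    prop = cur + 0.1 * rng.normal()             # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept step
        cur, lp = prop, lp_prop
    samples.append(cur)

# Discard burn-in, then summarize the posterior as a mean waiting time.
post_mean_wait = 1 / np.exp(np.mean(samples[5000:]))
print(f"posterior mean waiting time ~ {post_mean_wait:.1f} days")
```

WinBUGS automates exactly this kind of posterior simulation (with Gibbs and Metropolis updates) for far richer hierarchical models.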
Visualizing categorical data in ViSta
2003
The modules of the statistical package ViSta related to categorical data analysis are presented. These modules are: visualization of frequency data with mosaic and bar plots, correspondence analysis, multiple correspondence analysis, and loglinear analysis. All these methods are implemented in ViSta with a strong emphasis on plots and graphical representations of the data, as well as on user interactivity with the system. Together they provide a system that has proven to be easy, useful, and powerful for both novice and experienced users.
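The numerical core of correspondence analysis, independent of any particular package, is an SVD of the standardized residuals of a contingency table. A self-contained sketch on an invented 3x3 table:

```python
# Sketch of the computation behind correspondence analysis: SVD of the
# standardized residuals of a contingency table. The table is invented.
import numpy as np

N = np.array([[20, 10,  5],
              [10, 30, 15],
              [ 5, 15, 40]], dtype=float)
P = N / N.sum()
r = P.sum(axis=1)          # row masses
c = P.sum(axis=0)          # column masses

# Standardized residuals: (observed - expected) / sqrt(expected), in
# proportion scale.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

inertia = sv ** 2          # principal inertia captured by each dimension
total = inertia.sum()      # equals the chi-square statistic divided by n
print("share of inertia per dimension:", np.round(inertia / total, 3))

# Row coordinates on the first principal axis (principal coordinates):
rows_dim1 = (U[:, 0] * sv[0]) / np.sqrt(r)
```

Plotting the row and column principal coordinates for the first two dimensions gives the familiar correspondence-analysis map that a package like ViSta renders interactively.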
A Knowledge Management and Decision Support Model for Enterprises
2011
We propose a novel knowledge management system (KMS) for enterprises. Our system exploits two different approaches to knowledge representation and reasoning: a document-based approach built on data-driven creation of a semantic space, and an ontology-based model. Furthermore, we provide an expert system capable of supporting enterprise decision-making processes and a semantic engine that performs intelligent search over the enterprise knowledge bases. The decision support process exploits the Bayesian network model to improve the business planning process when performed under uncertainty. Copyright © 2011 Patrizia Ribino et al.
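How a Bayesian network can drive a planning decision under uncertainty can be shown with a tiny two-node example: update a belief about the market from an observed signal via Bayes' rule, then choose the action with the highest expected utility. The structure, probabilities, and utilities below are invented for illustration, not taken from the paper:

```python
# Hedged sketch of Bayesian-network-based decision support: posterior
# update from evidence, then expected-utility maximization.
# All probabilities and utilities here are invented.
p_up = 0.4                                     # prior P(market_up)
p_signal_given = {True: 0.8, False: 0.3}       # P(positive signal | market)
utility = {("expand", True): 100, ("expand", False): -60,
           ("hold",   True): 10,  ("hold",   False): 10}

def posterior_up(signal: bool) -> float:
    """Bayes' rule: P(market_up | observed signal)."""
    like = {m: (p_signal_given[m] if signal else 1 - p_signal_given[m])
            for m in (True, False)}
    num = like[True] * p_up
    return num / (num + like[False] * (1 - p_up))

def best_action(signal: bool):
    """Action maximizing expected utility under the posterior."""
    p = posterior_up(signal)
    eu = {a: p * utility[(a, True)] + (1 - p) * utility[(a, False)]
          for a in ("expand", "hold")}
    return max(eu, key=eu.get), eu

print(best_action(True))    # a positive signal favors expanding
print(best_action(False))   # a negative signal favors holding
```

In a real KMS the priors and likelihoods would come from the enterprise knowledge bases rather than being hard-coded.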
Sparse kernel methods for high-dimensional survival data
2008
Sparse kernel methods like support vector machines (SVMs) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques, however, are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model depends on the covariates only through inner products, it can be 'kernelized'. The kernelized proportional hazards model, however, yields a solution that is dense, i.e. the solution depends on all observations. One of the key features of an SVM is that it yields a sparse solution, dependin…
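The kernelization point can be made concrete: in Cox's partial likelihood the risk scores can be written as f = K @ alpha, so covariates enter only through the kernel matrix K. The toy below evaluates that kernelized partial log-likelihood on synthetic data (RBF kernel, random censoring, no ties); it is only an illustration of the dense kernelized model, not the sparse estimator the paper develops:

```python
# Sketch of the 'kernelized' Cox partial log-likelihood: covariates appear
# only through the kernel matrix K, with risk scores f = K @ alpha.
# Data, kernel, and censoring mechanism are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 30
X = rng.normal(size=(n, 4))
time = rng.exponential(size=n)            # observed times (no ties w.p. 1)
event = rng.uniform(size=n) < 0.7         # True = event, False = censored

# RBF kernel matrix: all access to X is through these inner-product terms.
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

def partial_loglik(alpha):
    f = K @ alpha
    order = np.argsort(time)
    ll = 0.0
    for idx, i in enumerate(order):
        if event[i]:
            at_risk = order[idx:]         # subjects with time >= time[i]
            ll += f[i] - np.log(np.exp(f[at_risk]).sum())
    return ll

print(partial_loglik(np.zeros(n)))
```

Maximizing this over alpha gives the dense kernelized fit the abstract describes; sparsity requires modifying the objective so that most entries of alpha are driven to zero.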