Search results for "Software"
Showing 10 of 7,396 documents
Using Statistical and Computer Models to Quantify Volcanic Hazards
2009
Risk assessment of rare natural hazards, such as large volcanic block and ash or pyroclastic flows, is addressed. Assessment is approached through a combination of computer modeling, statistical modeling, and extreme-event probability computation. A computer model of the natural hazard is used to provide the needed extrapolation to unseen parts of the hazard space. Statistical modeling of the available data is needed to determine the initializing distribution for exercising the computer model. In dealing with rare events, direct simulations involving the computer model are prohibitively expensive. The solution instead requires a combination of adaptive design of computer model approximation…
Probabilistic Quantification of Hazards: A Methodology Using Small Ensembles of Physics-Based Simulations and Statistical Surrogates
2015
This paper presents a novel approach to assessing the hazard threat to a locale due to a large volcanic avalanche. The methodology combines: (i) mathematical modeling of volcanic mass flows; (ii) field data of avalanche frequency, volume, and runout; (iii) large-scale numerical simulations of flow events; (iv) use of statistical methods to minimize computational costs, and to capture unlikely events; (v) calculation of the probability of a catastrophic flow event over the next T years at a location of interest; and (vi) innovative computational methodology to implement these methods. This unified presentation collects elements that have been separately developed, and incorporates new contri…
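As a minimal illustration of step (v), if flow events are assumed to arrive as a Poisson process and each event is assumed to reach the site of interest independently with some fixed probability, the probability of at least one catastrophic flow over the next T years has a closed form. The rates and probabilities below are invented placeholders for the sketch, not values from the paper:

```python
import math

def hazard_probability(rate_per_year, p_reach, T):
    """Probability of at least one catastrophic flow reaching the site
    in the next T years, assuming flows arrive as a Poisson process
    with the given annual rate and each flow independently reaches the
    site with probability p_reach (a thinned Poisson process)."""
    return 1.0 - math.exp(-rate_per_year * p_reach * T)

# Illustrative numbers: one flow every 20 years on average, a 5% chance
# that a given flow reaches the site, and a 50-year horizon.
p = hazard_probability(rate_per_year=1 / 20, p_reach=0.05, T=50)
```

In practice the reach probability itself is what the statistical surrogate estimates, since direct simulation of enough rare events is prohibitively expensive.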
On the convenience of heteroscedasticity in highly multivariate disease mapping
2019
Highly multivariate disease mapping has recently been proposed as an enhancement of traditional multivariate studies, making it possible to perform the joint analysis of a large number of diseases. This line of research has an important potential since it integrates the information of many diseases into a single model yielding richer and more accurate risk maps. In this paper we show how some of the proposals already put forward in this area display some particular problems when applied to small regions of study. Specifically, the homoscedasticity of these proposals may produce evident misfits and distorted risk maps. In this paper we propose two new models to deal with the variance-adaptiv…
Bayesian assessment of times to diagnosis in breast cancer screening
2008
Breast cancer is one of the diseases with the most profound impact on health in developed countries and mammography is the most popular method for detecting breast cancer at a very early stage. This paper focuses on the waiting period from a positive mammogram until a confirmatory diagnosis is carried out in hospital. Generalized linear mixed models are used to perform the statistical analysis, always within the Bayesian framework. Markov chain Monte Carlo algorithms, run through the free software WinBUGS, are applied for estimation by simulating the posterior distribution of the parameters and hyperparameters of the model.
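The abstract's model is fitted with WinBUGS; purely as an illustration of the underlying MCMC idea, the following is a minimal random-walk Metropolis sampler for the posterior of a normal mean with known variance. The data, prior, and tuning values are invented for the sketch and are not from the paper:

```python
import math
import random

def log_post(theta, data, prior_mu=0.0, prior_sd=10.0, sigma=1.0):
    # Log posterior (up to a constant): normal likelihood with known
    # sigma, plus a weakly informative normal prior on the mean.
    ll = sum(-0.5 * ((x - theta) / sigma) ** 2 for x in data)
    lp = -0.5 * ((theta - prior_mu) / prior_sd) ** 2
    return ll + lp

def metropolis(data, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis: propose a Gaussian perturbation and
    accept it with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    theta = 0.0
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_post(prop, data) - log_post(theta, data):
            theta = prop
        samples.append(theta)
    return samples

data = [2.9, 3.1, 3.0, 2.8, 3.2]
samples = metropolis(data)
# Discard the first 1000 draws as burn-in before summarizing.
post_mean = sum(samples[1000:]) / len(samples[1000:])
```

Tools like WinBUGS automate this simulation for far richer models, including the generalized linear mixed models the paper uses.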
Visualizing categorical data in ViSta
2003
The modules of the statistical package ViSta related to categorical data analysis are presented. These modules are: visualization of frequency data with mosaic and bar plots, correspondence analysis, multiple correspondence analysis, and loglinear analysis. All these methods are implemented in ViSta with a strong emphasis on plots and graphical representations of the data, as well as on interactive use of the system. Together they provide a system that has proven to be easy, useful, and powerful for both novice and experienced users.
Updating input–output matrices: assessing alternatives through simulation
2009
A problem that frequently arises in economics, demography, statistics, transportation planning and stochastic modelling is how to adjust the entries of a matrix to fulfil row and column aggregation constraints. Biproportional methods in general and the so-called RAS algorithm in particular, have been used for decades to find solutions to this type of problem. Although alternatives exist, the RAS algorithm and its extensions are still the most popular. Apart from some interesting empirical and theoretical properties, tradition, simplicity and very low computational costs are among the reasons behind the great success of RAS. Nowadays computer hardware and software have made alternative proce…
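The RAS algorithm itself is short: alternately rescale the rows and columns of the matrix until its margins match the target totals. A sketch follows; the example matrix and targets are invented, and the row and column targets must share the same grand total:

```python
import numpy as np

def ras(A, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Biproportional (RAS) adjustment: alternately scale rows and
    columns of A until its row and column sums match the targets."""
    X = A.astype(float).copy()
    for _ in range(max_iter):
        # Scale each row so its sum equals the row target.
        X = X * (row_targets / X.sum(axis=1))[:, None]
        # Scale each column so its sum equals the column target.
        X = X * (col_targets / X.sum(axis=0))[None, :]
        if np.allclose(X.sum(axis=1), row_targets, atol=tol):
            break
    return X

A = np.array([[10.0, 20.0],
              [30.0, 40.0]])
X = ras(A,
        row_targets=np.array([35.0, 65.0]),
        col_targets=np.array([45.0, 55.0]))
```

The low cost visible here (a handful of vectorized scalings per iteration) is one of the reasons for the method's enduring popularity.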
SeqEditor: an application for primer design and sequence analysis with or without GTF/GFF files
2021
[Motivation]: Sequence analyses aimed at investigating specific features, patterns and functions of protein and DNA/RNA sequences usually require tools based on graphical interfaces whose main characteristics are intuitiveness and interactivity matched to the user's expertise, especially when curation or primer design tasks are required. However, interface-based tools usually pose certain computational limitations when managing large sequences or complex datasets, such as genome and transcriptome assemblies. With these requirements in mind, we have developed SeqEditor, an interactive software tool for nucleotide and protein sequence analysis.
A Knowledge Management and Decision Support Model for Enterprises
2011
We propose a novel knowledge management system (KMS) for enterprises. Our system exploits two different approaches for knowledge representation and reasoning: a document-based approach based on data-driven creation of a semantic space and an ontology-based model. Furthermore, we provide an expert system capable of supporting the enterprise decisional processes and a semantic engine which performs intelligent search on the enterprise knowledge bases. The decision support process exploits the Bayesian networks model to improve business planning process when performed under uncertainty. Copyright © 2011 Patrizia Ribino et al.
Humanities Data in R
2016
Sparse kernel methods for high-dimensional survival data
2008
Sparse kernel methods like support vector machines (SVM) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques however are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model only depends on the covariates through inner products, it can be ‘kernelized’. The kernelized proportional hazards model however yields a solution that is dense, i.e. the solution depends on all observations. One of the key features of an SVM is that it yields a sparse solution, dependin…
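Because the partial likelihood depends on the covariates only through inner products, replacing those inner products with a kernel Gram matrix yields the kernelized model the abstract describes. A minimal sketch of the resulting partial log-likelihood follows; the RBF kernel choice and toy data are illustrative, and ties in event times are ignored:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel over the rows of X.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def cox_partial_loglik(alpha, K, times, events):
    """Partial log-likelihood of a kernelized Cox model with risk
    scores f = K @ alpha; the risk set at time t_i is {j : t_j >= t_i}.
    Censored observations (events == 0) contribute only through the
    risk sets, not as events of their own."""
    f = K @ alpha
    ll = 0.0
    for i in np.where(events == 1)[0]:
        at_risk = times >= times[i]
        ll += f[i] - np.log(np.exp(f[at_risk]).sum())
    return ll

# Toy data (illustrative): 4 subjects, 2 covariates, one censored.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))
times = np.array([5.0, 3.0, 8.0, 1.0])
events = np.array([1, 0, 1, 1])
ll = cox_partial_loglik(np.zeros(4), rbf_kernel(X), times, events)
```

With all coefficients alpha set to zero, each event term reduces to minus the log of its risk-set size, which makes the function easy to sanity-check. The density problem the paper targets is visible here: every risk score depends on the full column of K, i.e. on all observations.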