Search results for "Heteroscedastic"

Showing 10 of 32 documents

Uncertainty and Equifinality in Calibrating Distributed Roughness Coefficients in a Flood Propagation Model with Limited Data

1998

Monte-Carlo simulations of a two-dimensional finite element model of a flood in the southern part of Sicily were used to explore the parameter space of distributed bed-roughness coefficients. For many real-world events specific data are extremely limited, so that there is not only fuzziness in the information available to calibrate the model, but also fuzziness in the degree of acceptability of model predictions based upon the different parameter values, owing to model structural errors. Here the GLUE procedure is used to compare model predictions and observations for a certain event, coupled with both a fuzzy-rule-based calibration and a calibration technique based upon normal and heteroscedast…
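The core GLUE step the abstract describes, scoring Monte-Carlo parameter sets against observations and discarding "non-behavioural" ones, can be sketched as follows. This is a toy illustration under stated assumptions: the efficiency measure and the behavioural threshold are illustrative choices, and the flood model itself is stood in for by precomputed simulation output (one row per sampled parameter set), not the paper's 2-D finite element model.

```python
import numpy as np

def glue_weights(simulations, observed, threshold=0.3):
    """Toy GLUE step: score each Monte-Carlo parameter set with a
    Nash-Sutcliffe-style efficiency, discard 'non-behavioural' sets
    below a (subjective, illustrative) threshold, and normalise the
    survivors into likelihood weights."""
    obs = np.asarray(observed, float)
    sse = np.sum((np.asarray(simulations, float) - obs) ** 2, axis=1)
    eff = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)  # efficiency per parameter set
    behavioural = eff > threshold
    w = np.where(behavioural, eff, 0.0)
    if w.sum() > 0:
        w = w / w.sum()                                # likelihood weights
    return w, behavioural
```

The weights can then be used to form prediction quantiles over the behavioural ensemble, which is where GLUE's uncertainty bounds come from.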

Hydrology; Heteroscedasticity; Computer science; Range (statistics); A priori and a posteriori; Equifinality; Parameter space; GLUE; Algorithm; Fuzzy logic; Water Science and Technology; Event (probability theory)

Testing for Financial Contagion Between Developed and Emerging Markets During the 1997 East Asian Crisis

2003

In this paper we examine whether during the 1997 East Asian crisis there was any contagion from the four largest economies in the region (Thailand, Indonesia, Korea and Malaysia) to a number of developed countries (Japan, UK, Germany and France). Following Forbes and Rigobon (2002), we test for contagion as a significant positive shift in the correlation between asset returns, taking into account heteroscedasticity and endogeneity bias. Furthermore, we improve on earlier empirical studies by carrying out a full sample test of the stability of the system that relies on more plausible (over)identifying restrictions. The estimation results provide some evidence of contagion, in particular from…
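The heteroscedasticity correction the abstract refers to is the Forbes and Rigobon (2002) adjustment: the correlation measured during a crisis is mechanically inflated when the source market's variance rises, so it is scaled back down before testing for a significant shift. A minimal sketch, assuming returns are supplied as plain arrays and that `x` is the source (crisis-origin) market:

```python
import numpy as np

def adjusted_correlation(x_crisis, y_crisis, x_tranquil):
    """Forbes-Rigobon heteroscedasticity-adjusted correlation:
    rho* = rho / sqrt(1 + delta * (1 - rho^2)), where delta is the
    relative increase in the source market's variance during the crisis."""
    rho = np.corrcoef(x_crisis, y_crisis)[0, 1]          # conditional (crisis) correlation
    delta = np.var(x_crisis) / np.var(x_tranquil) - 1.0  # relative variance increase
    return rho / np.sqrt(1.0 + delta * (1.0 - rho ** 2))
```

When crisis-period variance exceeds tranquil-period variance (delta > 0), the adjusted correlation is strictly below the raw one, so evidence of contagion must survive the correction rather than ride on the variance spike.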

Macroeconomics; Estimation; Heteroscedasticity; Empirical research; Financial contagion; Economics; East Asia; Monetary economics; Endogeneity; Emerging markets; Developed country; SSRN Electronic Journal

Signal-to-noise ratio in reproducing kernel Hilbert spaces

2018

This paper introduces the kernel signal-to-noise ratio (kSNR) for different machine learning and signal processing applications. The kSNR seeks to maximize the signal variance while minimizing the estimated noise variance explicitly in a reproducing kernel Hilbert space (RKHS). The kSNR gives rise to considering complex signal-to-noise relations beyond additive noise models, and can be seen as a useful signal-to-noise regularizer for feature extraction and dimensionality reduction. We show that the kSNR generalizes kernel PCA (and other spectral dimensionality reduction methods), least squares SVM, and kernel ridge regression to deal with cases where signal and noise cannot be assumed inde…

Noise model; Engineering and technology; SNR; Environmental sciences; Natural sciences; Kernel principal component analysis; Signal theory (Telecommunications); Signal-to-noise ratio; Artificial Intelligence; Electrical engineering, electronic engineering, information engineering; Heteroscedastic; Earth and related environmental sciences; Mathematics; Noise (signal processing); Dimensionality reduction; Kernel methods; Signal classification; Support vector machine; Kernel method; Kernel (statistics); Functional analysis; Signal Processing; Feature extraction; Artificial intelligence & image processing; Computer Vision and Pattern Recognition; Algorithm; Software; Image processing; Reproducing kernel Hilbert space; Causal inference

Permutation Tests in Linear Regression

2015

Exact permutation tests are available only in rather simple linear models. The problem is that, although standard assumptions allow permuting the errors of the model, we cannot permute them in practice, because they are unobservable. Nevertheless, the residuals of the model can be permuted. A proof is given here which shows that it is possible to approximate the unobservable permutation distribution where the true errors are permuted by permuting the residuals. It is shown that approximation holds asymptotically and almost surely for certain quadratic statistics as well as for statistics which are expressible as the maximum of appropriate linear functions. The result is applied to testing t…
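The practical recipe the abstract justifies, permuting observable residuals in place of the unobservable errors, can be sketched for a simple slope test. This is a minimal numpy illustration, assuming a single regressor and permuting residuals of the intercept-only null model; it is not the paper's proof, only the procedure the proof licenses.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=1999, seed=0):
    """Permutation p-value for H0: slope = 0 in y = a + b*x + e.
    The true errors cannot be permuted (they are unobservable), so the
    residuals of the null model are permuted instead."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    stat = abs(beta[1])                     # observed |slope|
    resid0 = y - y.mean()                   # residuals under the null model
    count = 0
    for _ in range(n_perm):
        y_star = y.mean() + rng.permutation(resid0)
        b_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        count += abs(b_star[1]) >= stat
    return (count + 1) / (n_perm + 1)
```

With a strong linear relationship the p-value is at its minimum of 1/(n_perm+1); under the null it behaves approximately uniformly, which is the asymptotic validity the abstract establishes.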

Polynomial regression; General linear model; Heteroscedasticity; Permutation; Mathematics::Combinatorics; Linear predictor function; Statistics; Linear regression; Linear model; Applied mathematics; Segmented regression; Mathematics

Volatility co-movements: a time scale decomposition analysis

2014

In this paper we are interested in detecting contagion from US to European stock market volatilities in the period immediately after the Lehman Brothers’ collapse. The analysis, based on a factor decomposition of the covariance matrix of implied and realized volatilities, is carried out for different sub-samples (identified as normal and crisis periods) and across different (high) frequency bands. In particular, the analysis is split into two stages. In the first stage, we retrieve the time series of wavelet coefficients for each volatility series for high frequency scales, using the Maximal Overlapping Discrete Wavelet transform, and in the second stage we apply Maximum Likelihood for a factor de…

Settore SECS-P/05 - Econometria; Implied volatility; Realized volatility; Contagion; Heteroscedasticity bias; Wavelets

Conditionally heteroscedastic intensity-dependent marking of log Gaussian Cox processes

2009

Spatial marked point processes are models for systems of points which are randomly distributed in space and provided with measured quantities called marks. This study deals with marking, that is, methods of constructing marked point processes from unmarked ones. The focus is on density-dependent marking, where the local point intensity affects the mark distribution. This study develops new markings for log Gaussian Cox processes. In these markings, both the mean and variance of the mark distribution depend on the local intensity. The mean, variance and mark correlation properties are presented for the new markings, and a Bayesian estimation procedure is suggested for statistical inference. The p…

Statistics and Probability; Bayes estimator; Heteroscedasticity; Gaussian; Variance (accounting); Point process; Statistics; Statistical inference; Point (geometry); Statistics, Probability and Uncertainty; Focus (optics); Mathematics; Statistica Neerlandica

Olley–Pakes productivity decomposition: computation and inference

2016

We show how a moment-based estimation procedure can be used to compute point estimates and standard errors for the two components of the widely used Olley–Pakes decomposition of aggregate (weighted average) productivity. When applied to business-level microdata, the procedure allows for autocovariance- and heteroscedasticity-robust inference and hypothesis testing about, for example, the coevolution of the productivity components in different groups of firms. We provide an application to Finnish firm-level data and find that formal statistical inference casts doubt on the conclusions that one might draw on the basis of a visual inspection of the components of the decomposition.
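The two components the paper builds inference for are given by the Olley–Pakes identity: share-weighted aggregate productivity equals the unweighted mean plus a sample-covariance term between market shares and productivity. A minimal sketch of the point estimates only (the paper's contribution, the moment-based standard errors and robust inference, is not reproduced here):

```python
import numpy as np

def olley_pakes(phi, s):
    """Olley-Pakes decomposition of aggregate productivity:
    sum_i s_i * phi_i = mean(phi) + sum_i (s_i - 1/n) * (phi_i - mean(phi)).
    phi: firm-level productivity; s: market shares summing to 1.
    Returns (unweighted mean, covariance term)."""
    phi, s = np.asarray(phi, float), np.asarray(s, float)
    mean = phi.mean()
    cov = np.sum((s - 1.0 / len(s)) * (phi - mean))
    return mean, cov
```

A positive covariance term says that market shares are systematically concentrated in the more productive firms, which is the allocative-efficiency reading the decomposition is used for.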

Statistics and Probability; Economics and Econometrics; Heteroscedasticity; Productivity (Finnish: tuottavuus); Inference (Finnish: päättely); Frequentist inference; Economics and business; Statistics; Statistical inference; Econometrics; Point estimation; Mathematics; Statistical hypothesis testing; Social sciences; Generalized method of moments; Autocovariance; Weighted average; Fiducial inference; Statistics, Probability and Uncertainty; Social Sciences (miscellaneous); Journal of the Royal Statistical Society Series A: Statistics in Society

Change-points detection for variance piecewise constant models

2011

A new approach based on the fit of a generalized linear regression model is introduced for detecting change-points in the variance of heteroscedastic Gaussian variables with a piecewise constant variance function. This approach overcomes some limitations of both exact and approximate well-known methods that are based on successive applications of a search procedure and tend to overestimate the real number of changes in the variance of the series. The proposed method just requires the computation of a gamma GLM with log link, resulting in a very efficient algorithm even with large sample sizes and many change points to be estimated.
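To make the problem concrete, here is a toy single-change-point detector for the same setting (zero-mean Gaussian series with piecewise constant variance), done by grid-searching the Gaussian log-likelihood. This is a simpler stand-in, not the paper's gamma-GLM-with-log-link method, which handles many change points in one fit.

```python
import numpy as np

def single_variance_changepoint(y, min_seg=5):
    """Grid-search MLE for one change point in the variance of a
    zero-mean Gaussian series. Returns the split index tau such that
    y[:tau] and y[tau:] get separate fitted variances."""
    y = np.asarray(y, float)
    n = len(y)
    best_tau, best_ll = None, -np.inf
    for tau in range(min_seg, n - min_seg + 1):
        v1 = np.mean(y[:tau] ** 2)   # segment variance MLEs (mean assumed 0)
        v2 = np.mean(y[tau:] ** 2)
        # profile Gaussian log-likelihood, constants dropped
        ll = -0.5 * (tau * np.log(v1) + (n - tau) * np.log(v2))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau
```

Applying this search recursively is exactly the "successive application of search" strategy the abstract criticises for overestimating the number of changes, which motivates the single-fit GLM formulation.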

Statistics and Probability; Generalized linear model; Heteroscedasticity; Variance (accounting); Law of total variance; One-way analysis of variance; Modeling and Simulation; Statistics; Piecewise; Change-points, changes in variation, cumulative segmentation; Variance-based sensitivity analysis; Settore SECS-S/01 - Statistica; Mathematics; Variance function

Robust estimation and inference for bivariate line-fitting in allometry.

2011

In allometry, bivariate techniques related to principal component analysis are often used in place of linear regression, and primary interest is in making inferences about the slope. We demonstrate that the current inferential methods are not robust to bivariate contamination, and consider four robust alternatives to the current methods: a novel sandwich estimator approach using robust covariance matrices derived via an influence function approach, Huber's M-estimator, and the fast-and-robust bootstrap. Simulations demonstrate that Huber's M-estimators are highly efficient and robust against bivariate contamination, and when combined with the fast-and-robust bootstrap, we can make accurat…
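For readers unfamiliar with Huber's M-estimator, here is the standard iteratively reweighted least squares (IRLS) version for an ordinary regression slope. Note this is the plain-regression form, not the bivariate line-fitting (errors in both variables) setting the paper studies; it only illustrates how Huber weights tame contamination.

```python
import numpy as np

def huber_slope(x, y, k=1.345, n_iter=50):
    """Huber M-estimator of (intercept, slope) via IRLS.
    Residuals beyond k robust-scale units get downweighted by k/|u|."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # OLS starting values
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD scale estimate
        u = r / np.maximum(s, 1e-12)                      # scaled residuals
        w = np.minimum(1.0, k / np.maximum(np.abs(u), 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta
```

With k = 1.345 the estimator retains about 95% efficiency under clean Gaussian errors while bounding the influence of gross outliers, which is the efficiency/robustness trade-off the simulations in the paper quantify for the bivariate case.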

Statistics and Probability; Heteroscedasticity; Analysis of Variance; Covariance matrix; Robust statistics; Estimator; General Medicine; Bivariate analysis; Covariance; Biostatistics; Statistics::Computation; Efficient estimator; Principal component analysis; Statistics; Econometrics; Statistics::Methodology; Body Size; Statistics, Probability and Uncertainty; Mathematics; Probability; Biometrical Journal. Biometrische Zeitschrift

On the convenience of heteroscedasticity in highly multivariate disease mapping

2019

Highly multivariate disease mapping has recently been proposed as an enhancement of traditional multivariate studies, making it possible to perform the joint analysis of a large number of diseases. This line of research has important potential, since it integrates the information of many diseases into a single model, yielding richer and more accurate risk maps. In this paper we show how some of the proposals already put forward in this area display particular problems when applied to small regions of study. Specifically, the homoscedasticity of these proposals may produce evident misfits and distorted risk maps. In this paper we propose two new models to deal with the variance-adaptiv…

Statistics and Probability; Heteroscedasticity; Multivariate statistics; Computer science; Disease; Joint analysis; Machine learning; Bayesian statistics (Catalan: Estadística bayesiana); Natural sciences; Gaussian Markov random fields; Statistics & probability; Medical and health sciences; Clinical medicine; Homoscedasticity; Mathematics; Multivariate disease mapping; Spatial analysis; Mortality studies; Interpretation (logic); Spatial statistics; Diseases (Catalan: Malalties); Gastroenterology & hepatology; Artificial intelligence; Statistics, Probability and Uncertainty