Search results for "Machine learning"

Showing 10 of 1,464 documents

Local Granger causality

2021

Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. For Gaussian variables it is equivalent to transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes. We exploit this equivalence and calculate exactly the 'local Granger causality', i.e. the profile of the information transfer at each discrete time point in Gaussian processes; in this framework Granger causality is the average of its local version. Our approach offers a robust and computationally fast method to follow the information transfer along the time history of linear stochastic processes, as well as of nonlinear …
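As a hedged illustration of the idea (not the paper's exact estimator), average and pointwise Granger causality for a toy bivariate Gaussian AR(1) pair can be computed with ordinary least squares; the lag order, coefficients, and series length below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate process in which x drives y with lag 1.
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.6 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

target = y[1:]
# Restricted model: y_t regressed on y_{t-1} only.
Xr = np.column_stack([np.ones(n - 1), y[:-1]])
# Full model: y_t regressed on y_{t-1} and x_{t-1}.
Xf = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])
er = target - Xr @ np.linalg.lstsq(Xr, target, rcond=None)[0]
ef = target - Xf @ np.linalg.lstsq(Xf, target, rcond=None)[0]
s2r, s2f = er.var(), ef.var()

# Average Granger causality x -> y: log ratio of residual variances.
gc = np.log(s2r / s2f)

# Local profile: Gaussian log-likelihood ratio at each time point;
# in this toy setup its time average recovers gc / 2.
local = 0.5 * np.log(s2r / s2f) + er**2 / (2 * s2r) - ef**2 / (2 * s2f)
```

The pointwise `local` array is the kind of per-sample information-transfer profile the abstract describes, with the average quantity recovered by summing over time.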

Keywords: Granger causality; transfer entropy; information transfer; information theory; time series; vector autoregression; autoregressive models; stochastic processes; Gaussian processes; discrete and continuous time; applied mathematics; Statistics - Machine Learning (stat.ML); Quantitative Methods (q-bio.QM); Disordered Systems and Neural Networks (cond-mat.dis-nn); Computational Physics (physics.comp-ph)

CLEAR: Covariant LEAst-Square Refitting with Applications to Image Restoration

2017

International audience; In this paper, we propose a new framework to remove parts of the systematic errors affecting popular restoration algorithms, with a special focus on image processing tasks. Generalizing ideas that emerged for $\ell_1$ regularization, we develop an approach that re-fits the results of standard methods towards the input data. Total variation regularizations and non-local means are special cases of interest. We identify important covariant information that should be preserved by the re-fitting method, and emphasize the importance of preserving the Jacobian (w.r.t. the observed signal) of the original estimator. Then, we provide an approach that has a "twicing" flavor a…
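A hedged sketch of the classical "twicing" idea that this refitting framework generalizes: smooth the data, then add back the smoothed residual to cancel systematic bias. The quadratic signal and moving-average smoother below are invented for illustration and are not the paper's estimators:

```python
import numpy as np

n = 200
t = np.arange(n)
y = (t / 10.0) ** 2  # smooth signal that a boxcar smoother biases

def smooth(v, width=9):
    # Moving-average smoother: a simple linear estimator.
    return np.convolve(v, np.ones(width) / width, mode="same")

first_pass = smooth(y)
# Twicing: re-fit the residual towards the data to remove the
# systematic (bias) part of the smoother's error.
refit = first_pass + smooth(y - first_pass)

interior = slice(9, n - 9)  # ignore the smoother's boundary effects
err_first = np.abs(first_pass - y)[interior].max()
err_refit = np.abs(refit - y)[interior].max()
```

Away from the boundaries, the moving average offsets the quadratic by a constant, and the second pass removes that offset almost exactly; CLEAR's contribution, per the abstract, is choosing the re-fit so that covariant (Jacobian) information of the original estimator is preserved.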

Keywords: image restoration; image processing; inverse problems; refitting; debiasing; twicing; boosting; regularization; variational methods; covariant transformation; Jacobian matrix; affine transformation; estimators; mathematical optimization; Computer Vision and Pattern Recognition (cs.CV); Statistics Theory (math.ST); Machine Learning (stat.ML); MSC: 49N45, 65K10, 68U10

Estimating crop primary productivity with Sentinel-2 and Landsat 8 using machine learning methods trained with radiative transfer simulations

2019

Abstract: Satellite remote sensing has been widely used in recent decades for agricultural applications, both for assessing vegetation condition and for subsequent yield prediction. Existing remote sensing-based methods to estimate gross primary productivity (GPP), an important variable indicating crop photosynthetic function and stress, typically rely on empirical or semi-empirical approaches that tend to over-simplify photosynthetic mechanisms. In this work, we take advantage of parallel developments in mechanistic photosynthesis modeling and satellite data availability for advanced monitoring of crop productivity. In particular, we combine process-based modeling with …
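The hybrid strategy can be sketched in miniature: train a statistical model on pairs produced by a forward (radiative-transfer-like) simulator, then invert it on new observations. The one-band toy forward model and polynomial regressor below are invented stand-ins; the paper couples SCOPE simulations with machine learning methods and real Sentinel-2/Landsat 8 reflectances:

```python
import numpy as np

def forward_model(gpp):
    # Toy deterministic "simulator": reflectance decays with GPP.
    return np.exp(-0.1 * gpp)

# 1) Simulate training pairs over a plausible GPP range.
gpp_train = np.linspace(0.0, 20.0, 500)
refl_train = forward_model(gpp_train)

# 2) Learn the inverse mapping reflectance -> GPP (polynomial least
#    squares standing in for the paper's machine learning regressors).
coeffs = np.polyfit(refl_train, gpp_train, deg=5)

# 3) Apply the trained model to new "observed" reflectances.
gpp_true = np.array([3.0, 10.0, 17.0])
gpp_est = np.polyval(coeffs, forward_model(gpp_true))
```

Because the training set is simulated rather than field-measured, the regressor can be retrained cheaply whenever the forward model or sensor configuration changes, which is the appeal of the hybrid approach.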

Keywords: remote sensing; Earth observation; Landsat 8; Sentinel-2 (S2); gross primary productivity (GPP); radiative transfer modeling (RTM); soil-canopy observation of photosynthesis and the energy balance (SCOPE); machine learning (ML); neural networks (NN); empirical modelling; hybrid approach; C3 crops; vegetation; soil science; Computer Vision and Pattern Recognition (cs.CV)
Published in: Remote Sensing of Environment

Effectiveness of Data-Driven Induction of Semantic Spaces and Traditional Classifiers for Sarcasm Detection

2019

Irony and sarcasm are two complex linguistic phenomena that are widely used in everyday language, especially on social media, but they represent two serious issues for automated text understanding. Many labeled corpora have been extracted from several sources to accomplish this task, and it seems that sarcasm is conveyed in different ways in different domains. Nonetheless, very little work has been done to compare different methods across the available corpora. Furthermore, each author usually collects and uses their own datasets to evaluate their own method. In this paper, we show that sarcasm detection can be tackled by applying classical machine learning algorithms to input te…
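A hedged toy sketch of that pipeline: bag-of-words features fed to a classical linear classifier (logistic regression by gradient descent, standing in for the SVM-style classifiers typically compared in this line of work). The six "sarcastic"/"literal" sentences and their labels are invented:

```python
import numpy as np

corpus = [
    ("oh great another monday truly wonderful", 1),
    ("wow what a fantastic traffic jam i love waiting", 1),
    ("sure because meetings always solve everything", 1),
    ("the museum opens at nine on weekdays", 0),
    ("we measured rainfall across three stations", 0),
    ("the train departs from platform two", 0),
]
vocab = sorted({w for text, _ in corpus for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    # Bag-of-words count vector over the training vocabulary.
    vec = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            vec[index[w]] += 1.0
    return vec

X = np.array([featurize(text) for text, _ in corpus])
y = np.array([label for _, label in corpus], dtype=float)

# Logistic regression trained with plain batch gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 1.0 * (X.T @ (p - y)) / len(y)
    b -= 1.0 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
train_acc = (pred == y).mean()
```

Real evaluations would of course use held-out data from the labeled corpora the abstract mentions; this only shows the shape of the classical approach.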

Keywords: sarcasm detection; irony detection; semantic spaces; natural language processing; machine learning; data-driven methods; social media; everyday language; Computation and Language (cs.CL); Machine Learning (cs.LG, stat.ML); artificial intelligence; language and linguistics

Metropolis Sampling

2017

Monte Carlo (MC) sampling methods are widely applied in Bayesian inference, system simulation and optimization problems. Markov chain Monte Carlo (MCMC) algorithms are a well-known class of MC methods which generate a Markov chain with the desired invariant distribution. In this document, we focus on the Metropolis-Hastings (MH) sampler, which can be considered the atom of the MCMC techniques, introducing the basic notions and different properties. We describe in detail all the elements involved in the MH algorithm and the most relevant variants. Several improvements and recent extensions proposed in the literature are also briefly discussed, providing a quick but exhaustive overvie…
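The basic sampler is easy to state in code. A minimal sketch of random-walk Metropolis-Hastings follows, with an illustrative target (a standard normal via its unnormalized log-density) and proposal scale:

```python
import numpy as np

def log_target(x):
    # Unnormalized log-density of N(0, 1); any target works here.
    return -0.5 * x * x

def metropolis_hastings(n_samples, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + scale * rng.normal()  # symmetric random walk
        # Accept with probability min(1, pi(proposal) / pi(x));
        # the proposal density cancels because it is symmetric.
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x  # on rejection the chain repeats the old state
    return samples

samples = metropolis_hastings(20000)
```

Because the acceptance ratio only needs the target up to a constant, the same loop applies to unnormalized Bayesian posteriors; the variants the document surveys mainly differ in how the proposal is constructed.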

Keywords: Metropolis-Hastings sampling; Monte Carlo methods; Statistics - Computation (stat.CO); Statistics - Methodology (stat.ME); Statistics - Machine Learning (stat.ML)
Published in: Wiley StatsRef: Statistics Reference Online

The Dreaming Variational Autoencoder for Reinforcement Learning Environments

2018

Reinforcement learning has shown great potential in generalizing over raw sensory data using only a single neural network for value optimization. There are several challenges in the current state-of-the-art reinforcement learning algorithms that prevent them from converging towards the global optimum. It is likely that the solution to these problems lies in short- and long-term planning, exploration and memory management for reinforcement learning algorithms. Games are often used to benchmark reinforcement learning algorithms as they provide a flexible, reproducible, and easy-to-control environment. Regardless, few games feature a state-space where results in exploration, memory, and plannin…

Keywords: machine learning; deep learning; Artificial Intelligence (cs.AI); Machine Learning (cs.LG)

Randomized Block Frank–Wolfe for Convergent Large-Scale Learning

2017

Owing to their low-complexity iterations, Frank-Wolfe (FW) solvers are well suited for various large-scale learning tasks. When block-separable constraints are present, randomized block FW (RB-FW) has been shown to further reduce complexity by updating only a fraction of coordinate blocks per iteration. To circumvent the limitations of existing methods, the present work develops step sizes for RB-FW that enable a flexible selection of the number of blocks to update per iteration while ensuring convergence and feasibility of the iterates. To this end, convergence rates of RB-FW are established through computational bounds on a primal sub-optimality measure and on the duality gap. The novel b…
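A hedged sketch of the randomized-block idea: minimize a least-squares objective over a product of probability simplices, updating one randomly chosen block per iteration with the Frank-Wolfe linear minimization oracle (a single simplex vertex). The classical 2/(k+2) step size below is a simplification of the step-size rules the paper develops, and the problem data is invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n_blocks, block_dim = 4, 5
dim = n_blocks * block_dim
A = rng.normal(size=(30, dim))
b = rng.normal(size=30)

def objective(x):
    r = A @ x - b
    return 0.5 * r @ r

# Start at the barycenter of each simplex block (feasible).
x = np.full(dim, 1.0 / block_dim)
obj0 = objective(x)
for k in range(400):
    i = rng.integers(n_blocks)             # pick one block at random
    sl = slice(i * block_dim, (i + 1) * block_dim)
    grad_block = A[:, sl].T @ (A @ x - b)  # gradient w.r.t. that block
    # Linear minimization oracle over the simplex: a single vertex,
    # so the update touches only one block and stays feasible.
    s = np.zeros(block_dim)
    s[np.argmin(grad_block)] = 1.0
    gamma = 2.0 / (k + 2.0)
    x[sl] = (1 - gamma) * x[sl] + gamma * s

obj_final = objective(x)
```

Each iterate is a convex combination of simplex points, so feasibility holds by construction, which is the property the paper's step sizes are designed to retain while allowing a flexible number of blocks per iteration.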

Keywords: Frank-Wolfe methods; mathematical optimization; convergence; duality gap; stationary points; support vector machines; large-scale learning; Numerical Analysis (math.NA); Optimization and Control (math.OC); Machine Learning (cs.LG); signal processing
Published in: IEEE Transactions on Signal Processing

Adaptive independent sticky MCMC algorithms

2018

In this work, we introduce a novel class of adaptive Monte Carlo methods, called adaptive independent sticky MCMC algorithms, for efficient sampling from a generic target probability density function (pdf). The new class of algorithms employs adaptive non-parametric proposal densities which become closer and closer to the target as the number of iterations increases. The proposal pdf is built using interpolation procedures based on a set of support points which is constructed iteratively based on previously drawn samples. The algorithm's efficiency is ensured by a test that controls the evolution of the set of support points. This extra stage controls the computational cost and the converge…
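A hedged, simplified sketch of the core ingredient: an independent MH sampler whose proposal is a piecewise-constant approximation of the target built from a set of support points. The sticky algorithms additionally adapt that set as sampling proceeds and test when to stop refining; that update stage is omitted here, and the target and grid are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def target(x):
    # Unnormalized target pdf: N(1, 1) up to a constant.
    return np.exp(-0.5 * (x - 1.0) ** 2)

support = np.linspace(-4.0, 6.0, 41)      # fixed support points
mids = 0.5 * (support[:-1] + support[1:])
widths = np.diff(support)
weights = target(mids) * widths           # approximate mass per interval
weights /= weights.sum()

def proposal_sample():
    i = rng.choice(len(mids), p=weights)
    return rng.uniform(support[i], support[i + 1])

def proposal_pdf(x):
    i = int(np.clip(np.searchsorted(support, x) - 1, 0, len(mids) - 1))
    return weights[i] / widths[i]

x = 0.0
samples = np.empty(5000)
for t in range(samples.size):
    z = proposal_sample()
    # Independent MH ratio: the proposal density does not cancel.
    alpha = (target(z) * proposal_pdf(x)) / (target(x) * proposal_pdf(z))
    if rng.uniform() < alpha:
        x = z
    samples[t] = x
```

Because the proposal already tracks the target closely, acceptance is high and the chain mixes almost like i.i.d. sampling; the adaptive construction in the paper drives the proposal even closer as support points accumulate.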

Keywords: adaptive Markov chain Monte Carlo (MCMC); adaptive rejection Metropolis sampling (ARMS); Bayesian inference; Gibbs sampling; Metropolis-within-Gibbs; hit-and-run algorithm; Monte Carlo methods; computational statistics; statistical signal processing; Gaussian processes; Statistics - Computation (stat.CO); Machine Learning (stat.ML)
Published in: EURASIP Journal on Advances in Signal Processing

Consistent Regression of Biophysical Parameters with Kernel Methods

2020

This paper introduces a novel statistical regression framework that allows the incorporation of consistency constraints. Linear and nonlinear (kernel-based) formulations are introduced, and both admit closed-form analytical solutions. The models exploit all the information from a set of drivers while being maximally independent of a set of auxiliary, protected variables. We successfully illustrate the performance in the estimation of chlorophyll content.
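For reference, the closed-form kernel regression backbone such formulations build on can be sketched as follows. The RBF kernel, toy 1-D data, and regularizer are illustrative, and the paper's consistency constraints (maximal independence from protected variables) are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=40)

def rbf(A, B, gamma=20.0):
    # Gaussian (RBF) kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: closed-form dual coefficients.
lam = 1e-3
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.array([[0.25], [0.75]])
y_pred = rbf(X_test, X) @ alpha
```

The constrained variants in the paper modify this closed-form system rather than the overall structure, which is why they remain analytical.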

Keywords: kernel methods; regression analysis; consistency constraints; nonlinear systems; data modeling; mathematical optimization; Machine Learning (stat.ML, cs.LG); Statistics - Methodology (stat.ME)
Published in: IGARSS 2018 - IEEE International Geoscience and Remote Sensing Symposium

Bayesian Unification of Gradient and Bandit-based Learning for Accelerated Global Optimisation

2017

Bandit-based optimisation has a remarkable advantage over gradient-based approaches due to its global perspective, which eliminates the danger of getting stuck at local optima. However, for continuous optimisation problems or problems with a large number of actions, bandit-based approaches can be hindered by slow learning. Gradient-based approaches, on the other hand, navigate quickly in high-dimensional continuous spaces through local optimisation, following the gradient in fine-grained steps. Yet, apart from being susceptible to local optima, these schemes are less suited for online learning due to their reliance on extensive trial-and-error before the optimum can be identified. In this…
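A hedged sketch of the bandit-side ingredient only: Gaussian-process upper confidence bound (GP-UCB) optimisation on a discretised 1-D domain, which keeps the global perspective the abstract attributes to bandit methods. The paper's scheme additionally unifies this with local gradient steps; the objective, kernel, and exploration constant below are invented:

```python
import numpy as np

def f(x):
    # Unknown objective with its global maximum at x = 0.3.
    return -(x - 0.3) ** 2

grid = np.linspace(0.0, 1.0, 101)

def kernel(a, b, ls=0.1):
    # Squared-exponential kernel with unit prior variance.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

X_obs, y_obs = [0.0, 1.0], [f(0.0), f(1.0)]  # two initial probes
jitter = 1e-6
for _ in range(15):
    Xo, yo = np.array(X_obs), np.array(y_obs)
    K = kernel(Xo, Xo) + jitter * np.eye(len(Xo))
    Ks = kernel(grid, Xo)
    mu = Ks @ np.linalg.solve(K, yo)             # posterior mean
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))
    x_next = grid[int(np.argmax(ucb))]           # optimism under uncertainty
    X_obs.append(float(x_next))
    y_obs.append(f(float(x_next)))

best_x = X_obs[int(np.argmax(y_obs))]
```

Early rounds are dominated by the uncertainty term and explore globally; once the posterior variance drops, the mean term takes over and the probes concentrate near the optimum, which is the bandit behaviour the paper seeks to accelerate with gradient information.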

Keywords: Bayesian probability; Gaussian processes; bandit-based learning; global optimisation; local optima; linear approximation; algorithm design; Artificial Intelligence (cs.AI); Machine Learning (cs.LG)