Search results for "Computer Science - Learning"

Showing 8 of 18 documents

Diffusion map for clustering fMRI spatial maps extracted by Independent Component Analysis

2013

Functional magnetic resonance imaging (fMRI) produces data about activity inside the brain, from which spatial maps can be extracted by independent component analysis (ICA). A dataset contains n spatial maps of p voxels each, and the number of voxels is very high compared to the number of analyzed spatial maps. Clustering of the spatial maps is usually based on correlation matrices. This usually works well, although such a similarity matrix can inherently explain only a certain amount of the total variance contained in the high-dimensional data, where n is relatively small but p is large. For such a high-dimensional space, it is reasonable to perform dimensionality reduction before clustering.…
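As background to the abstract above, here is a minimal diffusion-map sketch, not the paper's pipeline: a correlation-based similarity over n rows of p features is row-normalised into a Markov matrix and embedded via its leading eigenvectors. The toy data and all parameter choices are invented for illustration.

```python
import numpy as np

def diffusion_map(similarity, n_components=2, t=1):
    """Diffusion-map embedding from a symmetric, non-negative similarity matrix."""
    P = similarity / similarity.sum(axis=1, keepdims=True)  # Markov transition matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    # Skip the trivial eigenvector (eigenvalue 1); scale by lambda^t.
    lam = eigvals.real[order][1:n_components + 1]
    psi = eigvecs.real[:, order][:, 1:n_components + 1]
    return psi * lam**t

# Toy data: n=6 "spatial maps" with p=100 "voxels", two correlated groups.
rng = np.random.default_rng(0)
base1, base2 = rng.normal(size=100), rng.normal(size=100)
maps = np.vstack([base1 + 0.3 * rng.normal(size=(3, 100)),
                  base2 + 0.3 * rng.normal(size=(3, 100))])
corr = np.abs(np.corrcoef(maps))       # correlation-based similarity
embedding = diffusion_map(corr, n_components=2)
print(embedding.shape)                  # (6, 2); cluster these low-dim coordinates
```

Clustering (e.g. k-means) would then run on the 2-D embedding instead of the raw p-dimensional maps.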

FOS: Computer and information sciences; Machine Learning (cs.LG); Machine Learning (stat.ML); Statistics - Machine Learning; Computer Science - Learning; Computational Engineering, Finance, and Science (cs.CE); diffusion map; dimensionality reduction; cluster analysis; clustering; spectral clustering; independent component analysis; functional magnetic resonance imaging (fMRI); spatial maps; voxel; correlation; total variation; pattern recognition; artificial intelligence; dynamical systems

Joint Gaussian Processes for Biophysical Parameter Retrieval

2017

Solving inverse problems is central to geosciences and remote sensing. Radiative transfer models (RTMs) represent mathematically the physical laws which govern the phenomena in remote sensing applications (forward models). The numerical inversion of the RTM equations is a challenging and computationally demanding problem, and for this reason the application of nonlinear statistical regression is often preferred. In general, regression models predict the biophysical parameter of interest from the corresponding received radiance. However, this approach does not employ the physical information encoded in the RTMs. An alternative strategy, which attempts to include the physical knowledge, co…
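For context, a minimal sketch of plain Gaussian-process regression with an RBF kernel, the statistical-regression baseline the abstract refers to. This is not the paper's joint-GP formulation (which fuses real and RTM-simulated samples); the data and hyperparameters here are invented.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, sigma_n=0.1):
    """GP posterior mean at X_test, squared-exponential kernel, noisy targets."""
    def rbf(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length**2)
    K = rbf(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    K_s = rbf(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)   # K^{-1} y
    return K_s @ alpha

# Toy "radiance -> parameter" mapping: recover sin(x) from noisy-free samples.
X = np.linspace(0, 2 * np.pi, 20)
y = np.sin(X)
pred = gp_predict(X, y, np.array([np.pi / 2]))
print(pred)  # close to sin(pi/2) = 1
```

A joint GP would additionally stack simulator outputs into the training set with their own noise level; this sketch only shows the standard regression step.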

FOS: Computer and information sciences; Machine Learning (cs.LG); Machine Learning (stat.ML); Statistics - Machine Learning; Computer Science - Learning; hyperparameter; remote sensing application; regression analysis; inverse problem; data modeling; nonparametric regression; radiative transfer; Gaussian process; General Earth and Planetary Sciences; Electrical and Electronic Engineering; IEEE Transactions on Geoscience and Remote Sensing

Randomized Block Frank–Wolfe for Convergent Large-Scale Learning

2017

Owing to their low-complexity iterations, Frank-Wolfe (FW) solvers are well suited for various large-scale learning tasks. When block-separable constraints are present, randomized block FW (RB-FW) has been shown to further reduce complexity by updating only a fraction of coordinate blocks per iteration. To circumvent the limitations of existing methods, the present work develops step sizes for RB-FW that enable a flexible selection of the number of blocks to update per iteration while ensuring convergence and feasibility of the iterates. To this end, convergence rates of RB-FW are established through computational bounds on a primal sub-optimality measure and on the duality gap. The novel b…
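To make the block-update idea concrete, here is a toy randomized block Frank-Wolfe sketch: minimizing a quadratic over block-separable simplex constraints, updating a random subset of blocks per iteration. The classic 2/(k+2) step size is used for illustration; the paper's contribution is precisely a more flexible step-size design, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: min ||x - target||^2 with x split into B blocks,
# each block constrained to the probability simplex (block-separable).
B, d = 4, 5
target = rng.random((B, d))
x = np.full((B, d), 1.0 / d)               # feasible start

def grad(x):
    return 2.0 * (x - target)

for k in range(200):
    gamma = 2.0 / (k + 2)                  # classic FW step size (illustrative)
    blocks = rng.choice(B, size=2, replace=False)   # update 2 of the 4 blocks
    g = grad(x)
    for b in blocks:
        s = np.zeros(d)
        s[np.argmin(g[b])] = 1.0           # LMO over the simplex: a vertex
        x[b] = (1 - gamma) * x[b] + gamma * s
print(np.allclose(x.sum(axis=1), 1.0))     # iterates stay feasible
```

Because each update is a convex combination of feasible points, every iterate remains in the constraint set, one of the properties the abstract says the proposed step sizes must preserve.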

FOS: Computer and information sciences; FOS: Mathematics; Machine Learning (cs.LG); Computer Science - Learning; Computer Science - Numerical Analysis; Numerical Analysis (math.NA); Optimization and Control (math.OC); mathematical optimization; convergence; duality gap; stationary point; iterated function; sequence; fraction (mathematics); measure (mathematics); support vector machine; Signal Processing; Electrical and Electronic Engineering; IEEE Transactions on Signal Processing

Bayesian Unification of Gradient and Bandit-based Learning for Accelerated Global Optimisation

2017

Bandit-based optimisation has a remarkable advantage over gradient-based approaches due to its global perspective, which eliminates the danger of getting stuck at local optima. However, for continuous optimisation problems or problems with a large number of actions, bandit-based approaches can be hindered by slow learning. Gradient-based approaches, on the other hand, navigate quickly in high-dimensional continuous spaces through local optimisation, following the gradient in fine-grained steps. Yet, apart from being susceptible to local optima, these schemes are less suited for online learning due to their reliance on extensive trial-and-error before the optimum can be identified. In this…
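To illustrate the bandit side of the trade-off described above, here is a minimal UCB1 sketch on a discretised action set. It is not the paper's Bayesian unification (no Gaussian process, no gradient component); the arm means and parameters are invented.

```python
import math, random

def ucb1(arm_means, horizon=2000, seed=0):
    """UCB1 on a finite arm set: global exploration, no local optima on the grid."""
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms
    values = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            a = t - 1                          # play each arm once first
        else:
            a = max(range(n_arms), key=lambda i:
                    values[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = rng.gauss(arm_means[a], 0.1)       # noisy reward
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]
    return max(range(n_arms), key=lambda i: counts[i])

# Arms = discretised points of a multi-modal objective; arm 3 has the highest mean.
best_arm = ucb1([0.2, 0.5, 0.3, 0.9, 0.1])
print(best_arm)
```

The slow-learning weakness the abstract mentions shows up when the grid is refined: the number of arms, and hence exploration cost, grows quickly with resolution.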

FOS: Computer and information sciences; Machine Learning (cs.LG); Computer Science - Learning; Computer Science - Artificial Intelligence (cs.AI); mathematical optimization; Bayesian probability; machine learning; local optimum; margin (machine learning); Gaussian process; function (mathematics); algorithm design; linear approximation; artificial intelligence

An LP-based hyperparameter optimization model for language modeling

2018

In order to find hyperparameters for a machine learning model, algorithms such as grid search or random search are used over the space of possible values of the model's hyperparameters. These search algorithms select the solution that minimizes a specific cost function. In language models, perplexity is one of the most popular cost functions. In this study, we propose a fractional nonlinear programming model that finds the optimal perplexity value. The special structure of the model allows us to approximate it by a linear programming model that can be solved using the well-known simplex algorithm. To the best of our knowledge, this is the first attempt to use optimization techniques to find per…
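The random-search baseline mentioned in the abstract can be sketched in a few lines: sample configurations from the hyperparameter space and keep the one with the lowest cost (e.g. validation perplexity). The space and the toy cost function below are hypothetical stand-ins, not from the paper.

```python
import random

def random_search(cost, space, n_trials=100, seed=0):
    """Random search: sample configurations, keep the lowest-cost one."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        c = cost(cfg)
        if c < best_cost:
            best, best_cost = cfg, c
    return best, best_cost

# Hypothetical surrogate for validation perplexity as a function of two knobs.
space = {"lr": [0.001, 0.01, 0.1], "smoothing": [0.1, 0.5, 1.0]}
toy_cost = lambda cfg: (cfg["lr"] - 0.01) ** 2 + (cfg["smoothing"] - 0.5) ** 2
best, best_cost = random_search(toy_cost, space)
print(best, best_cost)  # lowest-cost configuration found
```

The paper's LP approach replaces this black-box sampling with an explicit optimization model of the perplexity objective.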

FOS: Computer and information sciences; FOS: Mathematics; Machine Learning (cs.LG); Machine Learning (stat.ML); Statistics - Machine Learning; Computer Science - Learning; Computation and Language; Optimization and Control (math.OC); mathematical optimization; linear programming; nonlinear programming; simplex algorithm; search algorithm; random search; hyperparameter; hyperparameter optimization; perplexity; language model; Theoretical Computer Science; Hardware and Architecture; Software; Information Systems

The Recycling Gibbs sampler for efficient learning

2018

Monte Carlo methods are essential tools for Bayesian inference. Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in signal processing, machine learning, and statistics, to draw samples from complicated high-dimensional posterior distributions. The key point for the successful application of the Gibbs sampler is the ability to draw samples efficiently from the full-conditional probability density functions. Since this is not possible in the general case, auxiliary samples, whose information is eventually disregarded, must be generated in order to speed up the convergence of the chain. In this work, we show that these auxiliary sample…
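For reference, a textbook Gibbs sampler for a standard bivariate normal with correlation rho, where each full-conditional is a 1-D Gaussian that can be sampled directly. The paper's setting is the opposite case, where conditionals need auxiliary samples; this sketch only shows the basic mechanism.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=5000, seed=0):
    """Gibbs sampling from a standard bivariate normal with correlation rho."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    out = np.empty((n_samples, 2))
    sd = np.sqrt(1 - rho**2)
    for i in range(n_samples):
        x = rng.normal(rho * y, sd)   # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, sd)   # y | x ~ N(rho*x, 1 - rho^2)
        out[i] = x, y
    return out

samples = gibbs_bivariate_normal(0.8)
print(np.corrcoef(samples.T)[0, 1])   # empirical correlation, ≈ 0.8
```

Alternating draws from the two full-conditionals leaves the joint distribution invariant, which is why the empirical correlation matches the target.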

FOS: Computer and information sciences; Machine Learning (cs.LG); Machine Learning (stat.ML); Statistics - Machine Learning; Statistics - Computation (stat.CO); Computer Science - Learning; Monte Carlo method; Markov chain Monte Carlo; Gibbs sampling; slice sampling; Bayesian inference; inference; Gaussian process; chain rule (probability); Applied Mathematics; Computational Theory and Mathematics; Signal Processing; Computer Vision and Pattern Recognition; Statistics, Probability and Uncertainty; Electrical and Electronic Engineering; Digital Signal Processing

Optimization of anemia treatment in hemodialysis patients via reinforcement learning

2013

Objective: Anemia is a frequent comorbidity in hemodialysis patients that can be successfully treated by administering erythropoiesis-stimulating agents (ESAs). ESA dosing is currently based on clinical protocols that often do not account for the high inter- and intra-individual variability in the patient's response. As a result, the hemoglobin level of some patients oscillates around the target range, which is associated with multiple risks and side-effects. This work proposes a methodology based on reinforcement learning (RL) to optimize ESA therapy. Methods: RL is a data-driven approach for solving sequential decision-making problems that are formulated as Markov decision processes (MDP…
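The RL formulation can be illustrated with a deliberately toy Q-learning sketch (hypothetical and NOT clinical): states are coarse hemoglobin bands, actions raise/keep/lower a dose step, and the reward favours staying in the target band. The dynamics, reward, and parameters are all invented; the paper works with real patient data and a richer MDP.

```python
import random

random.seed(0)
STATES, ACTIONS = 3, 3              # low / target / high band; -1 / 0 / +1 dose step
Q = [[0.0] * ACTIONS for _ in range(STATES)]

def step(state, action):
    """Toy dynamics: a higher dose pushes hemoglobin up one band (and vice versa)."""
    drift = action - 1
    nxt = min(STATES - 1, max(0, state + drift))
    return nxt, (1.0 if nxt == 1 else -1.0)   # reward +1 only in the target band

alpha, gamma, eps = 0.1, 0.9, 0.2
state = 0
for _ in range(5000):
    a = random.randrange(ACTIONS) if random.random() < eps else max(
        range(ACTIONS), key=lambda i: Q[state][i])
    nxt, r = step(state, a)
    Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
    state = nxt

greedy_in_target = max(range(ACTIONS), key=lambda i: Q[1][i])
print(greedy_in_target)   # learned policy in the target band: keep the dose
```

The learned greedy action in the target band is "keep", matching the intuition that dose changes should stop once hemoglobin is on target.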

FOS: Computer and information sciences; Machine Learning (cs.LG); Machine Learning (stat.ML); Statistics - Machine Learning; Computer Science - Learning; Computer Science - Artificial Intelligence (cs.AI); mathematical optimization; reinforcement learning; Markov decision process; Markov chains; decision support techniques; anemia; hemodialysis; renal dialysis; chronic kidney failure; chronic disease; hematinics; darbepoetin alfa; hemoglobin A; dosing; protocol (science); patient selection; humans; male; female; aged; middle aged; Medicine (miscellaneous)

Enhancing identification of causal effects by pruning

2018

Causal models communicate our assumptions about causes and effects in real-world phenomena. Often the interest lies in the identification of the effect of an action, which means deriving an expression from the observed probability distribution for the interventional distribution resulting from the action. In many cases an identifiability algorithm may return a complicated expression that contains variables that are in fact unnecessary. In practice this can lead to additional computational burden and increased bias or inefficiency of estimates when dealing with measurement error or missing data. We present graphical criteria to detect variables which are redundant in identifying causal effe…

FOS: Computer and information sciences; Machine Learning (cs.LG); Machine Learning (stat.ML); Statistics - Machine Learning; Computer Science - Learning; causal model; causal inference; causality; identifiability; identification; inference; pruning; algorithms; machine learning