Search results for "Machine Learning"

Showing 10 of 300 documents

The Convolutional Tsetlin Machine

2019

Convolutional neural networks (CNNs) have obtained astounding successes for important pattern recognition tasks, but they suffer from high computational complexity and the lack of interpretability. The recent Tsetlin Machine (TM) attempts to address this lack by using easy-to-interpret conjunctive clauses in propositional logic to solve complex pattern recognition problems. The TM provides competitive accuracy in several benchmarks, while keeping the important property of interpretability. It further facilitates hardware-near implementation since inputs, patterns, and outputs are expressed as bits, while recognition and learning rely on straightforward bit manipulation. In this paper, we ex…
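To illustrate the bit-level clause representation mentioned in the abstract, here is a minimal sketch (an assumed encoding, not the paper's implementation) of evaluating one conjunctive clause over a binarized input:

```python
import numpy as np

# A clause keeps an "include" mask for the original literals and one for their
# negations; it fires only if every included literal is satisfied.
# (Illustrative encoding only; the actual TM learns the masks with Tsetlin Automata.)

def clause_output(x_bits: np.ndarray, include: np.ndarray, include_neg: np.ndarray) -> int:
    """All arguments are 0/1 arrays of equal length."""
    literals_ok = np.all(x_bits[include == 1] == 1)       # included literals must be 1
    negations_ok = np.all(x_bits[include_neg == 1] == 0)  # included negated literals must be 0
    return int(literals_ok and negations_ok)

# Example: the clause "x0 AND NOT x2" fires on the input [1, 0, 0].
x = np.array([1, 0, 0])
print(clause_output(x, include=np.array([1, 0, 0]), include_neg=np.array([0, 0, 1])))  # -> 1
```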

FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; Statistics - Machine Learning; Machine Learning (stat.ML); Machine Learning (cs.LG)
Research product

Increasing the Inference and Learning Speed of Tsetlin Machines with Clause Indexing

2020

The Tsetlin Machine (TM) is a machine learning algorithm founded on the classical Tsetlin Automaton (TA) and game theory. It further leverages frequent pattern mining and resource allocation principles to extract common patterns in the data, rather than relying on minimizing output error, which is prone to overfitting. Unlike the intertwined nature of pattern representation in neural networks, a TM decomposes problems into self-contained patterns, represented as conjunctive clauses. The clause outputs, in turn, are combined into a classification decision through summation and thresholding, akin to a logistic regression function, however, with binary weights and a unit step output function. …
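The summation-and-thresholding step described here can be sketched in a few lines (an illustrative rendering, not the paper's code): positive- and negative-polarity clause votes are summed and passed through a unit step:

```python
import numpy as np

# Clause votes are aggregated like a logistic regression with binary weights,
# but with a unit step instead of a sigmoid. (Illustrative only.)

def tm_classify(positive_clause_outputs: np.ndarray, negative_clause_outputs: np.ndarray) -> int:
    vote_sum = positive_clause_outputs.sum() - negative_clause_outputs.sum()
    return int(vote_sum >= 0)  # unit step on the aggregated clause votes

print(tm_classify(np.array([1, 1, 0]), np.array([1, 0, 0])))  # votes 2 - 1 -> class 1
```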

FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; Statistics - Machine Learning; Machine Learning (stat.ML); Machine Learning (cs.LG)
Research product

A Regression Tsetlin Machine with Integer Weighted Clauses for Compact Pattern Representation

2020

The Regression Tsetlin Machine (RTM) addresses the lack of interpretability impeding state-of-the-art nonlinear regression models. It does this by using conjunctive clauses in propositional logic to capture the underlying non-linear frequent patterns in the data. These, in turn, are combined into a continuous output through summation, akin to a linear regression function, however, with non-linear components and unity weights. Although the RTM has solved non-linear regression problems with competitive accuracy, the resolution of the output is proportional to the number of clauses employed. This means that computation cost increases with resolution. To reduce this problem, we here introduce i…
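The role of the integer clause weights can be seen in a minimal sketch (assumed form, not the paper's implementation): a weighted sum of clause outputs is clipped and rescaled to the target output range, so one weighted clause can replace several identical unweighted ones:

```python
import numpy as np

# Integer-weighted clause summation for regression. T is the vote budget the clipped
# sum is normalized by; y_min and y_max bound the continuous output.
# (Illustrative only; parameter names are assumptions.)

def rtm_predict(clause_outputs: np.ndarray, weights: np.ndarray,
                y_min: float, y_max: float, T: int) -> float:
    vote_sum = int(np.dot(weights, clause_outputs))  # integer-weighted summation
    vote_sum = max(0, min(T, vote_sum))              # clip to the vote budget
    return y_min + (y_max - y_min) * vote_sum / T    # rescale to the output range

print(rtm_predict(np.array([1, 0, 1]), np.array([3, 2, 1]), y_min=0.0, y_max=10.0, T=8))  # -> 5.0
```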

FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; Statistics - Machine Learning; Machine Learning (stat.ML); Machine Learning (cs.LG)
Research product

Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data

2020

Tsetlin Machines (TMs) capture patterns using conjunctive clauses in propositional logic, thus facilitating interpretation. However, recent TM-based approaches mainly rely on inspecting the full range of clauses individually. Such inspection does not necessarily scale to complex prediction problems that require a large number of clauses. In this paper, we propose closed-form expressions for understanding why a TM model makes a specific prediction (local interpretability). Additionally, the expressions capture the most important features of the model overall (global interpretability). We further introduce expressions for measuring the importance of feature value ranges for continuous feature…

FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; Statistics - Machine Learning; Machine Learning (stat.ML); Machine Learning (cs.LG)
Research product

Reinforcement Learning with Intrinsic Affinity for Personalized Prosperity Management

2022

The purpose of applying reinforcement learning (RL) to portfolio management is commonly the maximization of profit. The extrinsic reward function used to learn an optimal strategy typically does not take into account any other preferences or constraints. We have developed a regularization method that ensures that strategies have global intrinsic affinities, i.e., different personalities may have preferences for certain asset classes which may change over time. We capitalize on these intrinsic policy affinities to make our RL model inherently interpretable. We demonstrate how RL agents can be trained to orchestrate such individual policies for particular personality profiles and stil…

FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; VDP::Technology: 500::Information and communication technology: 550; Machine Learning (cs.LG)
Research product

Reinforcement Learning Your Way: Agent Characterization through Policy Regularization

2022

The increased complexity of state-of-the-art reinforcement learning (RL) algorithms has resulted in an opacity that inhibits explainability and understanding. This has led to the development of several post hoc explainability methods that aim to extract information from learned policies, thus aiding explainability. These methods rely on empirical observations of the policy, and thus aim to generalize a characterization of agents’ behaviour. In this study, we have instead developed a method to imbue agents’ policies with a characteristic behaviour through regularization of their objective functions. Our method guides the agents’ behaviour during learning, which results in a…
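The idea of shaping behaviour through a regularized objective can be sketched as follows (a hedged illustration under assumed choices of penalty and scale, not the authors' formulation): a standard policy-gradient loss is augmented with a term that pulls the agent's action distribution toward a preferred "characteristic" profile:

```python
import torch

# Regularized policy-gradient loss: expected-return term plus a penalty that pulls
# the policy toward a preferred action profile. The MSE penalty and the weight lam
# are illustrative assumptions, not the paper's exact formulation.

def regularized_policy_loss(log_probs: torch.Tensor, returns: torch.Tensor,
                            action_probs: torch.Tensor, preferred_profile: torch.Tensor,
                            lam: float = 0.1) -> torch.Tensor:
    pg_loss = -(log_probs * returns).mean()                                   # standard policy-gradient term
    penalty = torch.nn.functional.mse_loss(action_probs, preferred_profile)   # characteristic-behaviour penalty
    return pg_loss + lam * penalty

# Dummy example: two actions, a preferred 70/30 behaviour profile.
loss = regularized_policy_loss(torch.log(torch.tensor([0.5, 0.4])),
                               torch.tensor([1.0, 0.5]),
                               torch.tensor([0.5, 0.5]),
                               torch.tensor([0.7, 0.3]))
print(loss.item())
```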

FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; explainable AI; multi-agent systems; deterministic policy gradients; General Earth and Planetary Sciences; VDP::Technology: 500::Information and communication technology: 550; General Environmental Science; Machine Learning (cs.LG)
Research product

Combining a Context Aware Neural Network with a Denoising Autoencoder for Measuring String Similarities

2018

Measuring similarities between strings is central for many established and fast growing research areas including information retrieval, biology, and natural language processing. The traditional approach for string similarity measurements is to define a metric over a word space that quantifies and sums up the differences between characters in two strings. The state-of-the-art in the area has, surprisingly, not evolved much during the last few decades. The majority of the metrics are based on a simple comparison between character and character distributions without consideration for the context of the words. This paper proposes a string metric that encompasses similarities between strings bas…
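For contrast with the proposed neural approach, the "traditional" character-level metric the abstract refers to can be illustrated with a standard edit distance (this sketch shows the style of baseline metric, not the paper's method):

```python
# Levenshtein distance: sums up character-level insertions, deletions and
# substitutions, with no notion of word context.

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # -> 3
```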

FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); Computer Science - Computation and Language; Computer Science - Artificial Intelligence; Computation and Language (cs.CL); Information Retrieval (cs.IR); Machine Learning (cs.LG); Computer Science - Information Retrieval
Research product

RotNet: Fast and Scalable Estimation of Stellar Rotation Periods Using Convolutional Neural Networks

2020

Magnetic activity in stars manifests as dark spots on their surfaces that modulate the brightness observed by telescopes. These light curves contain important information on stellar rotation. However, the accurate estimation of rotation periods is computationally expensive due to scarce ground truth information, noisy data, and large parameter spaces that lead to degenerate solutions. We harness the power of deep learning and successfully apply Convolutional Neural Networks to regress stellar rotation periods from Kepler light curves. Geometry-preserving time-series to image transformations of the light curves serve as inputs to a ResNet-18 based architecture which is trained through transf…
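A transfer-learning setup of the kind described (ResNet-18 backbone, single regression output) can be sketched as follows; this is an assumed configuration for illustration, not the authors' released model:

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-18 backbone with its classification head replaced by a one-unit regression
# head for the rotation period. For transfer learning, pretrained ImageNet weights
# would be loaded instead of weights=None (kept None here to avoid a download).
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

# Forward pass on a dummy batch of image-transformed light curves (batch of 2).
dummy = torch.randn(2, 3, 224, 224)
periods = backbone(dummy)
print(periods.shape)  # torch.Size([2, 1])
```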

FOS: Computer and information sciences; Computer Science - Machine Learning; Astrophysics - Solar and Stellar Astrophysics; FOS: Physical sciences; Solar and Stellar Astrophysics (astro-ph.SR); Machine Learning (cs.LG)
Research product

A Convolutional Neural Network based Cascade Reconstruction for the IceCube Neutrino Observatory

2021

Continued improvements on existing reconstruction methods are vital to the success of high-energy physics experiments, such as the IceCube Neutrino Observatory. In IceCube, further challenges arise as the detector is situated at the geographic South Pole where computational resources are limited. However, to perform real-time analyses and to issue alerts to telescopes around the world, powerful and fast reconstruction methods are desired. Deep neural networks can be extremely powerful, and their usage is computationally inexpensive once the networks are trained. These characteristics make a deep learning-based approach an excellent candidate for the application in IceCube. A reconstruction …

FOS: Computer and information sciences; Computer Science - Machine Learning; Astrophysics::High Energy Astrophysical Phenomena; Machine Learning (cs.LG); FOS: Physical sciences; High Energy Physics - Experiment (hep-ex); IceCube Neutrino Observatory; Calibration; Cluster finding; Data analysis; Fitting methods; Neutrino detectors; Pattern recognition; Convolutional neural network; Deep learning; Detector; Cascade; Instrumentation; Mathematical Physics; Computer engineering; Artificial intelligence; 01 natural sciences; 0103 physical sciences; 010303 astronomy & astrophysics; 010308 nuclear & particles physics; 13. Climate action
Research product

Emulation as an Accurate Alternative to Interpolation in Sampling Radiative Transfer Codes

2018

Computationally expensive radiative transfer models (RTMs) are widely used to realistically reproduce the light interaction with the Earth's surface and atmosphere. Because these models take a long time to run, the common practice is to first generate a sparse look-up table (LUT) and then use interpolation methods to sample the multidimensional LUT input variable space. However, the question arises whether common interpolation methods perform most accurately. As an alternative to interpolation, this paper proposes to use emulation, i.e., approximating the RTM output by means of statistical learning. Two experiments were conducted to assess the accuracy in delivering spectral outputs…
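The contrast between LUT interpolation and emulation can be made concrete with a toy stand-in for the expensive RTM (this sketch only illustrates the two strategies; it is not the paper's experimental setup):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from sklearn.gaussian_process import GaussianProcessRegressor

def toy_rtm(x):
    # Cheap stand-in for an expensive radiative transfer model such as MODTRAN.
    return np.sin(3 * x) + 0.5 * x

grid = np.linspace(0.0, 2.0, 12)   # sparse look-up table nodes
lut = toy_rtm(grid)

interpolator = RegularGridInterpolator((grid,), lut)           # classical LUT interpolation
emulator = GaussianProcessRegressor().fit(grid[:, None], lut)  # statistical emulator (kriging-style)

x_new = np.array([0.37, 1.23, 1.91])
print("interpolated:", interpolator(x_new[:, None]))
print("emulated:   ", emulator.predict(x_new[:, None]))
print("true:       ", toy_rtm(x_new))
```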

FOS: Computer and information sciences; Computer Science - Machine Learning; Atmospheric Science; 010504 meteorology & atmospheric sciences; Computer science; 0211 other engineering and technologies; FOS: Physical sciences; 02 engineering and technology; Statistics - Applications; 01 natural sciences; Article; Machine Learning (cs.LG); Sampling (signal processing); Kriging; Inverse distance weighting; Applications (stat.AP); Computers in Earth Sciences; 021101 geological & geomatics engineering; 0105 earth and related environmental sciences; Emulation; Artificial neural network; MODTRAN; Computational Physics (physics.comp-ph); Physics - Atmospheric and Oceanic Physics; Atmospheric and Oceanic Physics (physics.ao-ph); Lookup table; Physics - Computational Physics; Algorithm; Interpolation; IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Research product