Search results for "Machine Learning"
Showing 10 of 300 documents
On the impact of forgetting on learning machines
1995
People tend not to have perfect memories when it comes to learning, or to anything else for that matter. Most formal studies of learning, however, assume a perfect memory. Some approaches have restricted the number of items that could be retained. We introduce a complexity theoretic accounting of memory utilization by learning machines. In our new model, memory is measured in bits as a function of the size of the input. There is a hierarchy of learnability based on increasing memory allotment. The lower bound results are proved using an unusual combination of pumping and mutual recursion theorem arguments. For technical reasons, it was necessary to consider two types of memory: long and sh…
Nonlinear Cook distance for Anomalous Change Detection
2020
In this work we propose a method to find anomalous changes in remote sensing images based on the chronochrome approach. A regressor between images is used to discover the most influential points in the observed data. Typically, the pixels with the largest residuals are deemed to be anomalous changes. To find the anomalous pixels, we consider the Cook distance and propose a nonlinear extension of it using random Fourier features as an efficient nonlinear measure of impact. Good empirical performance is shown over different multispectral images, evaluated both visually and quantitatively with ROC curves.
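A minimal illustrative sketch of the influence idea above, assuming a ridge regression on random Fourier features and the classical Cook-distance formula; the function names, the regularizer lam, and the synthetic data are placeholders, not the authors' implementation.

import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    # Approximate an RBF kernel feature map: z(x) = sqrt(2/D) * cos(x W + b).
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def cook_distance(Z, y, lam=1e-3):
    # Cook-style influence of each sample on a ridge regression of y on Z.
    n, p = Z.shape
    H = Z @ np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T)   # hat (influence) matrix
    r = y - H @ y                                             # residuals per pixel
    h = np.diag(H)                                            # leverages
    s2 = (r ** 2).sum() / max(n - p, 1)                       # residual variance estimate
    return (r ** 2 / (p * s2)) * h / (1 - h) ** 2

# x_t: pixels (rows) of the image at time t; y: one band of the image at time t+1.
x_t = np.random.rand(500, 6)
y = np.random.rand(500)
D = cook_distance(random_fourier_features(x_t), y)
anomalous_pixels = np.argsort(D)[-10:]    # most influential pixels flagged as changes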
Focusing Knowledge-based Graph Argument Mining via Topic Modeling
2021
Decision-making usually takes five steps: identifying the problem, collecting data, extracting evidence, identifying pro and con arguments, and making decisions. Focusing on extracting evidence, this paper presents a hybrid model that combines latent Dirichlet allocation and word embeddings to obtain external knowledge from structured and unstructured data. We study the task of sentence-level argument mining, as arguments mostly require some degree of world knowledge to be identified and understood. Given a topic and a sentence, the goal is to classify whether a sentence represents an argument in regard to the topic. We use a topic model to extract topic- and sentence-specific evidence from…
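A hedged sketch of the hybrid idea described above: sentence-level LDA topic features are concatenated with word-embedding features and fed to a simple classifier. The random vectors stand in for pretrained embeddings, and the tiny corpus and labels are invented for illustration; this is not the paper's model.

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

sentences = ["Nuclear power reduces carbon emissions.",
             "The weather was nice today.",
             "Nuclear waste storage remains an unsolved risk.",
             "I had coffee this morning."]
labels = [1, 0, 1, 0]    # 1 = argument with regard to the topic, 0 = not

# Topic-specific evidence: LDA topic proportions per sentence.
counts = CountVectorizer().fit_transform(sentences)
topic_feats = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# Placeholder for pretrained word embeddings averaged per sentence.
rng = np.random.default_rng(0)
embed_feats = rng.normal(size=(len(sentences), 50))

X = np.hstack([topic_feats, embed_feats])   # combined sentence representation
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X[:1]))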
Unlocking the potential of deep learning for marine ecology: overview, applications, and outlook
2022
The deep learning revolution is touching all scientific disciplines and corners of our lives as a means of harnessing the power of big data. Marine ecology is no exception. These new methods provide analysis of data from sensors, cameras, and acoustic recorders, even in real time, in ways that are reproducible and rapid. Off-the-shelf algorithms can find, count, and classify species from digital images or video and detect cryptic patterns in noisy data. Using these opportunities requires collaboration across ecological and data science disciplines, which can be challenging to initiate. To facilitate these collaborations and promote the use of deep learning towards ecosystem-based management…
Supervised Quantum Learning without Measurements
2017
We propose a quantum machine learning algorithm for efficiently solving a class of problems encoded in quantum controlled unitary operations. The central physical mechanism of the protocol is the iteration of a quantum time-delayed equation that introduces feedback in the dynamics and eliminates the necessity of intermediate measurements. The performance of the quantum algorithm is analyzed by comparing the results obtained in numerical simulations with the outcome of classical machine learning methods for the same problem. The use of time-delayed equations enhances the toolbox of the field of quantum machine learning, which may enable unprecedented applications in quantum technologies. The…
The Convolutional Tsetlin Machine
2019
Convolutional neural networks (CNNs) have obtained astounding successes for important pattern recognition tasks, but they suffer from high computational complexity and the lack of interpretability. The recent Tsetlin Machine (TM) attempts to address this lack by using easy-to-interpret conjunctive clauses in propositional logic to solve complex pattern recognition problems. The TM provides competitive accuracy in several benchmarks, while keeping the important property of interpretability. It further facilitates hardware-near implementation since inputs, patterns, and outputs are expressed as bits, while recognition and learning rely on straightforward bit manipulation. In this paper, we ex…
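A small sketch of Tsetlin Machine inference only (learning via Tsetlin automata feedback is omitted): each clause is a conjunction over binarized inputs and their negations, and the class score sums positive-polarity clauses and subtracts negative-polarity ones. The hand-picked clause masks below are illustrative, not learned.

def clause_output(x_bits, include_pos, include_neg):
    # x_bits: 0/1 inputs; include_pos/include_neg: which literals the clause includes.
    for k, xk in enumerate(x_bits):
        if include_pos[k] and xk == 0:   # clause requires x_k
            return 0
        if include_neg[k] and xk == 1:   # clause requires NOT x_k
            return 0
    return 1

def class_score(x_bits, pos_clauses, neg_clauses):
    # Positive clauses vote for the class, negative clauses vote against it.
    return (sum(clause_output(x_bits, ip, ineg) for ip, ineg in pos_clauses)
            - sum(clause_output(x_bits, ip, ineg) for ip, ineg in neg_clauses))

# XOR-style example: "x1 AND NOT x2" and "NOT x1 AND x2" vote for the class,
# "x1 AND x2" and "NOT x1 AND NOT x2" vote against it.
pos = [([1, 0], [0, 1]), ([0, 1], [1, 0])]
neg = [([1, 1], [0, 0]), ([0, 0], [1, 1])]
print(class_score([1, 0], pos, neg))   # positive score -> predict the class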
Problem Transformation Methods with Distance-Based Learning for Multi-Target Regression
2020
Multi-target regression is a special subset of supervised machine learning problems. Problem transformation methods are used in the field to improve the performance of basic methods. The purpose of this article is to test the use of recently popularized distance-based methods, the minimal learning machine (MLM) and the extreme minimal learning machine (EMLM), in problem transformation. The main advantage of the full-data variants of these methods is the absence of any meta-parameters. The experimental results for the MLM and EMLM show promising potential, emphasizing the utility of problem transformation, especially with the EMLM.
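A brief sketch of two common problem transformation schemes for multi-target regression: single-target decomposition and a regressor chain. The MLM/EMLM are not available in scikit-learn, so a k-nearest-neighbour regressor stands in as a generic distance-based base learner, and the synthetic data is purely illustrative.

import numpy as np
from sklearn.multioutput import MultiOutputRegressor, RegressorChain
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = np.column_stack([X[:, 0] + X[:, 1], X[:, 2] * X[:, 3]])   # two targets

base = KNeighborsRegressor(n_neighbors=5)
single_target = MultiOutputRegressor(base).fit(X, Y)   # one independent model per target
chain = RegressorChain(base).fit(X, Y)                 # earlier targets fed to later models

print(single_target.predict(X[:3]))
print(chain.predict(X[:3]))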
Strategies to develop radiomics and machine learning models for lung cancer stage and histology prediction using small data samples
2021
Predictive models based on radiomics and machine learning (ML) need large, annotated datasets for training, which are often difficult to collect. We designed an operative pipeline for model training to exploit data already available to the scientific community. The aim of this work was to explore the capability of radiomic features to predict tumor histology and stage in patients with non-small cell lung cancer (NSCLC). We analyzed the radiotherapy planning thoracic CT scans of a proprietary sample of 47 subjects (L-RT) and integrated this dataset with a publicly available set of 130 patients from the MAASTRO NSCLC collection (Lung1). We implemented intra- and inter-sample cross-valida…
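A minimal sketch of a small-sample radiomics workflow of the kind described: scaling, feature selection, and a regularized classifier are wrapped in a pipeline and evaluated with stratified cross-validation so that selection stays inside each fold. The synthetic features and labels are placeholders, not the study's data.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(47, 100))   # 47 patients x 100 extracted radiomic features
y = np.arange(47) % 2            # placeholder binary labels (e.g. histology class)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),                       # selection inside each CV fold
    ("clf", LogisticRegression(penalty="l2", C=1.0, max_iter=1000)),
])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print(scores.mean())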
Organized Learning Models (Pursuer Control Optimisation)
1982
The concept of Organized Learning is defined, and some random models are presented. For Not-Transferable Learning, it is necessary to start from an instantaneous learning: in a discrete formulation, we form a stochastic model that considers the probability of each path; with a continuous approximation, we can study the evolution of the internal state by considering the relative and absolute probabilities, by means of systems of differential equations. For Transferable Learning, the instantaneous learning directly gives the evolution of the system. Finally, the algorithms for the different models are compared.
Neural Networks, Inside Out: Solving for Inputs Given Parameters (A Preliminary Investigation)
2021
An artificial neural network (ANN) is a supervised learning model whose parameters are learned through many back-and-forth iterations of passing the inputs through the network, comparing the output with the expected labels, and correcting the parameters. Inspired by a recent work of Boer and Kramer (2020), we investigate a different problem: suppose an observer can view how the ANN parameters evolve over many iterations, but the dataset itself is hidden from him. For instance, this could be an adversary eavesdropping on a multi-party computation of ANN parameters (where intermediate parameters are leaked). Can he form a system of equations, and solve it to recover the dataset?
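A toy sketch of why observed parameter updates can leak the data: for a single linear neuron trained with squared loss, one SGD step on one sample gives delta_w = -2*lr*(w.x - y)*x and delta_b = -2*lr*(w.x - y), so x = delta_w / delta_b. This simplified single-neuron setting is an assumption for illustration; the paper studies general ANNs observed over many iterations.

import numpy as np

rng = np.random.default_rng(0)
x_true, y_true = rng.normal(size=3), 1.7   # hidden training sample

w, b, lr = rng.normal(size=3), 0.0, 0.1    # parameters visible to the observer
err = w @ x_true + b - y_true              # forward-pass error (assumed nonzero)
delta_w = -lr * 2 * err * x_true           # observed weight update
delta_b = -lr * 2 * err                    # observed bias update

x_recovered = delta_w / delta_b            # solve the update equations for the input
print(np.allclose(x_recovered, x_true))    # True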