Search results for "Intelligence"
Showing 10 of 6959 documents
Organized Learning Models (Pursuer Control Optimisation)
1982
Abstract The concept of Organized Learning is defined, and some random models are presented. For Non-Transferable Learning, it is necessary to start from instantaneous learning: in a discrete setting, we form a stochastic model that considers the probability of each path; with a continuous approximation, we can study the evolution of the internal state by considering the relative and absolute probabilities, by means of systems of differential equations. For Transferable Learning, instantaneous learning gives us the System evolution directly. Finally, the algorithms for the different models are compared.
SVM approximation for real-time image segmentation by using an improved hyperrectangles-based method
2003
A real-time implementation of an approximation of the support vector machine (SVM) decision rule is proposed. This method is based on an improvement of a supervised classification method using hyperrectangles, which is useful for real-time image segmentation. The final decision combines the accuracy of the SVM learning algorithm and the speed of a hyperrectangles-based method. We review the principles of the classification methods and we evaluate the hardware implementation cost of each method. We present the combination algorithm, which consists of rejecting ambiguities in the learning set using SVM decision, before using the learning step of the hyperrectangles-based method. We present re…
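The hyperrectangles-based classifier this abstract combines with an SVM can be sketched minimally: one axis-aligned bounding box per class, with points falling inside zero or several boxes rejected as ambiguous (the role the SVM decision plays in the paper's combination). The function names and the rejection convention below are hypothetical, for illustration only, not the paper's implementation.

```python
import numpy as np

def fit_hyperrectangles(X, y):
    """Fit one axis-aligned bounding box (per-feature min/max) per class."""
    boxes = {}
    for label in np.unique(y):
        pts = X[y == label]
        boxes[label] = (pts.min(axis=0), pts.max(axis=0))
    return boxes

def classify(x, boxes):
    """Return the class whose box contains x; -1 if ambiguous or uncovered."""
    hits = [label for label, (lo, hi) in boxes.items()
            if np.all(x >= lo) and np.all(x <= hi)]
    return hits[0] if len(hits) == 1 else -1

X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [6.0, 6.0]])
y = np.array([0, 0, 1, 1])
boxes = fit_hyperrectangles(X, y)
print(classify(np.array([0.5, 0.5]), boxes))  # → 0, inside the class-0 box only
```

Box membership is just a few comparisons per feature, which is why such methods suit real-time hardware; the ambiguous (-1) cases are the ones a slower, more accurate decision rule would handle.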
Thompson Sampling for Dynamic Multi-armed Bandits
2011
The importance of multi-armed bandit (MAB) problems is on the rise due to their recent application in a large variety of areas such as online advertising, news article selection, wireless networks, and medicinal trials, to name a few. The most common assumption made when solving such MAB problems is that the unknown reward probability theta k of each bandit arm k is fixed. However, this assumption rarely holds in practice simply because real-life problems often involve underlying processes that are dynamically evolving. In this paper, we model problems where reward probabilities theta k are drifting, and introduce a new method called Dynamic Thompson Sampling (DTS) that facilitates Order St…
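As a rough illustration of the idea behind tracking drifting reward probabilities, the sketch below caps each Bernoulli arm's Beta posterior at a pseudo-count C, so older observations are discounted and the estimate can follow a changing theta. The class, the cap parameter, and the discounting rule are a simplified assumption for illustration, not necessarily the paper's exact DTS algorithm.

```python
import random

class DynamicThompsonArm:
    """Bernoulli arm whose Beta posterior is capped at C observations,
    so old evidence decays and the estimate can track a drifting reward."""
    def __init__(self, C=100.0):
        self.alpha, self.beta, self.C = 1.0, 1.0, C

    def sample(self):
        # Draw a plausible reward probability from the current posterior.
        return random.betavariate(self.alpha, self.beta)

    def update(self, reward):
        if self.alpha + self.beta < self.C:
            self.alpha += reward
            self.beta += 1.0 - reward
        else:  # rescale so alpha + beta stays at C: old data is discounted
            k = self.C / (self.C + 1.0)
            self.alpha = (self.alpha + reward) * k
            self.beta = (self.beta + 1.0 - reward) * k

def choose(arms):
    """Thompson sampling: play the arm with the largest posterior draw."""
    return max(range(len(arms)), key=lambda i: arms[i].sample())
```

With a fixed cap, the posterior mean behaves like an exponentially weighted average of recent rewards, which is what lets the sampler re-explore an arm whose theta has drifted upward.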
Learning spatial filters for multispectral image segmentation.
2010
We present a novel filtering method for multispectral satellite image classification. The proposed method learns a set of spatial filters that maximize the class separability of a binary support vector machine (SVM) through a gradient descent approach. Regularization issues are discussed in detail and a Frobenius-norm regularization is proposed to efficiently exclude uninformative filter coefficients. Experiments carried out on multiclass one-against-all classification and target detection show the capabilities of the learned spatial filters.
Support Vector Machine and Kernel Classification Algorithms
2018
This chapter introduces the basics of the support vector machine (SVM) and other kernel classifiers for pattern recognition and detection. It starts by introducing the main elements and concepts underlying the successful binary SVM. Next, it introduces more advanced topics in SVM for classification, including large margin filtering (LMF), semi-supervised learning (SSL), active learning, and large-scale classification using SVMs. The LMF method performs both signal filtering and classification simultaneously by learning the most appropriate filters. SSL with SVMs exploits the information contained in both labeled and unlabeled e…
Learning by the Process of Elimination
2002
Abstract Elimination of potential hypotheses is a fundamental component of many learning processes. In order to understand the nature of elimination, herein we study the following model of learning recursive functions from examples. On any target function, the learning machine has to eliminate all but one of the possible hypotheses, such that the remaining one correctly describes the target function. It turns out that this type of learning by the process of elimination (elm-learning, for short) can be stronger, weaker, or of the same power as usual Gold-style learning. While for usual learning any r.e. class of recursive functions can be learned in all of its numberings, this is no longer true for el…
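The elimination model described in this abstract can be illustrated with a toy hypothesis space: each observed (input, output) example of the target function eliminates the inconsistent hypotheses, and learning succeeds when exactly one survives. The hypothesis space and examples below are hypothetical, chosen only to show the mechanism.

```python
def eliminate(hypotheses, examples):
    """Keep only the hypotheses consistent with every (x, f(x)) example."""
    return [h for h in hypotheses
            if all(h(x) == y for x, y in examples)]

# Hypothetical hypothesis space: a few simple candidate functions.
hyps = [lambda n: n, lambda n: n * n, lambda n: 2 * n, lambda n: n + 1]
examples = [(1, 1), (2, 4), (3, 9)]

survivors = eliminate(hyps, examples)
print(len(survivors))  # → 1; only the squaring function is consistent
```

The paper's setting is far more general (r.e. numberings of recursive functions, where the surviving index may never be explicitly output), but the core move is the same: identify the target by eliminating everything else.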
Support vector machines for nonlinear kernel ARMA system identification.
2006
Nonlinear system identification based on support vector machines (SVM) has been usually addressed by means of the standard SVM regression (SVR), which can be seen as an implicit nonlinear autoregressive and moving average (ARMA) model in some reproducing kernel Hilbert space (RKHS). The proposal of this letter is twofold. First, the explicit consideration of an ARMA model in an RKHS (SVM-ARMA 2k) is proposed. We show that stating the ARMA equations in an RKHS leads to solving the regularized normal equations in that RKHS, in terms of the autocorrelation and cross correlation of the (nonlinearly) transformed input and output discrete time processes. Second, a general class of SVM-based syste…
ORGANIZED LEARNING MODELS (PURSUER CONTROL OPTIMISATION)
1983
Abstract The concept of Organized Learning is defined, and some random models are presented. For Non-Transferable Learning, it is necessary to start from instantaneous learning: in a discrete setting, we form a stochastic model that considers the probability of each path; with a continuous approximation, we can study the evolution of the internal state by considering the relative and absolute probabilities, by means of systems of differential equations. For Transferable Learning, instantaneous learning gives us the System evolution directly. Finally, the algorithms for the different models are compared.
On the duality between mechanistic learners and what it is they learn
1993
All previous work in inductive inference and theoretical machine learning has taken the perspective of looking for a learning algorithm that successfully learns a collection of functions. In this work, we consider the perspective of starting with a set of functions, and considering the collection of learning algorithms that are successful at learning the given functions. Some strong dualities are revealed.
A Similarity Evaluation Technique for Cooperative Problem Solving with a Group of Agents
1999
Evaluation of distance or similarity is very important in cooperative problem solving with a group of agents. Distance between problems is used by agents to recognize the nearest solved problems for a new problem; distance between solutions is necessary to compare and evaluate the solutions made by different agents; and distance between agents is useful to evaluate the weights of the agents so that they can be integrated by weighted voting. The goal of this paper is to develop a similarity evaluation technique to be used for cooperative problem solving with a group of agents. The virtual training environment used for this goal is represented by predicates that define relationships within three sets: prob…
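The weighted-voting integration mentioned in this abstract can be sketched as follows: each agent's weight shrinks with its distance, and candidate solutions are tallied by total weight. The inverse-distance weighting scheme below is an assumption for illustration, not the paper's formula.

```python
def weighted_vote(solutions, agent_distances):
    """Integrate agents' solutions by weighted voting: agents at a smaller
    distance (i.e. judged more relevant) receive larger weights."""
    # Hypothetical weighting: inverse of (1 + distance), so weight <= 1.
    weights = {a: 1.0 / (1.0 + d) for a, d in agent_distances.items()}
    tally = {}
    for agent, solution in solutions.items():
        tally[solution] = tally.get(solution, 0.0) + weights[agent]
    return max(tally, key=tally.get)

solutions = {"a1": "S1", "a2": "S2", "a3": "S1"}
distances = {"a1": 0.2, "a2": 1.0, "a3": 4.0}
print(weighted_vote(solutions, distances))  # → "S1"
```

Note that a single very close agent can outvote several distant ones, which is exactly the behaviour distance-based weighting is meant to provide.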