Search results for "recursive"
Showing 10 of 64 documents
Ridge-line optimal detector
2000
Image processing techniques have seen many developments in recent years. Starting from the pioneering work of Canny, Deriche developed a second-order recursive filter capable of detecting step contours. However, other contour shapes remain difficult for those filters to detect. We describe a new optimal filter, in the sense of Canny, for detecting ridge-line contours. It is a third-order, even, recursive filter that depends on three parameters through which detection accuracy is adjusted. The results obtained by applying this filter to (possibly noise-affected) images are compared with those in the work by Ziou. © 2000 Society of Photo-Optical Instrumentation Engineers. (S0091-3286(00)00706-6)
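As a rough illustration of what "recursive" means for such filters, the sketch below applies a first-order recursive (IIR) smoother causally and anti-causally before differentiating to locate a step edge. This is a toy stand-in, not the third-order ridge-line filter of the paper; the function name, the `alpha` parameter, and the synthetic noisy step are all illustrative assumptions.

```python
import numpy as np

def recursive_smooth(signal, alpha=0.3):
    """First-order recursive (IIR) smoothing, run causally and then
    anti-causally for a symmetric response -- a toy analogue of the
    higher-order Deriche-style recursive filters discussed above."""
    out = np.empty_like(signal, dtype=float)
    # causal pass
    acc = signal[0]
    for i, x in enumerate(signal):
        acc = alpha * x + (1 - alpha) * acc
        out[i] = acc
    # anti-causal pass
    acc = out[-1]
    for i in range(len(signal) - 1, -1, -1):
        acc = alpha * out[i] + (1 - alpha) * acc
        out[i] = acc
    return out

# A noisy step edge: the gradient of the smoothed signal peaks at the edge.
rng = np.random.default_rng(0)
step = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
smoothed = recursive_smooth(step)
edge_location = int(np.argmax(np.abs(np.diff(smoothed))))
```

A recursive implementation keeps the per-pixel cost constant regardless of the effective filter width, which is what makes these filters attractive compared with direct convolution by a wide kernel.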
Enumerable classes of total recursive functions: Complexity of inductive inference
1994
This paper presents results on the complexity of inductive inference for enumerable classes of total recursive functions, where enumeration is taken in a more general sense than the usual recursive enumeration. Complexity is measured as the worst-case mindchange (error) number for the first n functions of the given class. Three generalizations are considered.
Graph recursive least squares filter for topology inference in causal data processes
2017
In this paper, we introduce the concept of recursive least squares graph filters for online topology inference in data networks modelled as Causal Graph Processes (CGP). A CGP is an autoregressive process over the time series associated with different variables, whose coefficients are the so-called graph filters: matrix polynomials in powers of the graph adjacency matrix. Given the time series of data at the different variables, the goal is to estimate these graph filters and hence the underlying adjacency matrix. Previously proposed algorithms have taken a batch approach, implicitly assuming stationarity of the CGP. We propose…
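The online estimation underlying such methods rests on the standard recursive least squares recursion, which can be sketched generically as below. This is plain RLS for a linear model, not the paper's graph-filter algorithm; the function name, the forgetting factor `lam`, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def rls_update(w, P, x, y, lam=0.99):
    """One step of the standard RLS recursion.
    w   -- current coefficient estimate
    P   -- current inverse-correlation matrix
    x   -- new regressor vector, y -- new observation
    lam -- forgetting factor (lam = 1 recovers ordinary least squares)."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = y - w @ x                    # a-priori prediction error
    w = w + k * e                    # coefficient update
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation update
    return w, P

# Recover a fixed coefficient vector from streaming data.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])
w, P = np.zeros(3), 1e3 * np.eye(3)
for _ in range(500):
    x = rng.standard_normal(3)
    y = w_true @ x + 0.01 * rng.standard_normal()
    w, P = rls_update(w, P, x, y)
```

Because each step updates the running inverse correlation rather than refitting from scratch, the estimate tracks slow changes in the process, which is what makes an RLS-style recursion natural for non-stationary CGPs.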
Updating strategies for distance based classification model with recursive least squares
2022
Abstract. The idea is to create a self-learning Minimal Learning Machine (MLM) model that is computationally efficient, easy to implement and highly accurate. The study has two hypotheses. Experiment A examines the possibility of introducing new classes through Recursive Least Squares (RLS) updates to the pre-trained self-learning MLM model. Experiment B simulates the working principles of push-broom spectral imagers, updating and testing the model on a stream of pixel spectrum lines from a continuous scanning process. Experiment B aims to train the model with a small number of labelled reference points and update it continuously with RLS to reach ma…
Algorithms for rational discrete least squares approximation
1975
In this paper an algorithm is presented for computing a locally optimal pole-free solution to the discrete rational least squares problem under a mild regularity condition. It is based on an adaptation of projection methods [8], [12], [13], [14], [18], [19] to the modified Gauss-Newton method [4], [10]. A special device makes it possible to handle directly the infinitely many linear constraints present in this problem.
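The core Gauss-Newton iteration for a rational least squares fit can be illustrated on a one-pole model. The sketch below fits r(x; a, b) = a / (1 + b·x) to data; the model, starting point, and data are illustrative assumptions, and it omits the paper's projection-method machinery for the pole-freeness constraints.

```python
import numpy as np

def gauss_newton_rational(x, y, a, b, iters=25):
    """Gauss-Newton iteration for the toy rational model
    r(x; a, b) = a / (1 + b*x), fitted in the least-squares sense."""
    for _ in range(iters):
        denom = 1.0 + b * x
        r = y - a / denom                       # residual vector
        J = np.column_stack([-1.0 / denom,      # d r / d a
                             a * x / denom**2]) # d r / d b
        # Gauss-Newton step: minimize || J s - r ||, then theta <- theta - s
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        a -= step[0]
        b -= step[1]
    return a, b

# Exact data from a = 3, b = 0.5; start from a nearby initial guess.
x = np.linspace(0.0, 4.0, 40)
y = 3.0 / (1.0 + 0.5 * x)
a, b = gauss_newton_rational(x, y, a=2.0, b=0.3)
```

For this zero-residual problem the iteration converges rapidly from a nearby start; the pole constraint (denominator bounded away from zero on the data interval) is exactly the kind of condition the paper's infinitely many linear constraints enforce.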
Measure, category and learning theory
1995
Measure and category (or rather, their recursion theoretical counterparts) have been used in Theoretical Computer Science to make precise the intuitive notion “for most of the recursive sets.” We use the notions of effective measure and category to discuss the relative sizes of inferrible sets, and their complements. We find that inferrible sets become large rather quickly in the standard hierarchies of learnability. On the other hand, the complements of the learnable sets are all large.
Stochastic algorithms for robust statistics in high dimension
2016
This thesis focuses on stochastic algorithms in high dimension and their application in robust statistics. In what follows, the expression "high dimension" is used when the size of the studied sample is large or when the variables under consideration take values in high-dimensional (not necessarily finite-dimensional) spaces. To analyze such data, it is of interest to consider algorithms that are fast, do not require storing all the data, and allow the estimates to be updated easily. In large samples of high-dimensional data, outlier detection is often complicated. Nevertheless, these outliers, even if they are few, can strongly disturb simple indicators like the me…
On the Intrinsic Complexity of Learning
1995
Abstract. A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult-to-learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks.
Non-linear RLS-based algorithm for pattern classification
2006
A new non-linear recursive least squares (RLS) algorithm is presented in the context of pattern classification problems. The algorithm incorporates the non-linearity of the filter's output in the updating rules of the classical RLS algorithm. The proposed method yields lower stationary error levels when compared to the standard LMS and RLS algorithms in a classical application of pattern classification, such as the channel equalization problem.
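One common way to fold an output nonlinearity into the RLS recursion is to linearize it, using the derivative of the nonlinearity to rescale the regressor while keeping the error in terms of the nonlinear output. The sketch below does this with a tanh output on a toy binary classification task; it is an illustration of the general idea under these assumptions, not the paper's exact updating rules.

```python
import numpy as np

def nl_rls_step(w, P, x, d, lam=0.98):
    """RLS step with a tanh output nonlinearity, linearized about the
    current output (a sketch, not the paper's exact update rules)."""
    y = np.tanh(w @ x)          # nonlinear filter output
    g = 1.0 - y ** 2            # tanh derivative at the output
    u = g * x                   # linearized regressor
    Pu = P @ u
    k = Pu / (lam + u @ Pu)     # gain vector
    w = w + k * (d - y)         # error uses the nonlinear output
    P = (P - np.outer(k, Pu)) / lam
    return w, P

# Train on a linearly separable stream with labels in {-1, +1}.
rng = np.random.default_rng(2)
w, P = np.zeros(2), 100.0 * np.eye(2)
for _ in range(400):
    x = rng.standard_normal(2)
    d = np.sign(x[0] + 2.0 * x[1])
    w, P = nl_rls_step(w, P, x, d)

# Evaluate on fresh samples from the same distribution.
test_rng = np.random.default_rng(3)
X = test_rng.standard_normal((200, 2))
labels = np.sign(X[:, 0] + 2.0 * X[:, 1])
acc = float(np.mean(np.sign(X @ w) == labels))
```

Because the error term is computed on the saturating output rather than on the raw linear combination, large already-correct responses produce small updates, which is what lowers the stationary error level relative to a purely linear RLS in classification settings.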
Co-learnability and FIN-identifiability of enumerable classes of total recursive functions
1994
Co-learnability is an inference process in which, instead of producing the final result, the strategy produces all the natural numbers but one, and the omitted number is an encoding of the correct result. It has been proved in [1] that co-learnability of Gödel numbers is equivalent to EX-identifiability. We consider co-learnability of indices in recursively enumerable (r.e.) numberings. The power of co-learnability depends on the numberings used. Every r.e. class of total recursive functions is co-learnable in some r.e. numbering. FIN-identifiable classes are co-learnable in all r.e. numberings, and classes containing a function that is an accumulation point are not co-learnable in some r.e. number…