Search results for "recursive"
Showing 10 of 64 documents
Cut-off method for endogeny of recursive tree processes
2016
Given a solution to a recursive distributional equation, a natural (and non-trivial) question is whether the corresponding recursive tree process is endogenous, that is, whether the random environment almost surely determines the tree process. We propose a new method of proving endogeny, which applies to various processes. As explicit examples, we establish endogeny for the random metrics on non-pivotal hierarchical graphs defined by multiplicative cascades and for mean-field optimization problems such as the mean-field matching and travelling salesman problems in pseudo-dimension q>1.
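A common way to explore fixed points of recursive distributional equations numerically is the population-dynamics method: represent the unknown law by a pool of samples and repeatedly resample the pool through the recursion. The sketch below applies this to a schematic multiplicative-cascade recursion for a crossing distance on a diamond-type hierarchical graph; the weight distribution, branching structure, and mean-1 renormalization are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def diamond_cascade_fixed_point(pool_size=200_000, iters=40, seed=0):
    """Population-dynamics sketch for a multiplicative-cascade random
    metric: the crossing distance D of a diamond-type hierarchical
    graph satisfies, schematically,
        D =_d min( W1*D1 + W2*D2 , W3*D3 + W4*D4 ),
    with i.i.d. positive weights W. The pool approximates the law of D;
    renormalizing to mean 1 each round tracks the fixed point up to an
    overall scale (a standard trick in population dynamics)."""
    rng = np.random.default_rng(seed)
    pool = np.ones(pool_size)
    for _ in range(iters):
        d = pool[rng.integers(pool_size, size=(4, pool_size))]  # resample children
        w = rng.exponential(size=(4, pool_size))                # mean-1 weights (illustrative)
        branch1 = w[0] * d[0] + w[1] * d[1]
        branch2 = w[2] * d[2] + w[3] * d[3]
        pool = np.minimum(branch1, branch2)                     # apply the recursion
        pool /= pool.mean()                                     # rescale to mean 1
    return pool
```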
Learning by the Process of Elimination
2002
Elimination of potential hypotheses is a fundamental component of many learning processes. In order to understand the nature of elimination, herein we study the following model of learning recursive functions from examples. On any target function, the learning machine has to eliminate all possible hypotheses save one, such that the remaining one correctly describes the target function. It turns out that this type of learning by the process of elimination (elm-learning, for short) can be stronger, weaker or of the same power as usual Gold-style learning. While for usual learning any r.e. class of recursive functions can be learned in all of its numberings, this is no longer true for el…
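As a toy illustration of the elm-learning idea, the sketch below deletes, from a finite class of candidate functions, every hypothesis contradicted by an example, succeeding when exactly one survives. The finite class of Python lambdas is an illustrative simplification; the paper works with numberings of recursive functions.

```python
def elimination_learner(hypotheses, examples):
    """Eliminate every hypothesis that contradicts an observed
    input/output example; succeed when exactly one survives."""
    alive = list(hypotheses)
    for x, y in examples:
        alive = [h for h in alive if h(x) == y]
        if len(alive) == 1:
            return alive[0]
    return None  # more than one hypothesis still consistent

# Usage: the target (squaring) hides among a few candidates.
candidates = [lambda n: n + n, lambda n: n * n, lambda n: n ** 3]
target = lambda n: n * n
examples = ((x, target(x)) for x in range(10))
learned = elimination_learner(candidates, examples)
print(learned(3))  # 9: only the squaring hypothesis survives
```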
Online topology estimation for vector autoregressive processes in data networks
2017
An important problem in data sciences pertains to inferring causal interactions among a collection of time series. Upon modeling these as a vector autoregressive (VAR) process, this paper deals with estimating the model parameters to identify the underlying causality graph. To exploit the sparse connectivity of causality graphs, the proposed estimators minimize a group-Lasso regularized functional. To cope with real-time applications, big data setups, and possibly time-varying topologies, two online algorithms are presented to recover the sparse coefficients when observations are received sequentially. The proposed algorithms are inspired by the classic recursive least squares (RLS) algorit…
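The abstract's algorithms are RLS-inspired; a generic way to combine the same ingredients (sequential updates plus group-sparse shrinkage) is an online proximal-gradient step with group soft-thresholding. A minimal sketch under that substitution, not the paper's specific algorithms, with all step sizes, shapes, and names illustrative:

```python
import numpy as np

def group_soft_threshold(g, tau):
    # Group-Lasso proximal step: shrink the group toward zero and
    # zero it out entirely if its norm falls below tau.
    norm = np.linalg.norm(g)
    return np.zeros_like(g) if norm <= tau else (1.0 - tau / norm) * g

def online_var_update(A, y_hist, y_new, mu=0.01, lam=0.1):
    """One online proximal-gradient step for a group-Lasso VAR(P) fit.
    A      : (N, N, P), A[i, j, :] = influence of node j on node i
    y_hist : (N, P) last P observations, newest first
    y_new  : (N,) incoming observation"""
    N, _, P = A.shape
    pred = np.einsum('ijp,jp->i', A, y_hist)        # one-step-ahead prediction
    err = y_new - pred
    grad = -np.einsum('i,jp->ijp', err, y_hist)     # gradient of 0.5*||err||^2
    A = A - mu * grad
    for i in range(N):                              # prox per group (edge j -> i)
        for j in range(N):
            A[i, j, :] = group_soft_threshold(A[i, j, :], mu * lam)
    return A, err

# Usage on a synthetic stream (stand-in for real network data):
rng = np.random.default_rng(0)
N, P_lags = 5, 2
A = np.zeros((N, N, P_lags))
y_hist = rng.standard_normal((N, P_lags))
for _ in range(1000):
    y_new = rng.standard_normal(N)
    A, err = online_var_update(A, y_hist, y_new)
    y_hist = np.column_stack([y_new, y_hist[:, :-1]])  # shift history
```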
Adaptive Feed-Forward Neural Network for Wind Power Delivery
2022
This paper describes a grid-connected wind energy conversion system. The interconnecting filter is a simple inductor with a series resistor to minimize the three-phase current Total Harmonic Distortion (THD). An online grid impedance estimation technique is proposed in the stationary reference frame using the Recursive Least Squares (RLS) Estimator. An Adaptive Feedforward Neural (AFN) Controller has also been developed, using the inverse of the system, to improve the performance of the current Proportional-Integral controller under dynamic conditions and provide better DC-link voltage stability. The neural network weights are computed in real-time using …
Online Estimation of the Mechanical Parameters of an Induction Machine Using Speed Loop characteristics and Recursive Least Square Technique
2022
This paper presents a novel approach for estimating the mechanical parameters, inertia and friction coefficient, of an Induction Machine (IM) using speed loop characteristics and a Recursive Least Squares (RLS) estimator. Using the 5th-order dynamic equation of the Induction Machine and the forgetting-factor-based RLS algorithm, the technique proposed herein employs the speed of the machine and the torque as the inputs to the estimator. The results obtained compare the estimated parameters with the actual parameters under multiple step-varying and exponentially varying scenarios. Upon analyzing the results, the validity and effectiveness of the proposed identification technique are confirmed.
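The recursion at the heart of such an estimator is standard forgetting-factor RLS. A minimal sketch, assuming a simplified first-order mechanical model T = J*dω/dt + B*ω, so the regressor is [dω/dt, ω] and the parameter vector is [J, B]; the paper's 5th-order machine model and measurement chain are not reproduced here.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One forgetting-factor RLS update.
    theta : parameter estimate, here [J, B]
    P     : covariance matrix
    phi   : regressor, here [domega_dt, omega]
    y     : measurement, here the torque T
    lam   : forgetting factor (0 < lam <= 1)"""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                  # covariance update
    return theta, P

# Usage: recover J and B from noisy torque samples.
rng = np.random.default_rng(1)
theta, P = np.zeros(2), 1e3 * np.eye(2)
J_true, B_true = 0.05, 0.002
for _ in range(2000):
    domega, omega = rng.normal(0, 10), rng.normal(50, 20)
    T = J_true * domega + B_true * omega + rng.normal(0, 0.01)
    theta, P = rls_step(theta, P, np.array([domega, omega]), T)
print(theta)  # approaches [J_true, B_true]
```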
Co-learning of recursive languages from positive data
1996
The present paper deals with the co-learnability of enumerable families L of uniformly recursive languages from positive data. This refers to the following scenario. A family L of target languages, as well as a hypothesis space for it, are specified. The co-learner is eventually fed all positive examples of an unknown target language L chosen from L. The target language L is successfully co-learned iff the co-learner can definitely delete all but one of the possible hypotheses, and the remaining one has to correctly describe L.
General inductive inference types based on linearly-ordered sets
1996
In this paper, we reconsider the definitions of procrastinating learning machines. In the original definition of Freivalds and Smith [FS93], constructive ordinals are used to bound mind changes. We investigate the possibility of using arbitrary linearly ordered sets to bound mind changes in a similar way. It turns out that using certain ordered sets it is possible to define inductive inference types more general than the previously known ones. We investigate properties of the new inductive inference types and compare them to other types.
A fast and recursive algorithm for clustering large datasets with k-medians
2012
Clustering large samples of high-dimensional data with fast algorithms is an important challenge in computational statistics. Borrowing ideas from MacQueen (1967), who introduced a sequential version of the $k$-means algorithm, a new class of recursive stochastic gradient algorithms designed for the $k$-medians loss criterion is proposed. By their recursive nature, these algorithms are very fast and are well adapted to deal with large samples of data that are allowed to arrive sequentially. It is proved that the stochastic gradient algorithm converges almost surely to the set of stationary points of the underlying loss criterion. Particular attention is paid to the averaged versions, which…
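The recursive idea is easy to state: each incoming point nudges its nearest center by a small, decreasing step along the stochastic gradient of the median loss. A minimal sketch in that spirit, assuming the geometric-median criterion (a coordinatewise-median variant would step along the sign of the difference instead); step sizes and initialization are illustrative, not the paper's exact scheme.

```python
import numpy as np

def recursive_kmedians(stream, centers):
    """Recursive stochastic-gradient k-medians: each observation moves
    its nearest center one decreasing-step gradient step for the
    geometric-median loss E||X - c||. Centers must be pre-initialized,
    e.g. from the first k points of the stream."""
    centers = np.array(centers, dtype=float)
    counts = np.zeros(len(centers))
    for x in stream:
        j = np.argmin(np.linalg.norm(centers - x, axis=1))  # nearest center
        counts[j] += 1
        diff = x - centers[j]
        norm = np.linalg.norm(diff)
        if norm > 0:
            centers[j] += (1.0 / counts[j]) * diff / norm   # unit-vector gradient step
    return centers

# Usage on a toy stream with two clusters:
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (500, 2))])
rng.shuffle(data)
print(recursive_kmedians(data, centers=data[:2]))
```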
Space-Efficient 1.5-Way Quantum Turing Machine
2001
A 1.5QTM is a kind of QTM (Quantum Turing Machine) whose head cannot move left (it can stay in place or move right). A separate work tape is used for the computation. This paper studies the possibility of economizing work-tape space beyond what an equivalent deterministic Turing machine can achieve (for some languages). The language {0^i 1^i | i ≥ 0} is chosen as an example, and it is proved that this language can be recognized by a deterministic Turing machine using log(i) cells on the work tape, while a 1.5QTM can recognize it using a constant number of cells.
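The classical side of that comparison is easy to see in code: recognizing {0^i 1^i | i ≥ 0} needs only a counter, i.e. O(log i) bits of working storage rather than a copy of the input. The quantum constant-space construction has no direct classical analogue; the sketch below illustrates only the deterministic log-space bound.

```python
def recognizes_0i1i(s: str) -> bool:
    """Recognize { 0^i 1^i | i >= 0 } keeping only a counter,
    i.e. O(log i) bits of work storage."""
    count = 0
    seen_one = False
    for ch in s:
        if ch == '0':
            if seen_one:
                return False      # a 0 after a 1 breaks the 0^i 1^i shape
            count += 1
        elif ch == '1':
            seen_one = True
            count -= 1
            if count < 0:
                return False      # more 1s than 0s so far
        else:
            return False
    return count == 0

print(recognizes_0i1i("000111"))  # True
print(recognizes_0i1i("0011101"))  # False
```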
Pareto or log-normal? A recursive-truncation approach to the distribution of (all) cities
2012
Traditionally, it is assumed that the population size of cities in a country follows a Pareto distribution. This assumption is typically supported by finding evidence of Zipf's Law. Recent studies question this finding, highlighting that, while the Pareto distribution may fit reasonably well when the data is truncated at the upper tail, i.e. for the largest cities of a country, the log-normal distribution may apply when all cities are considered. Moreover, conclusions may be sensitive to the choice of a particular truncation threshold, an issue so far overlooked in the literature. In this paper, we therefore reassess the city size distribution in relation to its sensitivity to the choice of truncat…
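The sensitivity analysis the abstract points to can be sketched directly: fit both candidate distributions to the cities above a moving truncation threshold and compare likelihoods as the threshold slides down the ranking. The sketch below is a simplification of that idea, not the paper's method (it fits an untruncated log-normal to the truncated sample, whereas a careful comparison would use a truncated log-normal likelihood); function names and thresholds are illustrative.

```python
import numpy as np

def pareto_loglik(x, xmin):
    """Log-likelihood of a Pareto tail above xmin at the Hill-type MLE."""
    x = x[x >= xmin]
    n = len(x)
    alpha = n / np.sum(np.log(x / xmin))
    return n * np.log(alpha) + n * alpha * np.log(xmin) - (alpha + 1) * np.sum(np.log(x))

def lognormal_loglik(x, xmin):
    """Log-likelihood of an (untruncated) log-normal MLE fit on the same sample."""
    x = x[x >= xmin]
    logs = np.log(x)
    mu, sigma = logs.mean(), logs.std()
    return np.sum(-np.log(x * sigma * np.sqrt(2 * np.pi))
                  - (logs - mu) ** 2 / (2 * sigma ** 2))

def scan_thresholds(sizes, thresholds):
    """Compare the two fits as the truncation threshold moves down the ranking."""
    return [(t, pareto_loglik(sizes, t) - lognormal_loglik(sizes, t))
            for t in thresholds]

# Usage: synthetic "city sizes" scanned over several thresholds.
sizes = np.random.default_rng(2).lognormal(mean=10, sigma=1, size=5000)
for t, diff in scan_thresholds(sizes, np.quantile(sizes, [0.99, 0.9, 0.5, 0.1])):
    print(f"threshold {t:10.0f}: Pareto - lognormal loglik = {diff:9.1f}")
```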