Search results for "recursive functions"
Showing 10 of 22 documents
Learning with confidence
1996
Herein we investigate learning in the limit where confidence in the current conjecture accrues with time. Confidence levels are given by rational numbers between 0 and 1. The traditional requirement for learning in the limit is that a device must converge (in the limit) to a correct answer. We further demand that the associated confidence in the answer (monotonically) approach 1 in the limit. In addition to being a more realistic model of learning, our new notion turns out to be more powerful as well. We also give precise characterizations of the classes of functions that are learnable in our new model(s).
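For intuition only, here is a minimal Python sketch of the kind of learner this abstract describes, restricted to the toy class of constant functions; the class and the confidence schedule n/(n+1) are illustrative choices, not the paper's constructions.

from fractions import Fraction

def confident_learner(stream):
    """Toy limit-learner for the class of constant functions.

    After reading n values it conjectures "the constant function with
    value stream[0]" and attaches the rational confidence n/(n+1),
    which increases monotonically and approaches 1 in the limit.
    """
    hypothesis = None
    for n, value in enumerate(stream, start=1):
        if hypothesis is None:
            hypothesis = value  # the only mind change happens here
        yield hypothesis, Fraction(n, n + 1)

for hyp, conf in confident_learner([7, 7, 7, 7]):
    print(hyp, conf)  # 7 1/2, then 7 2/3, 7 3/4, 7 4/5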
On the Intrinsic Complexity of Learning
1995
A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult to learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks.
Error detecting in inductive inference
1995
Several well-known inductive inference strategies change the actual hypothesis only when they discover that it "provably misclassifies" an example seen so far. This notion is made mathematically precise and its general power is characterized. In spite of its strength, it is shown that this approach is not of universal power. Consequently, hypotheses which "unprovably misclassify" examples are then considered, and the properties of this approach are studied. Among other results, it turns out that this type is of the same power as monotonic identification. Then it is shown that universal power can be achieved only when an unbounded number of alternations of these dual types of hypotheses is allowed. Fi…
Dual types of hypotheses in inductive inference
2006
Several well-known inductive inference strategies change the actual hypothesis only when they discover that it "provably misclassifies" an example seen so far. This notion is made mathematically precise and its general power is characterized. In spite of its strength, it is shown that this approach is not of "universal" power. Consequently, hypotheses which "unprovably misclassify" examples are then considered, and the properties of this approach are studied. Among other results, it turns out that this type is of the same power as monotonic identification. Finally, it is shown that "universal" power can be achieved only when an unbounded number of alternations of these dual types of hypotheses is allowed.
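As a rough illustration of the "change the hypothesis only on provable misclassification" discipline, the following Python sketch enumerates a finite pool of total functions and abandons a conjecture only once a seen example refutes it; this decidable, finite setting is a deliberate simplification of the recursion-theoretic one in the abstract.

def refutation_learner(candidates, data):
    """Toy learner that changes its conjecture only on refutation.

    candidates: a finite list of total functions, standing in for an
    enumeration of programs. data: a stream of (x, f(x)) pairs for a
    target f computed by some candidate. The current conjecture is
    kept until it provably misclassifies an example seen so far.
    """
    seen = []
    i = 0
    for pair in data:
        seen.append(pair)
        # Advance only while the current hypothesis disagrees with
        # some observed example.
        while any(candidates[i](x) != y for x, y in seen):
            i += 1
        yield i

candidates = [lambda x: 0, lambda x: x, lambda x: x * x]
data = ((x, x * x) for x in range(5))
print(list(refutation_learner(candidates, data)))  # [0, 1, 2, 2, 2]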
Topological considerations in composing teams of learning machines
1995
Classes of total recursive functions may be identifiable by a team of strategies, but not by a single strategy, in accordance with a certain identification type (EX, FIN, etc.). Qualitative aspects of composing teams are considered. For each W ∉ EX, all recursive strategies can be split into several families so that any team identifying W contains strategies from all the families. For W ∉ FIN, the possibility of such splitting depends upon W. The relation between these phenomena and "voting" properties for the types EX, FIN, etc. is revealed.
Vertical representation of C∞-words
2015
We present a new framework for dealing with C∞-words, based on their left and right frontiers. This allows us to give a compact representation of them, and to describe the set of C∞-words through an infinite directed acyclic graph G. This graph is defined by a map acting on the frontiers of C∞-words. We show that this map can be defined recursively and with no explicit reference to C∞-words. We then show that some important conjectures on C∞-words follow from analogous statements on the structure of the graph G.
Co-learning of recursive languages from positive data
1996
The present paper deals with the co-learnability of enumerable families L of uniformly recursive languages from positive data. This refers to the following scenario. A family L of target languages, as well as a hypothesis space for it, is specified. The co-learner is eventually fed all positive examples of an unknown target language L chosen from L. The target language L is successfully co-learned iff the co-learner can definitely delete all but one of the possible hypotheses, and the remaining one has to correctly describe L.
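A minimal sketch of this deletion discipline, assuming a finite family of languages represented as Python sets; real co-learning concerns enumerable families of recursive languages, so this only illustrates how hypotheses are irrevocably deleted until one remains.

def co_learner(family, positive_data):
    """Toy co-learner over a finite family of languages (Python sets).

    Hypotheses may only be deleted, never revised: an index i is
    removed as soon as a positive example falls outside family[i].
    Co-learning succeeds when exactly one index survives.
    """
    alive = set(range(len(family)))
    for w in positive_data:
        alive = {i for i in alive if w in family[i]}
        yield sorted(alive)

family = [{0, 1}, {0, 1, 2}, {0, 2, 4}]
print(list(co_learner(family, [0, 2, 1])))
# [[0, 1, 2], [1, 2], [1]] -- everything but hypothesis 1 is deleted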
Probabilistic versus deterministic memory limited learning
1995
Learning small programs with additional information
1997
This paper was inspired by [FBW 94]. An arbitrary upper bound on the size of some program for the target function suffices for learning some program for this function. In [FBW 94] it was discovered that if "learning" is understood as "identification in the limit," then in some programming languages it is possible to learn a program of size not exceeding the bound, while in other programming languages this is not possible.
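For illustration only: assuming programs are modelled as total Python functions grouped by size, a size bound turns identification by enumeration into a search over a finite pool. Whether the surviving conjecture itself fits under the bound is exactly the language-dependent issue the abstract highlights; this sketch ignores that subtlety.

def learn_with_size_bound(programs_by_size, bound, data):
    """Toy identification in the limit with a size bound as extra
    information.

    programs_by_size maps a size to a finite list of total functions
    standing in for programs of that size. Knowing that some program
    of size <= bound computes the target makes the candidate pool
    finite; the learner keeps the first candidate not yet refuted by
    the examples seen so far.
    """
    pool = [p for s in range(bound + 1)
              for p in programs_by_size.get(s, [])]
    seen = []
    for x, y in data:
        seen.append((x, y))
        # First pool member consistent with all examples seen so far.
        yield next(p for p in pool
                   if all(p(a) == b for a, b in seen))

programs_by_size = {1: [lambda x: 0], 2: [lambda x: x + 1]}
data = ((x, x + 1) for x in range(3))
conjectures = list(learn_with_size_bound(programs_by_size, 2, data))
print([c(10) for c in conjectures])  # [11, 11, 11]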