
AUTHOR

Carl Smith

A Logic of Discovery

A logic of discovery is introduced. In this logic, true sentences are discovered over time on the basis of arriving data. A notion of expectation is defined to reflect the growing certainty that a universally quantified sentence is true as more true instances are observed. The logic is shown to be consistent and complete. Monadic predicates are considered as a special case.
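
To make the idea of a growing expectation concrete, here is a minimal sketch in Python, assuming a purely hypothetical confidence schedule of 1 - 2^(-n) after n true instances; the paper's actual definition of expectation is not reproduced here.

# Illustrative only: a toy "expectation" for a universally quantified
# sentence, rising toward 1 as true instances accumulate and collapsing
# to 0 on a counterexample. The schedule 1 - 2**(-n) is an assumption
# of this sketch, not the paper's definition.

def expectation(observations):
    """observations: booleans giving the truth of P(x) on successively
    observed instances x of the sentence 'for all x, P(x)'."""
    n = 0
    for truth in observations:
        if not truth:
            return 0.0              # a counterexample refutes the sentence
        n += 1
    return 1.0 - 2.0 ** (-n)        # certainty grows with each true instance

print(expectation([True] * 5))           # 0.96875
print(expectation([True, False, True]))  # 0.0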

research product

Distribution patterns of epiphytic reed-associated macroinvertebrate communities across European shallow lakes

So far, research on plant-associated macroinvertebrates, even if conducted on a large number of water bodies, has mostly focused on a relatively small area, permitting limited conclusions to be drawn regarding potentially broader geographic effects, including climate. Some recent studies have shown that the composition of epiphytic communities may differ considerably among climatic zones. To assess this phenomenon, we studied macroinvertebrates associated with the common reed Phragmites australis (Cav.) Trin. ex Steud. in 46 shallow lakes using a common protocol. The lakes, located in nine countries, covered almost the entire European latitudinal range (from <48°N to 61°N) and captured much …

research product

Measure, category and learning theory

Measure and category (or rather, their recursion theoretical counterparts) have been used in Theoretical Computer Science to make precise the intuitive notion “for most of the recursive sets.” We use the notions of effective measure and category to discuss the relative sizes of inferrible sets, and their complements. We find that inferrible sets become large rather quickly in the standard hierarchies of learnability. On the other hand, the complements of the learnable sets are all large.

research product

On the impact of forgetting on learning machines

People tend not to have perfect memories when it comes to learning, or to anything else for that matter. Most formal studies of learning, however, assume a perfect memory. Some approaches have restricted the number of items that could be retained. We introduce a complexity theoretic accounting of memory utilization by learning machines. In our new model, memory is measured in bits as a function of the size of the input. There is a hierarchy of learnability based on increasing memory allotment. The lower bound results are proved using an unusual combination of pumping and mutual recursion theorem arguments. For technical reasons, it was necessary to consider two types of memory: long and sh…
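
As a rough illustration of this bit-counting view (a sketch under assumptions of our own, not the paper's construction), consider a hypothetical class of self-describing functions in which the value f(0) is taken to be an index for f; a learner for this class retains only that one value, so its memory stays at O(log f(0)) bits no matter how much data arrives.

def memory_limited_learner(stream):
    """stream yields pairs (x, f(x)) in order x = 0, 1, 2, ...; after
    each datum, yield the current conjecture (an index for f)."""
    conjecture = None
    for x, value in stream:
        if x == 0:
            conjecture = value   # the only bits this learner ever stores
        yield conjecture         # all later data are discarded on arrival

f = lambda x: 42 if x == 0 else x * x   # pretend index 42 describes this f
data = ((x, f(x)) for x in range(4))
print(list(memory_limited_learner(data)))   # [42, 42, 42, 42]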

research product

On the Intrinsic Complexity of Learning

A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult to learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks.

research product

Co-learnability and FIN-identifiability of enumerable classes of total recursive functions

Co-learnability is an inference process where, instead of producing the final result, the strategy produces all the natural numbers but one, and the omitted number is an encoding of the correct result. It has been proved in [1] that co-learnability of Goedel numbers is equivalent to EX-identifiability. We consider co-learnability of indices in recursively enumerable (r.e.) numberings. The power of co-learnability depends on the numberings used. Every r.e. class of total recursive functions is co-learnable in some r.e. numbering. FIN-identifiable classes are co-learnable in all r.e. numberings, and classes containing a function that is an accumulation point are not co-learnable in some r.e. number…
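
The mechanism is easiest to see over an artificially finite index set. The Python sketch below (the numbering and helper names are invented for illustration) emits every index whose function disagrees with the data, so the single index never emitted encodes the answer; genuine co-learning enumerates over all natural numbers in the limit.

def co_learner(indices, data, decode):
    """Emit every index whose decoded function disagrees with the data;
    the single index never emitted encodes the correct hypothesis."""
    emitted = set()
    for i in indices:
        g = decode(i)
        if any(g(x) != y for x, y in data):
            emitted.add(i)
    (answer,) = set(indices) - emitted   # exactly one index survives
    return emitted, answer

decode = lambda i: (lambda x: i)         # hypothetical numbering: index i
                                         # names the constant-i function
data = [(0, 3), (7, 3)]                  # samples of the constant-3 function
emitted, answer = co_learner(range(10), data, decode)
print(sorted(emitted))                   # every index except 3
print(answer)                            # 3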

research product

Learning by the Process of Elimination

Elimination of potential hypotheses is a fundamental component of many learning processes. In order to understand the nature of elimination, herein we study the following model of learning recursive functions from examples. On any target function, the learning machine has to eliminate all possible hypotheses save one, such that the missing one correctly describes the target function. It turns out that this type of learning by the process of elimination (elm-learning, for short) can be stronger, weaker or of the same power as usual Gold style learning. While for usual learning any r.e. class of recursive functions can be learned in all of its numberings, this is no longer true for el…

research product

General inductive inference types based on linearly-ordered sets

In this paper, we reconsider the definitions of procrastinating learning machines. In the original definition of Freivalds and Smith [FS93], constructive ordinals are used to bound mindchanges. We investigate the possibility of using arbitrary linearly ordered sets to bound mindchanges in a similar way. It turns out that using certain ordered sets it is possible to define inductive inference types more general than the previously known ones. We investigate properties of the new inductive inference types and compare them to other types.
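
The bookkeeping behind such mind-change bounds can be sketched as follows (an illustration under assumptions of our own, not the paper's formalism): the learner carries a counter drawn from a linearly ordered set, here pairs of naturals under lexicographic order, and every mind change must strictly decrease it.

class BoundedLearner:
    """Carries a counter from a linearly ordered set (here: pairs under
    Python's lexicographic tuple order, a copy of omega x omega)."""

    def __init__(self, counter):
        self.counter = counter
        self.conjecture = None

    def revise(self, new_conjecture, new_counter):
        # Announcing the first conjecture is free; changing one's mind
        # requires paying with a strictly smaller counter value.
        if self.conjecture is not None and new_conjecture != self.conjecture:
            assert new_counter < self.counter, "mind change must decrease counter"
            self.counter = new_counter
        self.conjecture = new_conjecture

m = BoundedLearner((1, 0))
m.revise("h0", (1, 0))   # first conjecture, no mind change yet
m.revise("h1", (0, 3))   # (0, 3) < (1, 0) lexicographically, so allowed
print(m.counter)         # (0, 3): at most 3 further mind changes remain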

research product

On Duality in Learning and the Selection of Learning Teams

Previous work in inductive inference dealt mostly with finding one or several machines (IIMs) that successfully learn collections of functions. Herein we start with a class of functions and consider the learner set of all IIMs that are successful at learning the given class. Applying this perspective to the case of team inference leads to the notion of diversification for a class of functions. This enables us to distinguish between several flavours of IIMs, all of which must be represented in a team learning the given class.

research product

Probabilistic versus deterministic memory limited learning

research product

Inductive Inference with Procrastination: Back to Definitions

In this paper, we reconsider the definition of procrastinating learning machines. In the original definition of Freivalds and Smith [FS93], constructive ordinals are used to bound mindchanges. We investigate the possibility of using arbitrary linearly ordered sets to bound mindchanges in a similar way. It turns out that using certain ordered sets it is possible to define inductive inference types different from the previously known ones. We investigate properties of the new inductive inference types and compare them to other types.

research product

Memory limited inductive inference machines

The traditional model of learning in the limit is restricted so as to allow the learning machines only a fixed, finite amount of memory to store input and other data. A class of recursive functions is presented that cannot be learned deterministically by any such machine, but can be learned by a memory limited probabilistic learning machine with probability 1.

research product

On the inductive inference of recursive real-valued functions

We combine traditional studies of inductive inference and classical continuous mathematics to produce a study of learning real-valued functions. We consider two possible ways to model the learning by example of functions whose domain and range are the real numbers. The first approach considers functions as represented by computable analytic functions. The second considers arbitrary computable functions of recursive real numbers. In each case we find natural examples of learnable classes of functions and unlearnable classes of functions.

research product

Towards Axiomatic Basis of Inductive Inference

The language for the formulation of the interesting statements is, of course, most important. We use first order predicate logic. Our main achievement in this paper is an axiom system which we believe to be more powerful than any other natural general purpose discovery axiom system. We prove the soundness of this axiom system. Additionally, we prove that if some of the requirements used in our axiom system are removed, the system is no longer sound. We characterize the complexity of the quantifier prefix which guarantees provability of a true formula via our system. We also prove that if a true formula contains only monadic predicates, our axiom system is capable of proving this formula…

research product

Choosing a learning team

research product

On the role of procrastination for machine learning

research product

Category, Measure, Inductive Inference: A Triality Theorem and Its Applications

The famous Sierpiński-Erdős Duality Theorem [Sie34b, Erd43] states, informally, that any theorem about effective measure 0 and/or first category sets is also true when all occurrences of "effective measure 0" are replaced by "first category" and vice versa. This powerful and elegant result shows that "measure" and "category" are equally useful notions, neither of which can be preferred to the other when making formal the intuitive notion "almost all sets." Effective versions of measure and category are used in recursive function theory and related areas, and resource-bounded versions of the same notions are used in the theory of computation. Again they are dual in the same sense. We show that in…

research product

Learning small programs with additional information

This paper was inspired by [FBW 94]. An arbitrary upper bound on the size of some program for the target function suffices for learning some program for this function. In [FBW 94] it was discovered that if “learning” is understood as “identification in the limit,” then in some programming languages it is possible to learn a program of size not exceeding the bound, while in other programming languages this is not possible.

research product

Team learning as a game

A FIN-learning machine M receives successive values of the function f it is learning; at some point M outputs a conjecture which should be a correct index of f. When n machines simultaneously learn the same function f and at least k of these machines output correct indices of f, we have team learning [k,n]FIN. Papers [DKV92, DK96] show that sometimes a team or a probabilistic learner can simulate another one, if its probability p (or team success ratio k/n) is close enough. On the other hand, there are critical ratios which make simulation of FIN(p2) by FIN(p1) impossible whenever p2 ≤ r < p1 for some critical ratio r. According to [DKV92], the critical ratio closest to 1/2 from the left is…
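
For readers new to the [k,n] criterion, the following toy Python simulation (the stub learners and their success probability are invented, and nothing here reflects the papers' actual constructions) simply checks whether at least k of the n one-shot conjectures name the correct index.

import random

def team_succeeds(conjectures, correct_index, k):
    """[k, n]FIN-team success: at least k of the n conjectures are correct."""
    return sum(c == correct_index for c in conjectures) >= k

random.seed(0)
correct = 7
# A hypothetical team of n = 3 stub learners, each independently right
# with probability 2/3; real FIN learners would compute their conjecture
# from the successive values of f.
conjectures = [correct if random.random() < 2 / 3 else 0 for _ in range(3)]
print(team_succeeds(conjectures, correct, k=2))  # True iff at least 2 guessed 7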

research product

Learning with confidence

Herein we investigate learning in the limit where confidence in the current conjecture accrues with time. Confidence levels are given by rational numbers between 0 and 1. The traditional requirement for learning in the limit is that a device must converge (in the limit) to a correct answer. We further demand that the associated confidence in the answer (monotonically) approach 1 in the limit. In addition to being a more realistic model of learning, our new notion turns out to be more powerful as well. Furthermore, we give precise characterizations of the classes of functions that are learnable in our new model(s).
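
A minimal sketch of the mechanic in Python, assuming a hypothetical confidence schedule of n/(n+1) after n data points consistent with the current conjecture; the paper's characterizations are not reproduced here.

from fractions import Fraction

def confident_learner(stream, hypothesize):
    """Yield (conjecture, confidence) pairs; confidence climbs monotonically
    while the conjecture is retained, and resets on a mind change."""
    seen, prev, streak = [], None, 0
    for datum in stream:
        seen.append(datum)
        conjecture = hypothesize(seen)
        streak = streak + 1 if conjecture == prev else 1
        prev = conjecture
        yield conjecture, Fraction(streak, streak + 1)

# Toy target: the identity function, guessed from its graph by a stub.
data = [(x, x) for x in range(4)]
for conj, conf in confident_learner(data, hypothesize=lambda s: "identity"):
    print(conj, conf)   # confidence climbs 1/2, 2/3, 3/4, 4/5 toward 1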

research product

Hierarchies of probabilistic and team FIN-learning

A FIN-learning machine M receives successive values of the function f it is learning and at some moment outputs a conjecture which should be a correct index of f. FIN learning has two extensions: (1) If M flips fair coins and learns a function with a certain probability p, we have FIN〈p〉-learning. (2) When n machines simultaneously try to learn the same function f and at least k of these machines output correct indices of f, we have learning by a [k,n]FIN team. Sometimes a team or a probabilistic learner can simulate another one, if their probabilities p1,p2 (or team success ratios k1/n1,k2/n2) are close enough (Daley et al., in: Valiant, Warmuth (Eds.), Proc. 5th Annual Workshop on C…

research product

Co-learning of total recursive functions

research product

On the duality between mechanistic learners and what it is they learn

All previous work in inductive inference and theoretical machine learning has taken the perspective of looking for a learning algorithm that successfully learns a collection of functions. In this work, we consider the perspective of starting with a set of functions, and considering the collection of learning algorithms that are successful at learning the given functions. Some strong dualities are revealed.

research product

On the relative sizes of learnable sets

Measure and category (or rather, their recursion-theoretical counterparts) have been used in theoretical computer science to make precise the intuitive notion “for most of the recursive sets”. We use the notions of effective measure and category to discuss the relative sizes of inferrible sets, and their complements. We find that inferrible sets become large rather quickly in the standard hierarchies of learnability. On the other hand, the complements of the learnable sets are all large.

research product

Self-learning inductive inference machines

Self-knowledge is a concept that is present in several philosophies. In this article, we consider the issue of whether or not a learning algorithm can in some sense possess self-knowledge. The question is answered affirmatively. Self-learning inductive inference algorithms are taken to be those that learn programs for their own algorithms, in addition to other functions.

research product