
AUTHOR

Rusins Freivalds

showing 54 related works from this author

Probabilities to Accept Languages by Quantum Finite Automata

1999

We construct a hierarchy of regular languages such that each language in the hierarchy can be accepted by 1-way quantum finite automata only with a probability smaller than the corresponding probability for the preceding language in the hierarchy. These probabilities converge to 1/2.

Subjects: quantum finite automata; regular languages; probabilistic automata; finite-state machines; hierarchies; quantum algorithms; theoretical computer science
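
Not part of the record; a minimal sketch of how acceptance probabilities of a measure-once 1-way quantum finite automaton can be computed, assuming the standard model (a unit start vector, one unitary per input symbol, a final projective measurement onto accepting states). The rotation angle and the example word are illustrative, not taken from the paper.

```python
# Toy simulator for a measure-once 1-way quantum finite automaton (MO-QFA).
import numpy as np

def qfa_acceptance_probability(start, unitaries, accepting, word):
    """Return the probability that the MO-QFA accepts `word`."""
    state = np.array(start, dtype=complex)
    for symbol in word:
        state = unitaries[symbol] @ state          # apply the symbol's unitary
    return sum(abs(state[q]) ** 2 for q in accepting)  # measure accepting states

if __name__ == "__main__":
    theta = np.pi / 8                              # illustrative rotation angle
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    U = {"a": rot}                                 # one unitary per input symbol
    # Two basis states; state 0 is both the start state and the only accepting state.
    for n in range(5):
        p = qfa_acceptance_probability([1.0, 0.0], U, accepting=[0], word="a" * n)
        print(n, round(p, 4))                      # prints cos^2(n*theta)
```

For a word of length n this prints cos^2(n*theta), illustrating how acceptance probabilities of such automata can lie strictly between 1/2 and 1.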

A Logic of Discovery

1998

A logic of discovery is introduced. In this logic, true sentences are discovered over time based on arriving data. A notion of expectation is introduced to reflect the growing certainty that a universally quantified sentence is true as more true instances are observed. The logic is shown to be consistent and complete. Monadic predicates are considered as a special case.

Subjects: mathematical logic and formal languages; theoretical computer science; monadic predicates; certainty; artificial intelligence

Running time to recognize nonregular languages by 2-way probabilistic automata

1991

R. Freivalds proved that the language \(\{0^m1^m\}\) can be recognized by 2-way probabilistic finite automata (2pfa) with arbitrarily high probability 1-ε. A. G. Greenberg and A. Weiss proved that no 2pfa can recognize this language in expected time \(T(n) = c^{o(n)}\). For arbitrary languages C. Dwork and L. Stockmeyer showed somewhat less: if a language L is recognized by a 2pfa in expected time \(T(n) = c^{n^{o(1)}}\), then L is regular. First, we improve this theorem by replacing the expected time with the time achieved with probability 1-ε. On the other hand, the time bound of C. Dwork and L. Stockmeyer cannot be improved: for arbitrary k≥2 we exhibit a specific nonregular language that can be recognized by 2…

Subjects: combinatorics; regular languages; probabilistic automata; quantum finite automata; automata theory
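
Not part of the record; a toy Monte Carlo illustration (not the automaton construction itself) of the coin-tossing comparison behind Freivalds' result for \(\{0^m1^m\}\): if the block of 0s and the block of 1s have equal length, the event "all coins tossed on the 0s are heads" and the event "all coins tossed on the 1s are heads" are equally likely, and otherwise their probabilities differ by at least a factor of 2, which repetition can detect.

```python
# Illustration only: the rare-event comparison, not the 2pfa itself.
import random

def one_round(m, n):
    """Return which of the two all-heads events occurred in one pass."""
    heads_on_zeros = all(random.random() < 0.5 for _ in range(m))  # prob 2^-m
    heads_on_ones  = all(random.random() < 0.5 for _ in range(n))  # prob 2^-n
    return heads_on_zeros, heads_on_ones

def looks_equal(m, n, rounds=200_000):
    """Guess m == n iff the two rare events occur about equally often."""
    a = b = 0
    for _ in range(rounds):
        za, zb = one_round(m, n)
        a += za and not zb
        b += zb and not za
    return abs(a - b) <= (a + b) // 3 if (a + b) else True

if __name__ == "__main__":
    print(looks_equal(6, 6))   # usually True
    print(looks_equal(6, 8))   # usually False: one event is ~4x more likely
```

The number of rounds and the acceptance threshold are arbitrary choices for the illustration; the point is only that repetition separates equal from unequal block lengths with high probability.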

Measure, category and learning theory

1995

Measure and category (or rather, their recursion theoretical counterparts) have been used in Theoretical Computer Science to make precise the intuitive notion “for most of the recursive sets.” We use the notions of effective measure and category to discuss the relative sizes of inferrible sets, and their complements. We find that inferrible sets become large rather quickly in the standard hierarchies of learnability. On the other hand, the complements of the learnable sets are all large.

Subjects: theoretical computer science; learnability; concept learning; algorithmic learning theory; measure (mathematics); recursion

Quantum Query Complexity for Some Graph Problems

2004

The paper [4] by H. Buhrman and R. de Wolf contains an impressive survey of solved and open problems in quantum query complexity, including many graph problems. We use recent results by A. Ambainis [1] to prove higher lower bounds for some of these problems. Some of our new lower bounds do not close the gap between the best upper and lower bounds. We prove that in these cases it is impossible to provide a better application of Ambainis' technique to these problems.

Subjects: combinatorics; quantum query complexity; graph problems; upper and lower bounds; computational complexity
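
Not part of the record; for orientation, a hedged restatement (from memory, so it should be checked against [1]) of the adversary bound of A. Ambainis that such lower-bound arguments apply: if X ⊆ f⁻¹(0), Y ⊆ f⁻¹(1) and a relation R ⊆ X × Y are such that every x ∈ X is related to at least m elements of Y, every y ∈ Y is related to at least m′ elements of X, and for every input position i each x has at most ℓ related y differing from it in position i and each y has at most ℓ′ such related x, then the bounded-error quantum query complexity satisfies

```latex
Q_2(f) \;=\; \Omega\!\left(\sqrt{\frac{m\,m'}{\ell\,\ell'}}\right).
```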

On the Intrinsic Complexity of Learning

1995

A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult to learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks.

Subjects: theoretical computer science; reduction (complexity); completeness; concept learning; recursive functions; hierarchies; machine learning
Source: Information and Computation

Weak and strong recognition by 2-way randomized automata

1997

Languages weakly recognized by a Monte Carlo 2-way finite automaton with n states are proved to be strongly recognized by a Monte Carlo 2-way finite automaton with \(n^{o(n)}\) states. This improves dramatically over the previously known result by M. Karpinski and R. Verbeek [10], which is also nontrivial since these languages can be nonregular [5]. For tally languages the increase in the number of states is proved to be only polynomial, and these languages are regular.

Subjects: combinatorics; probabilistic automata; two-way finite automata; pushdown automata; computational complexity; formal languages and automata theory

Quantum inductive inference by finite automata

2008

Freivalds and Smith [R. Freivalds, C. H. Smith, Memory limited inductive inference machines, Springer Lecture Notes in Computer Science 621 (1992) 19–29] proved that probabilistic limited memory inductive inference machines can learn with probability 1 certain classes of total recursive functions, which cannot be learned by deterministic limited memory inductive inference machines. We introduce quantum limited memory inductive inference machines as quantum finite automata acting as inductive inference machines. These machines, we show, can learn classes of total recursive functions not learnable by any deterministic, nor even by probabilistic, limited memory inductive inference machin…

Subjects: finite-state machines; quantum finite automata; probabilistic logic; inductive inference; quantum computation; learning
Source: Theoretical Computer Science

Co-learnability and FIN-identifiability of enumerable classes of total recursive functions

1994

Co-learnability is an inference process where, instead of producing the final result, the strategy produces all the natural numbers but one, and the omitted number is an encoding of the correct result. It has been proved in [1] that co-learnability of Goedel numbers is equivalent to EX-identifiability. We consider co-learnability of indices in recursively enumerable (r.e.) numberings. The power of co-learnability depends on the numberings used. Every r.e. class of total recursive functions is co-learnable in some r.e. numbering. FIN-identifiable classes are co-learnable in all r.e. numberings, and classes containing a function that is an accumulation point are not co-learnable in some r.e. number…

Subjects: combinatorics; recursively enumerable classes; identifiability; numberings; total recursive functions; accumulation points
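
Not part of the record; a toy sketch of the co-learning idea described above, under the simplifying assumption of a finite class of total functions given explicitly as a list: the learner emits every index it can already refute, and the single index it never emits encodes the answer.

```python
# Toy co-learner over a finite, explicitly given class of total functions.
# Simplification: the class is finite, so "all natural numbers but one"
# becomes "all indices but one"; real co-learning works over r.e. numberings.
def co_learn(hypotheses, data_stream):
    """Yield indices as soon as they are refuted; the never-yielded index wins."""
    refuted = set()
    for x, fx in data_stream:
        for i, h in enumerate(hypotheses):
            if i not in refuted and h(x) != fx:
                refuted.add(i)
                yield i                      # announce a provably wrong index

if __name__ == "__main__":
    hypotheses = [lambda x: x, lambda x: x * x, lambda x: 2 * x]
    target = hypotheses[2]                   # unknown to the learner
    data = ((x, target(x)) for x in range(2, 10))
    emitted = set(co_learn(hypotheses, data))
    survivor = set(range(len(hypotheses))) - emitted
    print(sorted(emitted), survivor)         # indices 0 and 1 refuted, 2 survives
```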

An inductive inference approach to classification

1992

In this paper, we introduce a formal framework for investigating the relationship of inductive inference and the task of classification. We give the first results on the relationship between functions that can be identified in the limit and functions that can be acquired from unclassified objects only. Moreover, we present results on the complexity of classification functions and the preconditions necessary in order to allow the computation of such functions.

Subjects: theoretical computer science; inductive reasoning; classification; computation; limits

Lower space bounds for randomized computation

1994

It is a fundamental problem in randomized computation how to separate different randomized time or randomized space classes (cf., e.g., [KV87, KV88]). We have separated randomized space classes below log n in [FK94]. Now we have succeeded in separating small randomized time classes for multi-tape 2-way Turing machines. Surprisingly, these “small” bounds are of the type n+f(n) with f(n) not exceeding linear functions. This new approach to “sublinear” time complexity is a natural counterpart to sublinear space complexity. The latter was introduced by considering the input tape and the work tape as separate devices and distinguishing between the space used for processing information and the spa…

Subjects: discrete mathematics; combinatorics; Turing machines; Kolmogorov complexity; time complexity; space complexity

Models of Computation, Riemann Hypothesis, and Classical Mathematics

1998

Classical mathematics has been a source of ideas for Computer Science since its very first days. Surprisingly, there is still much to be found. Computer scientists, especially those in Theoretical Computer Science, find inspiring ideas both in old notions and results and in 20th century mathematics. The latest decades have brought us evidence that computer people will soon study quantum physics and modern biology just to understand what computers are doing.

Subjects: classical mathematics; models of computation; finite-state machines; Turing machines; Riemann hypothesis; formal languages

Learning formulae from elementary facts

1997

Since the seminal paper by E. M. Gold [Gol67], the computational learning theory community has been presuming that the main problem of learning theory at the recursion-theoretic level is to restore a grammar from samples of a language or a program from its sample computations. However, scientists in physics and biology have become accustomed to looking for interesting assertions rather than for a universal theory explaining everything.

Subjects: computational learning theory; algorithmic learning theory; grammars; inductive reasoning

Co-learning of recursive languages from positive data

1996

The present paper deals with the co-learnability of enumerable families L of uniformly recursive languages from positive data. This refers to the following scenario. A family L of target languages as well as a hypothesis space for it are specified. The co-learner is eventually fed all positive examples of an unknown target language L chosen from L. The target language L is successfully co-learned iff the co-learner can definitely delete all but one possible hypothesis, and the remaining one has to correctly describe L.

Subjects: recursive languages; learning from positive data; hypothesis spaces; inductive reasoning; language acquisition

Learning from good examples

1995

The usual information in inductive inference for the purposes of learning an unknown recursive function f is the set of all input/output examples (n, f(n)), n ∈ ℕ. In contrast to this approach, we show that it is considerably more powerful to work with finite sets of “good” examples, even when these good examples are required to be effectively computable. The influence of the underlying numberings, with respect to which the learning problem has to be solved, on the capabilities of inference from good examples is also investigated. It turns out that nonstandard numberings can be much more powerful than Godel numberings.

Subjects: algebra; computational learning theory; algorithmic learning theory; inductive inference

General inductive inference types based on linearly-ordered sets

1996

In this paper, we reconsider the definitions of procrastinating learning machines. In the original definition of Freivalds and Smith [FS93], constructive ordinals are used to bound mindchanges. We investigate the possibility of using arbitrary linearly ordered sets to bound mindchanges in a similar way. It turns out that using certain ordered sets it is possible to define inductive inference types more general than the previously known ones. We investigate properties of the new inductive inference types and compare them to other types.

Subjects: discrete mathematics; ordered sets; maximal elements; recursive functions; inductive reasoning

On block pumpable languages

2016

Ehrenfeucht, Parikh and Rozenberg gave an interesting characterisation of the regular languages called the block pumping property. When requiring this property only with respect to members of the language but not with respect to nonmembers, one gets the notion of block pumpable languages. It is shown that block pumpable languages are a more general concept than regular languages and that they are an interesting notion in their own right: they are closed under intersection, union and homomorphism by transducers; they admit multiple pumping; and they have either polynomial or exponential growth.

Subjects: discrete mathematics; regular languages; pumping lemmas; block pumping; homomorphisms; combinatorics
Source: Theoretical Computer Science
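
Not part of the record; a small brute-force sketch of what the block pumping condition asks of a single factorization, under the simplifying assumption that checking repetitions only up to a small bound suffices for illustration: given a membership predicate and blocks placed between a prefix and a suffix, look for a contiguous group of blocks whose repetition keeps the word in the language (only the "members stay members" direction, i.e. block pumpability, is tested).

```python
# Brute-force check of the block pumping condition for one factorization.
def pumpable_pair(in_L, prefix, blocks, suffix, max_reps=4):
    """Return (i, j) such that repeating blocks[i:j] keeps the word in L, else None."""
    k = len(blocks)
    for i in range(k):
        for j in range(i + 1, k + 1):
            segment = "".join(blocks[i:j])
            left = prefix + "".join(blocks[:i])
            right = "".join(blocks[j:]) + suffix
            if all(in_L(left + segment * r + right) for r in range(max_reps + 1)):
                return i, j
    return None

if __name__ == "__main__":
    in_L = lambda w: w.count("a") % 2 == 0          # toy regular language
    # Factor "aabbaa" into a prefix, three middle blocks and a suffix.
    print(pumpable_pair(in_L, "a", ["a", "b", "ba"], "a"))   # e.g. (0, 3)
```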

On Duality in Learning and the Selection of Learning Teams

1996

Previous work in inductive inference dealt mostly with finding one or several machines (IIMs) that successfully learn collections of functions. Herein we start with a class of functions and consider the learner set of all IIMs that are successful at learning the given class. Applying this perspective to the case of team inference leads to the notion of diversification for a class of functions. This enables us to distinguish between several flavours of IIMs, all of which must be represented in a team learning the given class.

Subjects: inductive reasoning; team learning; duality; inference; machine learning
Source: Information and Computation

Probabilistic versus deterministic memory limited learning

1995

Subjects: theoretical computer science; probabilistic logic; recursive functions; short-term memory

Quantum Finite State Transducers

2001

We introduce quantum finite state transducers (qfst), and study the class of relations which they compute. It turns out that they share many features with probabilistic finite state transducers, especially regarding undecidability of emptiness (at least for low probability of success). However, like their 'little brothers', the quantum finite automata, the power of qfst is incomparable to that of their probabilistic counterpart. This we show by discussing a number of characteristic examples.

Subjects: discrete mathematics; finite-state machines; quantum finite automata; probabilistic logic; formal languages and automata theory; quantum computing

Kolmogorov numberings and minimal identification

1995

Identification of programs for computable functions from their graphs by algorithmic devices is a well studied problem in learning theory. Freivalds and Chen consider identification of ‘minimal’ and ‘nearly minimal’ programs for functions from their graphs. To address certain problems in minimal identification for Godel numberings, Freivalds later considered minimal identification in Kolmogorov numberings. Kolmogorov numberings are in some sense optimal numberings and have some nice properties. We prove certain hierarchy results for minimal identification in every Kolmogorov numbering. In addition we also compare minimal identification in Godel numbering versus minimal identification in Kol…

Subjects: discrete mathematics; computable functions; Gödel numberings; recursive functions; inductive reasoning; numberings

Memory limited inductive inference machines

1992

The traditional model of learning in the limit is restricted so as to allow the learning machines only a fixed, finite amount of memory to store the input and other data. A class of recursive functions is presented that cannot be learned deterministically by any such machine, but can be learned by a memory limited probabilistic learning machine with probability 1.

Subjects: machine learning; probabilistic logic; recursive functions; learning in the limit; inductive reasoning

On the intrinsic complexity of learning

1995

A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult to learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks.

Subjects: theoretical computer science; reduction (complexity); completeness; recursive functions; recursive operators; inductive reasoning

Effects of Kolmogorov complexity present in inductive inference as well

1997

For all complexity measures in Kolmogorov complexity the effect discovered by P. Martin-Lof holds. For every infinite binary sequence there is a wide gap between the supremum and the infimum of the complexity of initial fragments of the sequence. It is assumed that this inevitable gap is characteristic of Kolmogorov complexity, and that it is caused by the highly abstract nature of the unrestricted Kolmogorov complexity.

Subjects: Kolmogorov complexity; structural complexity theory; descriptive complexity theory; discrete mathematics

Towards Axiomatic Basis of Inductive Inference

2001

The language for the formulation of the interesting statements is, of course, most important. We use first order predicate logic. Our main achievement in this paper is an axiom system which we believe to be more powerful than any other natural general purpose discovery axiom system. We prove the soundness of this axiom system in this paper. Additionally, we prove that if we remove some of the requirements used in our axiom system, the system is no longer sound. We characterize the complexity of the quantifier prefix which guarantees provability of a true formula via our system. We also prove that if a true formula contains only monadic predicates, our axiom system is capable of proving this formula…

Subjects: predicate logic; soundness; first-order logic; axiom systems; monadic predicates; inductive reasoning

Closedness Properties in EX-Identification of Recursive Functions

1998

In this paper we investigate in which cases unions of identifiable classes of recursive functions are also necessarily identifiable. We consider identification in the limit with bounds on mindchanges and anomalies. Though not closed under set union, these identification types still have features resembling closedness. For each of them we find an n such that 1) if every union of n-1 classes out of U1, ..., Un is identifiable, so is the union of all n classes; 2) there are classes U1, ..., Un-1 such that every union of n-2 classes out of them is identifiable, while the union of all n-1 classes is not. We show that by finding these n we can distinguish which requirements put on the iden…

Subjects: discrete mathematics; identification in the limit; identifiability; inference; inductive reasoning

On the Amount of Nonconstructivity in Learning Recursive Functions

2011

Nonconstructive proofs are a powerful mechanism in mathematics. Furthermore, nonconstructive computations by various types of machines and automata have been considered by, e.g., Karp and Lipton [17] and Freivalds [11]. They allow one to regard more complicated algorithms from the viewpoint of much more primitive computational devices. The amount of nonconstructivity is a quantitative characterization of the distance between types of computational devices with respect to solving a specific problem. In the present paper, the amount of nonconstructivity in learning of recursive functions is studied. Different learning types are compared with respect to the amount of nonconstructivity needed to lea…

Subjects: recursive functions; primitive recursive functions; inductive reasoning; mathematical proofs; upper and lower bounds

Minimal nontrivial space complexity of probabilistic one-way Turing machines

2005

Languages recognizable in o(log log n) space by probabilistic one-way Turing machines are proved to be regular. This solves an open problem in [4].

Subjects: discrete mathematics; probabilistic Turing machines; space complexity; formal languages and automata theory

Nonstochastic languages as projections of 2-tape quasideterministic languages

1998

A language L^(n) of n-tuples of words which is recognized by an n-tape rational finite-probabilistic automaton with probability 1-ε, for arbitrary ε > 0, is called quasideterministic. It is proved in [Fr 81] that each rational stochastic language is a projection of a quasideterministic language L^(n) of n-tuples of words. Had projections of quasideterministic languages on one tape always been rational stochastic languages, we would have a good characterization of the class of rational stochastic languages. However, we prove the opposite in this paper: a two-tape quasideterministic language exists, the projection of which on the first tape is a nonstochastic language.

Subjects: finite-state machines; probabilistic automata; regular languages; projections; deterministic automata

Category, Measure, Inductive Inference: A Triality Theorem and Its Applications

2002

The famous Sierpinski-Erdos Duality Theorem [Sie34b, Erd43] states, informally, that any theorem about effective measure 0 and/or first category sets is also true when all occurrences of "effective measure 0" are replaced by "first category" and vice versa. This powerful and nice result shows that "measure" and "category" are equally useful notions, neither of which can be preferred to the other when making formal the intuitive notion "almost all sets." Effective versions of measure and category are used in recursive function theory and related areas, and resource-bounded versions of the same notions are used in the Theory of Computation. Again they are dual in the same sense. We show that in…

Subjects: discrete mathematics; measure and category; duality; inductive inference

Quantum Finite Multitape Automata

1999

Quantum finite automata were introduced by C. Moore and J. P. Crutchfield [4], and by A. Kondacs and J. Watrous [3]. This notion is not a generalization of the deterministic finite automata. Moreover, in [3] it was proved that not all regular languages can be recognized by quantum finite automata. A. Ambainis and R. Freivalds [1] proved that for some languages quantum finite automata may be exponentially more concise than both deterministic and probabilistic finite automata. In this paper we introduce the notion of quantum finite multitape automata and prove that there is a language recognized by a quantum finite automaton but not by deterministic or probabilistic finite automata. This …

Subjects: discrete mathematics; quantum finite automata; probabilistic finite automata; deterministic finite automata; regular languages; automata theory

Learning small programs with additional information

1997

This paper was inspired by [FBW 94]. An arbitrary upper bound on the size of some program for the target function suffices for the learning of some program for this function. In [FBW 94] it was discovered that if “learning” is understood as “identification in the limit,” then in some programming languages it is possible to learn a program of size not exceeding the bound, while in some other programming languages this is not possible.

Subjects: theoretical computer science; inductive reasoning; identification in the limit; upper and lower bounds; recursive functions; machine learning

How to simulate free will in a computational device

1999

Since we believe that the human brain is not a purely deterministic device merely reacting to the environment but rather is capable of free will, Theoretical Computer Science has also tried to develop a system of notions generalizing determinism. Nondeterministic and probabilistic algorithms were the first generalizations. Nondeterministic machines constitute an important part of the Theory of Computation. Nondeterminism is a useful way to describe possible choices. In real life there are many regulations restricting our behavior. These regulations nearly always leave us some freedom in how to react. Such regulations are best described in terms of nondeterministic algorithms. Nondetermini…

Subjects: theoretical computer science; nondeterministic algorithms; probabilistic algorithms; determinism; theory of computation; Turing machines
Source: ACM Computing Surveys

Team learning as a game

1997

A FIN-learning machine M receives successive values of the function f it is learning; at some point M outputs a conjecture which should be a correct index of f. When n machines simultaneously learn the same function f and at least k of these machines output correct indices of f, we have team learning [k,n]FIN. Papers [DKV92, DK96] show that sometimes a team or a probabilistic learner can simulate another one, if its probability p (or team success ratio k/n) is close enough. On the other hand, there are critical ratios which make simulation of FIN(p2) by FIN(p1) impossible whenever p2 ≤ r < p1 for some critical ratio r. According to [DKV92] the critical ratio closest to 1/2 from the left is…

Subjects: discrete mathematics; team learning; critical ratios; algorithm complexity

Transformations that preserve learnability

1996

We consider transformations (performed by general recursive operators) mapping recursive functions into recursive functions. These transformations can be considered as mapping sets of recursive functions into sets of recursive functions. A transformation is said to preserve the identification type I if the transformation always maps I-identifiable sets into I-identifiable sets.

Subjects: learnability; recursive functions; identification types; inductive reasoning; transformations

Learning with confidence

1996

Herein we investigate learning in the limit where confidence in the current conjecture accrues with time. Confidence levels are given by rational numbers between 0 and 1. The traditional requirement for learning in the limit is that a device must converge (in the limit) to a correct answer. We further demand that the associated confidence in the answer (monotonically) approach 1 in the limit. In addition to being a more realistic model of learning, our new notion turns out to be more powerful as well. In addition, we give precise characterizations of the classes of functions that are learnable in our new model(s).

Subjects: discrete mathematics; rational numbers; recursive functions; monotonicity; learning in the limit; inductive reasoning
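
Not part of the record; a toy sketch in the spirit described above, under the simplifying assumption of identification by enumeration over a finite hypothesis list, emitting a rational confidence that approaches 1 while the conjecture stays unchanged. (In the model of the paper the confidence must approach 1 monotonically overall; in this toy it only grows after the last mindchange.)

```python
# Toy "learning with confidence": conjecture plus rational confidence level.
from fractions import Fraction

def confident_learner(hypotheses, data_stream):
    """Yield (index of current conjecture, confidence) after each example."""
    seen, prev, streak = [], None, 0
    for example in data_stream:
        seen.append(example)
        # identification by enumeration: first hypothesis consistent with all data
        guess = next(i for i, h in enumerate(hypotheses)
                     if all(h(x) == y for x, y in seen))
        streak = streak + 1 if guess == prev else 1
        prev = guess
        yield guess, Fraction(streak, streak + 1)   # 1/2, 2/3, 3/4, ... -> 1

if __name__ == "__main__":
    hypotheses = [lambda x: 0, lambda x: x % 2, lambda x: x]
    target = hypotheses[1]                          # x mod 2, unknown to the learner
    data = ((x, target(x)) for x in range(6))
    for guess, conf in confident_learner(hypotheses, data):
        print(guess, conf)
```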

Dual types of hypotheses in inductive inference

2006

Several well-known inductive inference strategies change the actual hypothesis only when they discover that it “provably misclassifies” an example seen so far. This notion is made mathematically precise and its general power is characterized. In spite of its strength, it is shown that this approach is not of “universal” power. Consequently, hypotheses are considered which “unprovably misclassify” examples, and the properties of this approach are studied. Among others, it turns out that this type is of the same power as monotonic identification. Finally, it is shown that “universal” power can be achieved only when an unbounded number of alternations of these dual types of hypotheses is all…

Subjects: identification; recursive functions; monotonic identification; dual hypothesis types; inductive reasoning

Inductive inference of recursive functions: Qualitative theory

2005

This survey contains both old and very recent results in non-quantitative aspects of inductive inference of total recursive functions. The survey is not complete. The paper was written to stress some of the main results in selected directions of research performed at the University of Latvia rather than to exhaust all of the obtained results. We concentrated on the more explored areas such as the inference of indices in non-Goedel computable numberings, the inference of minimal Goedel numbers, and the specifics of inference of minimal indices in Kolmogorov numberings.

Subjects: theoretical computer science; inductive inference; recursive functions; Goedel numberings; qualitative theory

Unions of identifiable classes of total recursive functions

1992

J. Barzdin [Bar74] has proved that there are classes of total recursive functions which are EX-identifiable but whose union is not. We prove that there are no 3 classes U1, U2, U3 such that U1∪U2, U1∪U3 and U2∪U3 would be in EX but U1∪U2∪U3 ∉ EX. For FIN-identification there are 3 classes with the above-mentioned property, and there are no 4 classes U1, U2, U3, U4 such that all 4 unions of triples of these classes would be identifiable but the union of all 4 classes would not. For identification with no more than p mindchanges a (2^(p+2)−1)-tuple of such classes does exist, but there is no (2^(p+2))-tuple with the above-mentioned property.

Subjects: combinatorics; recursive functions; identification; unions of classes

An inductive inference approach to classification

1994

In this paper we introduce a formal framework for investigating the relationship of inductive inference and the task of classification. We give the first results on the relationship between functions that can be identified in the limit and functions that can be acquired from unclassified objects only. Moreover, we present results on the complexity of classification functions and the preconditions necessary in order to allow the computation of such functions.

Subjects: inductive reasoning; classification; machine learning; artificial intelligence
Source: Journal of Experimental & Theoretical Artificial Intelligence

On the power of inductive inference from good examples

1993

The usual information in inductive inference available for the purposes of identifying an unknown recursive function f is the set of all input/output examples (x, f(x)), x ∈ ℕ. In contrast to this approach, we show that it is considerably more powerful to work with finite sets of “good” examples, even when these good examples are required to be effectively computable. The influence of the underlying numberings, with respect to which the identification has to be realized, on the capabilities of inference from good examples is also investigated. It turns out that nonstandard numberings can be much more powerful than Godel numberings.

Subjects: teaching dimension; inductive reasoning; inference; finite sets; identification
Source: Theoretical Computer Science

Error detecting in inductive inference

1995

Several well-known inductive inference strategies change the actual hypothesis only when they discover that it “provably misclassifies” an example seen so far. This notion is made mathematically precise and its general power is characterized. In spite of its strength, it is shown that this approach is not of universal power. Consequently, hypotheses are considered which “unprovably misclassify” examples, and the properties of this approach are studied. Among others, it turns out that this type is of the same power as monotonic identification. Then it is shown that universal power can be achieved only when an unbounded number of alternations of these dual types of hypotheses is allowed. Fi…

Subjects: identification; recursive functions; monotonic identification; error detection; inductive reasoning

Quantum Computation With Devices Whose Contents Are Never Read

2010

In classical computation, a "write-only memory" (WOM) is little more than an oxymoron, and the addition of WOM to a (deterministic or probabilistic) classical computer brings no advantage. We prove that quantum computers that are augmented with WOM can solve problems that neither a classical computer with WOM nor a quantum computer without WOM can solve, when all other resource bounds are equal. We focus on realtime quantum finite automata, and examine the increase in their power effected by the addition of WOMs with different access modes and capacities. Some problems that are unsolvable by two-way probabilistic Turing machines using sublogarithmic amounts of read/write memory are shown to…

Subjects: quantum computing; quantum Turing machines; formal languages and automata theory; computational complexity; quantum algorithms; quantum information

On the duality between mechanistic learners and what it is they learn

1993

All previous work in inductive inference and theoretical machine learning has taken the perspective of looking for a learning algorithm that successfully learns a collection of functions. In this work, we consider the perspective of starting with a set of functions, and considering the collection of learning algorithms that are successful at learning the given functions. Some strong dualities are revealed.

Subjects: machine learning; duality; recursive functions; inductive reasoning

Kolmogorov numberings and minimal identification

1997

Identification of programs for computable functions from their graphs by algorithmic devices is a well studied problem in learning theory. Freivalds and Chen consider identification of ‘minimal’ and ‘nearly minimal’ programs for functions from their graphs. To address certain problems in minimal identification for Godel numberings, Freivalds later considered minimal identification in Kolmogorov numberings. Kolmogorov numberings are in some sense optimal numberings and have some nice properties. We prove certain separation results for minimal identification in every Kolmogorov numbering. In addition we also compare minimal identification in Godel numberings versus minimal identifica…

Subjects: discrete mathematics; combinatorics; computable functions; numberings; identification
Source: Theoretical Computer Science

Unions of identifiable families of languages

1996

This paper deals with the satisfiability of requirements put on the identifiability of unions of language families. We consider identification in the limit from a text with bounds on mindchanges and anomalies. We show that, though these identification types are not closed under set union, some of them still have features that resemble closedness. To formalize this, we generalize the notion of closedness. Then, by establishing “how closed” these identification types are, we solve the satisfiability problem.

Subjects: discrete mathematics; identifiability; language families; identification in the limit; satisfiability

Complexity of probabilistic versus deterministic automata

2005

Subjects: theoretical computer science; finite-state machines; deterministic automata; automata theory; quantum finite automata; probabilistic analysis of algorithms

Tally languages accepted by Monte Carlo pushdown automata

1997

Rather often, difficult (and sometimes even undecidable) problems become easily decidable for tally languages, i.e. for languages over a single-letter alphabet. For instance, the class of languages recognizable by 1-way nondeterministic pushdown automata equals the class of context-free languages, but the class of tally languages recognizable by 1-way nondeterministic pushdown automata contains only regular languages [LP81]. We prove that languages over a one-letter alphabet accepted by randomized one-way 1-tape Monte Carlo pushdown automata are regular. However, Monte Carlo pushdown automata can be much more concise than deterministic 1-way finite state automata.

Subjects: pushdown automata; regular languages; context-free languages; tally languages; finite-state machines; formal languages and automata theory

Inductive inference of recursive functions: Complexity bounds

2005

This survey includes principal results on the complexity of inductive inference for recursively enumerable classes of total recursive functions. Inductive inference is a process of finding an algorithm from sample computations. In the case when the given class of functions is recursively enumerable, it is easy to define a natural complexity measure for the inductive inference, namely, the worst-case mindchange number for the first n functions in the given class. Surely, the complexity depends not only on the class, but also on the numbering, i.e. which function is the first, which one is the second, etc. It turns out that, if the result of inference is a Goedel number, then the complexity of inference ma…

Subjects: recursively enumerable classes; worst-case complexity; inference; numberings; structural complexity theory
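
Not part of the record; a toy sketch of the complexity measure described above (worst-case mindchange number on the first n functions of an enumerated class), under the simplifying assumption that the class is given as a finite list of total functions and the learner identifies by enumeration.

```python
# Worst-case mindchange number of the identification-by-enumeration learner
# on the first n functions of an explicitly listed class (toy illustration).
def mindchanges(cls, target_index, num_examples=12):
    """Count conjecture changes while learning cls[target_index] from its values."""
    target = cls[target_index]
    seen, prev, changes = [], None, 0
    for x in range(num_examples):
        seen.append((x, target(x)))
        guess = next(i for i, h in enumerate(cls)
                     if all(h(v) == y for v, y in seen))
        changes += (prev is not None and guess != prev)
        prev = guess
    return changes

def worst_case(cls, n):
    """Worst-case mindchange number over the first n functions of the class."""
    return max(mindchanges(cls, i) for i in range(n))

if __name__ == "__main__":
    # Illustrative enumerated class: step functions f_i(x) = 1 iff x >= i.
    cls = [lambda x, i=i: 1 if x >= i else 0 for i in range(8)]
    print([worst_case(cls, n) for n in range(1, 9)])   # [0, 0, 1, 2, 3, 4, 5, 6]
```

Here the worst case grows roughly linearly in n; the survey's point is how such bounds depend on the class and on the numbering used.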

Knot Theory, Jones Polynomial and Quantum Computing

2005

Knot theory emerged in the nineteenth century for the needs of physics and chemistry as these needs were understood in those days. After that, the interest of physicists and chemists was lost for about a century. Nowadays knot theory has made a comeback. Knot theory and other areas of topology are no longer considered abstract areas of classical mathematics remote from anything of practical interest. They have made a deep impact on quantum field theory, quantum computation and the complexity of computation.

Subjects: knot theory; Jones polynomial; quantum computing; quantum field theory; classical mathematics; topology

Quantum versus Probabilistic One-Way Finite Automata with Counter

2001

The paper adds the one-counter one-way finite automaton [6] to the list of classical computing devices having quantum counterparts more powerful in some cases. Specifically, two languages are considered: the first is not recognizable by deterministic one-counter one-way finite automata, and the second is not recognizable with bounded error by probabilistic one-counter one-way finite automata, but each is recognizable with bounded error by a quantum one-counter one-way finite automaton. This result contrasts with the case of one-way finite automata without a counter, where it is known [5] that the quantum device is actually less powerful than its classical counterpart.

Subjects: quantum finite automata; one-counter automata; probabilistic automata; deterministic automata; automata theory

Quantum finite multitape automata

1999

Quantum finite automata were introduced by C. Moore and J. P. Crutchfield, and by A. Kondacs and J. Watrous. This notion is not a generalization of the deterministic finite automata. Moreover, it was proved that not all regular languages can be recognized by quantum finite automata. A. Ambainis and R. Freivalds proved that for some languages quantum finite automata may be exponentially more concise than both deterministic and probabilistic finite automata. In this paper we introduce the notion of quantum finite multitape automata and prove that there is a language recognized by a quantum finite automaton but not by deterministic or probabilistic finite automata. This is the first result on …

Subjects: quantum physics; formal languages and automata theory; computational complexity

On computation in the limit by non-deterministic Turing machines

1974

Subjects: Turing machines; non-deterministic computation; computation in the limit; limit computations; mathematics

Inductive inference of recursive functions: complexity bounds

1991

This survey includes principal results on the complexity of inductive inference for recursively enumerable classes of total recursive functions. Inductive inference is a process of finding an algorithm from sample computations. In the case when the given class of functions is recursively enumerable, it is easy to define a natural complexity measure for the inductive inference, namely, the worst-case mindchange number for the first n functions in the given class. Surely, the complexity depends not only on the class, but also on the numbering, i.e. which function is the first, which one is the second, etc. It turns out that, if the result of inference is a Goedel number, then the complexity of inference ma…

Subjects: inductive inference; complexity bounds; prediction; deterministic and probabilistic learning; mathematical logic and formal languages