Search results for "Inductive reasoning"

Showing 10 of 54 documents

Learning with confidence

1996

Herein we investigate learning in the limit where confidence in the current conjecture accrues with time. Confidence levels are given by rational numbers between 0 and 1. The traditional requirement for learning in the limit is that a device must converge (in the limit) to a correct answer. We further demand that the associated confidence in the answer (monotonically) approach 1 in the limit. In addition to being a more realistic model of learning, our new notion turns out to be more powerful as well. We also give precise characterizations of the classes of functions that are learnable in our new model(s).
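
A minimal sketch of the setting in Python, assuming a toy class of constant functions; the learner outputs a conjecture together with a rational confidence that rises monotonically toward 1 (an illustration only, not the paper's construction):

# Toy learner for the constant functions f_c(x) = c: it identifies c in the
# limit and reports a confidence in [0, 1) that monotonically approaches 1.
from fractions import Fraction

def learner(prefix):
    """Given the values f(0), ..., f(n), return (hypothesis, confidence)."""
    if not prefix:
        return None, Fraction(0)
    c = prefix[-1]                                   # conjecture: "f is constantly c"
    consistent = sum(1 for v in prefix if v == c)    # examples the conjecture explains
    return c, Fraction(consistent, consistent + 1)   # n/(n+1) -> 1 in the limit

# Example: the target is the constant-7 function.
for n in range(1, 7):
    hyp, conf = learner([7] * n)
    print(f"after {n} values: hypothesis = {hyp}, confidence = {conf}")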

Discrete mathematics, Rational number, Conjecture, Current (mathematics), Recursive functions, Monotonic function, Limit (mathematics), Inductive reasoning, Mathematics

Probabilistic and team PFIN-type learning: General properties

2008

We consider the probability hierarchy for Popperian FINite learning and study the general properties of this hierarchy. We prove that the probability hierarchy is decidable, i.e. there exists an algorithm that receives p_1 and p_2 and answers whether PFIN-type learning with the probability of success p_1 is equivalent to PFIN-type learning with the probability of success p_2. To prove our result, we analyze the topological structure of the probability hierarchy. We prove that it is well-ordered in descending order and order-equivalent to the ordinal epsilon_0. This shows that the structure of the hierarchy is very complicated. Using similar methods, we also prove that, for PFIN-type learning…

FOS: Computer and information sciences, Machine Learning (cs.LG), Computer Science - Learning, Theoretical computer science, Computer Networks and Communications, Existential quantification, Structure (category theory), Decidability, Type (model theory), Learning in the limit, Probability of success, Finite limits, Mathematics, Ordinals, Discrete mathematics, Hierarchy, Applied Mathematics, Algorithmic learning theory, Probabilistic logic, F.1.1; I.2.6, Inductive inference, Inductive reasoning, Team learning, Computational Theory and Mathematics, Artificial intelligence, Journal of Computer and System Sciences

Quantum inductive inference by finite automata

2008

Freivalds and Smith [R. Freivalds, C.H. Smith, Memory limited inductive inference machines, Springer Lecture Notes in Computer Science 621 (1992) 19–29] proved that probabilistic limited memory inductive inference machines can learn with probability 1 certain classes of total recursive functions which cannot be learned by deterministic limited memory inductive inference machines. We introduce quantum limited memory inductive inference machines as quantum finite automata acting as inductive inference machines. We show that these machines can learn classes of total recursive functions not learnable by any deterministic, nor even by any probabilistic, limited memory inductive inference machin…

Finite-state machine, General Computer Science, Probabilistic logic, Inductive inference, Inductive reasoning, Automata, Automaton, Theory of Computation: Mathematical Logic and Formal Languages, Quantum computation, Learning, Quantum finite automata, Probability distribution, Artificial intelligence, Quantum, Computer Science (all), Quantum computer, Mathematics, Theoretical Computer Science

On the relative sizes of learnable sets

1998

Measure and category (or rather, their recursion-theoretic counterparts) have been used in theoretical computer science to make precise the intuitive notion "for most of the recursive sets". We use the notions of effective measure and category to discuss the relative sizes of inferable sets and their complements. We find that inferable sets become large rather quickly in the standard hierarchies of learnability. On the other hand, the complements of the learnable sets are all large.

General Computer Science, Machine learning, Measure (mathematics), Theoretical Computer Science, Turing machine, Mathematics, Binary tree, Learnability, Inductive inference, Category, Inductive reasoning, Measure, Abstract machine, Artificial intelligence, Computer Science (all)

Frequency Prediction of Functions

2012

Prediction of functions is one of the processes considered in inductive inference. There is a "black box" with a given total function f in it. The inductive inference machine F receives the values f(0), f(1), …, f(n), and the result F(f(0), …, f(n)) is expected to be f(n+1). Deterministic and probabilistic prediction of functions has been widely studied. Frequency computation is a mechanism used to combine features of deterministic and probabilistic algorithms. Frequency computation has been used for several types of inductive inference, especially for learning via queries. We study frequency prediction of functions and show that there exists an interesting hierarchy of predictable classes of functions.
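
The prediction protocol itself is easy to make concrete. A minimal Python sketch, assuming the target class is polynomials of small degree: the machine receives f(0), …, f(n) and guesses f(n+1) by finite differences. This illustrates only the deterministic prediction setup, not the frequency mechanism (roughly, answering several prediction tasks in parallel with a guaranteed fraction of correct answers) studied in the paper:

# Toy next-value predictor: for a polynomial f of small degree, guess f(n+1)
# from the prefix f(0), ..., f(n) via Newton forward differences.

def predict_next(prefix):
    """Return a guess for f(n+1), given prefix = [f(0), ..., f(n)]."""
    row, guess = list(prefix), 0
    while row:
        guess += row[-1]                             # last entry of each difference row
        row = [b - a for a, b in zip(row, row[1:])]  # next difference row
    return guess

# Example: f(x) = x^2 + 1 gives the prefix 1, 2, 5, 10, 17, and f(5) = 26.
print(predict_next([1, 2, 5, 10, 17]))               # -> 26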

Hierarchy (mathematics), Computation, Existential quantification, Black box, Probabilistic logic, Probabilistic analysis of algorithms, Inductive reasoning, Algorithm, Mathematics, Randomized algorithm

Error detecting in inductive inference

1995

Several well-known inductive inference strategies change the actual hypothesis only when they discover that it "provably misclassifies" an example seen so far. This notion is made mathematically precise and its general power is characterized. In spite of its strength, it is shown that this approach is not of universal power. Consequently, hypotheses are then considered which "unprovably misclassify" examples, and the properties of this approach are studied. Among other results, it turns out that this type is of the same power as monotonic identification. Then it is shown that universal power can be achieved only when an unbounded number of alternations of these dual types of hypotheses is allowed. Fi…
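
The "change only on a provable misclassification" behaviour can be pictured with the classical identification-by-enumeration scheme. A minimal Python sketch, assuming a small hand-picked hypothesis enumeration; it illustrates only the conservative mind-change discipline, not the paper's characterization of its power:

# Conservative learner: the current conjecture is kept until it demonstrably
# misclassifies an observed example, and only then does the learner advance to
# the next hypothesis in a fixed enumeration that fits all data seen so far.

HYPOTHESES = [            # a (toy) enumeration h_0, h_1, ... of candidate programs
    lambda x: 0,
    lambda x: x,
    lambda x: x * x,
    lambda x: 2 * x + 1,
]

def learn(samples):
    """samples: (x, f(x)) pairs presented one at a time; returns the final index."""
    seen, current = [], 0
    for x, y in samples:
        seen.append((x, y))
        if any(HYPOTHESES[current](a) != b for a, b in seen):   # provable misclassification
            while any(HYPOTHESES[current](a) != b for a, b in seen):
                current += 1                                    # next consistent hypothesis
            print(f"mind change after {(x, y)}: new conjecture h_{current}")
    return current

# Example: the target function is f(x) = 2x + 1.
learn([(0, 1), (1, 3), (2, 5)])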

Identification (information), Computer science, Spite, Recursive functions, Monotonic function, Inductive reasoning, Type (model theory), Algorithm, Dual (category theory), Power (physics)

INDUCTIVE INFERENCE OF LIMITING PROGRAMS WITH BOUNDED NUMBER OF MIND CHANGES

1996

We consider inductive inference of total recursive functions in the case when the produced hypotheses are allowed to change "their mind" some finite number of times about each value of the identifiable function. Such a type of identification, which we call inductive inference of limiting programs with a bounded number of mind changes, lies by its power somewhere between the traditional criteria of inductive inference and the recently introduced inference of limiting programs. We consider such a model of inductive inference for the EX and BC types of identification, and we study • tradeoffs between the number of allowed mind changes and the number of anomalies, and • relations between classes of functions ident…

Identification (information), Theoretical computer science, Bounded function, Computer Science (miscellaneous), Fiducial inference, Probabilistic logic, Inference, Function (mathematics), Inductive reasoning, Finite set, Algorithm, Mathematics, International Journal of Foundations of Computer Science

Dual types of hypotheses in inductive inference

2006

Several well-known inductive inference strategies change the actual hypothesis only when they discover that it "provably misclassifies" an example seen so far. This notion is made mathematically precise and its general power is characterized. In spite of its strength, it is shown that this approach is not of "universal" power. Consequently, hypotheses are then considered which "unprovably misclassify" examples, and the properties of this approach are studied. Among other results, it turns out that this type is of the same power as monotonic identification. Finally, it is shown that "universal" power can be achieved only when an unbounded number of alternations of these dual types of hypotheses is all…

Identification (information), Theoretical computer science, Computer science, Recursive functions, Spite, Monotonic function, Inductive reasoning, Type (model theory), Dual (category theory), Power (physics)

Towards efficient inductive synthesis of expressions from input/output examples

1993

Our goal over several years has been the development of an efficient search algorithm for inductive inference of expressions using only input/output examples. The idea is to avoid exhaustive search by taking full advantage of the semantic equality of many of the considered expressions. This may be how people avoid an excessively large search when finding proof strategies for theorems, etc. As a formal model for the development of the method we use arithmetic expressions over the domain of natural numbers. A new approach of using weights associated with the functional symbols to restrict the search space is considered. This allows adding constraints such as the frequency of particular symbols in t…
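
The pruning idea (treat two candidate expressions as interchangeable when they evaluate identically on all given inputs) can be sketched as a bottom-up enumerator. A minimal Python illustration over natural-number arithmetic with + and *; the symbol weights and the search strategy developed in the paper are not modelled here:

# Bottom-up synthesis of an arithmetic expression from input/output examples,
# keeping only one representative per class of semantically equal candidates
# (candidates with identical output vectors on the given inputs).
from itertools import product

def synthesize(examples, max_rounds=4):
    """examples: (x, y) pairs; returns an expression string in x, or None."""
    inputs = tuple(x for x, _ in examples)
    target = tuple(y for _, y in examples)
    pool = {}                             # output vector -> first expression found for it

    def add(sig, expr):
        if sig not in pool:               # prune semantically equal duplicates
            pool[sig] = expr

    add(inputs, "x")
    for c in (0, 1, 2):
        add((c,) * len(inputs), str(c))

    for _ in range(max_rounds):
        for (s1, e1), (s2, e2) in product(list(pool.items()), repeat=2):
            add(tuple(a + b for a, b in zip(s1, s2)), f"({e1} + {e2})")
            add(tuple(a * b for a, b in zip(s1, s2)), f"({e1} * {e2})")
        if target in pool:
            return pool[target]
    return None

# Example: the examples are consistent with x*x + 1.
print(synthesize([(0, 1), (1, 2), (2, 5), (3, 10)]))   # prints an expression equal to x*x + 1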

Input/output, Quadratic equation, Theoretical computer science, Search algorithm, Beam search, Brute-force search, Inductive reasoning, Computer experiment, Algorithm, Expression (mathematics), Mathematics

Majority and minority influence in inductive reasoning: A preliminary study

1991

Ninety-three students were exposed to majority and minority influence in an inductive reasoning task. The former induced convergent thinking processes, though its effects were not reducible to mere compliance. The latter activated more divergent constructive processes, supporting the predictions of Conversion Theory.

Interpersonal relationship, Social Psychology, Convergent thinking, Cognition, Minority influence, Inductive reasoning, Psychology, Social psychology, Constructive, Social influence, Compliance (psychology), European Journal of Social Psychology