Search results for "Learning automata"

Showing 10 of 76 documents

Achieving Fair Load Balancing by Invoking a Learning Automata-Based Two-Time-Scale Separation Paradigm.

2020

Author's accepted manuscript. © 2020 IEEE. In this article, we consider the problem of load balancing (LB), but, unlike the approaches that have been proposed earlier, we attempt to resolve the problem in a fair manner (or rather, it would probably be more appropriate to describe it as an ε-fair manner because, although the LB…

Keywords: Mathematical optimization; Learning automata; Computer Networks and Communications; Stochastic process; Computer science; Quality of service; Resource allocation; Cloud computing; Load balancing (computing); Continuous learning automata; Computer Science Applications; Artificial Intelligence; Server; Fair load balancing; Software; VDP::Technology: 500::Information and communication technology: 550. Published in: IEEE Transactions on Neural Networks and Learning Systems.
researchProduct

The design of absorbing Bayesian pursuit algorithms and the formal analyses of their ε-optimality

2016

The fundamental phenomenon that has been used to enhance the convergence speed of learning automata (LA) is that of incorporating the running maximum likelihood (ML) estimates of the action reward probabilities into the probability updating rules for selecting the actions. The frontiers of this field have been recently expanded by replacing the ML estimates with their corresponding Bayesian counterparts that incorporate the properties of the conjugate priors. These constitute the Bayesian pursuit algorithm (BPA), and the discretized Bayesian pursuit algorithm. Although these algorithms have been designed and efficiently implemented, and are, arguably, the fastest and most accurate LA report…

Keywords: Mathematical optimization; Learning automata; Discretization; Bayesian probability; Mathematical proof; Conjugate prior; Field (computer science); Artificial Intelligence; Convergence (routing); Computer Vision and Pattern Recognition; Artificial intelligence; Beta distribution; Mathematics.
researchProduct

Learning Automata-Based Solutions to Stochastic Nonlinear Resource Allocation Problems

2009

“Computational Intelligence” is an extremely wide-ranging and all-encompassing area. However, it is fair to say that the strength of a system that possesses “Computational Intelligence” can be quantified by its ability to solve problems that are intrinsically hard. One such class of NP-Hard problems concerns the so-called family of Knapsack Problems, and in this Chapter, we shall explain how a sub-field of Artificial Intelligence, namely that which involves “Learning Automata”, can be used to produce fast and accurate solutions to “difficult” and randomized versions of the Knapsack problem (KP).

Keywords: Mathematical optimization; Nonlinear system; Class (computer programming); Learning automata; Knapsack problem; Continuous knapsack problem; Resource allocation; Stochastic optimization; Computational intelligence; Mathematics.
researchProduct

A novel technique for stochastic root-finding: Enhancing the search with adaptive d-ary search

2017

The most fundamental problem encountered in the field of stochastic optimization is the Stochastic Root Finding (SRF) problem, where the task is to locate an unknown point x∗ for which g(x∗) = 0 for a given function g that can only be observed in the presence of noise [15]. The vast majority of the state-of-the-art solutions to the SRF problem involve the theory of stochastic approximation. The premise of the latter family of algorithms is to operate by means of so-called “small-step” processes that explore the search space in a conservative manner. Using this paradigm, the point investigated at any time instant is in the proximity of the point investigated at the previous time instant, render…
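The "small-step" stochastic-approximation idea described in this abstract can be illustrated with a minimal Robbins-Monro iteration — a classical baseline, not the adaptive d-ary search scheme the paper itself proposes; the function names and constants below are purely illustrative:

```python
import random

def robbins_monro(g_noisy, x0, n_steps=5000, a=1.0):
    """Classic Robbins-Monro iteration for stochastic root finding.

    Seeks x* with g(x*) = 0 using only noisy observations of g,
    taking "small steps" whose size decays like a/n.
    """
    x = x0
    for n in range(1, n_steps + 1):
        step = a / n                  # diminishing step size
        x = x - step * g_noisy(x)     # move against the noisy observation
    return x

# Toy example: g(x) = x - 2 observed with additive Gaussian noise, so x* = 2.
random.seed(42)
noisy_g = lambda x: (x - 2.0) + random.gauss(0.0, 0.1)
estimate = robbins_monro(noisy_g, x0=0.0)
```

Each iterate stays close to the previous one, which is exactly the conservative "small-step" behaviour the abstract contrasts with faster interval-halving-style searches.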

Keywords: Mathematical optimization; Stochastic point location problems; Information Systems and Management; Learning automata; Computer science; Stochastic root finding problems; Interval (mathematics); Function (mathematics); Stochastic approximation; Computer Science Applications; Theoretical Computer Science; Artificial Intelligence; Control and Systems Engineering; Search problem; Stochastic optimization; Root-finding algorithm; Algorithm; Software. Published in: Information Sciences.
researchProduct

The Power of the “Pursuit” Learning Paradigm in the Partitioning of Data

2019

Traditional Learning Automata (LA) work with the understanding that the actions are chosen purely based on the “state” in which the machine is. This modus operandi completely ignores any estimation of the Random Environment’s (RE’s) (specified as \(\mathbb {E}\)) reward/penalty probabilities. To take these into consideration, Estimator/Pursuit LA utilize “cheap” estimates of the Environment’s reward probabilities, allowing them to converge an order of magnitude faster. This concept is quite simply the following: Inexpensive estimates of the reward probabilities can be used to rank the actions. Thereafter, when the action probability vector has to be updated, it is done not on the basis of th…
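The pursuit idea summarized above — rank the actions by cheap running reward estimates and nudge the action-probability vector toward the best-estimated action — can be sketched roughly as follows. This is a minimal continuous-pursuit-style sketch under assumed defaults (learning rate λ, tie-breaking, initialization), not the partitioning scheme from the paper:

```python
import random

def pursuit_la(reward_probs, n_iters=20000, lam=0.01, seed=0):
    """Minimal continuous Pursuit learning automaton sketch.

    Keeps running ML estimates of each action's reward probability and,
    at every step, moves the action-probability vector p toward the unit
    vector of the currently best-estimated action.
    """
    rng = random.Random(seed)
    r = len(reward_probs)
    p = [1.0 / r] * r            # action-probability vector, initially uniform
    rewards = [0] * r            # rewards observed per action
    pulls = [1] * r              # pull counts (start at 1 to avoid division by zero)
    for _ in range(n_iters):
        a = rng.choices(range(r), weights=p)[0]   # sample an action per p
        if rng.random() < reward_probs[a]:        # stochastic environment responds
            rewards[a] += 1
        pulls[a] += 1
        est = [rewards[i] / pulls[i] for i in range(r)]
        m = max(range(r), key=lambda i: est[i])   # best-estimated action
        # Pursuit update: move p a fraction lam toward the unit vector e_m
        p = [(1 - lam) * p[i] + (lam if i == m else 0.0) for i in range(r)]
    return p

p = pursuit_la([0.2, 0.8, 0.5])
```

The key point, matching the abstract, is that the update "pursues" the action ranked best by the estimates rather than relying on the environment's latest response alone.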

Keywords: Mathematical optimization; Theoretical computer science; Learning automata; Basis (linear algebra); Computer science; Rank (computer programming); Object Partitioning; Partitioning-based learning; Estimator; Probability vector; Field (computer science); Automaton; Ranking; Computer Science [cs]; Object Migration Automaton.
researchProduct

On optimizing firewall performance in dynamic networks by invoking a novel swapping window-based paradigm

2018

Designing and implementing efficient firewall strategies in the age of the Internet of Things (IoT) is far from trivial. This is because, as time proceeds, an increasing number of devices will be connected, accessed and controlled on the Internet. Additionally, an ever-increasing amount of sensitive information will be stored on various networks. A good and efficient firewall strategy will attempt to secure this information, and to also manage the large amount of inevitable network traffic that these devices create. The goal of this paper is to propose a framework for designing optimized firewalls for the IoT. This paper deals with two fundamental challenges/problems encountered in such firewalls…

Keywords: Non-stationary environments; Firewall optimizations; Matching times; Weak estimators; Batch updates; Learning automata.
researchProduct

Object Migration Automata for Non-equal Partitioning Problems with Known Partition Sizes

2021

Part 4: Automated Machine Learning. Solving partitioning problems in random environments is a classic and challenging task, and has numerous applications. The existing Object Migration Automaton (OMA) and its proposed enhancements, which include the Pursuit and Transitivity phenomena, can solve problems with equi-sized partitions. Currently, these solutions also include one where the partition sizes possess a Greatest Common Divisor (GCD). In this paper, we propose an OMA-based solution that can solve problems with both equally and non-equally-sized groups, without restrictions on their sizes. More specifically, our proposed approach, referred to as the Partition Siz…

Keywords: Object partitioning with non-equal sizes; Scheme (programming language); Object Migration Automata; Learning automata; Computer science; Partition (database); Field (computer science); Automaton; Task (computing); Greatest common divisor; A priori and a posteriori; Computer Science [cs]; Algorithm; Computer Science::Databases.
researchProduct

A Bayesian Learning Automaton for Solving Two-Armed Bernoulli Bandit Problems

2008

The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem where an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting either in a reward or a penalty. The reward probabilities of each arm are unknown, and thus one must balance between exploiting existing knowledge about the arms and obtaining new information. In the last decades, several computationally efficient algorithms for tackling this problem have emerged, with learning automata (LA) being known for their ε-optimality, and confidence-interval-based schemes for their logarithmically growing regret. Applications include treatment selection in clinical trials, route selection in …
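The Bayesian idea behind this line of work — keep a Beta posterior per arm and let posterior samples drive arm selection — can be sketched as follows. This is an illustrative Beta-Bernoulli (Thompson-sampling-style) sketch under assumed defaults, not necessarily the exact automaton from the paper:

```python
import random

def bayesian_two_armed(arm_probs, horizon=10000, seed=1):
    """Beta-Bernoulli sampling sketch for the two-armed Bernoulli bandit.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior; at every
    step a value is drawn from each posterior and the arm with the larger
    sample is pulled, balancing exploration against exploitation.
    """
    rng = random.Random(seed)
    s = [0, 0]        # rewards (successes) per arm
    f = [0, 0]        # penalties (failures) per arm
    pulls = [0, 0]
    for _ in range(horizon):
        samples = [rng.betavariate(s[i] + 1, f[i] + 1) for i in (0, 1)]
        a = 0 if samples[0] > samples[1] else 1
        if rng.random() < arm_probs[a]:
            s[a] += 1
        else:
            f[a] += 1
        pulls[a] += 1
    return pulls

pulls = bayesian_two_armed([0.3, 0.7])
```

As the posteriors sharpen, samples from the inferior arm win less and less often, so pulls concentrate on the better arm without ever deterministically excluding the other.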

Keywords: Optimization problem; Learning automata; Computer science; Maximum likelihood; Bayesian probability; Sampling (statistics); Regret; Bayesian inference; Confidence interval; Automaton; Algorithm design; Artificial intelligence; Beta distribution. Published in: 2008 Seventh International Conference on Machine Learning and Applications.
researchProduct

Distributed learning automata-based scheme for classification using novel pursuit scheme

2020

Learning Automata (LA) is a popular decision-making mechanism to “determine the optimal action out of a set of allowable actions” (Agache and Oommen, IEEE Trans Syst Man Cybern-Part B Cybern 2002(6): 738–749, 2002). The distinguishing characteristic of automata-based learning is that the search for the optimising parameter vector is conducted in the space of probability distributions defined over the parameter space, rather than in the parameter space itself (Thathachar and Sastry, IEEE Trans Syst Man Cybern-Part B Cybern 32(6): 711–722, 2002). Recently, Goodwin and Yazidi pioneered the use of Ant Colony Optimisation (ACO) for solving classification problems (Goodwin and Yazidi 2016). In th…

Keywords: Polynomial; Optimization problem; Learning automata; Computer science; Polygons; Feature vector; Ant colony; Parameter space; Random walk; Support vector machine; Kernel method; Artificial Intelligence; Kernel (statistics); Probability distribution; Classification; VDP::Technology: 500::Information and communication technology: 550. Published in: Applied Intelligence.
researchProduct

On Using the Theory of Regular Functions to Prove the ε-Optimality of the Continuous Pursuit Learning Automaton

2013

Published version of a chapter in the book: Recent Trends in Applied Artificial Intelligence. Also available from the publisher at: http://dx.doi.org/10.1007/978-3-642-38577-3_27
There are various families of Learning Automata (LA) such as Fixed Structure, Variable Structure, Discretized, etc. Informally, if the environment is stationary, their ε-optimality is defined as their ability to converge to the optimal action with an arbitrarily large probability, if the learning parameter is sufficiently small/large. Of these LA families, Estimator Algorithms (EAs) are certainly the fastest, and within this family, the set of Pursuit algorithms has been considered to be the pioneering schemes. The…

Keywords: Property (philosophy); Learning automata; Computer science; VDP::Mathematics and natural science: 400::Information and communication science: 420::Algorithms and computability theory: 422; Structure (category theory); Monotonic function; Mathematical proofs; Automaton; Arbitrarily large; ε-optimality; Continuous Pursuit Algorithm; Calculus; Pursuit algorithms; Algorithm; Variable (mathematics).
researchProduct