
AUTHOR

Kuruge Darshana Abeyrathna

Showing 8 related works from this author

Adaptive sparse representation of continuous input for tsetlin machines based on stochastic searching on the line

2021

This paper introduces a novel approach to representing continuous inputs in Tsetlin Machines (TMs). Instead of using one Tsetlin Automaton (TA) for every unique threshold found when Booleanizing continuous input, we employ two Stochastic Searching on the Line (SSL) automata to learn discriminative lower and upper bounds. The two resulting Boolean features are adapted to the rest of the clause by equipping each clause with its own team of SSLs, which update the bounds during the learning process. Two standard TAs finally decide whether to include the resulting features as part of the clause. In this way, only four automata altogether represent one continuous feature (instead of potentially h…
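As a rough illustration of the idea, the sketch below pairs two simple SSL-style automata to learn a lower and an upper bound for one continuous feature and emits the two Boolean features mentioned in the abstract. The state count, feedback rule, and reward probability are assumptions made for illustration, not the update scheme used in the paper.

```python
import random

class SSLAutomaton:
    """Minimal Stochastic Searching on the Line automaton: keeps a pointer
    on a discretized line and nudges it left/right on feedback."""
    def __init__(self, n_states, low, high):
        self.n_states, self.low, self.high = n_states, low, high
        self.state = n_states // 2  # start mid-line

    def value(self):
        # Map the discrete state to a point in [low, high]
        return self.low + (self.high - self.low) * self.state / (self.n_states - 1)

    def update(self, direction, reward_prob=0.9):
        # direction: +1 moves the bound up, -1 moves it down;
        # the move is taken stochastically (illustrative rule).
        if random.random() < reward_prob:
            self.state = min(self.n_states - 1, max(0, self.state + direction))

def booleanize(x, lower_ssl, upper_ssl):
    """Two Boolean features per continuous input: above the learned lower
    bound and below the learned upper bound."""
    return (x >= lower_ssl.value(), x <= upper_ssl.value())

# Toy usage: nudge the bounds so they bracket values labelled positive.
random.seed(0)
lower, upper = SSLAutomaton(100, 0.0, 10.0), SSLAutomaton(100, 0.0, 10.0)
xs = [random.uniform(0, 10) for _ in range(200)]
data = [(x, 3.0 <= x <= 7.0) for x in xs]  # hypothetical target interval
for x, positive in data:
    in_bounds = all(booleanize(x, lower, upper))
    if positive and not in_bounds:
        # widen the interval toward x
        if x < lower.value(): lower.update(-1)
        if x > upper.value(): upper.update(+1)
    elif not positive and in_bounds:
        # shrink the interval away from x, from the nearer side
        if x - lower.value() < upper.value() - x: lower.update(+1)
        else: upper.update(-1)
print(round(lower.value(), 2), round(upper.value(), 2))
```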

Stochastic Searching on the Line automaton; Boosting (machine learning); decision support system; TK7800-8360; Computer Networks and Communications; Computer science; Discriminative model; Feature (machine learning); Electrical and Electronic Engineering; Artificial neural network; rule-based learning; interpretable machine learning; interpretable AI; Sparse approximation; Automaton; Random forest; Support vector machine; VDP::Teknologi: 500; Tsetlin Machine; XAI; Hardware and Architecture; Control and Systems Engineering; Signal Processing; Electronics; Tsetlin automata; Algorithm

Training Artificial Neural Networks With Improved Particle Swarm Optimization

2020

Particle Swarm Optimization (PSO) is popular for solving complex optimization problems. However, it easily becomes trapped in local minima. The authors modify the traditional PSO algorithm by adding an extra step called PSO-Shock. The PSO-Shock algorithm starts out like standard PSO. When it becomes trapped in a local minimum, this is detected by counting stall generations; once the stall count reaches a prespecified value, the particles are perturbed. This helps the particles find better solutions than the local minimum they are currently in. The behavior of the PSO-Shock algorithm is studied on a well-known benchmark, Schwefel's function. With promising performance on Schwefel's function, the PSO-Shock algorithm is util…
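The following minimal sketch shows one way the described stall-detection-plus-perturbation step could look on Schwefel's function. The inertia and acceleration coefficients, stall limit, and shock magnitude are assumed values chosen for illustration, not the paper's settings.

```python
import numpy as np

def schwefel(x):
    # Schwefel's function; global minimum near x_i = 420.9687
    return 418.9829 * len(x) - np.sum(x * np.sin(np.sqrt(np.abs(x))))

def pso_shock(dim=2, n_particles=30, iters=500, stall_limit=25,
              w=0.7, c1=1.5, c2=1.5, shock_scale=100.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-500, 500, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([schwefel(p) for p in pos])
    gbest, gbest_val = pbest[pbest_val.argmin()].copy(), pbest_val.min()
    stall = 0  # consecutive generations without global improvement
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -500, 500)
        vals = np.array([schwefel(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if pbest_val.min() < gbest_val - 1e-9:
            gbest_val = pbest_val.min()
            gbest = pbest[pbest_val.argmin()].copy()
            stall = 0
        else:
            stall += 1
        if stall >= stall_limit:
            # "Shock" step: perturb the particles to escape the local minimum
            pos = np.clip(pos + rng.normal(0.0, shock_scale, pos.shape), -500, 500)
            stall = 0
    return gbest, gbest_val

best, val = pso_shock()
print(best, round(val, 3))
```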

Electricity demand forecasting; Mathematical optimization; Artificial neural network; Computer science; 020209 energy; Computer Science::Neural and Evolutionary Computation; MathematicsofComputing_NUMERICALANALYSIS; 0202 electrical engineering electronic engineering information engineering; Training (meteorology); Particle swarm optimization; 020201 artificial intelligence & image processing; 02 engineering and technology

Hybrid Particle Swarm Optimization With Genetic Algorithm to Train Artificial Neural Networks for Short-Term Load Forecasting

2019

This research proposes a new training algorithm for artificial neural networks (ANNs) to improve short-term load forecasting (STLF) performance. The proposed algorithm overcomes the well-known training issue in ANNs, namely getting trapped in local minima, by applying genetic algorithm operations within particle swarm optimization whenever it converges to a local minimum. The training ability of the hybridized algorithm is evaluated using load data gathered by the Electricity Generating Authority of Thailand. The ANN is trained with the new algorithm on one year of data to forecast each of the 48 equal periods of every day in 2013. During the testing phase, a mean absolute percentage error (MAPE) is used …
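A hedged sketch of the hybrid idea: when the PSO stall counter fires, genetic-algorithm operations (selection, crossover, mutation) are applied to the swarm before PSO updates resume. The specific operators below (tournament selection, uniform crossover, Gaussian mutation) and their parameters are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def ga_step(pos, fitness, rng, mutation_rate=0.1, mutation_scale=0.5):
    """Apply simple GA operations to a stalled swarm: tournament selection,
    uniform crossover, and Gaussian mutation (illustrative operators)."""
    n, dim = pos.shape
    new_pos = np.empty_like(pos)
    for i in range(n):
        # tournament selection of two parents (lower fitness is better)
        a, b = rng.integers(0, n, 2)
        p1 = pos[a] if fitness[a] < fitness[b] else pos[b]
        a, b = rng.integers(0, n, 2)
        p2 = pos[a] if fitness[a] < fitness[b] else pos[b]
        mask = rng.random(dim) < 0.5              # uniform crossover
        child = np.where(mask, p1, p2)
        mutate = rng.random(dim) < mutation_rate  # Gaussian mutation
        new_pos[i] = child + mutate * rng.normal(0.0, mutation_scale, dim)
    return new_pos

# Hypothetical usage inside a PSO training loop, once a stall is detected:
#   fitness = np.array([loss(weights) for weights in pos])
#   pos = ga_step(pos, fitness, rng)
```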

Artificial neural network; Computer science; business.industry; 020209 energy; Load forecasting; Training (meteorology); Particle swarm optimization; 02 engineering and technology; Backpropagation; Computer Science Applications; Term (time); Computational Theory and Mathematics; Artificial Intelligence; Genetic algorithm; 0202 electrical engineering electronic engineering information engineering; 020201 artificial intelligence & image processing; Artificial intelligence; business; International Journal of Swarm Intelligence Research

A Scheme for Continuous Input to the Tsetlin Machine with Applications to Forecasting Disease Outbreaks

2019

In this paper, we apply a new promising tool for pattern classification, namely, the Tsetlin Machine (TM), to the field of disease forecasting. The TM is interpretable because it is based on manipulating expressions in propositional logic, leveraging a large team of Tsetlin Automata (TA). Apart from being interpretable, this approach is attractive due to its low computational cost and its capacity to handle noise. To attack the problem of forecasting, we introduce a preprocessing method that extends the TM so that it can handle continuous input. Briefly stated, we convert continuous input into a binary representation based on thresholding. The resulting extended TM is evaluated and analyzed…
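A minimal sketch of threshold-based Booleanization in the spirit described above: each continuous value is compared against the unique values observed in the training data, producing one bit per threshold. The exact encoding used in the paper may differ.

```python
import numpy as np

def build_thresholds(column):
    """One threshold per unique value observed in the training data."""
    return np.unique(column)

def booleanize(column, thresholds):
    """Each continuous value becomes a bit vector: bit k is 1 when the
    value is greater than or equal to threshold k."""
    return (column[:, None] >= thresholds[None, :]).astype(np.uint8)

# Toy example: a single continuous feature with four samples.
x = np.array([0.2, 1.5, 1.5, 3.0])
thr = build_thresholds(x)   # [0.2, 1.5, 3.0]
print(booleanize(x, thr))
# [[1 0 0]
#  [1 1 0]
#  [1 1 0]
#  [1 1 1]]
```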

Learning automata; Artificial neural network; Computer science; Decision tree; 02 engineering and technology; computer.software_genre; Thresholding; Field (computer science); 020202 computer hardware & architecture; Automaton; Support vector machine; 0202 electrical engineering electronic engineering information engineering; Preprocessor; 020201 artificial intelligence & image processing; Data mining; computer

On the Convergence of Tsetlin Machines for the XOR Operator.

2022

The Tsetlin Machine (TM) is a novel machine learning algorithm with several distinct properties, including transparent inference and learning using hardware-near building blocks. Although numerous papers explore the TM empirically, many of its properties have not yet been analyzed mathematically. In this article, we analyze the convergence of the TM when input is non-linearly related to output by the XOR-operator. Our analysis reveals that the TM, with just two conjunctive clauses, can converge almost surely to reproducing XOR, learning from training data over an infinite time horizon. Furthermore, the analysis shows how the hyper-parameter T guides clause construction so that the clauses c…
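The configuration the convergence analysis targets can be written down directly: two conjunctive clauses, each matching one of the two asymmetric input patterns, reproduce XOR when their votes are summed. The sketch below checks that configuration; it illustrates the target of convergence, not the TM learning dynamics.

```python
def clause1(x1, x2):
    # conjunctive clause: x1 AND NOT x2
    return int(x1 and not x2)

def clause2(x1, x2):
    # conjunctive clause: NOT x1 AND x2
    return int(not x1 and x2)

def tm_xor(x1, x2):
    # Each clause votes for the positive class; one positive vote
    # is enough to output 1 in this two-clause configuration.
    return int(clause1(x1, x2) + clause2(x1, x2) >= 1)

for a in (0, 1):
    for b in (0, 1):
        assert tm_xor(a, b) == (a ^ b)
print("two conjunctive clauses reproduce XOR")
```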

FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Science - Logic in Computer Science; VDP::Teknologi: 500; Artificial Intelligence (cs.AI); Computational Theory and Mathematics; Artificial Intelligence; Computer Science - Artificial Intelligence; Applied Mathematics; Computer Vision and Pattern Recognition; Software; Machine Learning (cs.LG); Logic in Computer Science (cs.LO); IEEE transactions on pattern analysis and machine intelligence

A multi-step finite-state automaton for arbitrarily deterministic Tsetlin Machine learning

2021

Computational Theory and Mathematics; Artificial Intelligence; Control and Systems Engineering; VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550; Theoretical Computer Science

Integer Weighted Regression Tsetlin Machines

2020

The Regression Tsetlin Machine (RTM) addresses the lack of interpretability impeding state-of-the-art nonlinear regression models. It does this by using conjunctive clauses in propositional logic to capture the underlying non-linear frequent patterns in the data. These are, in turn, combined into a continuous output through summation, akin to a linear regression function, but with non-linear components and binary weights. However, the resolution of the RTM output is proportional to the number of clauses employed, which means that computation cost increases with resolution. To address this problem, we here introduce integer weighted RTM clauses. Our integer weighted clause is a compact r…
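A small sketch of the weighting idea: an integer clause weight lets one clause stand in for several identical unit-weight clauses, so the same output resolution is reached with fewer clauses. The helper below is hypothetical and not the authors' API.

```python
import numpy as np

def rtm_output(clause_outputs, weights=None):
    """Regression-style output: sum the binary clause outputs, optionally
    scaled by per-clause integer weights (illustrative helper)."""
    clause_outputs = np.asarray(clause_outputs)
    if weights is None:
        weights = np.ones_like(clause_outputs)  # plain RTM: unit weights
    return int(np.dot(weights, clause_outputs))

# Same resolution with fewer clauses: five identical unit-weight clauses
# versus one clause carrying an integer weight of 5.
plain    = rtm_output([1, 1, 1, 1, 1])
weighted = rtm_output([1], weights=np.array([5]))
print(plain, weighted)  # 5 5
```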

Computer science; Computation; Binary number; Resolution (logic); Representation (mathematics); Nonlinear regression; Unit-weighted regression; Algorithm; Computer Science::Formal Languages and Automata Theory; Integer (computer science); Interpretability

Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling

2021

Using logical clauses to represent patterns, Tsetlin Machines (TMs) have recently obtained competitive performance in terms of accuracy, memory footprint, energy, and learning speed on several benchmarks. Each TM clause votes for or against a particular class, with classification resolved using a majority vote. While the evaluation of clauses is fast, being based on binary operators, the voting makes it necessary to synchronize the clause evaluation, impeding parallelization. In this paper, we propose a novel scheme for desynchronizing the evaluation of clauses, eliminating the voting bottleneck. In brief, every clause runs in its own thread for massive native parallelism. For each training e…
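A toy sketch of the desynchronized structure: each clause evaluates a batch in its own thread and contributes its votes without a per-sample synchronization point. Python threads are used only to show the shape of the scheme; the actual architecture relies on native/GPU parallelism, and the details of how class sums are maintained per clause follow the paper, not this sketch.

```python
import threading

def make_clause_worker(clause, samples, tally, lock):
    """Evaluate one clause over a batch in its own thread and add its
    votes to a shared tally with a single coarse update at the end."""
    def run():
        local = 0
        for x in samples:
            local += clause(x)  # binary clause evaluation
        with lock:
            tally[0] += local
    return run

# Toy usage: three hypothetical clauses voting on the same batch.
clauses = [lambda x: int(x[0] and not x[1]),
           lambda x: int(x[1] and not x[0]),
           lambda x: int(x[0] and x[1])]
samples = [(0, 1), (1, 0), (1, 1), (0, 0)] * 100
tally, lock = [0], threading.Lock()
threads = [threading.Thread(target=make_clause_worker(c, samples, tally, lock))
           for c in clauses]
for t in threads: t.start()
for t in threads: t.join()
print("total votes:", tally[0])
```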

TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES; VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550