
AUTHOR

Rupsa Saha

0000-0002-3006-5249

showing 6 related works from this author

Using Tsetlin Machine to discover interpretable rules in natural language processing applications

2021

Tsetlin Machines (TMs) use finite state machines for learning and propositional logic to represent patterns. The resulting pattern recognition approach captures information in the form of conjunctive clauses, thus facilitating human interpretation. In this work, we propose a TM-based approach to three common natural language processing (NLP) tasks, namely, sentiment analysis, semantic relation categorization, and identifying entities in multi-turn dialogues. By performing frequent itemset mining on the TM-produced patterns, we show that we can obtain both a global and a local interpretation of the learning, one that mimics existing rule-sets or lexicons. Further, we also establish that our TM base…

Artificial intelligence; Computer science; business.industry; Natural language processing; Rule mining; computer.software_genre; Interpretable AI; Theoretical Computer Science; Semantic analyses; Computational Theory and Mathematics; Multi-turn dialogue analyses; Artificial Intelligence; Control and Systems Engineering; Artificial intelligence; business; computer; VDP::Technology: 500::Information and communication technology: 550; Natural language processing
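
As a rough illustration of the interpretation step described in the abstract above, the sketch below mines frequent itemsets from clause literals with a hand-rolled counter. The clause sets are hypothetical placeholders, not clauses from the paper's trained models.

    # Minimal frequent itemset mining over TM clause literals (hypothetical data).
    from collections import Counter
    from itertools import combinations

    # Each learned TM clause is read out as a set of literals; these are made up.
    clauses = [
        {"good", "movie"},
        {"good", "plot"},
        {"good", "not_bad"},
        {"boring", "plot"},
    ]

    def frequent_itemsets(clause_sets, min_support=2, max_size=2):
        """Count how often each combination of literals co-occurs across clauses."""
        counts = Counter()
        for literals in clause_sets:
            for size in range(1, max_size + 1):
                for subset in combinations(sorted(literals), size):
                    counts[subset] += 1
        return {items: c for items, c in counts.items() if c >= min_support}

    for itemset, support in sorted(frequent_itemsets(clauses).items()):
        print(itemset, support)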

A Relational Tsetlin Machine with Applications to Natural Language Understanding

2021

Tsetlin Machines (TMs) are a pattern recognition approach that uses finite state machines for learning and propositional logic to represent patterns. In addition to being natively interpretable, they have provided competitive accuracy for various tasks. In this paper, we increase the computing power of TMs by proposing a first-order logic-based framework with Herbrand semantics. The resulting TM is relational and can take advantage of logical structures appearing in natural language to learn rules that represent how actions and consequences are related in the real world. The outcome is a logic program of Horn clauses, bringing in a structured view of unstructured data. In closed-domain question-answering, th…

FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Science - Logic in Computer Science; Computer Science - Computation and Language; I.2.4; Computer Science - Artificial Intelligence; Computer Networks and Communications; I.2.7; Machine Learning (cs.LG); Logic in Computer Science (cs.LO); Artificial Intelligence (cs.AI); Artificial Intelligence; Hardware and Architecture; Computation and Language (cs.CL); I.2.7; I.2.4; Software; Information Systems
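
To make the Horn-clause view concrete, here is a hedged sketch of grounding a relational rule over a finite set of constants (a Herbrand universe) and applying it to facts. The predicates, constants, and rule are invented for illustration and are not taken from the paper.

    # Grounding a hypothetical Horn clause over a small Herbrand universe.
    from itertools import product

    constants = ["mary", "kitchen", "apple"]

    def ground_rule(variables, body, head):
        """Substitute constants for variables in every possible way and yield
        the resulting ground body atoms together with the ground head atom."""
        for values in product(constants, repeat=len(variables)):
            sub = dict(zip(variables, values))
            ground_body = [(pred, tuple(sub[a] for a in args)) for pred, args in body]
            ground_head = (head[0], tuple(sub[a] for a in head[1]))
            yield ground_body, ground_head

    # Hypothetical learned rule: goto(X, Y) AND pick(X, Z) -> at(Z, Y)
    facts = {("goto", ("mary", "kitchen")), ("pick", ("mary", "apple"))}
    for body_atoms, head_atom in ground_rule(
        variables=["X", "Y", "Z"],
        body=[("goto", ("X", "Y")), ("pick", ("X", "Z"))],
        head=("at", ("Z", "Y")),
    ):
        if all(atom in facts for atom in body_atoms):
            print("derived:", head_atom)  # at(apple, kitchen)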

Road Detection for Reinforcement Learning Based Autonomous Car

2020

Human mistakes in traffic often have terrible consequences. The long-awaited introduction of self-driving vehicles may solve many of the problems with traffic, but much research is still needed before cars are fully autonomous. In this paper, we propose a new Road Detection algorithm using online supervised learning based on a Neural Network architecture. This algorithm is designed to support a Reinforcement Learning algorithm (for example, the standard Proximal Policy Optimization or PPO) by detecting when the car is in an adverse condition. Specifically, the PPO gets a penalty whenever the virtual automobile gets stuck or drives off the road with any of its four wheels. Initial experiments …

Artificial neural network; Computer science; business.industry; Supervised learning; Neural network architecture; Reinforcement learning; Artificial intelligence; Reinforcement learning algorithm; business; Proceedings of the 2020 The 3rd International Conference on Information Science and System
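
The abstract above describes penalizing the PPO agent whenever the detector flags an adverse state. Below is a hedged sketch of that reward-shaping idea as a thin environment wrapper; the environment interface, detector method, and penalty value are stand-ins, not the authors' simulator or code.

    # Hypothetical reward-shaping wrapper: penalize off-road or stuck states.
    class RoadPenaltyWrapper:
        def __init__(self, env, road_detector, penalty=-10.0):
            self.env = env                      # any step/reset-style driving env
            self.road_detector = road_detector  # e.g. a small supervised neural net
            self.penalty = penalty

        def reset(self):
            return self.env.reset()

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            # The detector inspects the current observation and reports whether
            # any wheel has left the road; "stuck" comes from the simulator.
            if self.road_detector.is_off_road(obs) or info.get("stuck", False):
                reward += self.penalty
            return obs, reward, done, info

    # Usage: train PPO on RoadPenaltyWrapper(env, detector) instead of the raw env,
    # so adverse conditions are discouraged during learning.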

Mining Interpretable Rules for Sentiment and Semantic Relation Analysis Using Tsetlin Machines

2020

Tsetlin Machines (TMs) are an interpretable pattern recognition approach that captures patterns with high discriminative power from data. Patterns are represented as conjunctive clauses in propositional logic, produced using bandit-learning in the form of Tsetlin Automata. In this work, we propose a TM-based approach to two common Natural Language Processing (NLP) tasks, viz. Sentiment Analysis and Semantic Relation Categorization. By performing frequent itemset mining on the patterns produced, we show that they follow existing expert-verified rule-sets or lexicons. Further, our comparison with other widely used machine learning techniques indicates that the TM approach helps maintain inter…

Computer science; business.industry; Semantic analysis (machine learning); Sentiment analysis; computer.software_genre; Propositional calculus; Automaton; ComputingMethodologies_PATTERNRECOGNITION; Discriminative model; Categorization; Pattern recognition (psychology); Artificial intelligence; business; computer; Natural language processing; Interpretability
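
As a small, hedged illustration of the clause form described above, the sketch below evaluates one conjunctive clause over a binarized bag-of-words input. The vocabulary, clause, and sentence are illustrative only and not drawn from the paper's learned rules.

    # Evaluating a TM-style conjunctive clause on binarized bag-of-words features.
    def evaluate_clause(include, include_negated, features):
        """The clause fires only if every included feature is present (1) and
        every included negated feature is absent (0)."""
        return all(features[i] for i in include) and \
               not any(features[i] for i in include_negated)

    vocab = ["good", "bad", "boring", "plot"]
    clause_include = [0]          # hypothetical clause: requires "good"
    clause_include_negated = [1]  # ... and forbids "bad"

    sentence = "the plot was good"
    features = [int(word in sentence.split()) for word in vocab]
    print(evaluate_clause(clause_include, clause_include_negated, features))  # True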

Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling

2020

Using logical clauses to represent patterns, Tsetlin Machines (TMs) have recently obtained competitive performance in terms of accuracy, memory footprint, energy, and learning speed on several benchmarks. Each TM clause votes for or against a particular class, with classification resolved using a majority vote. While the evaluation of clauses is fast, being based on binary operators, the voting makes it necessary to synchronize the clause evaluation, impeding parallelization. In this paper, we propose a novel scheme for desynchronizing the evaluation of clauses, eliminating the voting bottleneck. In brief, every clause runs in its own thread for massive native parallelism. For each training…

FOS: Computer and information sciences; Computer Science - Machine Learning; TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; Machine Learning (cs.LG)
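
To make the voting step referred to above concrete, here is a hedged sketch of clause voting: each clause output is weighted by a +1/-1 polarity per class, and the class with the largest summed vote wins. The clause outputs and polarities are toy values, not the paper's.

    # Toy clause-vote aggregation: the step that normally forces synchronization.
    def classify(clause_outputs, polarities_per_class):
        """clause_outputs[j] is 0/1; polarities_per_class[c][j] is +1 or -1."""
        scores = {
            c: sum(p * o for p, o in zip(polarities, clause_outputs))
            for c, polarities in polarities_per_class.items()
        }
        return max(scores, key=scores.get), scores

    clause_outputs = [1, 0, 1, 1]
    polarities = {"positive": [+1, +1, -1, +1], "negative": [-1, +1, +1, -1]}
    print(classify(clause_outputs, polarities))  # ('positive', {...})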

Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling

2021

Using logical clauses to represent patterns, Tsetlin Machines (TMs) have recently obtained competitive performance in terms of accuracy, memory footprint, energy, and learning speed on several benchmarks. Each TM clause votes for or against a particular class, with classification resolved using a majority vote. While the evaluation of clauses is fast, being based on binary operators, the voting makes it necessary to synchronize the clause evaluation, impeding parallelization. In this paper, we propose a novel scheme for desynchronizing the evaluation of clauses, eliminating the voting bottleneck. In brief, every clause runs in its own thread for massive native parallelism. For each training e…

TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES; VDP::Technology: 500::Information and communication technology: 550
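
Complementing the voting sketch above, here is a rough, simplified sketch of the desynchronization idea: each clause is evaluated in its own thread and adds its vote to a shared tally without a global evaluation barrier. The clause functions are trivial placeholders, and the locking is coarser than a real lock-free implementation.

    # Per-clause threads updating a shared vote tally (simplified illustration).
    from concurrent.futures import ThreadPoolExecutor
    from threading import Lock

    votes = 0
    lock = Lock()

    def clause_worker(clause, polarity, features):
        global votes
        output = clause(features)   # each clause is evaluated independently
        with lock:                  # only the tiny tally update is protected
            votes += polarity * output

    clauses = [
        (lambda f: int(f[0] and not f[1]), +1),  # placeholder clause bodies
        (lambda f: int(f[1]), -1),
        (lambda f: int(f[0]), +1),
    ]
    features = [1, 0]

    with ThreadPoolExecutor(max_workers=len(clauses)) as pool:
        for clause, polarity in clauses:
            pool.submit(clause_worker, clause, polarity, features)

    print("class score:", votes)  # +2 for this toy input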