A Relational Tsetlin Machine with Applications to Natural Language Understanding
Authors: Rupsa Saha, Ole-Christoffer Granmo, Vladimir I. Zadorozhny, Morten Goodwin
Subjects: FOS: Computer and information sciences; Machine Learning (cs.LG); Logic in Computer Science (cs.LO); Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computer Networks and Communications; Hardware and Architecture; Software; Information Systems; ACM classes: I.2.7; I.2.4
Tsetlin machines (TMs) are a pattern-recognition approach that uses finite state machines for learning and propositional logic to represent patterns. In addition to being natively interpretable, they have achieved competitive accuracy on a variety of tasks. In this paper, we increase the computing power of TMs by proposing a first-order logic-based framework with Herbrand semantics. The resulting TM is relational and can exploit the logical structure of natural language to learn rules that capture how actions and consequences are related in the real world. The outcome is a logic program of Horn clauses, providing a structured view of unstructured data. In closed-domain question answering, the first-order representation yields knowledge bases that are 10x more compact, while answering accuracy increases from 94.83% to 99.48%. The approach is further robust to erroneous, missing, and superfluous information, distilling the aspects of a text that matter for real-world understanding.
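As an illustration of the kind of Horn-clause output the abstract describes, the sketch below checks whether a first-order Horn clause holds over a toy set of ground facts under Herbrand semantics (every variable ranges over the constants in the domain). This is not the paper's implementation; all predicate names, constants, and the variable convention (uppercase names are variables) are invented for illustration.

```python
# Illustrative sketch: evaluating a Horn clause "head :- body" over
# ground facts, in the spirit of the relational rules described above.
from itertools import product

def substitute(atom, binding):
    """Apply a variable binding to an atom (predicate, arg, ...)."""
    pred, *args = atom
    return (pred, *[binding.get(a, a) for a in args])

def clause_holds(body, head, facts, constants):
    """The clause holds if, for every grounding of its variables by
    domain constants, whenever all body atoms are facts, the grounded
    head is a fact as well."""
    variables = sorted({a for atom in body + [head]
                        for a in atom[1:] if a.isupper()})
    for values in product(constants, repeat=len(variables)):
        binding = dict(zip(variables, values))
        if all(substitute(a, binding) in facts for a in body):
            if substitute(head, binding) not in facts:
                return False
    return True

# Toy action/consequence rule: goes(X, L) -> in(X, L)
facts = {("goes", "mary", "kitchen"), ("in", "mary", "kitchen")}
constants = ["mary", "kitchen"]
print(clause_holds([("goes", "X", "L")], ("in", "X", "L"),
                   facts, constants))  # True: the rule is consistent
```

A learned rule of this form compresses many ground propositional patterns (one per constant combination) into a single relational clause, which is the source of the compactness gain reported in the abstract.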
year | journal | country | edition | language |
---|---|---|---|---|
2021-02-22 | | | | |