RESEARCH PRODUCT

Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines

Ole-Christoffer Granmo, Christian D. Blakely

subject

Range (mathematics), Interpretation (logic), Theoretical computer science, Scale (ratio), Process (engineering), Computer science, Feature (machine learning), Value (computer science), Propositional calculus, Interpretability

description

Tsetlin Machines (TMs) capture patterns using conjunctive clauses in propositional logic, which facilitates interpretation. However, recent TM-based approaches mainly rely on inspecting the full range of clauses individually, and such inspection does not necessarily scale to complex prediction problems that require a large number of clauses. In this paper, we propose closed-form expressions for understanding why a TM model makes a specific prediction (local interpretability). Additionally, the expressions capture the most important features of the model as a whole (global interpretability). We further introduce expressions for measuring the importance of feature value ranges for continuous features, making it possible to capture the role of features in real time as well as during the learning process as the model evolves. We compare our proposed approach against SHAP and state-of-the-art interpretable machine learning techniques.
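The paper's actual closed-form expressions are not reproduced in this abstract. As a rough illustration of the general idea of clause-based global importance, the toy sketch below aggregates, per feature, the signed weights of the conjunctive clauses whose literals mention that feature; the representation (weight, polarity, literal set) and the aggregation rule are assumptions for illustration only, not the authors' formulas.

```python
# Illustrative sketch only -- NOT the paper's closed-form expressions.
# A TM clause is modeled here as (weight, polarity, literals), where
# literals is a set of feature indices (a negated literal for feature k
# is encoded as -k - 1). Global importance of a feature is taken as the
# sum of polarity * weight over clauses that include one of its literals.
from collections import defaultdict

def global_importance(clauses):
    """Aggregate a signed importance score per feature from TM-style clauses."""
    scores = defaultdict(float)
    for weight, polarity, literals in clauses:
        for lit in literals:
            feature = lit if lit >= 0 else -lit - 1  # map negated literal back
            scores[feature] += polarity * weight
    return dict(scores)

# Toy model: two clauses voting for the class, one voting against.
toy_clauses = [
    (2.0, +1, {0, 1}),   # positive clause over features 0 and 1
    (1.0, +1, {1}),      # positive clause over feature 1
    (1.5, -1, {0}),      # negative clause over feature 0
]
print(global_importance(toy_clauses))  # {0: 0.5, 1: 3.0}
```

Feature 1 dominates here because both positive clauses use it, while feature 0's contribution is partly cancelled by the negative-polarity clause.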

https://doi.org/10.1007/978-3-030-79457-6_14