Interpretable Machine Learning via Linear Temporal Logic
DOI: https://doi.org/10.11576/dataninja-1176

Keywords: Explainable AI, Learning of logic formulas, Linear Temporal Logic

Abstract
In recent years, deep neural networks have shown excellent performance, outperforming even human experts in various tasks. However, their inherent complexity and black-box nature often make it hard, if not impossible, to understand the decisions made by these models, hindering their practical application in high-stakes scenarios.
We propose a framework for learning formulas in linear temporal logic (LTL) as inherently interpretable machine learning models. These models can be trained in both supervised and unsupervised settings. Furthermore, they can easily be extended to handle noisy data and to incorporate expert knowledge.
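To make the idea of LTL formulas as interpretable models concrete, the following sketch evaluates a simple LTL formula over a finite trace. The tuple-based formula representation and the `holds` function are illustrative assumptions, not the paper's actual framework; a learned formula would be applied to traces in essentially this way to classify them.

```python
# Hypothetical sketch: evaluating LTL formulas on finite traces.
# A trace is a list of sets of atomic propositions; a formula is a nested tuple.
# This representation is illustrative, not taken from the paper.

def holds(formula, trace, i=0):
    """Check whether `formula` holds at position i of a finite trace."""
    op = formula[0]
    if op == "ap":                      # atomic proposition
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "X":                       # next (finite-trace semantics)
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":                       # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                       # globally
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# "Globally, a request is eventually granted" — a classic response property,
# encoded as G(not(req and not F(grant))) since "implies" is derived.
phi = ("G", ("not", ("and", ("ap", "req"), ("not", ("F", ("ap", "grant"))))))
trace = [{"req"}, set(), {"grant"}]
print(holds(phi, trace))  # True: the request at step 0 is eventually granted
```

Such a formula is interpretable in the sense that it can be read directly as a natural-language statement about the system's behavior.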
License
Copyright (c) 2024 Simon Lutz, Daniel Neider
This work is licensed under a Creative Commons Attribution 4.0 International License.