Albert Nossig¹,², Tobias Hell² and Georg Moser¹
¹University of Innsbruck, Austria
²Data Lab Hell GmbH, Austria
In this paper, we propose an iterative approach to rule learning, designed primarily for (though not limited to) text-based data. Our method progressively expands the vocabulary used in each iteration, which significantly reduces memory consumption. In addition, we introduce a Value of Confidence that quantifies the reliability of the generated rules. By filtering on the Value of Confidence, our approach retains only the most robust and trustworthy rules, thereby improving the overall quality of the rule learning process. We demonstrate the effectiveness of our method through extensive experiments on both textual and non-textual datasets, including a case study of particular interest to the insurance industry, highlighting its potential for real-world applications.
Keywords: Rule Learning, Explainable Artificial Intelligence, Text Categorization, Reliability of Rules
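The abstract names two ingredients, incremental vocabulary expansion and a confidence-based filter on the learned rules. The following minimal Python sketch only illustrates how such a loop could fit together; the single-term rule learner, the term ranking, the batch size, and the confidence measure (plain empirical rule confidence) are illustrative assumptions, not the paper's components, and the threshold filter stands in for, but does not define, the Value of Confidence.

```python
# Illustrative sketch (not the authors' implementation): iterative rule learning
# where the working vocabulary grows per iteration and only rules passing a
# confidence threshold are retained.
from dataclasses import dataclass


@dataclass
class Rule:
    terms: frozenset      # terms whose joint presence fires the rule
    label: str            # predicted class
    confidence: float     # fraction of covered documents with the predicted label


def learn_rules(docs, labels, vocab):
    """Toy single-term rule learner restricted to the current vocabulary."""
    rules = []
    for term in vocab:
        covered = [i for i, doc in enumerate(docs) if term in doc]
        if not covered:
            continue
        # Majority label among covered documents and its empirical confidence.
        counts = {}
        for i in covered:
            counts[labels[i]] = counts.get(labels[i], 0) + 1
        label, hits = max(counts.items(), key=lambda kv: kv[1])
        rules.append(Rule(frozenset({term}), label, hits / len(covered)))
    return rules


def iterative_rule_learning(docs, labels, ranked_terms, batch=100,
                            iterations=5, threshold=0.9):
    """Grow the vocabulary in batches; keep only sufficiently confident rules."""
    vocab, kept, seen = [], [], set()
    for it in range(iterations):
        # Expand the working vocabulary instead of loading all terms at once,
        # keeping the per-iteration memory footprint small.
        vocab.extend(ranked_terms[it * batch:(it + 1) * batch])
        for rule in learn_rules(docs, labels, vocab):
            if rule.terms in seen:
                continue            # skip rules already produced in earlier iterations
            seen.add(rule.terms)
            if rule.confidence >= threshold:   # confidence-based filter
                kept.append(rule)
    return kept
```

In this sketch, `docs` is a list of token sets, `labels` the corresponding class labels, and `ranked_terms` a precomputed ordering of candidate terms; all of these names are hypothetical placeholders used only to make the loop self-contained.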