ID: cs/0011044

Scaling Up Inductive Logic Programming by Learning from Interpretations

November 29, 2000


Similar papers

Efficient Learning of Interpretable Classification Rules

May 14, 2022

87% Match
Bishwamittra Ghosh, Dmitry Malioutov, Kuldeep S. Meel
Machine Learning
Artificial Intelligence

Machine learning has become omnipresent with applications in various safety-critical domains such as medical, law, and transportation. In these domains, high-stake decisions provided by machine learning necessitate researchers to design interpretable models, where the prediction is understandable to a human. In interpretable machine learning, rule-based classifiers are particularly effective in representing the decision boundary through a set of rules comprising input feature...


Learn to Explain Efficiently via Neural Logic Inductive Learning

October 6, 2019

87% Match
Yuan Yang, Le Song
Artificial Intelligence

The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems. In this work, we study the learning to explain problem in the scope of inductive logic programming (ILP). We propose Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data. In experiments, compared with the state-of-the-art methods, we fin...


Best-Effort Inductive Logic Programming via Fine-grained Cost-based Hypothesis Generation

July 10, 2017

87% Match
Peter Schüller, Mishal Benz
Artificial Intelligence
Machine Learning
Logic in Computer Science

We describe the Inspire system which participated in the first competition on Inductive Logic Programming (ILP). Inspire is based on Answer Set Programming (ASP). The distinguishing feature of Inspire is an ASP encoding for hypothesis space generation: given a set of facts representing the mode bias, and a set of cost configuration parameters, each answer set of this encoding represents a single rule that is considered for finding a hypothesis that entails the given examples....


Learning logic programs by discovering where not to search

February 20, 2022

87% Match
Andrew Cropper, Céline Hocquette
Machine Learning
Artificial Intelligence
Logic in Computer Science

The goal of inductive logic programming (ILP) is to search for a hypothesis that generalises training examples and background knowledge (BK). To improve performance, we introduce an approach that, before searching for a hypothesis, first discovers where not to search. We use given BK to discover constraints on hypotheses, such as that a number cannot be both even and odd. We use the constraints to bootstrap a constraint-driven ILP system. Our experiments on multiple domains (...


Incremental and Iterative Learning of Answer Set Programs from Mutually Distinct Examples

February 22, 2018

87% Match
Arindam Mitra, Chitta Baral
Artificial Intelligence
Machine Learning
Logic in Computer Science

Over the years the Artificial Intelligence (AI) community has produced several datasets which have given the machine learning algorithms the opportunity to learn various skills across various domains. However, a subclass of these machine learning algorithms that aimed at learning logic programs, namely the Inductive Logic Programming algorithms, have often failed at the task due to the vastness of these datasets. This has impacted the usability of knowledge representation and...


Explainable Models via Compression of Tree Ensembles

June 16, 2022

87% Match
Siwen Yan, Sriraam Natarajan, Saket Joshi, ... , Prasad Tadepalli
Machine Learning

Ensemble models (bagging and gradient-boosting) of relational decision trees have proved to be one of the most effective learning methods in the area of probabilistic logic models (PLMs). While effective, they lose one of the most important aspects of PLMs -- interpretability. In this paper we consider the problem of compressing a large set of learned trees into a single explainable model. To this end, we propose CoTE -- Compression of Tree Ensembles -- that produces a sing...


Scaling Inference for Markov Logic with a Task-Decomposition Approach

August 1, 2011

87% Match
Feng Niu, Ce Zhang, ... , Jude Shavlik
Artificial Intelligence
Databases

Motivated by applications in large-scale knowledge base construction, we study the problem of scaling up a sophisticated statistical inference framework called Markov Logic Networks (MLNs). Our approach, Felix, uses the idea of Lagrangian relaxation from mathematical programming to decompose a program into smaller tasks while preserving the joint-inference property of the original MLN. The advantage is that we can use highly scalable specialized algorithms for common tasks su...


Learning Weak Constraints in Answer Set Programming

July 23, 2015

87% Match
Mark Law, Alessandra Russo, Krysia Broda
Artificial Intelligence

This paper contributes to the area of inductive logic programming by presenting a new learning framework that allows the learning of weak constraints in Answer Set Programming (ASP). The framework, called Learning from Ordered Answer Sets, generalises our previous work on learning ASP programs without weak constraints, by considering a new notion of examples as ordered pairs of partial answer sets that exemplify which answer sets of a learned hypothesis (together with a given...


Learning Models over Relational Data: A Brief Tutorial

November 15, 2019

87% Match
Maximilian Schleich, Dan Olteanu, Mahmoud Abo-Khamis, ... , XuanLong Nguyen
Databases

This tutorial overviews the state of the art in learning models over relational databases and makes the case for a first-principles approach that exploits recent developments in database research. The input to learning classification and regression models is a training dataset defined by feature extraction queries over relational databases. The mainstream approach to learning over relational data is to materialize the training dataset, export it out of the database, and the...


Hybrid Probabilistic Logic Programming: Inference and Learning

February 1, 2023

87% Match
Nitesh Kumar
Artificial Intelligence
Logic in Computer Science

This thesis focuses on advancing probabilistic logic programming (PLP), which combines probability theory for uncertainty and logic programming for relations. The thesis aims to extend PLP to support both discrete and continuous random variables, which is necessary for applications with numeric data. The first contribution is the introduction of context-specific likelihood weighting (CS-LW), a new sampling algorithm that exploits context-specific independencies for computatio...
