ID: cs/0011044

Scaling Up Inductive Logic Programming by Learning from Interpretations

November 29, 2000

Similar papers (page 5)

Inductive Learning of Answer Set Programs from Noisy Examples

August 25, 2018

87% Match
Mark Law, Alessandra Russo, Krysia Broda
Artificial Intelligence

In recent years, non-monotonic Inductive Logic Programming has received growing interest. Specifically, several new learning frameworks and algorithms have been introduced for learning under the answer set semantics, allowing the learning of common-sense knowledge involving defaults and exceptions, which are essential aspects of human reasoning. In this paper, we present a noise-tolerant generalisation of the learning from answer sets framework. We evaluate our ILASP3 system,...

Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks

December 6, 2021

87% Match
Prithviraj Sen, Breno W. S. R. de Carvalho, ..., Alexander Gray
Artificial Intelligence
Machine Learning
Logic in Computer Science
Symbolic Computation

Recent work on neuro-symbolic inductive logic programming has led to promising approaches that can learn explanatory rules from noisy, real-world data. While some proposals approximate logical operators with differentiable operators from fuzzy or real-valued logic that are parameter-free, thus diminishing their capacity to fit the data, other approaches are only loosely based on logic, making it difficult to interpret the learned "rules". In this paper, we propose learning rule...

An Empirical Investigation into Deep and Shallow Rule Learning

June 18, 2021

87% Match
Florian Beck, Johannes Fürnkranz
Machine Learning
Artificial Intelligence

Inductive rule learning is arguably among the most traditional paradigms in machine learning. Although we have seen considerable progress over the years in learning rule-based theories, all state-of-the-art learners still learn descriptions that directly relate the input features to the target concept. In the simplest case, concept learning, this is a disjunctive normal form (DNF) description of the positive class. While it is clear that this is sufficient from a logical poin...
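As a hedged illustration of the DNF setting mentioned in this abstract (an invented example, not taken from the paper): a concept-learning result for a hypothetical positive class can be written as a disjunction of conjunctions over input features,

$$\mathit{positive}(x) \iff \big(\mathit{red}(x) \wedge \mathit{round}(x)\big) \vee \big(\mathit{large}(x) \wedge \mathit{heavy}(x)\big),$$

where each disjunct corresponds to one learned rule that relates the input features directly to the target concept.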

SkILL - a Stochastic Inductive Logic Learner

June 2, 2015

87% Match
Joana Côrte-Real, Theofrastos Mantadelis, ..., Ricardo Rocha
Artificial Intelligence

Probabilistic Inductive Logic Programming (PILP) is a relatively unexplored area of Statistical Relational Learning which extends classic Inductive Logic Programming (ILP). This work introduces SkILL, a Stochastic Inductive Logic Learner, which takes probabilistic annotated data and produces First Order Logic theories. Data in several domains such as medicine and bioinformatics have an inherent degree of uncertainty that can be used to produce models closer to reality. S...

ILP Modulo Data

April 23, 2014

87% Match
Panagiotis Manolios, Vasilis Papavasileiou, Mirek Riedewald
Logic in Computer Science

The vast quantity of data generated and captured every day has led to a pressing need for tools and processes to organize, analyze and interrelate this data. Automated reasoning and optimization tools with inherent support for data could enable advancements in a variety of contexts, from data-backed decision making to data-intensive scientific research. To this end, we introduce a decidable logic aimed at database analysis. Our logic extends quantifier-free Linear Integer Ari...

Learning logic programs by finding minimal unsatisfiable subprograms

January 29, 2024

87% Match
Andrew Cropper, Céline Hocquette
Machine Learning
Logic in Computer Science

The goal of inductive logic programming (ILP) is to search for a logic program that generalises training examples and background knowledge. We introduce an ILP approach that identifies minimal unsatisfiable subprograms (MUSPs). We show that finding MUSPs allows us to efficiently and soundly prune the search space. Our experiments on multiple domains, including program synthesis and game playing, show that our approach can reduce learning times by 99%.
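A minimal sketch of the generic pruning idea this abstract describes, not the authors' system: assuming that covering a negative example is monotone in the rule set, any candidate hypothesis that contains an already-failed subprogram can be skipped without testing it. All rule names and coverage data below are hypothetical, for illustration only.

```python
# Hypothetical illustration of pruning a hypothesis search by
# unsatisfiable subprograms: if a subprogram already covers a negative
# example, every superset of it does too and can be skipped.
from itertools import combinations

RULES = ["r1", "r2", "r3", "r4"]          # candidate rules (labels only)
NEGATIVE_COVERAGE = {"r2"}                  # rules that cover some negative example


def covers_negative(program):
    """Stand-in coverage test: does this set of rules cover a negative example?"""
    return any(rule in NEGATIVE_COVERAGE for rule in program)


def search(max_size=3):
    pruned_cores = []   # unsatisfiable subprograms found so far
    solutions = []      # programs consistent with the (toy) examples
    for size in range(1, max_size + 1):
        for program in map(frozenset, combinations(RULES, size)):
            # Prune: any superset of a known unsatisfiable subprogram
            # is itself unsatisfiable and need not be tested.
            if any(core <= program for core in pruned_cores):
                continue
            if covers_negative(program):
                pruned_cores.append(program)
            else:
                solutions.append(program)
    return solutions, pruned_cores


if __name__ == "__main__":
    sols, cores = search()
    print("consistent programs:", [sorted(p) for p in sols])
    print("pruned cores:", [sorted(c) for c in cores])
```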

Incremental Learning of Event Definitions with Inductive Logic Programming

February 24, 2014

86% Match
Nikos Katzouris, Alexander Artikis, George Paliouras
Machine Learning
Artificial Intelligence

Event recognition systems rely on properly engineered knowledge bases of event definitions to infer occurrences of events in time. The manual development of such knowledge is a tedious and error-prone task, thus event-based applications may benefit from automated knowledge construction techniques, such as Inductive Logic Programming (ILP), which combines machine learning with the declarative and formal semantics of First-Order Logic. However, learning temporal logical formali...

Inductive Logic Programming in Databases: from Datalog to DL+log

March 12, 2010

86% Match
Francesca A. Lisi
Logic in Computer Science
Artificial Intelligence
Databases
Machine Learning

In this paper we address an issue that has been brought to the attention of the database community with the advent of the Semantic Web, i.e. the issue of how ontologies (and the semantics conveyed by them) can help solve typical database problems, through a better understanding of KR aspects related to databases. In particular, we investigate this issue from the ILP perspective by considering two database problems, (i) the definition of views and (ii) the definition of constrai...

Predicate Logic as a Modeling Language: Modeling and Solving some Machine Learning and Data Mining Problems with IDP3

September 26, 2013

86% Match
Maurice Bruynooghe, Hendrik Blockeel, Bart Bogaerts, Broes De Cat, Stef De Pooter, Joachim Jansen, Anthony Labarre, Jan Ramon, ..., Sicco Verwer
Logic in Computer Science
Artificial Intelligence

This paper provides a gentle introduction to problem solving with the IDP3 system. The core of IDP3 is a finite model generator that supports first order logic enriched with types, inductive definitions, aggregates and partial functions. It offers its users a modeling language that is a slight extension of predicate logic and allows them to solve a wide range of search problems. Apart from a small introductory example, applications are selected from problems that arose within...

Efficiently Learning Probabilistic Logical Models by Cheaply Ranking Mined Rules

September 24, 2024

86% Match
Jonathan Feldstein, Dominic Phillips, Efthymia Tsamoura
Artificial Intelligence

Probabilistic logical models are a core component of neurosymbolic AI and are important models in their own right for tasks that require high explainability. Unlike neural networks, logical models are often handcrafted using domain expertise, making their development costly and prone to errors. While there are algorithms that learn logical models from data, they are generally prohibitively expensive, limiting their applicability in real-world settings. In this work, we introd...
