## Programme

## Keynote speakers

Dov M. Gabbay (University of Luxembourg, Luxembourg)
Title: TBA
Abstract: TBA

Simon Huttegger (University of California, USA)
Title: Bayesian Convergence to the Truth and Algorithmic Randomness
Abstract: One of the philosophically significant implications of the martingale convergence theorem is that, under quite general conditions, the conditional expectations of a random variable converge to the true value of the random variable with probability one. In this paper we connect the probability-one set to concepts of algorithmic randomness. Schnorr randomness and other notions of randomness are tightly connected to convergence to the truth within different frameworks, depending on how one defines effective random variables.
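For reference, the convergence result the abstract alludes to (Lévy's upward theorem, a consequence of the martingale convergence theorem) can be stated as follows, for an integrable random variable $X$ and a filtration $(\mathcal{F}_n)_{n \geq 1}$:

```latex
\mathbb{E}[X \mid \mathcal{F}_n] \;\longrightarrow\; \mathbb{E}\!\left[X \,\middle|\, \sigma\!\Big(\textstyle\bigcup_n \mathcal{F}_n\Big)\right]
\quad \text{almost surely and in } L^1 \text{ as } n \to \infty .
```

When $X$ is measurable with respect to the limiting $\sigma$-algebra, the right-hand side is $X$ itself, so the Bayesian's conditional expectations converge to the true value with probability one, which is the "convergence to the truth" of the title.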

Ute Schmid (University of Bamberg, Germany)
Title: Making Humans and Machines Learn from Each Other
Abstract: Inductive Logic Programming (ILP) is introduced as a highly expressive approach to machine learning (ML). Together with regression models and decision tree algorithms, ILP belongs to the class of interpretable ML approaches -- that is, the classification hypothesis (called a model in the context of ML) induced from examples is expressed in a symbolic format. In contrast to classic ML, where instances are mostly represented by feature vectors, ILP can be applied to relational data. In contrast to end-to-end learning, learning from examples represented by features or relations is data parsimonious. After a short introduction to the history of ML, I will show how ILP can be applied to interesting real-world domains to learn complex rules involving variables and recursion. Furthermore, I will give examples of how ILP can be used as a surrogate model to explain black-box classifiers learned with (deep) neural network approaches. I will argue that presenting either visualisations or rules to a user will often not suffice as a helpful explanation. Instead, I will propose a variety of textual, visual, and example-based explanations. Furthermore, I will discuss that explanations are not "one size fits all": which explanation is most helpful depends on the user, the problem, and the current situation. Finally, I will present a new method which allows the machine learning system to exploit not only class corrections but also explanations from the user to correct and adapt learned models in interactive learning scenarios.
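As an illustrative sketch (not taken from the talk), the kind of hypothesis ILP systems induce from relational data is a set of clauses with variables and recursion. The example below, with made-up `parent` facts, evaluates the classic induced clause set `ancestor(X, Y) :- parent(X, Y).` and `ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).` in plain Python:

```python
# Hypothetical relational background knowledge (made-up facts).
parent = {("ann", "bob"), ("bob", "carl"), ("carl", "dana")}

def ancestor(x, z, facts=parent):
    """Evaluate the induced recursive clause set:
       ancestor(X, Z) :- parent(X, Z).
       ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z)."""
    if (x, z) in facts:                       # base clause
        return True
    return any(ancestor(y, z, facts)          # recursive clause
               for (p, y) in facts if p == x)

print(ancestor("ann", "dana"))   # True: ann -> bob -> carl -> dana
print(ancestor("dana", "ann"))   # False: no chain of parent facts
```

Unlike a feature-vector classifier, this hypothesis is directly readable and applies to individuals related by chains of arbitrary length.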

Luc De Raedt (KU Leuven, Belgium)
Title: From Probabilistic Logics to Neuro-Symbolic Artificial Intelligence
Abstract: A central challenge to contemporary AI is to integrate learning and reasoning. The integration of learning and reasoning has been studied for decades already in the fields of statistical relational artificial intelligence and probabilistic programming. StarAI has focussed on unifying logic and probability, the two key frameworks for reasoning, and has extended these probabilistic logics with machine learning principles. I will argue that StarAI and probabilistic logics form an ideal basis for developing neuro-symbolic artificial intelligence techniques. Thus neuro-symbolic computation = StarAI + Neural Networks. Many parallels will be drawn between these two fields and illustrated using the Deep Probabilistic Logic Programming language DeepProbLog.
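As a rough sketch of the semantics underlying ProbLog-style probabilistic logics (the facts and probabilities below are invented for illustration): the probability of a query is the total probability of the possible worlds, i.e. truth assignments to the probabilistic facts, in which the query holds. Here this is computed by brute-force enumeration for a small `path` query over probabilistic `edge` facts:

```python
from itertools import product

# Hypothetical probabilistic facts, e.g. 0.6::edge(a, b).
prob_facts = {("a", "b"): 0.6, ("b", "c"): 0.7, ("a", "c"): 0.5}

def path(x, z, edges):
    """Logical rule: path(X, Z) :- edge(X, Z).
                     path(X, Z) :- edge(X, Y), path(Y, Z)."""
    if (x, z) in edges:
        return True
    return any(path(y, z, edges) for (p, y) in edges if p == x)

def query_prob(query, prob_facts):
    """Sum the weight of every possible world where the query succeeds."""
    facts = list(prob_facts)
    total = 0.0
    for world in product([True, False], repeat=len(facts)):
        edges = {f for f, included in zip(facts, world) if included}
        weight = 1.0
        for f, included in zip(facts, world):
            weight *= prob_facts[f] if included else 1.0 - prob_facts[f]
        if query(edges):
            total += weight
    return total

# P(path(a, c)) = 1 - (1 - 0.5) * (1 - 0.6 * 0.7) = 0.71
print(round(query_prob(lambda e: path("a", "c", e), prob_facts), 4))
```

Systems like DeepProbLog keep this logical layer but let neural networks supply the fact probabilities, which is one way the StarAI and neural views connect.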

Claudia D'Amato (Università degli Studi di Bari, Italy)
Title: TBA
Abstract: TBA

## Accepted papers

TBA

## Detailed programme

TBA