Program: Accepted Papers and Talks

 

Invited Talks

 Gabriele Kern-Isberner

Title: Cognitive Logics, and the Relevance of Nonmonotonic Formal Logics for Human-Centred AI

Abstract: Classical logics like propositional or predicate logic have been considered the gold standard for rational human reasoning, and hence a solid, desirable norm on which, ideally, all human knowledge and decision making should be based. For instance, Boolean logic was set up as a kind of arithmetic framework that should help make rational reasoning computable in an objective way, similar to the arithmetic of numbers. Computer scientists adopted this view to (literally) implement objective knowledge and rational deduction, in particular for AI applications. Psychologists have used classical logics as norms to assess the rationality of human commonsense reasoning. However, both disciplines could not ignore the severe limitations of classical logics, e.g., computational complexity and undecidability, failures of logic-based AI systems in practice, and numerous psychological paradoxes. Many of these problems are caused by the inability of classical logics to deal with uncertainty in an adequate way. Both disciplines have used probabilities as a way out of this dilemma, hoping that numbers and the Kolmogorov axioms can (somehow) do the job. However, psychologists have observed lots of paradoxes here as well (maybe even more).

So then, are humans hopelessly irrational? Is human reasoning incompatible with formal, axiomatic logics? In the end, should computer-based knowledge and information processing be considered superior in terms of objectivity and rationality?

Cognitive logics aim at overcoming the limitations of classical logics and resolving the observed paradoxes by proposing logic-based approaches that can model human reasoning consistently and coherently in benchmark examples. The basic idea is to reverse the normative way of assessing human reasoning in terms of logics or probabilities, and instead to use typical human reasoning patterns as norms for assessing the cognitive quality of logics. Cognitive logics explore the broad field of logic-based approaches between the extreme points marked by classical logics and probability theory, with the goal of finding more suitable logics for AI applications, on the one hand, and of gaining more insights into the rational structures of human reasoning, on the other.

 

Silja Renooij

Title: Surfing the waves of explanation 

Abstract: The need for explaining black box machine learning models has revived the interest in explainability in AI more generally. One of the ideas underlying explainable AI is to use (new) models that are inherently explainable to replace or complement black box models in machine learning. Explainable or interpretable models exist and have existed for quite some time, and different aspects of these models and their outputs can be explained. In this talk I will focus on the explanation of probabilistic graphical models, with an emphasis on Bayesian networks. Ever since their first introduction in the late 1980s, the explanation of Bayesian networks has been a topic of interest: sometimes receiving a lot of attention, sometimes a seemingly forgotten topic, but now resurfacing again, riding on the waves of explainable AI.

 

Francesca Toni

Title: Learning Argumentation Frameworks

Abstract: Argumentation frameworks are well studied with regard to their support for various forms of reasoning. Amongst these, abstract argumentation and assumption-based argumentation frameworks can be used to support various forms of defeasible, non-monotonic reasoning. In this talk I will focus on methods for learning these frameworks automatically from data. Specifically, I will overview two recent methods to obtain, respectively, abstract argumentation frameworks from past cases and assumption-based argumentation frameworks from examples of concepts. In both cases, the learnt frameworks can be naturally used to obtain argumentative explanations, in the form of disputes, for predictions drawn from the data (past cases or examples), thus supporting the vision of data-centric explainable AI.

 

Tutorials

Denis Bouyssou

Title: How to use bibliometric indices (if you really must)

Abstract: Higher education and research are often seen as crucially affecting the economic performance of nations. Indeed, most countries devote a significant part of their resources to financing higher education and research institutions. Hence, it is not surprising that there is a growing tendency to evaluate and monitor their performance. Obviously, their very nature makes this task difficult and complex. We have recently witnessed a flourishing of evaluation agencies and a growing use of bibliometric indices of various kinds to evaluate individual scholars, departments, projects or universities. The aim of this tutorial is twofold. We will first outline the type of problems that may be encountered when evaluating research activities using standard bibliometric indices. We will then show how the classical tools provided by decision theory may be useful to analyze the theoretical properties of such indices. Our conclusion will be that some frequently used indices, such as the h-index, have rather undesirable properties.
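Since the abstract singles out the h-index, a minimal sketch of how it is computed may help readers follow the discussion (my illustration, not part of the tutorial materials): an author has index h if h of their papers have at least h citations each.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # best-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the rank-th paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give h = 3: three papers have
# at least 3 citations each, but there are no four with at least 4.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

One well-known property is already visible here: citations above the threshold are ignored, so [100, 80, 50, 2, 1] scores exactly the same h = 3.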

 

Tanya Braun 

Title: A Glimpse into Statistical Relational AI: The Power of Indistinguishability

Abstract: Statistical relational artificial intelligence, StaRAI for short, focuses on combining reasoning in uncertain environments with reasoning about individuals and relations in those environments. An important concept in StaRAI is indistinguishability, where groups of individuals behave indistinguishably in relation to each other in an environment. This indistinguishability manifests itself as symmetries in a propositional model and can be encoded compactly using logical constructs in relational models. Lifted inference then exploits indistinguishability for efficiency gains. This tutorial showcases how to encode indistinguishability in models using logical constructs and highlights various ways of using indistinguishability during probabilistic inference.
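A schematic illustration of the efficiency gain may be useful here (my own example, not taken from the tutorial): if $n$ individuals are indistinguishable and each contributes the same factor value $\phi(x)$, a propositional model multiplies $n$ identical copies, whereas a lifted computation evaluates the factor once and exponentiates,

$$\prod_{i=1}^{n} \phi(x) \;=\; \phi(x)^{n},$$

so the cost of this step no longer grows linearly with the number of individuals.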


Christophe Gonzales

Title: Decision under uncertainty

Abstract: Decision under uncertainty is pervasive in artificial intelligence. The goal of this tutorial is to review some popular decision models and highlight their connections and the situations in which they can be applied. We will start with the expected utility (EU) model and show the properties it relies on. Then, we will present decision trees, which represent sequential decision problems, and show that the aforementioned properties allow for an efficient algorithm to solve them. Interpreting these trees differently will lead to another, more efficient model called an influence diagram. The EU model is known to have severe limitations and, in some situations, more general decision models are needed. Based on a rephrasing of EU, we will present the more general rank-dependent utility (RDU) model. We will also show the issues RDU raises with respect to sequential decision making. Another path to generalizing EU is to replace probabilities with other models, notably belief functions. We will show that the EU properties presented at the beginning of the talk can also be applied to belief functions, resulting in the belief expected utility (BEU) model. We will conclude the talk by briefly mentioning other popular decision models.
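For readers who want the expected utility model in symbols (a standard textbook formulation, not taken from the tutorial materials): an act $a$ yielding outcome $x_{a,s}$ in state $s$ is evaluated as

$$EU(a) \;=\; \sum_{s \in S} p(s)\, u(x_{a,s}),$$

where $p$ is a probability distribution over the set of states $S$ and $u$ is a utility function over outcomes. RDU departs from this scheme by applying a weighting function to cumulative rather than individual probabilities.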


Anne Laurent

Title: Data Lakes: A New Paradigm for Data Platforms and Current Challenges

Abstract: Databases are well known and have evolved over the last decades to embed more and more decisional visions. Initially proposed as storage systems, such as relational databases mainly meant to guarantee data consistency, they have been extended to other paradigms, from object databases to data warehouses, and have also been extended to take into account semi-structured data and NoSQL architectures. More recently, data lakes have emerged as a proposition to address the question of managing big volumes of heterogeneous data without precise and targeted analytical goals, as is for instance the case with IoT and/or with the aim of crossing multiple data sources. For this purpose, the architectures and processes have been rethought, resulting in the so-called “schema on read” architectures instead of the existing “schema on write” ones.

This tutorial will introduce the main characteristics of data lakes in order to explore and compare the various data platform paradigms, and to discuss the current challenges associated with data lakes.

Jean-Guy Mailly

Title: On Incompleteness in Abstract Argumentation: Complexity and Expressiveness

Abstract: One of the recent trends in research on abstract argumentation is the study of how incomplete knowledge can be integrated into argumentation frameworks (AFs). In this tutorial, we survey the main results on Incomplete AFs (IAFs), following two directions: how hard is it to reason with IAFs, and what can be expressed with them? We show that two generalizations of IAFs, namely Rich IAFs and Constrained IAFs, despite having higher expressive power than IAFs, have the same complexity regarding classical reasoning tasks.

 

Michael Poss

Title: An introduction to discrete robust optimization

Abstract: Robust optimization (RO) has become a central framework for handling the uncertainty that arises in the parameters of optimization problems. The success of RO stems from its tractability since, unlike stochastic optimization, it does not suffer from the curse of dimensionality. This key property of RO leverages the structure of the uncertainty sets, which are described by a small number of constraints, often linear ones. In this tutorial, we will review these key aspects and cover two fundamental tractability results in discrete robust optimization. We will also illustrate these results on the knapsack problem and on the vehicle routing problem.
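To make the role of structured uncertainty sets concrete, here is the classical budgeted-uncertainty robust knapsack in the style of Bertsimas and Sim (a standard formulation given for orientation; the tutorial's exact examples may differ): item $i$ has profit $p_i$, nominal weight $\bar w_i$ and maximal deviation $\hat w_i$, and at most $\Gamma$ weights may deviate from their nominal values simultaneously:

$$\max_{x \in \{0,1\}^n} \sum_{i=1}^{n} p_i x_i \quad \text{s.t.} \quad \sum_{i=1}^{n} \bar w_i x_i + \max_{\substack{S \subseteq \{1,\dots,n\} \\ |S| \le \Gamma}} \sum_{i \in S} \hat w_i x_i \;\le\; C.$$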

 

Jeremy Rohmer

Title: Dealing with imperfect knowledge in natural hazard assessments: beyond classical probabilities and challenges

Abstract: The distinction between two origins of uncertainty has become standard practice in risk analysis, namely random uncertainty (representing variability) and epistemic uncertainty (related to imperfect knowledge). While the former can be adequately represented using classical probabilities, there is no simple, single answer for the latter. New theories of uncertainty based on "imprecise probabilities" have been developed in recent years to go beyond the systematic use of a single probabilistic law. In this tutorial we analyse their advantages and disadvantages for the assessment of natural hazards (e.g. earthquakes, marine floods, landslides, etc.) in comparison to the traditional probabilistic approach. We discuss the problems that have been solved and the interesting open questions and challenges that remain, in particular how to appropriately support decision making under uncertainty, how to provide guidance for future actions, and how to deal with multiple forms of uncertainty along the assessment chain.
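For context on the term "imprecise probabilities" (a standard definition, added for the reader's convenience): instead of committing to a single probability measure $P$, one works with a set $\mathcal{P}$ of candidate measures and the induced lower and upper probabilities

$$\underline{P}(A) = \inf_{P \in \mathcal{P}} P(A), \qquad \overline{P}(A) = \sup_{P \in \mathcal{P}} P(A),$$

so that the width $\overline{P}(A) - \underline{P}(A)$ quantifies the epistemic uncertainty about an event $A$.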

 

Diedrich Wolter

Title: Faithful Geometric Models for Integrating Learning and Reasoning

Abstract: Knowledge graph embeddings are a direction in AI research that has gained popularity due to its prospect of linking machine learning and concept-level logical reasoning. Linkage can be achieved by identifying geometric structures that a machine learning algorithm can construct and which can then serve as input to a geometrically grounded reasoning procedure. Already, knowledge graph embeddings have proven useful for link prediction, sometimes using geometric operations as simple as vector translations. However, the semantics of geometric models obtained by machine learning are fundamentally different from models in classical logics. This challenges the integration of learning and reasoning since the semantic gap needs to be bridged.

In this presentation I review existing embedding techniques from the perspective of the ontological and logical commitments they (implicitly) make, shedding some light on the semantic gap. I will then detail an approach based on the geometry of cones. Cones, if combined with a certain set of geometric operations, exhibit several interesting features. For example, cones allow us to build faithful geometric models for semi-expressive concept languages that retain the uncertainty present in training data. In light of these findings, I will motivate further investigations of geometric structures for learning, representation, and reasoning.
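For readers unfamiliar with the geometric structure the talk builds on, a brief reminder (the standard definition, not specific to this work): a set $C \subseteq \mathbb{R}^n$ is a convex cone if it is closed under non-negative linear combinations,

$$x, y \in C,\ \lambda, \mu \ge 0 \;\Longrightarrow\; \lambda x + \mu y \in C,$$

and in cone-based embeddings concept subsumption can then be modelled geometrically as cone containment, $C_{\mathrm{sub}} \subseteq C_{\mathrm{super}}$.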

 

Accepted papers:

  • Marvin Lasserre, Régis Lebrun and Pierre-Henri Wuillemin. Learning Non-Parametric Copula Bayesian Networks using Mutual Information

  • Peiqi Sun, Michel Grabisch and Christophe Labreuche. An improvement of Random Node Generator for capacity generator

  • Myriam Bounhas and Henri Prade. Logical proportions-related classification methods beyond analogy

  • Julian Rodemann, Dominik Kreiss, Eyke Hüllermeier and Thomas Augustin. Levelwise Data Disambiguation by Cautious Superset Classification

  • Tommaso Flaminio, Lluis Godo and Sara Ugolini. An approach to inconsistency-tolerant reasoning about probability based on Łukasiewicz logic

  • Fares Grina, Zied Elouedi and Eric Lefevre. Learning from imbalanced data using an evidential undersampling-based ensemble

  • Jonas Philipp Haldimann and Christoph Beierle. Characterizing Multipreference Closure with System W

  • Sébastien Konieczny, Stefano Moretti, Ariane Ravier and Paolo Viappiani. Selecting the most relevant elements from a ranking over sets

  • Loïc Adam and Sébastien Destercke. Identifying and repairing inconsistencies in preference elicitation

  • Nawapon Nakharutai and Sébastien Destercke. Decision making under severe uncertainty on a budget

  • Isabelle Kuhlmann, Anna Gessler, Vivien Laszlo and Matthias Thimm. A Comparison of ASP-Based and SAT-Based Algorithms for the Contension Inconsistency Measure

  • Suryani Lim, Henri Prade and Gilles Richard. Using analogical proportions for explanations

  • Hénoïk Willot, Sébastien Destercke and Khaled Belahcene. Explaining robust classification through prime implicants

  • Leila Amgoud and Vivien Beuselinck. Towards a Principle-based Approach for Case-based Reasoning

  • Didier Dubois and Henri Prade. A capacity-based semantics for inconsistency-tolerant inferences

  • Kai Sauerwald, Gabriele Kern-Isberner, Alexander Becker and Christoph Beierle. From Forgetting Signature Elements to Forgetting Formulas in Epistemic States

  • Esther Anna Corsi, Tommaso Flaminio and Hykel Hosni. Towards a unified view on logics for uncertainty

  • Yassine Hmidy, Agnès Rico and Olivier Strauss. Extending the macsum aggregation to interval-valued inputs

  • Christophe Labreuche. Explanation of Pseudo-Boolean Functions using Cooperative Game Theory and Prime Implicants

  • Sebastian Link, Henri Prade and Gilles Richard. Analogical proportions, multivalued dependencies and explanations

  • Sébastien Destercke, Agnes Rico and Olivier Strauss. Using atomic bounds to get sub-modular approximations

  • Ilyes Jenhani, Ghaith Khlifi and Panagiotis Sidiropoulos. Non-specificity-based Supervised Discretization for Possibilistic Classification

  • Martin Durand, Fanny Pascual and Olivier Spanjaard. A Non-Utilitarian Discrete Choice Model for Preference Aggregation

  • Lydia Castronovo and Giuseppe Sanfilippo. Iterated conditionals, trivalent logics, and conditional random quantities

  • Emanuele Albini, Antonio Rago, Pietro Baroni and Francesca Toni. Defining and Enforcing Descriptive Accuracy in Explanations: the Case of Probabilistic Classifiers

  • Jean-Paul Doignon, Stefano Moretti and Meltem Özturk. Tackling Uncertainty in Coalitional Games
