Keynote speakers

Fosca Giannotti

ISTI-CNR Pisa, Italy

Title of the talk

Explainable Machine Learning for Trustworthy Artificial Intelligence


Abstract of the talk

Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artefacts hidden in the training data, which may lead to unfair or wrong decisions.

The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity, and understanding. Explainable AI addresses such challenges, and for years different AI communities have studied such topics, leading to different definitions, evaluation protocols, motivations, and results.

This lecture provides a reasoned introduction to the work on Explainable AI (XAI) to date and a quick survey of the literature on related approaches from machine learning and symbolic AI. I focus on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML black-box decision systems, introducing our early results on the local-to-global framework as a way towards explainable AI.

Katie Atkinson

University of Liverpool, UK

Title of the talk

Automating Argumentation for Transparently Deciding Legal Cases


Abstract of the talk

Legal systems around the world are struggling to handle the volume of cases waiting to be processed, hampering access to justice for many citizens. This situation opens the door for the increased use of AI tools to support the processing of legal cases. The development of AI tools for legal work is not in itself a new endeavour, and indeed the field of LegalTech has blossomed in recent years to provide automation for a variety of legal tasks. However, trust in the tools produced must be a central feature to ensure that they can be deployed confidently and effectively in practice. In particular, for the task of deciding legal cases, explanations are of central concern for ensuring trust; AI support tools that are being developed to aid legal decision making should be able to explain why a particular outcome of a decision has been reached, and not an alternative outcome, akin to what is expected of a human judge to ensure that sound legal reasoning is being undertaken. In this talk I will review the growing body of literature on the development of AI approaches for predicting outcomes of legal cases. I will then focus in more detail on the use of techniques from the field of computational models of argument for capturing legal reasoning to enable both accurate and explainable outcomes to be produced by decision-support tools that are built on these techniques. The use of the tools will be illustrated through application to real-world legal domains.

Short bio

Katie Atkinson is Professor of Computer Science and Dean of the School of Electrical Engineering, Electronics and Computer Science at the University of Liverpool, UK. Katie is recognised internationally for her research contributions within the field of AI and Law over the past 18 years.  Within this area, her specialism is on computational models of argument for modelling legal reasoning. She has published over one hundred and fifty articles in peer-reviewed conference proceedings and journals, and has also applied her work in a variety of collaborative projects with law firms.  Her current research is focussed on explainable AI for legal applications.  Katie was Program Chair of the fifteenth edition of the International Conference on Artificial Intelligence and Law held in San Diego, USA in 2015 and she served as President of the International Association for Artificial Intelligence and Law (IAAIL) in 2016 and 2017. In 2020 Katie was appointed to serve as a member of the Lawtech UK Panel, a government-backed initiative to help transform the UK legal sector through technology.  Katie is also currently serving on the Computer Science and Informatics sub-panel in the UK Research Excellence Framework (REF) 2021.

Carles Sierra

IIIA of CSIC, Spain

Title of the talk

Value engineering


Abstract of the talk

Ethics in Artificial Intelligence is a wide-ranging field which encompasses many open questions regarding the moral, legal and technical issues that arise with the use and design of ethically-compliant autonomous agents. Under this umbrella, the computational ethics area is concerned with the formulation and codification of ethical principles into software components. In this talk, I will look at a particular problem in computational ethics: the engineering of moral values into autonomous agents. I will present some results in this area and a vision for future research.

Manuela Naveau

Kunstuniversität Linz, Austria

Title of the talk

CRITICAL DATA and the arts – artistic works in a calculated world


Abstract of the talk

A calculated world is based on the life cycle of data: collecting, storing, analyzing, reproducing, sharing and deleting. These activities are initiated by ourselves or by others – but always with the deeply human goal of generating more knowledge about ourselves and sharing it with others. Something has gone wrong in the optimization processes, however: the representation of our world in digital data, a thinking in calculations, algorithms and self-learning models, and a generally accepted trust in technology prevent us from thinking about and understanding our world noetically. When technological developments act so far away from and disconnected from people, when economic optimization processes come before societal values: how can we understand and engage? How can moments of intuition, of gut feeling, arise, and what does intuition mean in relation to artificial intelligence? What is critical data in the context of a healthy society and environmentally friendly strategies, in the context of a time-critical art?
Using very recent artistic works from the Interface Cultures department at the University of Art and Design Linz, Austria, and based on curatorial engagements with international artists in the current exhibition, Manuela Naveau attempts to explore the forms of knowledge / non-knowledge and social engagement in relation to AI.

Short bio
Manuela Naveau, PhD, is an Austrian artist, researcher, scientist and curator at Ars Electronica, where she developed the Ars Electronica Export department together with Artistic Director Gerfried Stocker and led it operationally for almost 18 years. Since 2020, Manuela Naveau has been a university professor for Critical Data at the Interface Cultures Department of the University of Art and Design Linz and has held teaching positions at the Paris Lodron University in Salzburg, the Technical University in Vienna and the Danube University Krems. Her book Crowd and Art – Kunst und Partizipation im Internet (Crowd and Art – Art and Participation in the Internet) was published in 2017 by transcript Verlag, Germany. The book is based on her dissertation, for which she received the Award of Excellence from the Federal Ministry of Science, Research and Economy in 2016.

Julie Bernauer

NVIDIA Corporation, USA

Title of the talk

Building and operating a Top10 supercomputer: efficient at scale performance with SuperPODs


Abstract of the talk

In a competitive datacenter environment, where AI is central and research teams are racing for the best models, fast deployment combined with reliable, efficient operations is key: it provides an invaluable competitive advantage. AI supercomputers allow for efficient versatility, as the platform can be set up in available datacenters with minimal design changes.
In this talk, we will show how an AI supercomputer can be designed for both AI and HPC performance, and built with an efficient modular design that fits into any infrastructure. We will also show how datacenter software and operations can be set up to monitor and interact with the deployment, leading to a fast, efficient and versatile supercomputer ranked high in the Top500, HPL-AI, MLPerf, and Green500 benchmarks.

Short bio
Julie Bernauer is Director for Datacenter Systems Engineering and Applied Systems at NVIDIA Corporation. Her team focuses on several aspects of deep learning systems, including performance, large-scale deep learning, and deployments for hyperscale and cloud services. The team builds and operates high-performance computing and deep learning platforms. One example is the Selene SuperPOD, which was recently ranked 5th in the Top500 list of supercomputers and is amongst the most energy efficient. She joined NVIDIA in 2015 after fifteen years in academia as an expert in machine learning for computational structural biology.

Iris van der Tuin

Utrecht University, The Netherlands

Title of the talk

AI and Art in the Algorithmic Condition: Interdisciplinarity and Procedural Thinking


Abstract of the talk

The geological era we currently live in has been termed the ‘Anthropocene’. Technoculturally, our times can be called the ‘algorithmic condition’ (Colman et al. 2018). Working with networked computers and algorithmic media, we have transgressed the limits of the ‘postmodern condition’ (Lyotard [1979] 1984), as questions of translation into computer languages and of commodified exchange have now been superseded by those concerning automated text generation (not copying), ‘contingent computation’ (Fazi 2018), and the commodification of affect. This talk opens up the terrain between AI, the humanities, and art by using the algorithmic condition as a common ground to start from. Our current-day condition is complex and requires an interdisciplinary approach (cf. Repko & Szostak 2021). Methodologically, interhuman, human-computer, and computer-computer interaction all take shape procedurally, where procedures are step-by-step, i.e., algorithmic, methods (Verhoeff and Van der Tuin 2020, Van der Tuin and Verhoeff 2022). This talk unpacks two artistic projects of the Zambia-born and South Africa-based artist Nolan Oswald Dennis through the above set of concerns: Biko.Dialogues (2020), in which machinically mediated, ‘impossible’ conversations are generated by an algorithm, and No Conciliation is Possible in its 2021 incarnation, which diagrams procedurally across the Anthropocene, the algorithmic condition, and (de)colonization strategies. Both works were exhibited at ARoS, a museum in Aarhus, Denmark, from March to October 2021.

Short bio
Iris van der Tuin is Professor of Theory of Cultural Inquiry in the Department of Philosophy and Religious Studies at Utrecht University, where she is also the university-wide Dean for Interdisciplinary Education. In 2021-22, Iris van der Tuin is a Novo Nordisk Foundation guest professor in the Laboratory for Art Research at The Royal Danish Academy of Fine Arts, Copenhagen, and at Aarhus University. Iris is interested in humanities scholarship that traverses the ‘two cultures’ and reaches beyond the boundaries of academia. As such, she coordinates the special interest group AI in Cultural Inquiry and Art: Thinking and Making in the Algorithmic Condition, a SIG of the focus area Human-Centered Artificial Intelligence at her university. This fall, Critical Concepts for the Creative Humanities, a book Iris co-authored with Nanna Verhoeff, will be published.

FACt speakers

Benoit Macq

Polytechnic School of UCLouvain

Short bio

Benoit Macq is a professor at the Polytechnic School of UCLouvain. He is now the head of the Pixels and Interaction Lab (PILAB) at UCLouvain, which works on artificial intelligence and signal processing applied to images.

Benoit Macq has been a researcher at Philips RLB and a visiting professor at McGill University in Montreal, Ecole Polytechnique Fédérale de Lausanne, MIT, and Telecom Paris Tech.

Benoit Macq was Prorector of UCL from 2009 to 2014, in charge of “Service to Society” and international relations. He was a technological advisor to the Walloon government for the digital transition and co-designed the Digital Wallonia plan. He is a co-founder of 11 spin-off companies and, with Thierry Dutoit (U-Mons), of the TRAIL institute.

Benoit Macq is a Fellow of the IEEE, a Senior Associate Editor of the IEEE Transactions on Image Processing, and a member of the Royal Belgian Academy of Sciences.

Gilles Louppe

University of Liège (Belgium)

Title of the talk

LEGO® Deep Learning


Short bio

Gilles Louppe is an Associate Professor in artificial intelligence and deep learning at the University of Liège (Belgium). Previously, he held positions as a Research Fellow at CERN and as a Postdoctoral Associate at New York University. His research is at the intersection of machine learning, artificial intelligence and the physical sciences. Together with collaborators, he initiated and developed a new generation of simulation-based inference algorithms based on deep learning, with several applications to inference problems from particle physics, astrophysics, astronomy and gravitational wave astronomy.

Christoph Schommer

Université du Luxembourg

Title of the talk

Future Living with AI and IA


Short bio

I studied Artificial Intelligence at the German Research Center for Artificial Intelligence in Saarbrücken before working for 8 years at IBM R&D as an IT architect in worldwide service projects in the field of Business Intelligence. At the same time, I completed my doctorate in medical informatics (summa cum laude) at the Goethe University in Frankfurt/Main, before being appointed associate professor at the University of Luxembourg in 2003. Today, I lead a research group conducting interdisciplinary research using AI and machine learning technologies. I am a scientific reviewer for the Dutch Research Council, Leibniz, Springer, IEEE and more than 100 conferences (IJCAI, AAMAS, ACM, CogSci, ECML, DHH, and others). I regularly organise lecture series and am the author of about 100 scientific papers. I have (co-)supervised 29 PhD students in Luxembourg, Bologna, Turin and London and given about 150 courses at universities in Luxembourg, Frankfurt, Berlin, Potsdam, Beijing and Singapore. I also appear regularly in newspapers, magazines, on radio and television and at schools, and do research in numerous projects with industry.

List of accepted submissions

Regular papers

  • Benjamin Kap, Marharyta Aleksandrova and Thomas Engel. The Effect of Noise Level on Causal Identification with Additive Noise Models
  • Tycho Atsma, Koen van der Zwet and Tom M. van Engers. The effect of group roles on the development of online vaccination Twitter communities
  • Johannes Scholtes, Giorgia Nidia Carranza Tejada and Gerasimos Spanakis. An analysis of BERT negation handling in sentiment analysis
  • Gaoyuan Liu, Joris De Winter, Bram Vanderborght, Ann Nowé and Denis Steckelmacher. MoveRL: To A Safer Robotic Reinforcement Learning Environment
  • Emmanuel Kieffer, Frédéric Pinel, Thomas Meyer, Georges Gloukoviezoff, Hakan Lucius and Pascal Bouvry. Proximal Policy Optimisation for a Private Equity Recommitment System
  • Ramon Petri, Eugenio Bargiacchi, Huib Aldewereld and Diederik M. Roijers. Heuristic Coordination in Cooperative Multi-Agent Reinforcement Learning
  • Pieter Floris Jacobs, Gideon Maillette de Buy Wenniger, Marco Wiering and Lambert Schomaker. Active learning for reducing labeling effort in text classification tasks
  • Abdolrahman Khoshrou and Eric J. Pauwels. Matrix Completion using Regularised Matrix Factorisation
  • Martijn Oldenhof, Adam Arany, Yves Moreau and Jaak Simm. Self-Labeling of Fully Mediating Representations by Graph Alignment
  • Xander Vankwikelberge, Bo Kang, Edith Heiter and Jefrey Lijffijt. ExClus: Explainable Clustering on Low-dimensional Data Representations
  • Aras Yurtman, Wannes Meert and Hendrik Blockeel. COBRAS+: Reusing Previously Obtained Constraints in Active Semi-Supervised Clustering
  • Nina Hosseini Kivanani, Roberto Gretter, Marco Matassoni and Giuseppe Daniele Falavigna. Experiments of ASR-based mispronunciation detection for children and adult English learners
  • Bram De Cooman, Johan Suykens and Andreas Ortseifen. Improving temporal smoothness of deterministic reinforcement learning policies with continuous actions
  • Jonas Bei, David Pomerenke, Lukas Schreiner, Sepideh Sharbaf, Pieter Collins and Nico Roos. Explainable AI through the Learning of Arguments
  • Paweł Maka, Jelle Jansen, Theodor Antoniou, Thomas Bahne, Kevin Müller, Can Türktas, Nico Roos and Kurt Driessens. Combining Mental Models with Neural Networks
  • Bart Bogaerts, Maxime Jakubowski and Jan Van den Bussche. SHACL: A Description Logic in Disguise
  • André Mertens and Stylianos Asteriadis. Explainable and Interpretable Features of Emotion in Human Body Expressions
  • Mariia Pliusnova and Alexia Briassouli. Deep Learning Techniques for Detection and Diagnosis of Brain Metastases
  • Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann and Walter Daelemans. ConveRT for FAQ Answering
  • Nele Albers, Miguel Suau and Frans A. Oliehoek. Using Bisimulation Metrics to Analyze and Evaluate Latent State Representations
  • Elizaveta Nekrasova, Tibor Neugebauer, Te Bao and Yohanes Eko Riyanto. Algorithmic Trading in Experimental Markets with Human Traders: A Literature Survey
  • Simon Vandevelde and Joost Vennekens. ProbLife: a Probabilistic Game of Life
  • Miroslav Kárný and Daniel Karlík. Trust Estimation in Forecasting-Based Knowledge Fusion
  • Vinu Ellampallil Venugopal and Sreenivasa Kumar P. Verbalizing but not just Verbatim Translations of Ontology Axioms
  • Simona Capponi, Andrew I. Cooper, John Fearnley and Vladimir Gusev. Simple and Fast Methods for Integrating Predicted Data into Bayesian Optimization
  • Yu Liuwen, Mirko Zichichi, Réka Markovich and Amro Najjar. Argumentation in Trust Services within a Blockchain Environment
  • Rachele Carli. Social robotics and deception: beyond the ethical approach
  • Zhao Yang, Mike Preuss and Aske Plaat. Transfer Learning and Curriculum Learning in Sokoban
  • Zhao Yang, Mike Preuss and Aske Plaat. Potential-based Reward Shaping in Sokoban
  • Timo Kats, Peter van der Putten and Jasper Schelling. Distinguishing Commercial from Editorial Content in News
  • Jianing Wang, Matthias Müller-Brockhausen and Aske Plaat. Accelerating Multi-Agent Learning via Centralized Counting and Efficient Hashing
  • Nicky Lenaers and Martijn Van Otterlo. Regular Decision Processes for Grid Worlds
  • Victoria Bosch, Arne Diehl, Daphne Smits, Akke Toeter and Johan Kwisthout. Implementation of a Distributed Minimum Dominating Set Approximation Algorithm in a Spiking Neural Network
  • François Robinet and Raphaël Frank. Refining Weakly-Supervised Free Space Estimation through Data Augmentation and Recursive Training
  • Mattias Billast, Tom De Schepper, Kevin Mets, Peter Hellinckx, José Oramas and Steven Latré. Object detection with semi-supervised adversarial domain adaptation for real-time edge devices
  • Akash Singh, Kevin Mets, Tom De Schepper, Peter Hellinckx, José Oramas and Steven Latré. Task Independent Capsule-based Agents for Deep Q-Learning
  • Augustijn de Boer, Ron Hommelsheim and David Leeftink. A Bayesian Framework for Evaluating Evolutionary Art
  • Ouren Kuiper, Martin van den Berg, Joost van der Burgt and Stefan Leijnen. Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities
  • Niels Rouws, Svitlana Vakulenko and Sophia Katrenko. Dutch SQuAD and Ensemble Learning for Question Answering from Labour Agreements

Encore abstracts

  • Sudhanshu Chouhan, Anna Wilbik and Remco Dijkman. A Real-Time Method to Detect Temporal Anomalies in Event Log Data
  • Oliver Urs Lenz, Daniel Peralta and Chris Cornelis. Average Localised Proximity: A new data descriptor with good default one-class classification performance
  • Marjolein Deryck, Nuno Comenda, Bart Coppens and Joost Vennekens. Combining Logic and Natural Language Processing to Support Investment Management
  • Anna Wilbik and Paul Grefen. Towards a Federated Fuzzy Learning System
  • Pieter Delobelle, Thomas Winters and Bettina Berendt. RobBERT: a Dutch RoBERTa-based Language Model
  • Gonzalo Nápoles, Agnieszka Jastrzebska and Yamisleydi Salgueiro. A Note on Pattern Classification with Evolving Long-term Cognitive Networks
  • Azqa Nadeem, Sicco Verwer, Stephen Moskal and Shanchieh Jay Yang. SAGE: Intrusion Alert-driven Attack Graph Extractor
  • Hans van Ditmarsch, Malvin Gattinger and Rahim Ramezanian. Everyone knows that everyone knows (abstract)
  • Felipe Kenji Nakano, Konstantinos Pliakos and Celine Vens. Deep tree-ensembles for multi-output prediction
  • Leandra Fichtel, Jan-Christoph Kalo and Wolf-Tilo Balke. Prompt Tuning or Fine-Tuning - Investigating Relational Knowledge in Pre-Trained Language Models
  • Yihe Dong, Jean-Baptiste Cordonnier and Andreas Loukas. Attention is not all you need: pure attention loses rank doubly exponentially with depth
  • Isel Grau, Ann Nowé and Wim Vranken. Encore Abstract: Interpreting a Black-Box Predictor to Gain Insights into Early Folding Mechanisms
  • Kylian Van Dessel, Jo Devriendt and Joost Vennekens. FOLASP: FO(.) as Input Language for Answer Set Solvers
  • Victor Contreras, Reyhan Aydogan, Amro Najjar and Davide Calvaresi. On Explainable Negotiations via Argumentation
  • Luisa Ebner, Malte Nalenz, Annette ten Teije, Frank van Harmelen and Thomas Augustin. Expert RuleFit: Complementing Rule Ensembles with Expert Knowledge
  • Anna Lukina, Christian Schilling and Thomas Henzinger. Active Monitoring of Neural Networks
  • V. Javier Traver, Judith Zorío and Luis A. Leiva. A Gaze-Based Measure of Temporal Salience
  • Reza Refaei Afshar, Jason Rhuggenaath, Yingqian Zhang and Uzay Kaymak. Optimizing Reserve Price using Deep Reinforcement Learning and Shaped Reward
  • Yazan Mualla, Igor Tchappi, Timotheus Kampik, Amro Najjar, Davide Calvaresi, Abdeljalil Abbas-Turki, Stéphane Galland and Christophe Nicolle. A Human-Agent Architecture for Explanation Formulation (An extended abstract)
  • Johan Kwisthout. Explainable AI using MAP-independence
  • Eugenio Bargiacchi, Timothy Verstraeten and Diederik M. Roijers. Scalable Multi-Agent Reinforcement Learning with Cooperative Prioritized Sweeping
  • Daniël Vos and Sicco Verwer. Efficient Training of Robust Decision Trees Against Adversarial Examples
  • Zahra Atashgahi, Ghada Sokar, Tim van der Lee, Elena Mocanu, Decebal Constantin Mocanu, Raymond Veldhuis and Mykola Pechenizkiy. Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders (Extended Abstract)
  • Davide Ceolin, Giuseppe Primiero, Jan Wielemaker and Michael Soprano. Assessing the Quality of Online Reviews using Formal Argumentation Theory
  • Neil Yorke-Smith. Agent-Based Simulation of Short-Term Peer-to-Peer Rentals: Evidence from the Amsterdam Housing Market
  • Paulo Roberto de Oliveira da Costa, Yingqian Zhang, Alp Akcay and Uzay Kaymak. Learning 2-opt Local Search from Demonstrations
  • Ghada Sokar, Decebal Constantin Mocanu and Mykola Pechenizkiy. SpaceNet: Make Free Space For Continual Learning (Extended Abstract)
  • Oliver Roesler and Elahe Bagheri. Unsupervised Online Grounding for Social Robots (Extended Abstract)

Posters and demos

  • Hélène Plisnier, Alessandro Fasano and Ann Nowé. Play the Reinforcement Learning Agent
  • Mani Tajaddini, Willem-Paul Brinkman, Annette ten Teije and Mark Neerincx. A Design Pattern Language for Hybrid Intelligent Teams
  • Hélène Plisnier, Denis Steckelmacher and Ann Nowé. Shepherd: Reinforcement Learning as a Service with Distributed Execution
  • Nele Albers, Mark A. Neerincx and Willem-Paul Brinkman. Reinforcement Learning-Based Persuasion by a Conversational Agent for Behavior Change
  • Kristina Kudryavtseva and Sviatlana Hoehn. SafeTraveller – A conversational assistant for BeNeLux travellers
  • Marjolein Deryck, Nuno Comenda, Bart Coppens and Joost Vennekens. Logical Reasoning application with NLP interface to construct the Knowledge Base
  • Imen Chakroun, Tom Vander Aa, Roel Wuyts and Wilfried Verarcht. Using privacy preserving amalgamated machine learning for pedestrian safety in warehouses
  • Dimitra Anastasiou, Anders Ruge, Hoorieh Afkari, Patrick Gratz, Radu Ion, Verginica Barbu Mititelu, Olivier Pedretti, Svetlana Segarceanu and George Suciu. A Machine Translation powered AI Chatbot
  • Isel Grau, Luis Daniel Hernandez, Astrid Sierens, Simeon Michel, Nico Sergeyssels, Vicky Froyen, Catherine Middag and Ann Nowé. Talking to your Data: Interactive and interpretable data mining through a conversational agent
  • Roelant Ossewaarde, Stefan Leijnen and Thijs Van den Berg. An invariants based architecture for combining small and large data sets in neural networks.

Thesis abstracts

  • Wafaa Aljbawi. Automated Diagnostic System of Skin Cancer using Deep Convolutional Neural Networks on Dermoscopic Images
  • Sven van Asseldonk and Itir Onal Ertugrul. Deepfake Video Detection using Deep Convolutional and Hand-Crafted Facial Features with Long Short-Term Memory Network
  • Chris Slewe, Maaike de Boer and Tejaswini Deoskar. Generating common-sense scene graphs using a knowledge base BERT model
  • Martin Toman and Neil Yorke-Smith. Localised Reputation in the Prisoner’s Dilemma
  • Abigail Vella, Frankie Inguanez and Daren Scerri. Remote NO2 emissions assessment during COVID-19 lockdowns
  • Adel Magra, Peter Spreij, Tim Baarslag and Michael Kaisers. Automated Negotiation Under User Preference Uncertainty
  • Astrid Sierens, Isel Grau, Luis Daniel Hernandez, Simeon Michel, Vicky Froyen, Catherine Middag and Ann Nowé. Thesis Abstract: Interactive Subgroup Discovery for the conversational data governance platform “Talking to your Data”
  • Aleksandra Olczyk and Itir Onal Ertugrul. Pain recognition from thermal videos using deep neural networks
  • Domien Hennion, Timothy Verstraeten and Ann Nowé. Safe Fleet-Wide Policy Iteration
  • Lisa Koutsoviti Koumeri and Gonzalo Nápoles. Bias quantification measures based on fuzzy rough sets
  • Gregory Wullaert, Fabian Sanjines, Timothy Verstraeten and Ann Nowé. Learning Deep Coordination Graphs for Multi-Agent Systems
  • Julian Posch, Kurt Driessens and Jacques Verriet. Encoder-Decoder Approaches for Detection and Diagnosis of Anomalies in Machine Control Applications
  • Anna-Maria Angelova, Fernando P. Santos and Sandro Bjelogrlic. Enhancing Reject Inference in Credit Scoring with Selective Semi-Supervised Learning
  • Floris Doolaard and Neil Yorke-Smith. Online Learning of Deeper Variable Ordering Heuristics for Constraint Optimisation Problems
  • Yazan Mualla, Stéphane Galland and Christophe Nicolle. Explaining the Behavior of Remote Robots to Humans (Extended abstract)
  • Pietro Piccini. Identifying strong predictors of engagement in Facebook news posts
  • Songha Ban and Lee-Ling Sharon Ong. Producing “Open-Style” Choreography for K-Pop Music with Deep Learning
  • Valerie S. Sawirja and Peter Bloem. Fine-Tuning Pretrained Language Models for Controlled Text Generation with Adapters
  • Thomas Vaeyens, Youri Coppens, Timothy Verstraeten and Ann Nowé. Explainable Reinforcement Learning for Fleet Applications
  • Matthias Cami, Inês Terrucha, Yara Khaluf and Pieter Simoens. Bayesian Inverse Reinforcement Learning for strategy extraction in the iterated Prisoner’s Dilemma game
  • Michela Venturini and Giulia Barbati. Clinical Predictive Models: