Workshop Event

27th September 2021, 10 AM to 1 PM, virtual event
KI 2021 – 44th German Conference on Artificial Intelligence
Organizers: Barbara Hammer, Malte Schilling, Laurenz Wiskott

Trustworthy AI in the wild

Artificial Intelligence (AI) is entering our lives more and more, with the goal of supporting and helping us in our homes, at our workplaces, and as a society as a whole. While we want to benefit from these technologies, we also want to be able to trust how AI technologies operate and to understand their decisions. The goal of trustworthy AI, therefore, is to offer intelligent methods and agents that, on the one hand, produce robust and adaptive behavior in real-world scenarios and, on the other hand, are transparent in their decision making, being able to justify and explain their decisions.

Trustworthy AI is an increasingly important topic that lies at the intersection of Artificial Intelligence, Machine Learning, robotics, and Human-Machine Interaction. It fits well with the focus of KI 2021 on human-centered AI and explainable machine learning. The breadth of researchers present in Berlin at KI 2021 provides an excellent basis for fruitful discussion and an exchange of viewpoints and current approaches on the topic of Trustworthy AI.

Workshop Schedule

The workshop will be held as a Zoom webinar (Meeting ID: 962 7072 7489, Passcode: 037056) – contact Malte Schilling for the link.
10:00-10:05 Welcome and Introduction
10:05-10:45 Keynote Talk: Isabel Valera “Algorithmic recourse: theory and practice”
10:45-11:00 Invited talk: Ulrike Kuhl “Towards an Empirical Analysis of Counterfactual Explanations for Machine Learning”
11:00-11:45 Keynote Talk: Marc Toussaint “Physical Reasoning: If I only could explain why it doesn’t work”
11:45-12:00 Poster pitches (2 minutes each)

12:00 Poster session, held on wonder.me.

Poster Session – Posters and Extended Abstracts

  1. Raphael Engelhardt, Moritz Lange, Laurenz Wiskott, and Wolfgang Konen: Shedding Light into the Black Box of Reinforcement Learning – Extended Abstract and Poster
  2. Markus Vieth: Predicting Elbow Movement from Electromyography Data – Extended Abstract and Poster
  3. Shamini Koravuna and Ulrich Rückert: Neuro-inspired and resource-efficient Hardware-Architectures for plastic SNNs – Extended Abstract and Poster
  4. Andreas Besginow, Jan Hüwel, Markus Lange-Hegermann, and Christian Beecks: Exploring Methods to Apply Gaussian Processes in Industrial Anomaly Detection – Extended Abstract and Poster
  5. Jasmin Brandt, Elias Schede, Kevin Tierney, and Eyke Hüllermeier: EKAmBa: Realtime Configuration of Algorithms with Multi-armed Bandits – Extended Abstract and Poster
  6. Patrick Kolpaczki, Viktor Benes, and Eyke Hüllermeier: Identifying Top-k Players in Cooperative Games via Shapley Bandits – Extended Abstract and Poster
  7. Mareike Hartmann, Ivana Kruijff-Korbayova, and Daniel Sonntag: Interaction with Explanations in the XAINES Project – Extended Abstract and Poster
  8. Chiara Balestra: Cooperative Game Theory for Unsupervised Feature Selection in Categorical Data – Extended Abstract and Poster

Objectives

AI solutions are starting to have an enormous impact on our lives: they are a key enabler of future digital industry, a potential game-changer for experimentation and discovery in science, and a prevalent technology in everyday services such as internet search or human-machine communication. Moreover, AI is involved in addressing humanity's grand challenges, examples being AI-based environmentally friendly mobility concepts, the augmentation of human capabilities by intelligent assistive systems in an ageing society, or support in developing medical therapies and vaccines.

Yet, the very nature of AI technologies brings a number of novel threats which need to be addressed for trustworthy AI: many machine learning models act as black boxes, which can lead to unexpected behavior, for example, when human and machine perception differ considerably. As models are trained on real-life data, there is the risk that AI models allow unauthorized access to sensitive information contained in the data. Furthermore, data biases (caused by spurious correlations in the data) can be captured by ML models and their predictions, leading to systematic disadvantages for specific individuals or groups (e.g., ethnic groups). The ubiquity of AI in virtually every aspect of life therefore has an enormous impact on the way in which we as a society communicate, work, decide, and interact.

Hence, novel concepts for guaranteeing the security, safety, privacy, and fairness of AI, and for creating AI systems which support humans rather than incapacitating them, are of utmost importance and constitute the research area of Trustworthy AI. Trustworthy AI aims for technologies that not only provide solutions to a previously defined task, but also allow insight into the functioning of the underlying system. Why did the system act in a certain way and not choose a different solution? Which features were important for the decision, and how sure is the system of its choice, i.e., can I trust this decision? The workshop aims, first, at understanding machine-learning-based approaches towards explainable AI solutions. Second, a focus of the workshop is on how we can make AI solutions more trustworthy.
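
As an illustration of the last two questions, here is a minimal sketch of one common way to inspect feature importance and prediction confidence. It assumes a scikit-learn setup; the dataset and model are placeholders chosen only for the example and are not tied to any workshop contribution.

    # Minimal sketch (assumed scikit-learn setup; dataset and model are placeholders).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # "Which features were important for the decision?"
    imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    print("Top features:", imp.importances_mean.argsort()[::-1][:5])

    # "How sure is the system of its choice?"
    print("Class probabilities:", model.predict_proba(X_test[:1]))

Such inspections are only one ingredient of trustworthiness; they indicate which inputs drive a prediction and how confident the model is, but not whether the model is right for the right reasons.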

The workshop will address the following topics:

  • Testable criteria for trustworthy AI technologies
  • How to guarantee the privacy and security of AI technologies?
  • Legal, ethical, and societal implications of AI solutions
  • How can systems be autonomous and safe? In interaction with humans, systems should be guaranteed to act safely and not endanger humans or other agents.
  • Human agency, autonomy, and oversight
  • Technical robustness and safety: reliability, resilience to attacks, reproducibility
  • Privacy and data governance
  • Transparency and explainability
  • Diversity, non-discrimination and fairness
  • Accountability

The goal of the workshop is to discuss existing concepts of trustworthy AI and to provide a platform for formulating new ideas and proposals to overcome existing limitations. We see this as an important topic for future AI and ML research and expect follow-up workshops to establish a forum where future research directions and strategies will be discussed.

The workshop is aimed at interested researchers with a background in machine learning (supervised and unsupervised learning, reinforcement learning), traditional AI techniques (reasoning, planning), robotics (humanoids, probabilistic robotics), HMI/HRI, and multi-agent systems (coordination and cooperation).

Program Overview

Overall, the workshop aims at a multidisciplinary perspective on key aspects and challenges of Trustworthy AI, in particular when AI systems interact with humans. The presentations will therefore reflect the diversity of approaches and topics, and there will be ample time for discussion.

The half-day workshop will consist of invited talks from AI and Machine Learning, and we are open to contributed talks. Overall, we plan five talks in two sessions, with half an hour per talk. The first session will be concluded by short presentations of the posters (poster flashlights).

List of Confirmed Speakers:

  • Prof. Isabel Valera, Department of Computer Science of Saarland University, Saarbrücken: Algorithmic recourse: theory and practice. In this talk I will introduce the concept of algorithmic recourse, which aims to help individuals affected by an unfavorable algorithmic decision to recover from it. First, I will show that while the concept of algorithmic recourse is strongly related to counterfactual explanations, existing methods for the latter do not directly provide practical solutions for algorithmic recourse, as they do not account for the causal mechanisms governing the world (see the sketch after this list). Then, I will present theoretical results that prove the need for complete causal knowledge to guarantee recourse, and show how algorithmic recourse can be used to provide novel fairness definitions that shift the focus from the algorithm to the data distribution. Such a novel definition of fairness allows us to distinguish between situations where unfairness can be better addressed by societal intervention, as opposed to changes to the classifiers. Finally, I will show practical solutions for (fairness in) algorithmic recourse in realistic scenarios where causal knowledge is limited.
  • Prof. Marc Toussaint, Head of the Learning & Intelligent Systems Lab at the EECS Faculty of TU Berlin: Physical Reasoning: If I only could explain why it doesn’t work (see schedule above).
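
To make the distinction between counterfactual explanations and algorithmic recourse from the first keynote concrete, here is a minimal sketch; the linear model, the causal mechanism, and all numbers are invented for illustration and are not taken from the talk.

    # Toy example (hypothetical model and causal mechanism, for illustration only).
    def predict(x1, x2):
        # Hypothetical linear classifier: decision is favorable if the score is positive.
        return 3 * x1 + x2 - 10 > 0

    def causal_mechanism(x1):
        # Assumed structural equation: feature x2 is caused by feature x1.
        return 2 * x1

    x1, x2 = 1.0, 2.0                     # individual with an unfavorable decision
    print(predict(x1, x2))                # False

    # Counterfactual explanation: "had x2 been 8, the decision would have flipped".
    # This ignores that x2 may not be directly actionable by the individual.
    print(predict(x1, 8.0))               # True

    # Algorithmic recourse: intervene on the actionable cause x1; the change
    # propagates to x2 through the causal mechanism.
    x1_new = 2.5
    print(predict(x1_new, causal_mechanism(x1_new)))  # True

The sketch mirrors the point of the abstract: without knowledge of the causal mechanism, the counterfactual suggestion to change x2 directly is not an action the individual can actually take.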

For further information, please contact Malte Schilling (mschilli@techfak.uni-bielefeld.de).