Workshop Event

27th September 2021, morning session, to be held virtually
KI 2021 – 44th German Conference on Artificial Intelligence
Organizers: Barbara Hammer, Malte Schilling, Laurenz Wiskott

Trustworthy AI in the wild

Artificial Intelligence (AI) is entering our lives more and more, with the goal of supporting and helping us as humans in our homes, at our workplace, and as a society. While we want to benefit from these technologies, we also want to be able to trust how AI technologies operate and to understand their decisions. The goal of trustworthy AI, therefore, is to offer intelligent methods and agents that, on the one hand, produce robust and adaptive behavior in real-world scenarios and, on the other hand, are transparent in their decision making, as they are able to justify and explain their decisions.

Trustworthy AI is an increasingly important topic that lies at the intersection of Artificial Intelligence, Machine Learning, robotics, and Human-Machine Interaction. It fits well with the focus of KI 2021 on human-centered AI and explainable machine learning. The breadth of researchers present at KI 2021 provides an excellent basis for fruitful discussion and an exchange of viewpoints and current approaches on the topic of Trustworthy AI.


AI solutions are starting to have an enormous impact on our lives: they are a key enabler of the future digital industry, a potential game-changer for experimentation and discovery in science, and a prevalent technology in everyday services such as internet search or human-machine communication. Moreover, AI is involved in addressing humanity's grand challenges, examples being AI-based environment-friendly mobility concepts, the augmentation of human capabilities by intelligent assistive systems in an ageing society, or support in developing medical therapies and vaccines.

Yet, the very nature of AI technologies includes a number of novel threats which need to be addressed for trustworthy AI: many machine learning models act as black boxes, which can lead to unexpected behavior, for example, when human and machine perception differ considerably. As models are trained on real-life data, there is the risk that such AI models allow for unauthorized access to sensitive information that might be contained in the data. Furthermore, data biases, caused by spurious correlations in the data, can be captured by ML models and their predictions, leading to systematic disadvantages for specific individuals or groups (e.g. ethnic groups). The ubiquity of AI in virtually every aspect of life therefore has an enormous impact on the way in which we as a society communicate, work, decide, and interact.
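One way such systematic disadvantages are made testable is by comparing a model's decision rates across sensitive groups. As a minimal sketch (all data, group labels, and the demographic-parity criterion chosen here are illustrative, not tied to any specific workshop contribution):

```python
# Hypothetical sketch: quantifying one simple fairness criterion
# (demographic parity) on a model's predictions. The predictions and
# group labels below are made-up toy data.

def positive_rate(predictions, groups, group):
    """Fraction of favorable (1) decisions within one group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

# Toy predictions (1 = favorable decision) and sensitive-group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = abs(positive_rate(preds, groups, "A") - positive_rate(preds, groups, "B"))
print(f"demographic parity gap: {gap:.2f}")  # a large gap hints at systematic disadvantage
```

A gap near zero means both groups receive favorable decisions at similar rates; demographic parity is only one of several competing fairness definitions discussed in the literature.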

Hence, novel concepts on how to guarantee the security, safety, privacy, and fairness of AI, and how to create AI systems which support humans rather than incapacitating them, are of utmost importance and constitute the research area of Trustworthy AI. Trustworthy AI aims for technologies that not only provide solutions to a previously defined task, but that also allow for insight into the functioning of the underlying system. Why did the system act in a certain way and not choose a different solution? Which features were important for the decision, and how sure is the system of its choice, i.e. can I trust this decision? The workshop aims, first, at understanding Machine Learning based approaches towards explainable AI solutions. Secondly, a focus of the workshop is on how we can make AI solutions more trustworthy.
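The two questions above, which features mattered and how confident the system is, can be made concrete even for a very simple model. A minimal sketch, using a hand-written linear scorer as a stand-in for a trained classifier (the weights and input are invented for illustration):

```python
# Hypothetical sketch of two basic trust questions: which features were
# important, and how sure is the system? The "model" is a made-up linear
# scorer with fixed weights, not a real trained classifier.

import math

WEIGHTS = [2.0, 0.1, -1.5]  # assumed feature weights of the toy model

def predict_proba(x):
    """Confidence that x belongs to the positive class (logistic score)."""
    score = sum(w * v for w, v in zip(WEIGHTS, x))
    return 1.0 / (1.0 + math.exp(-score))

def feature_contributions(x):
    """Per-feature contribution to the score -- a simple 'why' explanation."""
    return [w * v for w, v in zip(WEIGHTS, x)]

x = [1.0, 3.0, 0.5]
print("confidence:", round(predict_proba(x), 3))
print("contributions:", feature_contributions(x))
```

For non-linear black-box models, the same questions require dedicated explanation methods (e.g. feature-attribution or surrogate-model techniques), which is exactly where explainable AI research comes in.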

The workshop will address the following topics:

  • Testable criteria for trustworthy AI technologies
  • How to guarantee the privacy and security of AI technologies?
  • Legal, ethical, and societal implications of AI solutions
  • How can systems be autonomous and safe? In interaction with humans, systems should be guaranteed to act safely and not endanger humans or other agents.
  • Human agency, autonomy, and oversight
  • Technical robustness and safety: reliability, resilience to attacks, reproducibility
  • Privacy and data governance
  • Transparency and explainability
  • Diversity, non-discrimination and fairness
  • Accountability

The goal of the workshop is to discuss existing concepts of trustworthy AI and provide a platform for the formulation of new ideas and proposals to overcome existing limitations. We see this as an important topic for AI and ML research in the future and expect continued workshops on this topic to build a platform where future research directions and strategies will be discussed.

The workshop aims at interested researchers with a background in machine learning (supervised and unsupervised learning, reinforcement learning), traditional AI techniques (reasoning, planning), robotics (humanoids, probabilistic robotics), HMI/HRI, and multi-agent systems (coordination and cooperation).

Overview Program

Overall, the workshop aims at a multidisciplinary perspective on key aspects and challenges of Trustworthy AI, in particular when AI is in interaction with humans. Therefore, the presentations will reflect the diversity of approaches and topics, and there will be ample time for discussion.

The half-day workshop will consist of invited talks from AI and Machine Learning, and we are open to contributed talks. Overall, we plan two sessions of up to three talks (each half an hour). The first session will be concluded by short presentations for the posters (poster flashlights).

List of Confirmed Speakers:

  • Prof. Isabel Valera, Department of Computer Science, Saarland University, Saarbrücken: Algorithmic recourse: theory and practice. In this talk I will introduce the concept of algorithmic recourse, which aims to help individuals affected by an unfavorable algorithmic decision to recover from it. First, I will show that while the concept of algorithmic recourse is strongly related to counterfactual explanations, existing methods for the latter do not directly provide practical solutions for algorithmic recourse, as they do not account for the causal mechanisms governing the world. Then, I will show theoretical results that prove the need for complete causal knowledge to guarantee recourse, and show how algorithmic recourse can be useful to provide novel fairness definitions that shift the focus from the algorithm to the data distribution. Such a novel definition of fairness allows us to distinguish between situations where unfairness can be better addressed by societal intervention, as opposed to changes to the classifier. Finally, I will show practical solutions for (fairness in) algorithmic recourse in realistic scenarios where causal knowledge is only limited.
  • Prof. Marc Toussaint, Head of the Learning & Intelligent Systems Lab at the EECS Faculty of TU Berlin: t.b.a.
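The recourse idea from the first abstract can be illustrated with a toy example: for an individual rejected by a simple scoring model, search for a small feature change that would flip the decision. Everything below (the weights, threshold, features, and brute-force search) is an invented illustration; real recourse methods, as the abstract notes, must additionally respect the causal relations between features.

```python
# Hypothetical sketch of algorithmic recourse: given a made-up linear
# credit scorer and a rejected applicant, find the smallest single-feature
# change that flips the decision -- a counterfactual explanation.
# A full recourse method would also model causal dependencies between features.

WEIGHTS = {"income": 2, "debt": -1}
THRESHOLD = 3

def accepted(person):
    """Toy decision rule: weighted feature sum must reach the threshold."""
    return sum(WEIGHTS[k] * v for k, v in person.items()) >= THRESHOLD

def cheapest_flip(person, steps=20, step_size=1):
    """Brute-force search over growing single-feature changes;
    returns the first (hence smallest) change that flips the decision."""
    for n in range(1, steps + 1):
        for feature in WEIGHTS:
            for sign in (+1, -1):
                candidate = dict(person)
                candidate[feature] += sign * n * step_size
                if accepted(candidate):
                    return candidate
    return None

applicant = {"income": 1, "debt": 1}  # score 2 * 1 - 1 = 1 -> rejected
print(cheapest_flip(applicant))       # a minimally changed, accepted profile
```

The returned profile is the counterfactual ("had your income been higher by one unit, you would have been accepted"); recourse asks whether and how the individual can actually realize such a change.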


Participants are invited to submit a contribution (via email: ) as an extended abstract (maximum 2 pages in length). Contributions will be reviewed and selected by the organizers. Contributions can be made in three categories:

  • for contributed talks (of around 15 minutes plus discussion);
  • for poster presentation (using a spatial chat platform for our virtual poster session);
  • for presentation of proposed or planned projects or starting initiatives in the area of trustworthy AI (this will be part of the poster session, in which an initiative can get one or multiple posters presenting their plans and goals in order to invite discussion).

The workshop contributions will appear as online proceedings on the workshop webpage. We want to give researchers a chance to present their (ongoing or planned) work. But we also want to provide a forum for relevant work that has recently been published in journals and other conferences.

Submission deadline (extended): September 10th, 2021

For further information, please contact Malte Schilling ( ).