Making Artificial Intelligence Explainable
You have surely admired someone's special abilities before: the intuition of an athlete who can "read" the game, a technician who instantly pinpoints a fault, or a socially skilled person who always finds the right words. Asked why they decided the way they did, such people often answer: "I don't know, I just had a feeling." Even these skilled individuals sometimes cannot fully explain their own decisions.
As artificial intelligence (AI) approaches human abilities, it often suffers from the same lack of explainability. Naturally, we want to understand the reasons behind an AI's decisions: this builds trust and helps to identify and fix problems.
Our research follows two approaches. First, we want to explore which data features the AI uses to accomplish its tasks. Everyone knows from daily life how important the "right" features are for solving a problem: many problems become easier when viewed from a different perspective, or when relevant information is separated from irrelevant details. Moritz Lange uses various techniques to investigate what kind of representation the AI builds from the wealth of available information. If the representations the AI chooses seem plausible to us, because we consider similar information important, this provides a form of validation.
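As a purely illustrative sketch (not the project's actual method), the following example probes how many features high-dimensional observations really carry. The data, the dimensions, and the use of principal component analysis are all assumptions made for this toy example: observations are generated from a two-dimensional "relevant" signal plus noise, and PCA recovers that only two components matter.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy data: 20-dimensional observations whose task-relevant
# variation actually lives on a 2-dimensional subspace.
latent = rng.normal(size=(500, 2))                 # the "relevant" features
mixing = rng.normal(size=(2, 20))                  # how they appear in the data
observations = latent @ mixing + 0.05 * rng.normal(size=(500, 20))

# A simple linear analysis reveals how compact a representation
# the data admits: nearly all variance falls on two components.
explained = PCA().fit(observations).explained_variance_ratio_
print(explained[:3])
```

In this toy setting the first two components explain almost all of the variance, mirroring the idea that a learned representation can discard irrelevant detail while keeping what matters for the task.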
Second, we try to meet the desire for clear, simple rules. Raphael Engelhardt researches methods to distill the AI's complex decision-making into simple rules. He uses techniques that record the conditions under which the AI reaches particular conclusions. From these "logs," decision trees (a series of if-then rules) are built. It has been shown that these rule sets, while much simpler, can be just as effective in decision-making. They can be used to understand the AI's "thought process" or even to replace it.
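The idea above can be sketched in a few lines. This is a minimal, hypothetical example, not the project's actual pipeline: a hand-coded policy stands in for a trained RL agent, its decisions are "logged" as (state, action) samples, and a shallow scikit-learn decision tree is fitted to them and printed as if-then rules. The feature names and the threshold rule are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stand-in for a trained RL agent: a policy mapping two state
# features (think pole angle and angular velocity) to an action.
def oracle_policy(states):
    return (states[:, 0] + 0.5 * states[:, 1] > 0).astype(int)

# "Log" the agent's behaviour: sample states, record chosen actions.
states = rng.uniform(-1, 1, size=(1000, 2))
actions = oracle_policy(states)

# Distill the logged behaviour into a shallow, human-readable tree.
tree = DecisionTreeClassifier(max_depth=3).fit(states, actions)

# Fidelity: how often the simple rule set matches the original policy.
fidelity = tree.score(states, actions)
rules = export_text(tree, feature_names=["angle", "angular_velocity"])
print(f"fidelity: {fidelity:.2f}")
print(rules)
```

The printed rules ("if angle <= ... then action 0") are exactly the kind of simple, inspectable surrogate described above, and because the tree is itself a policy, it could also act in place of the original agent.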
Additional resources
TBA
Cooperation
Project Publications
- Engelhardt, Raphael C. (2023). Finding the Relevant Samples for Decision Trees in Reinforcement Learning – Dataninja Spring-School. en-US. Poster Session. Abstract submitted: 13.04.2023; accepted: 21.04.2023. Presented at the DataNinja Spring School 2023 Poster Session: 09.05.2023. Published online: 20.06.2023. Bielefeld, Germany. url: https://dataninja.nrw/?page_id=1251 (visited on 07/01/2023).
- Engelhardt, Raphael C., Moritz Lange, Laurenz Wiskott, and Wolfgang Konen (2021). ‘‘Shedding Light into the Black Box of Reinforcement Learning (poster)’’. In: KI 2021 – 44th German Conference on Artificial Intelligence. Workshop on Trustworthy AI in the Wild (Sept. 27, 2021). eprint: https://dataninja.nrw/wp-content/uploads/2021/09/1_Engelhardt_SheddingLight.pdf. url: https://dataninja.nrw/?page_id=343.
- Engelhardt, Raphael C., Moritz Lange, Laurenz Wiskott, and Wolfgang Konen (2023). ‘‘Sample-Based Rule Extraction for Explainable Reinforcement Learning’’. en. In: Machine Learning, Optimization, and Data Science. Ed. by Giuseppe Nicosia, Varun Ojha, Emanuele La Malfa, Gabriele La Malfa, Panos Pardalos, Giuseppe Di Fatta, Giovanni Giuffrida, and Renato Umeton. Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, pp. 330–345. isbn: 978-3-031-25599-1. doi: 10.1007/978-3-031-25599-1_25.
- Engelhardt, Raphael C., Marc Oedingen, Moritz Lange, Laurenz Wiskott, and Wolfgang Konen (2023). ‘‘Iterative Oblique Decision Trees Deliver Explainable RL Models’’. en. In: Algorithms 16.6. Publisher: Multidisciplinary Digital Publishing Institute, p. 282. issn: 1999-4893. doi: 10.3390/a16060282. url: https://www.mdpi.com/1999-4893/16/6/282 (visited on 07/01/2023).
- Engelhardt, Raphael C., Ralitsa Raycheva, Moritz Lange, Laurenz Wiskott, and Wolfgang Konen (2024). ‘‘Ökolopoly: Case Study on Large Action Spaces in Reinforcement Learning’’. en. In: Machine Learning, Optimization, and Data Science. Ed. by Giuseppe Nicosia, Varun Ojha, Emanuele La Malfa, Gabriele La Malfa, Panos Pardalos, and Renato Umeton. Lecture Notes in Computer Science. To be published. Cham: Springer Nature Switzerland.
- Lange, Moritz, Noah Krystiniak, Raphael Engelhardt, Wolfgang Konen, and Laurenz Wiskott (2022). ‘‘Comparing Auxiliary Tasks for Learning Representations for Reinforcement Learning’’. In: url: https://openreview.net/forum?id=7Kf5_7-b7q.
- Lange, Moritz, Noah Krystiniak, Raphael C. Engelhardt, Wolfgang Konen, and Laurenz Wiskott (2024). ‘‘Improving Reinforcement Learning Efficiency with Auxiliary Tasks in Non-Visual Environments: A Comparison’’. en. In: Machine Learning, Optimization, and Data Science. Ed. by Giuseppe Nicosia, Varun Ojha, Emanuele La Malfa, Gabriele La Malfa, Panos Pardalos, and Renato Umeton. Lecture Notes in Computer Science. To be published. Cham: Springer Nature Switzerland.
- Melnik, Andrew, Robin Schiewer, Moritz Lange, Andrei Ioan Muresanu, Mozhgan Saeidi, Animesh Garg, and Helge Ritter (2023). ‘‘Benchmarks for Physical Reasoning AI’’. In: Transactions on Machine Learning Research. Survey Certification. issn: 2835-8856. url: https://openreview.net/forum?id=cHroS8VIyN.
- Schüler, Merlin and Moritz Lange (2022). GitHub Repository sklearn-sfa. https://github.com/wiskott-lab/sklearn-sfa.