Goal

While reinforcement learning methods are steadily gaining popularity and relevance, their complex models often behave non-transparently, and their usefulness is tied to narrowly defined tasks. The (RL)³ project aims to improve both interpretability and transferability in reinforcement learning by using understandable data representations as well as rule-based simplifications of neural networks.

[Figure: rl3_illustration]
Caption: In (RL)³, representation, reinforcement, and rule learning should complement each other. While representation learning allows for a low-dimensional input space that facilitates reinforcement learning, rule learning will provide insight into the causal relationships uncovered and into the operation of the trained system. Illustration: Christoph J Kellner, Studio Animanova

Project Overview

Reinforcement learning is an approach to AI in which an agent learns to dynamically interact with its environment to achieve a certain goal. These agents, which are proving impressively successful across applications from playing games to controlling industrial processes, are commonly based on deep neural networks. Despite their usefulness, however, neural networks are notoriously black-box models: their complexity makes them hard to understand and their decisions hard to reason about. In addition, the resulting complex reinforcement learning algorithms end up heavily tailored to specific tasks. (RL)³ will improve the interpretability and transferability of these algorithms to achieve reinforcement learning approaches that are more understandable and more predictable, and thus ultimately safer and easier to apply.
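To make this interaction loop concrete, the following is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms. The toy chain environment, reward, and hyperparameters are illustrative assumptions, not part of the project:

```python
import random

N_STATES, GOAL = 6, 5            # chain of states 0..5; reaching state 5 is the goal
ACTIONS = [-1, +1]               # move left or right along the chain
alpha, gamma, eps = 0.1, 0.95, 0.1

# Q-table: estimated return for taking action a in state s
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    # break ties randomly so the untrained agent still explores the chain
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(500):
    s = 0
    for t in range(100):                       # cap episode length
        # epsilon-greedy: mostly exploit the Q-table, sometimes explore
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)  # deterministic toy dynamics
        r, done = (1.0, True) if s2 == GOAL else (0.0, False)
        # temporal-difference update toward the bootstrapped target
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if done:
            break

print([greedy(s) for s in range(GOAL)])  # learned policy for states 0..4: move right (+1)
```

Deep reinforcement learning replaces this table with a neural network that generalizes across states, which is exactly where the black-box problem described above arises.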

Creating understandable data representations and formulating decision processes as simple rules are among the most effective approaches to achieving interpretability. Our representation learning research will focus on unsupervised techniques that learn interpretable data representations independently of specific tasks. Our rule-learning research will simultaneously investigate methods for transforming complex, high-performing reinforcement learning models into simple rules that domain experts can interpret and modify. The developed approaches will be tested on games and in industrial applications [1–4].
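One unsupervised technique in this spirit is slow feature analysis, the subject of references 1 and 4, which extracts features that vary slowly over time and therefore tend to correspond to meaningful, low-dimensional quantities. Below is a minimal linear SFA sketch; the two-channel toy signal and all parameter choices are assumptions made purely for illustration:

```python
import numpy as np

def linear_sfa(X, n_features=1):
    """Linear slow feature analysis: find unit-variance projections of X
    whose temporal derivative has minimal variance (the slowness principle)."""
    X = X - X.mean(axis=0)
    # whiten the input so that every projection has unit variance
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    W = eigvec / np.sqrt(eigval)               # whitening matrix
    Z = X @ W
    # among whitened directions, pick those whose discrete time
    # derivative has the smallest variance, i.e. the slowest dynamics
    dval, dvec = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    return W @ dvec[:, :n_features]            # eigh sorts ascending: slowest first

# Toy demo: recover a slow sine hidden in two fast, mixed channels
t = np.linspace(0, 2 * np.pi, 1000)
slow, fast = np.sin(t), np.sin(29 * t)
X = np.stack([slow + fast, slow - fast], axis=1)
P = linear_sfa(X, n_features=1)
y = (X - X.mean(axis=0)) @ P                   # extracted slow feature
print(abs(np.corrcoef(y[:, 0], slow)[0, 1]))   # close to 1.0 (up to sign)
```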

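The rule-learning direction can be illustrated by policy distillation: sample states, query a trained agent for its action, and fit a depth-limited decision tree whose branches read as if-then rules. The sketch below shows this generic idea rather than the project's actual method; teacher_policy is a hypothetical stand-in for a trained black-box agent:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def teacher_policy(state):
    # Hypothetical stand-in for a trained black-box agent, e.g. the
    # greedy policy of a deep Q-network on a 2D state.
    pos, vel = state
    return int(vel > -0.5 * pos)

# Query the black box on sampled states (a behavioral-cloning data set)
states = np.random.uniform(-1, 1, size=(5000, 2))
actions = np.array([teacher_policy(s) for s in states])

# Distill into a shallow tree; each branch is a readable if-then rule
# that a domain expert can inspect and, if necessary, edit
tree = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print("fidelity:", tree.score(states, actions))        # agreement with the teacher
print(export_text(tree, feature_names=["pos", "vel"])) # the extracted rules
```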


References

    1. Escalante-B. AN, Wiskott L. Improved graph-based SFA: information preservation complements the slowness principle. Machine Learning. 2019;109:999-1037.
    2. Bagheri S, Thill M, Koch P, Konen W. Online Adaptable Learning Rates for the Game Connect-4. IEEE Transactions on Computational Intelligence and AI in Games. 2016;8:33-42. doi:10.1109/TCIAIG.2014.2367105
    3. Konen W, Bagheri S. Reinforcement Learning for N-Player Games: The Importance of Final Adaptation. In: Vasile M, Filipic B, eds. 9th International Conference on Bioinspired Optimisation Methods and Their Applications (BIOMA); 2020. http://www.gm.fh-koeln.de/ciopwebpub/Konen20b.d/bioma20-TDNTuple.pdf
    4. Legenstein R, Wilbert N, Wiskott L. Reinforcement Learning on Slow Features of High-Dimensional Input Streams. PLOS Computational Biology. 2010;6:1-13. doi:10.1371/journal.pcbi.1000894