
Interpretable Representations and Neuro-symbolic Methods in Deep Learning
Speaker: Jan Stühmer

Abstract:

Current state-of-the-art machine learning methods impress with their capabilities for prediction and classification and, in the case of large language models, even with solving complex analytical tasks. However, from the outside these methods often appear as a “black box”, which makes it hard to understand how a result was achieved. In this talk, I will discuss several approaches to interpretability in machine learning. First, I will describe a method for representation learning that leads to an interpretable latent representation. Second, I will present our work at the interface between symbolic and sub-symbolic representations, so-called neuro-symbolic methods, which enable a direct interpretation of a model’s intermediate output. The talk concludes with a discussion of the relationship between interpretability and causality.

Date: Mon, Jul 17

Time: 12:15

Place: SR C