Security of XAI

Prof. Dr. Christian Wressnegger

Learning-based systems successfully assist in various computer security challenges, such as network intrusion prevention, reverse engineering, vulnerability discovery, and malware detection. However, modern (deep) learning methods often lack understandable reasoning in their decision processes, making crucial decisions less trustworthy. Recent advances in explainable machine learning (XAI) have turned the tables, enabling precise attribution of input-feature relevance even for otherwise opaque models. This progress has raised expectations that such techniques can also strengthen the defence against attacks on computer systems and even on machine learning models themselves. This talk explores the prospects and limits of XAI in computer security.