Safety of Neural Networks

This area focuses on the safety of Neural Networks (NNs). We are interested in two directions: verification and runtime monitoring.

Verification of NNs is crucial given their increasing use in safety-critical applications. However, due to their sheer size, verification faces severe scalability issues. To address this, we provide abstraction frameworks that reduce the size of an NN while preserving guarantees (DeepAbstract, LiNNA).
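
To illustrate the basic idea behind such abstractions, the following sketch merges neurons of one layer whose activation behaviour on sample inputs is similar into a single representative. It is a simplified illustration under our own assumptions, not the actual DeepAbstract or LiNNA implementation; the function name merge_layer and the choice of k-means clustering are for exposition only.

```python
# Simplified sketch of activation-based neuron merging: neurons with similar
# activation vectors on a sample set are grouped, and each group is replaced
# by one representative neuron. Not the actual DeepAbstract/LiNNA algorithm.
import numpy as np
from sklearn.cluster import KMeans

def merge_layer(W_in, b, W_out, activations, n_groups):
    """Abstract one hidden layer.

    W_in:        (n_neurons, n_prev)    incoming weights
    b:           (n_neurons,)           biases
    W_out:       (n_next, n_neurons)    outgoing weights
    activations: (n_samples, n_neurons) activations on sample inputs
    n_groups:    number of neurons after merging
    """
    # Cluster neurons by their activation vectors over the sample inputs.
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(activations.T)

    W_in_new = np.zeros((n_groups, W_in.shape[1]))
    b_new = np.zeros(n_groups)
    W_out_new = np.zeros((W_out.shape[0], n_groups))
    for g in range(n_groups):
        members = np.where(labels == g)[0]
        # Representative neuron: average of the group's incoming weights/bias.
        W_in_new[g] = W_in[members].mean(axis=0)
        b_new[g] = b[members].mean()
        # Sum outgoing weights so the next layer receives a comparable signal.
        W_out_new[:, g] = W_out[:, members].sum(axis=1)
    return W_in_new, b_new, W_out_new
```

The smaller network obtained this way can then be analyzed instead of the original one; the published approaches additionally quantify and refine the error introduced by the merging so that guarantees carry over.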

For monitoring, we mainly follow two directions. Firstly, we are interested in understanding the influence of neuron activation values on the output and in using this information for monitoring. We aim to extend this work to tasks more challenging than classification, e.g. object detection (paper). Secondly, we investigate logics as descriptions of interesting properties for monitoring and develop the required monitoring techniques.
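
The following sketch shows one common way activation values can drive a runtime monitor: record which binary activation patterns of a chosen layer occur per class during training, and flag inputs whose pattern was never observed. This is a minimal illustration of the general idea, not the monitors from our publications; the class name ActivationPatternMonitor and its methods are hypothetical.

```python
# Minimal activation-pattern monitor (illustrative only): remembers which
# binary on/off patterns of a layer were seen per class at training time and
# rejects predictions whose pattern is novel at runtime.
import numpy as np

class ActivationPatternMonitor:
    def __init__(self):
        self.known_patterns = {}  # class label -> set of observed patterns

    def _pattern(self, activations):
        # Binarize the layer: which neurons are active (> 0) for this input.
        return tuple((np.asarray(activations) > 0).astype(int))

    def record(self, activations, label):
        # Called on training data with the ground-truth label.
        self.known_patterns.setdefault(label, set()).add(self._pattern(activations))

    def accept(self, activations, predicted_label):
        # Accept the prediction only if this pattern was seen for that class.
        return self._pattern(activations) in self.known_patterns.get(predicted_label, set())
```

At runtime, a rejected input can trigger a warning or hand control to a safe fallback instead of trusting the network's output.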

Team

Projects

MONITIZER

We develop Monitizer, a tool that automatically optimizes monitors for a given NN and task.

Publications

2023
Syntactic vs Semantic Linear Abstraction and Refinement of Neural Networks
Calvin Chau, Jan Křetínský, Stefanie Mohr
ATVA 2023
Runtime Monitoring for Out-of-Distribution Detection in Object Detection Neural Networks
Vahid Hashemi, Jan Křetínský, Sabine Rieder, Jessica Schmidt
FM 2023
2020
DeepAbstract: Neural Network Abstraction for Accelerating Verification
Pranav Ashok, Vahid Hashemi, Jan Křetínský, Stefanie Mohr
ATVA 2020