Runtime monitoring is based on the observation that not all potentially dangerous situations can be detected before a system is deployed. This is especially crucial for neural networks: small perturbations of the input can fool them, and formal verification of their reliability remains out of reach. We pursue two main directions for developing monitoring techniques that detect potential problems at runtime. First, we study how neuron activation values influence the network's output and how this information can be used for monitoring; we aim to extend this line of work beyond classification to more challenging tasks such as object detection. Second, we investigate how logic can be used to describe properties of interest for monitoring, and we develop the corresponding monitoring techniques.
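To illustrate the activation-based direction, the following is a minimal sketch of one common idea from the literature: record per-neuron activation ranges of a hidden layer on trusted in-distribution data, then flag runtime inputs whose activations leave those ranges. The class name and interface are hypothetical and chosen for illustration; this is not the specific construction used in the publications listed below.

```python
import numpy as np

class ActivationIntervalMonitor:
    """Illustrative activation-based monitor (hypothetical interface).

    Flags inputs whose hidden-layer activations fall outside the
    per-neuron [min, max] intervals observed on trusted data.
    """

    def __init__(self):
        self.lo = None  # per-neuron minimum seen during fitting
        self.hi = None  # per-neuron maximum seen during fitting

    def fit(self, activations):
        # activations: (n_samples, n_neurons) array collected from a
        # chosen hidden layer on in-distribution inputs.
        self.lo = activations.min(axis=0)
        self.hi = activations.max(axis=0)

    def is_suspicious(self, activation):
        # Warn if any neuron's value leaves its observed interval,
        # suggesting the input may be out of distribution.
        return bool(np.any((activation < self.lo) | (activation > self.hi)))

# Usage sketch: fit on training-set activations, then query at runtime.
monitor = ActivationIntervalMonitor()
monitor.fit(np.random.rand(1000, 64))          # stand-in for recorded activations
print(monitor.is_suspicious(np.full(64, 2.0))) # True: outside observed ranges
```

In practice such interval abstractions are refined, e.g., by clustering activations per output class, but even this simple form conveys how internal activation values can serve as a runtime warning signal.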
We have also developed a tool, Monitizer, that automatically optimizes and evaluates monitors for a given neural network and task.
Selected publications:
- Monitizer: Automating Design and Evaluation of Neural Network Monitors. International Conference on Computer Aided Verification (CAV), 2024.
- Runtime Monitoring for Out-of-Distribution Detection in Object Detection Neural Networks. International Symposium on Formal Methods (FM), 2023.