The Learning in Verification group (LiVe Lab) focuses on the interplay between machine learning and formal verification. Our research includes Explainable AI, Verification of Neural Networks, Stochastic Games and Control, Probabilistic Model Checking, Temporal Logics (mainly LTL and PCTL), and Automata Theory, with applications in the Robotics, Biomedical, and Automotive domains. The team is distributed between Masaryk University in Brno, Czech Republic, and the Technical University of Munich, Germany.
We develop techniques for the verification of concurrent stochastic games, which extend turn-based stochastic games by allowing players to select their actions simultaneously in each state. This reflects more realistic scenarios of interacting agents that act concurrently.
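When players move simultaneously, each state of such a game induces a matrix game whose value must be computed; optimal play generally requires randomization. As a minimal sketch (the payoff matrix below is an invented example, not drawn from any real model), fictitious play can approximate the value of one such zero-sum matrix game:

```python
# Hypothetical sketch: one state of a concurrent (simultaneous-move) stochastic
# game induces a zero-sum matrix game; fictitious play approximates its value.

def fictitious_play(payoff, iterations=20000):
    """Approximate the value of a zero-sum matrix game (row player maximizes)."""
    rows, cols = len(payoff), len(payoff[0])
    row_counts = [0] * rows   # empirical play counts of the row player
    col_counts = [0] * cols   # empirical play counts of the column player
    row_counts[0] += 1
    col_counts[0] += 1
    for _ in range(iterations):
        # Each player best-responds to the opponent's empirical mixture.
        row_payoffs = [sum(payoff[i][j] * col_counts[j] for j in range(cols))
                       for i in range(rows)]
        col_payoffs = [sum(payoff[i][j] * row_counts[i] for i in range(rows))
                       for j in range(cols)]
        row_counts[max(range(rows), key=lambda i: row_payoffs[i])] += 1
        col_counts[min(range(cols), key=lambda j: col_payoffs[j])] += 1
    total = sum(row_counts)
    # The row player's empirical mixture gives a lower bound on the game value.
    strategy = [c / total for c in row_counts]
    return min(sum(strategy[i] * payoff[i][j] for i in range(rows))
               for j in range(cols))

# Matching pennies: value 0, achievable only by a randomized strategy.
v = fictitious_play([[1, -1], [-1, 1]])
```

The example illustrates why concurrent games are harder than turn-based ones: no deterministic choice in matching pennies guarantees more than -1, while the mixed equilibrium secures value 0.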
We find compact and explainable representations of strategies for POMDPs using finite-state controllers.
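A finite-state controller is a small automaton whose memory nodes prescribe actions and whose transitions are driven by observations. A minimal sketch, with an invented two-node "patrol until the goal is observed" controller (not a policy from any real POMDP):

```python
# Hypothetical sketch of a finite-state controller (FSC) for a POMDP:
# a tiny memory automaton mapping observations to actions.

class FiniteStateController:
    def __init__(self, actions, transitions, start=0):
        self.actions = actions          # memory node -> action to play
        self.transitions = transitions  # (memory node, observation) -> next node
        self.node = start

    def step(self, observation):
        action = self.actions[self.node]
        self.node = self.transitions[(self.node, observation)]
        return action

# Invented example: keep moving until "goal" is observed, then stop.
fsc = FiniteStateController(
    actions={0: "move", 1: "stop"},
    transitions={(0, "clear"): 0, (0, "goal"): 1,
                 (1, "clear"): 1, (1, "goal"): 1},
)
played = [fsc.step(obs) for obs in ["clear", "clear", "goal", "clear"]]
# played == ["move", "move", "move", "stop"]
```

The whole strategy is captured by two nodes and four transitions, which is what makes such controllers both compact and inspectable compared to belief-based policies.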
We develop a logic over signals, based on fuzzy path semantics, in which preferences are easier to specify. The logic is designed to be more amenable to learning.
In this project, we develop learning-based exploration heuristics for LTL synthesis that exploit the semantic labelling of the underlying automaton or game.
We develop Monitizer, a tool that optimizes monitors for a given neural network and task.
We represent controllers as decision trees, reducing the memory footprint and boosting explainability while preserving the formal guarantees.
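The idea can be illustrated with a toy controller (invented here for illustration; real tools derive the tree from a verified strategy): an explicit state-action table with 100 entries collapses into a two-predicate decision tree that computes the same function.

```python
# Hedged sketch: the same invented controller, once as an explicit table and
# once as a decision tree over the state variables (speed, distance).

# Explicit controller: one entry per state -- 100 entries.
table = {(s, d): ("brake" if d <= 2 and s >= 3 else "cruise")
         for s in range(10) for d in range(10)}

# The same mapping as a tiny decision tree: two readable predicates.
def tree_controller(speed, distance):
    if distance <= 2:
        if speed >= 3:
            return "brake"
        return "cruise"
    return "cruise"

# The tree is exact, not approximate: it agrees with the table on every state,
# so any guarantee proved for the original strategy carries over.
agree = all(table[(s, d)] == tree_controller(s, d)
            for s in range(10) for d in range(10))
```

Because the tree represents the strategy exactly rather than approximating it, the compression comes for free with respect to correctness.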
Automata Tutor is an online teaching tool that supports instructors and students in large courses on automata and formal languages, offering many different exercise types.