Verification and Learning for Assured Autonomy

A track organized as part of AISOLA 2024

Organizers: Devdatt Dubhashi (Chalmers University of Technology, SE); Raúl Pardo (IT University of Copenhagen, DK); Gerardo Schneider (University of Gothenburg, SE); Hazem Torfah (Chalmers University of Technology, SE)

Track Description

In recent years, there has been a paradigm shift in the design of cyber-physical systems (CPS) towards using learning-enabled components to perform challenging tasks in perception, prediction, planning, and control. This shift has created a gap between the implementation of this emerging class of learning-enabled cyber-physical systems and the guarantees that can be provided on their safety and reliability. Closing this gap requires closer interaction between different research communities and a fundamental revision of how formal methods and machine learning theory can be combined in the analysis of such systems.

The goal of this track is to foster the exchange of ideas on assured autonomy among researchers from the formal methods, control, and AI communities. Within the track, we will address questions related to:

  • Runtime verification and monitoring of learning components
  • The relation between safety of autonomous systems and regulatory issues
  • Design of formal verification techniques and infrastructure for learning components, and their application to real-world scenarios
  • Efficient shielding of autonomous systems (see the illustrative sketch after this list)
  • Enforcement of ethical norms in autonomous systems
  • Constraint optimization in multi-agent systems
  • Provably safe machine learning methods
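
To make one of these topics concrete, the following minimal Python sketch illustrates the basic idea behind runtime shielding: a learned controller proposes an action, and the shield overrides it with a safe fallback whenever the proposal would violate a safety constraint. This is an illustrative sketch only; the safety predicate, fallback policy, and one-dimensional dynamics are hypothetical placeholders, not drawn from any of the talks listed below.

    # Illustrative sketch (not from any listed talk): a minimal runtime shield.
    # A learned controller proposes an action; the shield checks it against a
    # safety predicate and substitutes a safe fallback action if it fails.
    # The predicate, fallback, and one-dimensional "dynamics" are hypothetical
    # placeholders chosen only to make the example runnable.

    def is_safe(state: float, action: float) -> bool:
        # Hypothetical invariant: the next state must stay in [-1, 1].
        return abs(state + action) <= 1.0

    def safe_fallback(state: float) -> float:
        # Hypothetical fallback: steer back toward the centre of the safe set.
        return -0.5 if state > 0 else 0.5

    def shielded_action(state: float, proposed: float) -> float:
        # Pass the proposal through unchanged when safe; otherwise override it.
        return proposed if is_safe(state, proposed) else safe_fallback(state)

    # A (mock) learned controller proposes an action that would leave the
    # safe set (0.8 + 0.9 > 1), so the shield overrides it with the fallback.
    state, proposed = 0.8, 0.9
    assert shielded_action(state, proposed) == -0.5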

Programme

Oct 31st, Thursday

Time | Title | Speaker
Session: Control/Robotics/CPS
11:00 – 11:30 | Influence Without Authority: Convincing Artificially Intelligent Agents to Act Right | Houssam Abbas
11:30 – 12:00 | Efficient Shield Synthesis via State-Space Transformation | Christian Schilling
12:00 – 12:30 | Monitoring Safety and Reliability of Underwater Robots: A Case Study | Mahsa Varshosaz
Session: ML/AI
14:30 – 15:00 | DAGP: A Robust Decentralized Optimization Algorithm with Provable Speed of Convergence | Ashkan Panahi
15:00 – 15:30 | Achieving Safe Stabilization using Deep Learning | Chiranjib Bhattacharyya
15:30 – 16:00 | Discussion session
Session: FM/Probabilistic Programming
16:30 – 17:00 | Conformal Quantitative Predictive Monitoring and Conditional Validity | Francesca Cairoli
17:00 – 17:30 | Runtime Verification and AI: Addressing Pragmatic Regulatory Challenges | Gordon Pace
17:30 – 18:00 | Mechanism Design for Multi-Agent Planning in Robotics | Anna Gautier

Nov 1st, Friday

Time | Title | Speaker
Session: FM/Probabilistic Programming
11:00 – 11:30 | It’s Safe to Play while Driving: From a Spatial Traffic Logic Towards Traffic Games | Maike Schwammberger
11:30 – 12:00 | A Game-Based Semantics for the Probabilistic Intermediate Verification Language HeyVL | Christoph Matheja
12:00 – 12:30 | Discussion session
Session: ML/AI
14:30 – 15:00 | A Comparison of Monitoring Techniques for Deep Neural Networks | Wasim Essbai
15:00 – 15:30 | Stochastic Multi-Armed Bandits – A Brief Tutorial | Agniv Bandyopadhyay
15:30 – 16:00 | Multi-Armed Bandits for Efficient Decision Making and Active Learning | Morteza Chehreghani
Session: FM/Probabilistic Programming
16:30 – 17:00 | Shield Synthesis using LTL Modulo Theories | César Sánchez
17:00 – 17:30 | Sliding between Controller Synthesis and Runtime Verification | Martin Leucker
17:30 – 18:00 | Discussion session

Nov 2nd, Saturday

Time | Title | Speaker
Session: Control/Robotics/CPS
11:00 – 11:30 | Models for Shielded Reinforcement Learning | Bettina Könighofer
11:30 – 12:00 | Systematic Translation from Natural Language Robot Task Descriptions to STL | Jyo Deshmukh
12:00 – 12:30 | Closing

Accepted contributions (paper presentations and contributed talks)

  • Houssam Abbas and Aven Sadighi. Influence Without Authority: Convincing Artificially Intelligent Agents to Act Right.
  • Ezio Bartocci and Wasim Essbai. A Comparison of Monitoring Techniques for Deep Neural Networks.
  • Chiranjib Bhattacharyya and Chaitanya Murti. Achieving Safe Stabilization using Deep Learning.
  • Asger Horn Brorholt, Andreas Holck Høeg-Petersen, Kim Guldstrand Larsen, and Christian Schilling. Efficient Shield Synthesis via State-Space Transformation.
  • Francesca Cairoli, Tom Kuipers, Luca Bortolussi, and Nicola Paoletti. Conformal Quantitative Predictive Monitoring and Conditional Validity.
  • Morteza Chehreghani. Multi-Armed Bandits for Efficient Decision Making and Active Learning.
  • Christian Colombo, Gordon Pace, and Dylan Seychell. Runtime Verification and AI: Addressing Pragmatic Regulatory Challenges.
  • Anna Gautier. Mechanism Design for Multi-Agent Planning in Robotics.
  • Sandeep Juneja. Stochastic Multi-Armed Bandits – A Brief Tutorial.
  • Bettina Könighofer. Models for Shielded Reinforcement Learning.
  • Martin Leucker. Sliding between Controller Synthesis and Runtime Verification.
  • Christoph Matheja. A Game-Based Semantics for the Probabilistic Intermediate Verification Language HeyVL.
  • Sara Mohammadinejad, Sheryl Paul, Yuan Xia, Vidisha Kudalkar, Jesse Thomason, and Jyotirmoy V. Deshmukh. Systematic Translation from Natural Language Robot Task Descriptions to STL.
  • Ashkan Panahi. DAGP: A Robust Decentralized Optimization Algorithm with Provable Speed of Convergence.
  • César Sánchez. Shield Synthesis using LTL Modulo Theories.
  • Maike Schwammberger and Qais Hamarneh. It’s Safe to Play while Driving: From a Spatial Traffic Logic Towards Traffic Games.
  • Mahsa Varshosaz and Andrzej Wąsowski. Monitoring Safety and Reliability of Underwater Robots: A Case Study.