The following speakers will give invited talks at SPIN 2024. More speakers and details about the talks will be announced on this page.
The Spin on Spin
The first Spin Workshop, as the event was then called, was a small gathering organized by INRS-Telecommunications in Montreal, Quebec, in 1995. Since then, the event has been held 16 times in Europe, 10 times in the USA, twice in Canada, and once each in China and South Africa. Today, the focus of the Symposium is much broader than a single verification tool or technology, but it is interesting to see how much progress has been made over the years. The development of Spin itself started in 1984 at Bell Labs, a respectable 40 years ago. That first version was 3,508 lines of code (which included a good amount of optional debugging code). The code has grown quite a bit since then, as all code inevitably does until it stops being useful. In this talk I’ll reflect a little on the specific problems Spin was originally trying to solve, and on how our collective focus has changed over the years.
Gerard Holzmann is best known as the creator of the SPIN model checker in the 1980s. He has received multiple awards, including the 2001 ACM Software System Award for the creation of SPIN.
MoXI: An Intermediate Language for Symbolic Model Checking
Three progressive challenges stand between the popular, “push-button”, industrially valuable technique of symbolic model checking and the level of widespread adoption achieved by other verification techniques: (1) the specification bottleneck; (2) the state-space explosion problem; and (3) the lack of standardization and open-source implementations, which limits the impact of advances on (1) and (2). We address this third challenge. Learning from past definitions of intermediate languages and common interfaces, as well as from input from the international research community, we define a new, extensible intermediate language as a standard interface for hardware symbolic model checking. Our contributions include: (a) defining the syntax and semantics of MoXI, a new symbolic model-checking intermediate language designed to become a standard for the international research community; (b) demonstrating that an initial implementation of symbolic model checking through MoXI performs competitively with current state-of-the-art symbolic model checkers; and (c) reframing the next research challenges in symbolic model checking in light of this new community standard.
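For readers unfamiliar with the back-end task that an intermediate language such as MoXI feeds into, the following is a minimal bounded-model-checking sketch in Python using the Z3 solver, on a toy 2-bit counter. It illustrates the general symbolic technique only: it does not use MoXI syntax, and the names and encoding are illustrative assumptions, not the authors' implementation.

    # Minimal bounded-model-checking sketch with Z3: unroll a transition
    # system step by step and ask the solver for a path to a "bad" state.
    # Generic textbook example; not MoXI syntax, not the MoXI toolchain.
    from z3 import Bool, Solver, And, Not, Xor, sat

    def state(k):
        """Fresh Boolean state variables (a 2-bit counter) for step k."""
        return Bool(f"b0_{k}"), Bool(f"b1_{k}")

    def init(s):
        b0, b1 = s
        return And(Not(b0), Not(b1))        # counter starts at 0

    def trans(s, t):
        b0, b1 = s
        c0, c1 = t
        return And(c0 == Not(b0),           # low bit toggles every step
                   c1 == Xor(b1, b0))       # high bit flips on carry

    def bad(s):
        b0, b1 = s
        return And(b0, b1)                  # "bad" state: counter hits 3

    def bmc(bound):
        """Search for a bad state reachable within `bound` steps."""
        solver = Solver()
        states = [state(k) for k in range(bound + 1)]
        solver.add(init(states[0]))
        for k in range(bound):
            solver.add(trans(states[k], states[k + 1]))
            solver.push()
            solver.add(bad(states[k + 1]))
            if solver.check() == sat:
                return k + 1                # depth of the counterexample
            solver.pop()
        return None

    print(bmc(5))  # prints 3: the counter reaches the bad state in 3 steps

A MoXI front end would, roughly speaking, standardize how systems like init/trans/bad above are described and exchanged, so that different solver back ends can consume the same model.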
Kristin Rozier is an Associate Professor in the Aerospace Engineering and Computer Science departments. She previously worked at the University of Cincinnati and at NASA. Her research interests include formal methods for safety-critical (aerospace) systems, including model checking, runtime monitoring, and LTL theory.
Taming the AI Monster: Monitoring of Individual Fairness for Effective Human Oversight
This presentation introduces a framework intended to help mitigate the societal risks that software can pose. At its core is a black-box analysis technique for identifying undesired effects of software, based on a combination of runtime monitoring, probabilistic falsification, and software doping analysis.
We discuss the potential of this framework in the context of high-risk AI systems that are used to evaluate humans in possibly unfair or discriminatory ways, in applications ranging from credit approval to judicial decisions. We demonstrate how our approach can help humans-in-the-loop make better-informed and more responsible decisions. The aim is to promote effective human oversight, which is a central concept for high-risk AI systems in the European Union’s upcoming AI Act.
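To make the monitoring idea concrete, the sketch below checks individual fairness in the classic sense that similar individuals should receive similar outcomes, by observing a decision procedure as a black box. This is a minimal, hypothetical Python illustration; the distance metric, threshold, and function names are our own assumptions, not the framework presented in this talk.

    # Minimal sketch of black-box individual-fairness monitoring: flag pairs
    # of similar individuals who received different decisions. The metric,
    # threshold, and names are illustrative assumptions only.
    from itertools import combinations
    from math import dist

    def monitor(decide, individuals, eps=0.1):
        """Return pairs of eps-similar individuals with differing decisions."""
        violations = []
        for a, b in combinations(individuals, 2):
            if dist(a, b) <= eps and decide(a) != decide(b):
                violations.append((a, b))
        return violations

    # Hypothetical usage: a toy credit-approval rule with a hard cutoff.
    approve = lambda x: x[0] > 0.5
    print(monitor(approve, [(0.52, 0.3), (0.48, 0.3), (0.9, 0.7)]))
    # -> [((0.52, 0.3), (0.48, 0.3))]: two near-identical applicants, opposite outcomes

A human overseer could review flagged pairs like the one above before the decisions take effect, which is the kind of informed intervention the talk's notion of effective human oversight is after.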
Holger Hermanns is a Professor in the Dependable Systems and Software group at Saarland University. His research interests include modeling and verification of concurrent systems, resource-aware embedded systems, compositional performance and dependability evaluation, and their applications to energy informatics. He is an outspoken proponent of proactive algorithmic accountability.