Poster Session 3 · Thursday, December 4, 2025 11:00 AM → 2:00 PM
#1409

Let a Neural Network be Your Invariant

NeurIPS Project Page Slides Poster OpenReview

Abstract

Safety verification ensures that a system avoids undesired behaviour. Liveness complements safety, ensuring that the system also achieves its desired objectives. A complete specification of functional correctness must combine both safety and liveness. Proving with mathematical certainty that a system satisfies a safety property demands presenting an appropriate inductive invariant of the system, whereas proving liveness requires showing a measure of progress witnessed by a ranking function. Neural model checking has recently introduced a data-driven approach to the formal verification of reactive systems, albeit focusing on ranking functions and thus addressing liveness properties only.

Our method extends and generalises neural model checking to additionally encompass inductive invariants and thus safety properties as well. Given a system and a linear temporal logic specification of safety and liveness, our approach alternates a learning and a checking component towards the construction of a provably sound neural certificate. Our new method introduces a neural certificate architecture that jointly represents inductive invariants as proofs of safety, and ranking functions as proofs of liveness. Moreover, our new architecture is amenable to training using constraint solvers, accelerating prior neural model checking work otherwise based on gradient descent.

We experimentally demonstrate that our method is orders of magnitude faster than state-of-the-art model checkers on pure liveness and combined safety and liveness verification tasks written in SystemVerilog, while enabling the verification of richer properties than was previously possible for neural model checking.
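The learning/checking alternation described above follows the general counterexample-guided inductive synthesis (CEGIS) pattern. The toy sketch below illustrates that loop shape only; the function names, the threshold "certificate", and the brute-force "checker" are illustrative stand-ins and not the paper's actual neural architecture or model checker.

```python
# Hypothetical CEGIS-style learner/checker loop. In the paper's setting the
# learner trains a neural certificate and the checker is a formal verifier;
# here both are toy stand-ins over a finite integer state space.

def learn(samples):
    # Stand-in "learner": propose a threshold certificate that covers
    # every counterexample seen so far.
    return max(samples, default=0) + 1

def check(certificate, bound):
    # Stand-in "checker": search the state space for a state the
    # certificate fails to cover; return it as a counterexample.
    for state in range(bound):
        if state >= certificate:
            return state  # counterexample found
    return None  # certificate is sound on this state space

def cegis(bound):
    samples = []
    while True:
        cert = learn(samples)
        cex = check(cert, bound)
        if cex is None:
            return cert  # checker found no counterexample: cert is sound
        samples.append(cex)  # refine the training set and retry

print(cegis(10))  # alternates learn/check until the certificate is sound
```

The essential point is the division of labour: the learner only ever sees finitely many samples, while the checker supplies soundness over the full state space, feeding back counterexamples until learning converges.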
Poster