Poster Session 2 · Wednesday, December 3, 2025 4:30 PM → 7:30 PM
#1411
CLEVER: A Curated Benchmark for Formally Verified Code Generation
Amitayush Thakur, Jasper Lee, George Tsoukalas, Meghana Sistla, Matthew Zhao, Stefan Zetzsche, Greg Durrett, Yisong Yue, Swarat Chaudhuri
Formal Verification · Program Synthesis · Language Models · Formal Methods · Interactive Theorem Provers · Neural Theorem Proving · Code Generation · Verified Code Generation · Proof-Guided Generation · Formal Specification Mining · Lean Theorem Prover · Benchmark Design · End-to-End Verification · Natural Language to Code · Automated Software Verification
Abstract
We introduce CLEVER, a high-quality, manually curated benchmark of 161 problems for end-to-end verified code generation in Lean. Each problem consists of:
- the task of generating a specification that matches a held-out ground-truth specification, and
- the task of generating a Lean implementation that provably satisfies this specification.
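To illustrate the two stages, consider a hypothetical CLEVER-style problem (not drawn from the benchmark itself): the model must first produce a specification matching a held-out ground truth, and then an implementation together with a Lean-checkable proof that it meets that specification. The names `abs_spec`, `myAbs`, and `myAbs_correct` below are illustrative assumptions, and the proof script is a sketch of the general shape:

```lean
-- Stage 1: a specification for "absolute value", intended to match
-- a held-out ground-truth specification (illustrative example only).
def abs_spec (x : Int) (r : Int) : Prop :=
  r ≥ 0 ∧ (r = x ∨ r = -x)

-- Stage 2: an implementation claimed to satisfy the specification.
def myAbs (x : Int) : Int :=
  if x ≥ 0 then x else -x

-- The implementation is accepted only if a proof of correctness
-- passes Lean's type checker.
theorem myAbs_correct (x : Int) : abs_spec x (myAbs x) := by
  unfold abs_spec myAbs
  split <;> omega
```

Because correctness is established by a machine-checked proof rather than by test cases, a solution cannot pass by overfitting to a finite test suite, which is the post-hoc verification regime the abstract describes.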
Unlike prior benchmarks, CLEVER avoids test-case supervision, LLM-generated annotations, and specifications that leak implementation logic or admit vacuous solutions. All outputs are verified post-hoc using Lean's type checker to ensure machine-checkable correctness.
We use CLEVER to evaluate several few-shot and agentic approaches based on state-of-the-art language models. All of these methods struggle to achieve full verification, establishing CLEVER as a challenging frontier benchmark for program synthesis and formal reasoning.
Our benchmark is available on GitHub and Hugging Face, and all of our evaluation code is also available online.