RabbitSNARK is up to 2× faster than RapidSNARK, making it the fastest Groth16 CPU prover in the world today.
In the deep learning world, models are typically written in Python and then compiled at runtime into optimized code for the target machine. In contrast, Groth16 provers in the ZK domain have traditionally been implemented directly in languages like C++ (RapidSNARK, ICICLE-Snark), Go (Gnark), and Rust (circom-compat).
RabbitSNARK overcomes the limitations of these hand-written implementations by being the first to apply compiler techniques from deep learning to ZK proving. This approach achieves state-of-the-art Groth16 CPU proving performance across a wide range of platforms, including Linux and macOS, on both x86 (AMD) and ARM (Graviton, Apple Silicon).
The key differentiator of RabbitSNARK is its two-stage design: unlike traditional Groth16 provers that generate a proof in a single step given the proving key (zkey) and witness (wtns), RabbitSNARK separates this into a Compile phase and a Prove phase. The Compile phase generates circuit-specific, highly optimized code once; the Prove phase then reuses that code to generate proofs efficiently for each new witness.
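To make the idea concrete, here is a toy sketch of the compile-then-prove pattern in Python. This is not RabbitSNARK's actual API; the polynomial coefficients stand in for circuit-specific zkey data, and `exec`-generated source stands in for emitting optimized native code.

```python
# Toy illustration of the two-phase design (hypothetical, not RabbitSNARK's API):
# the "compile" step specializes code to a fixed circuit once, and the "prove"
# step reuses that specialized artifact for every new witness.

def compile_circuit(coeffs, modulus):
    """Compile phase: bake circuit-specific constants (here, polynomial
    coefficients standing in for zkey data) into a specialized function."""
    # Unroll the constants into generated source, mimicking how a
    # circuit-specific prover can fix sizes and unroll loops ahead of time.
    terms = " + ".join(f"{c} * w**{i}" for i, c in enumerate(coeffs))
    src = f"def prove(w):\n    return ({terms}) % {modulus}\n"
    ns = {}
    exec(src, ns)  # stands in for code generation / native compilation
    return ns["prove"]

# Compile once per circuit...
prover = compile_circuit(coeffs=[3, 0, 2], modulus=97)

# ...then run the fast prove phase for each witness.
print(prover(5))  # (3 + 0*5 + 2*25) % 97 = 53
```

The payoff of this split is the same as in deep-learning compilers: the expensive specialization work is paid once per circuit, while every subsequent proof runs the already-optimized code.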
Our design is inspired by XLA and MLIR. For readers who may be unfamiliar, here’s a brief introduction to these concepts.
MLIR is a subproject of LLVM.
LLVM IR is a general-purpose intermediate representation, but it lacks domain-specific knowledge, making specialized optimizations difficult to express.
In the deep learning world, frameworks started developing their own domain-specific compilers or languages to speed up training and inference. However, this approach brings several problems:
To address these issues, Chris Lattner (creator of LLVM) proposed MLIR, which reuses the LLVM infrastructure while supporting domain-aware optimizations and flexible targeting of diverse hardware.