This repository contains benchmarks for different zero-knowledge machine learning (zkML) frameworks. The goal is to compare performance metrics across different implementations using the same multiclass classification model.
The benchmark uses a simple multiclass classification model (multiclass.onnx) with the following characteristics:
- Input: 8-dimensional integer vector (tokens)
- Architecture: Embedding layer → Sum reduction → Linear transformation → ArgMax
- Output: Single integer (class prediction)
- Operations: Gather, ReduceSum, Multiplication, Addition, ArgMax
- Domain size: 4096 (constraint system: 8192)
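The forward pass described above can be sketched in plain NumPy. This is not the actual model: the 31-token vocabulary and 10 output classes come from the README, but the embedding width (16 here) and the random weights are assumptions for illustration.

```python
import numpy as np

# Shapes inferred from the README: 31-token vocabulary, 10 classes.
# The embedding width (16) and random weights are placeholders.
VOCAB, EMB_DIM, CLASSES = 31, 16, 10
rng = np.random.default_rng(0)
embedding = rng.standard_normal((VOCAB, EMB_DIM))
weight = rng.standard_normal((EMB_DIM, CLASSES))
bias = rng.standard_normal(CLASSES)

def predict(tokens):
    """Gather -> ReduceSum -> Linear (Mul + Add) -> ArgMax."""
    gathered = embedding[tokens]        # Gather: (8, EMB_DIM)
    pooled = gathered.sum(axis=0)       # ReduceSum over the 8 tokens
    logits = pooled @ weight + bias     # Multiplication + Addition
    return int(np.argmax(logits))       # ArgMax -> single class id

print(predict(np.array([1, 2, 3, 4, 5, 0, 0, 0])))
```

Each line of `predict` maps to one of the ONNX operations listed above, which is why the circuit only needs to constrain Gather, ReduceSum, Mul, Add, and ArgMax.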
| Framework | Total Time | Proof Generation | Key Features |
|---|---|---|---|
| Jolt-Atlas | ~0.7s | ~0.7s | Fastest performance; sumcheck + lookups for optimized circuits |
| Mina-ZKML | ~2.5-2.9s | ~2.5s | Multi-threaded, MINA blockchain integration |
| EZKL | ~6+ minutes | ~18.3s | Comprehensive toolchain, Halo2-based |
Winner: Jolt-Atlas is roughly 3.5x faster than Mina-ZKML and more than 500x faster than EZKL's full pipeline.
Mina-ZKML is a zero-knowledge machine learning library designed for the MINA blockchain ecosystem.
```bash
cd mina-zkml
cargo build --release
cargo run --release --example multiclass
```

Environment:
- Platform: macOS
- Shell: zsh
- Rust: Release mode (optimized)
Performance Metrics:
- Model Loading Time: ~4.5ms
- Total Execution Time: ~2.5-2.9 seconds
- CPU Usage: 583-661% (multi-core utilization)
- Memory Usage: Peak during proof generation
- Public Inputs: 8
- Public Outputs: 1
- SRS Size: 4096
- Constraint System Domain: 8192
Detailed Timing:
- Real time: 2.536-2.893 seconds
- User time: 14.65-14.66 seconds
- System time: 2.12-2.25 seconds
The multiclass example demonstrates:
- Embedding lookup with 31 vocabulary tokens
- Dimension reduction through sum operations
- Linear classification with 10 output classes
- Efficient constraint generation for circuit compilation
- Public input/output verification
- ✅ Full proof generation and verification
- ✅ Public input/output support
- ✅ ONNX model compatibility
- ✅ Optimized constraint system
- ✅ Multi-threaded execution
Sample output:

```text
Model loaded in 4.503042ms
Number of public inputs: 8
Number of public outputs: 1
Required domain size: 4096
Constraint system domain size: 8192
Using SRS size: 4096
Processing public inputs: [[1.0, 2.0, 3.0, 4.0, 5.0, 0.0, 0.0, 0.0]]
Processing public outputs: [[2.0]]
Verifying with 9 public values
Proof verification successful
```
EZKL is a comprehensive zero-knowledge machine learning library built on Halo2, providing a complete toolchain for zkML applications.
```bash
cd ezkl
cargo build --release --features ezkl
cargo run --release --bin multiclass_network --features ezkl
```

Environment:
- Platform: macOS
- Shell: zsh
- Rust: Release mode (optimized)
- Backend: Halo2 with KZG commitment scheme
Performance Metrics:
- Total Pipeline Time: 6+ minutes (364.68 seconds)
- Proof Generation Time: 18.26 seconds
- Key Generation Time: ~10.4 seconds (VK: 5.26s, PK: 5.14s)
- Verification Time: 0.24 seconds
- CPU Usage: 75% utilization
- SRS Download: 6 seconds (one-time setup)
Detailed Timing Breakdown:
- Real time: 6 minutes 4.68 seconds (364.68 seconds)
- User time: 269.76 seconds
- System time: 6.04 seconds
Pipeline Stages:
- Settings Generation: ~1 second
- Calibration: ~2 seconds
- Circuit Compilation: ~1 second
- Witness Generation: ~3 seconds
- SRS Download: ~6 seconds (cached after first run)
- Key Generation: ~10.4 seconds
- Proof Generation: ~18.3 seconds
- Verification: ~0.24 seconds
- ✅ Complete zkML toolchain
- ✅ Halo2-based circuits
- ✅ KZG polynomial commitment
- ✅ Automatic calibration
- ✅ Private input support
- ✅ ONNX compatibility
⚠️ Slower overall pipeline (6+ minutes vs competitors)
EZKL provides the most comprehensive toolchain but has significantly longer execution times:
- Proof generation alone (~18.3s) is roughly 7x slower than Mina-ZKML's (~2.5s)
- 500x+ slower total pipeline than Jolt-Atlas
- Most suitable for applications where setup time is not critical
- Excellent for research and development with extensive debugging features
Each framework implementation follows this pattern:
- Model Loading: Load the ONNX model with specified visibility settings
- Input Processing: Convert input data to the required format
- Proof Generation: Generate zero-knowledge proof of computation
- Verification: Verify the proof with public inputs/outputs
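The four stages above can be expressed as a small benchmark harness. The framework-facing callables (`load_model`, `prove`, `verify`) are hypothetical placeholders for whatever each framework actually exposes; only the stage ordering and timing come from the README.

```python
import time

def run_benchmark(load_model, prove, verify, model_path, tokens):
    """Time the common four-stage pattern shared by all frameworks."""
    start = time.perf_counter()
    model = load_model(model_path)                 # 1. Model Loading
    inputs = [float(t) for t in tokens]            # 2. Input Processing
    proof, public_outputs = prove(model, inputs)   # 3. Proof Generation
    ok = verify(proof, inputs, public_outputs)     # 4. Verification
    return ok, time.perf_counter() - start

# Stub implementations so the harness can be exercised end to end;
# a real adapter would call into the framework's own API here.
ok, elapsed = run_benchmark(
    load_model=lambda path: {"path": path},
    prove=lambda model, inputs: (b"proof", [2.0]),
    verify=lambda proof, inputs, outputs: True,
    model_path="multiclass.onnx",
    tokens=[1, 2, 3, 4, 5, 0, 0, 0],
)
print(ok, round(elapsed, 3))
```

Timing the whole harness, rather than the proving step alone, matches the benchmark standard below of measuring from model loading through verification.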
Performance Recommendation: For production applications requiring fast proof generation, Jolt-Atlas provides the best performance with 0.7-second total execution time, making it significantly faster than both Mina-ZKML (2.5s) and EZKL (6+ minutes).
To add a new framework benchmark:
- Create a new directory for the framework
- Implement the multiclass example
- Add benchmark results to this README
- Ensure the same input/output format:
`[1.0, 2.0, 3.0, 4.0, 5.0, 0.0, 0.0, 0.0]` → `[2.0]`
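A minimal conformance check for a new framework could look like the sketch below; `run_example` is a hypothetical callable wrapping the framework's pipeline, and the expected input/output pair is the canonical one above.

```python
# Canonical benchmark input/output pair from this README.
CANONICAL_INPUT = [1.0, 2.0, 3.0, 4.0, 5.0, 0.0, 0.0, 0.0]
EXPECTED_OUTPUT = [2.0]

def conforms(run_example):
    """run_example: any callable wrapping a framework's full pipeline."""
    return run_example(CANONICAL_INPUT) == EXPECTED_OUTPUT

# Stub pipeline that returns the expected class, for illustration only.
print(conforms(lambda inputs: [2.0]))
```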
Benchmark Standards: All timing comparisons should include total execution time from model loading to proof verification. Current leader: Jolt-Atlas at 0.7 seconds.
- All benchmarks use the same `multiclass.onnx` model for fair comparison
- Timing measurements include both proof generation and verification
- Results may vary based on hardware specifications
- All frameworks should produce the same output for the given input
- Jolt-Atlas demonstrates superior performance with 3.5x speedup over Mina-ZKML and 500x+ speedup over EZKL