Prove Any AI Model.
Trust No Intermediary.
ObelyZK verifies neural network inference entirely on-chain, from a single matrix multiply to a 14-billion-parameter transformer. GPU-accelerated zero-knowledge proofs mean anyone can confirm an AI produced a specific output without re-running the model or trusting a third party.
Why GPU + SIMD for DePIN.
The M31 prime enables native 32-bit SIMD arithmetic on every CPU; our CUDA kernels add GPU-class throughput. Together they turn any DePIN node into a prover.
M31 = Native 32-bit
The Mersenne-31 prime (2³¹ − 1) fits in a single 32-bit SIMD lane, so every x86/ARM chip does M31 arithmetic at near-native speed. Perfect for DePIN edge nodes.
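In code, the trick is a few lines. A minimal sketch (our names, not the STWO API): since 2³¹ ≡ 1 mod p, a 62-bit product folds back into 31 bits with shifts and adds, no division required.

```rust
/// Illustrative Mersenne-31 arithmetic; function names are ours, not STWO's.
/// p = 2^31 - 1, so 2^31 ≡ 1 (mod p): the high bits of a product fold back
/// onto the low bits with a shift and an add.
const P: u32 = (1 << 31) - 1; // 2_147_483_647

/// Modular addition: inputs in [0, p), output in [0, p).
fn add_m31(a: u32, b: u32) -> u32 {
    let s = a + b; // < 2^32, cannot overflow
    if s >= P { s - P } else { s }
}

/// Modular multiplication via two fold steps instead of a division.
fn mul_m31(a: u32, b: u32) -> u32 {
    let x = (a as u64) * (b as u64);     // up to 62 bits
    let x = (x >> 31) + (x & P as u64);  // first fold: now < 2^32
    let x = (x >> 31) + (x & P as u64);  // second fold: now <= 2^31
    let r = x as u32;
    if r >= P { r - P } else { r }
}

fn main() {
    // (p - 1) is -1 in the field, so (-1) * (-1) = 1.
    assert_eq!(mul_m31(P - 1, P - 1), 1);
    assert_eq!(add_m31(P - 1, 1), 0); // p ≡ 0
    assert_eq!(mul_m31(2, 3), 6);
    println!("m31 ok");
}
```

The same fold is what a SIMD lane executes: one widening multiply, two shift-adds, one conditional subtract.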
17 CUDA Kernels
Custom sumcheck, GEMM, MLE restrict, and LogUp kernels. H100/A100/4090 nodes get 2.56x+ speedup on MLE operations alone.
Hybrid Execution
GPU handles Merkle, FRI, and quotient work above size thresholds; CPU SIMD handles FFT and interpolation. Optimal work distribution: every core and every GPU runs simultaneously.
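The dispatch rule can be sketched in a few lines. The threshold value and backend names below are illustrative assumptions, not ObelyZK's actual API; real crossover points are tuned per kernel.

```rust
/// Illustrative hybrid CPU/GPU dispatch. Threshold and names are
/// assumptions for this sketch, not ObelyZK's real configuration.
#[derive(Debug, PartialEq)]
enum Backend {
    GpuCuda,
    CpuSimd,
}

/// Hypothetical crossover: below this size, kernel-launch overhead
/// outweighs the GPU's throughput advantage.
const GPU_THRESHOLD: usize = 1 << 16;

/// Merkle/FRI/quotient-style work goes to the GPU above the threshold;
/// FFT/interpolation-style work stays on CPU SIMD regardless of size.
fn pick_backend(op_is_gpu_eligible: bool, n: usize) -> Backend {
    if op_is_gpu_eligible && n >= GPU_THRESHOLD {
        Backend::GpuCuda
    } else {
        Backend::CpuSimd
    }
}

fn main() {
    assert_eq!(pick_backend(true, 1 << 20), Backend::GpuCuda);  // large Merkle layer
    assert_eq!(pick_backend(true, 1 << 10), Backend::CpuSimd);  // too small for launch cost
    assert_eq!(pick_backend(false, 1 << 20), Backend::CpuSimd); // FFT stays on CPU
    println!("dispatch ok");
}
```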
Multi-GPU Scaling
DeviceGuard RAII + propagate_device() across Rayon workers. Scales horizontally across GPU-rich DePIN providers. Any node with a GPU can earn.
We dominate every axis.
No other ZKML framework combines GPU acceleration, full transformer support, on-chain verification, and a privacy protocol.
| Feature | ObelyZK | Giza | EZKL | Lagrange | zkPyTorch |
|---|---|---|---|---|---|
| Proving System | STWO Circle STARK | STWO Circle STARK | Halo2 SNARK | GKR | Expander |
| GPU Accel. | 17 CUDA kernels | None (Phase 3) | Yes (Halo2) | Yes (distributed) | Yes |
| Privacy | Full VM31 | None | Partial | None | None |
| LLM Scale | Qwen3-14B (40 layers) | Phase 1 only | No LLM support | GPT-2 / Gemma3 | Llama-3 (150 s/token) |
| Starknet | Mainnet | None (Phase 3) | No | No | No |
| ML Operators | 15 components | 11 primitives | ~20 constraints | ~10 ops | Limited |
| Transformer | Full (Attn+RoPE+GQA) | Partial | No | Partial | Partial |
| Codebase | 127,500 lines | ~5,000 lines | ~50,000 lines | Closed source | ~10,000 lines |
| Tests | 1,452 | < 100 | ~500 | Unknown | Unknown |
Giza's LuminAIR is still in Phase 1, implementing basic operators. We're proving 14B-parameter transformers with GPU acceleration, privacy, and on-chain verification.
Lagrange proved GPT-2 (124M parameters, a 2019 model) with no privacy layer and no Starknet support. We proved Qwen3-14B, a 2025-era model 112x larger.
EZKL uses Halo2 SNARKs: larger proofs, more expensive verification, and no Starknet integration. It cannot handle LLM-scale models.
15 provable ML components.
Every operation in a modern transformer — from matmul sumcheck to multi-head attention to rotary embeddings — has a dedicated proof component.
+ any HuggingFace transformer with standard config.json
VM31 — Nobody else has this.
A complete privacy system built natively on STWO's M31 field. Poseidon2-M31 hashing, shielded notes, nullifiers, encrypted transfer — all proven with STARKs. No other ZKML framework has anything like it.
Poseidon2-M31 Hash
t=16, rate=8, d=5 — 248-bit security, native to STWO field. No external crypto dependencies.
Shielded Notes
Note commitments over 11 M31 inputs. Full UTXO model with a depth-20 Poseidon2-compress Merkle tree.
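The membership check behind a shielded spend is a depth-20 path fold. A runnable sketch follows; the compressor is a toy stand-in for Poseidon2-compress (which operates on M31 field elements) so the example is self-contained.

```rust
/// Depth-20 Merkle membership, sketched with a stand-in compressor.
const DEPTH: usize = 20;

/// NOT Poseidon2: a toy 2-to-1 mixer so this sketch runs standalone.
fn compress(l: u64, r: u64) -> u64 {
    let mut x = l.rotate_left(13) ^ r.wrapping_mul(0x9E37_79B9_7F4A_7C15);
    x ^= x >> 29;
    x.wrapping_mul(0xBF58_476D_1CE4_E5B9)
}

/// Fold a leaf up the tree along its authentication path.
/// `path[i]` is (sibling hash, true if the sibling sits on the left).
fn merkle_root(leaf: u64, path: &[(u64, bool)]) -> u64 {
    path.iter().fold(leaf, |acc, &(sib, sib_is_left)| {
        if sib_is_left { compress(sib, acc) } else { compress(acc, sib) }
    })
}

fn main() {
    // Empty-subtree roots: z[0] is the empty leaf, z[i] = H(z[i-1], z[i-1]).
    let mut z = [0u64; DEPTH + 1];
    for i in 1..=DEPTH {
        z[i] = compress(z[i - 1], z[i - 1]);
    }
    // Leaf at index 0 of an empty tree: every sibling is an empty subtree.
    let path: Vec<(u64, bool)> = (0..DEPTH).map(|i| (z[i], false)).collect();
    assert_eq!(merkle_root(0, &path), z[DEPTH]);
    println!("merkle ok");
}
```

In the real protocol the same fold runs inside the STARK, so the path stays private while the root is checked on-chain.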
Nullifier System
Nullifiers over 12 M31 inputs for double-spend prevention. Deterministic and verifiable on-chain.
Counter-Mode Encryption
Poseidon2-M31 stream cipher. Zero external crypto deps — entire stack uses one hash function.
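Counter mode over a prime field swaps XOR for field addition: keystream element i is H(key, nonce, i), encryption adds it mod p, decryption subtracts. A runnable sketch with a stand-in PRF in place of Poseidon2-M31:

```rust
/// Counter-mode encryption over M31, sketched with a stand-in PRF.
/// The real system derives the keystream from Poseidon2-M31.
const P: u32 = (1 << 31) - 1;

/// NOT Poseidon2: an illustrative keyed mixer reduced into the field.
fn prf(key: u32, nonce: u32, counter: u32) -> u32 {
    let mut x = (key as u64)
        ^ ((nonce as u64) << 31)
        ^ (counter as u64).wrapping_mul(0x9E37_79B9);
    x = x.wrapping_mul(0xBF58_476D_1CE4_E5B9);
    x ^= x >> 31;
    (x % (P as u64)) as u32
}

/// Encrypt: c_i = (m_i + k_i) mod p, with k_i = PRF(key, nonce, i).
fn encrypt(key: u32, nonce: u32, msg: &[u32]) -> Vec<u32> {
    msg.iter().enumerate()
        .map(|(i, &m)| ((m as u64 + prf(key, nonce, i as u32) as u64) % (P as u64)) as u32)
        .collect()
}

/// Decrypt: m_i = (c_i - k_i) mod p, done as addition of p - k_i.
fn decrypt(key: u32, nonce: u32, ct: &[u32]) -> Vec<u32> {
    ct.iter().enumerate()
        .map(|(i, &c)| ((c as u64 + P as u64 - prf(key, nonce, i as u32) as u64) % (P as u64)) as u32)
        .collect()
}

fn main() {
    let msg = vec![7u32, 0, P - 1, 123_456];
    let ct = encrypt(42, 7, &msg);
    assert_eq!(decrypt(42, 7, &ct), msg); // round trip recovers the plaintext
    println!("ctr ok");
}
```

Because the keystream is just hash outputs, the same hash that builds commitments and Merkle nodes also encrypts, which is what keeps the stack to one primitive.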
- Shield: public amount → shielded note + STARK proof
- Unshield: shielded note → public amount + nullifier + STARK proof
- Transfer: 2-in / 2-out private transfer + STARK proof
- Batch: multi-component composition to bundle multiple txs
19,683 lines of Cairo.
Deployed on Starknet. Complete GKR verification, shielded pool, aggregated weight binding, and ML-specific AIR constraints — all on-chain.
Live proof streaming.
Watch proofs generate in real time. WebSocket broadcast, 3D visualization, 13 event types — from sumcheck rounds to GPU status to proof completion.
tokio::sync::broadcast fan-out. Connect any number of clients. Three.js helix + Chart.js dashboard embedded in prove-server.
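The fan-out shape can be sketched with the standard library alone: per-subscriber queues, every event cloned to each live client. This mimics what tokio::sync::broadcast provides; it is not the prove-server's code.

```rust
use std::sync::mpsc;

/// Stdlib sketch of broadcast fan-out, the pattern tokio::sync::broadcast
/// implements: each subscriber owns a private queue, and every published
/// event is cloned to all live subscribers.
struct Broadcaster<T: Clone> {
    subs: Vec<mpsc::Sender<T>>,
}

impl<T: Clone> Broadcaster<T> {
    fn new() -> Self {
        Self { subs: Vec::new() }
    }

    /// Register a new client; returns its private receiving end.
    fn subscribe(&mut self) -> mpsc::Receiver<T> {
        let (tx, rx) = mpsc::channel();
        self.subs.push(tx);
        rx
    }

    /// Clone the event to every subscriber, pruning disconnected ones.
    fn publish(&mut self, event: T) {
        self.subs.retain(|tx| tx.send(event.clone()).is_ok());
    }
}

fn main() {
    let mut bus = Broadcaster::new();
    let a = bus.subscribe();
    let b = bus.subscribe();
    bus.publish("SumcheckRound");
    assert_eq!(a.recv().unwrap(), "SumcheckRound");
    assert_eq!(b.recv().unwrap(), "SumcheckRound");
    println!("fanout ok");
}
```

The tokio version adds what this sketch omits: bounded capacity and lag handling, so a slow WebSocket client drops old events instead of stalling the prover.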
Rerun 0.21 integration for 3D proof topology visualization. Attention heatmaps, sumcheck descent charts, GPU metric dashboards.
ProofStart, LayerStart, SumcheckRound, LayerEnd, GpuStatus, ProofComplete, CircuitNodeMeta, ActivationStats, and more.
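The event types above map naturally onto a tagged enum. The payload fields below are illustrative assumptions, not the actual prove-server wire schema.

```rust
/// Illustrative proof-stream events using the names above.
/// Payload fields are assumptions for this sketch, not the real schema.
#[derive(Debug)]
enum ProofEvent {
    ProofStart { model: String },
    LayerStart { layer: u32 },
    SumcheckRound { layer: u32, round: u32 },
    LayerEnd { layer: u32 },
    GpuStatus { utilization_pct: u8 },
    CircuitNodeMeta { name: String },
    ActivationStats { mean: f64 },
    ProofComplete,
}

/// Tag string a dashboard client would switch on.
fn tag(e: &ProofEvent) -> &'static str {
    match e {
        ProofEvent::ProofStart { .. } => "ProofStart",
        ProofEvent::LayerStart { .. } => "LayerStart",
        ProofEvent::SumcheckRound { .. } => "SumcheckRound",
        ProofEvent::LayerEnd { .. } => "LayerEnd",
        ProofEvent::GpuStatus { .. } => "GpuStatus",
        ProofEvent::CircuitNodeMeta { .. } => "CircuitNodeMeta",
        ProofEvent::ActivationStats { .. } => "ActivationStats",
        ProofEvent::ProofComplete => "ProofComplete",
    }
}

fn main() {
    let e = ProofEvent::SumcheckRound { layer: 3, round: 12 };
    assert_eq!(tag(&e), "SumcheckRound");
    assert_eq!(tag(&ProofEvent::ProofComplete), "ProofComplete");
    println!("events ok");
}
```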
What's shipped. What's next.
Shipped:
- GPU sumcheck (17 kernels)
- Qwen3-14B 40-layer proof
- On-chain GKR verifier
- VM31 privacy protocol
- Proof streaming (WebSocket)
- Audit pipeline (M31-native)

Next:
- Deploy updated Cairo verifier with RLC mode 4
- Reduce on-chain TX count (9 → 3)
- Confidential computing (TEE + GPU attestation)
- DePIN prover network launch
- Cross-chain proof relay
- Herodotus/Integrity integration
- Sub-second proving for small models
Own the ZKML narrative.
The first ecosystem to ship production ZKML wins the AI + crypto narrative. ObelyZK is 127,500 lines of battle-tested code — the official ZKML stack for Starknet.