Privacy & security

ZK

Zero Knowledge. A class of cryptographic proofs that let you prove something is true without revealing any of the underlying information. ZK lets a network verify a transaction without seeing the transaction's contents.

Also known as: zero knowledge, zero-knowledge proof, ZKP

Zero-knowledge proofs are one of the most elegant ideas in modern cryptography. They let one party (the prover) convince another party (the verifier) that a statement is true without revealing any information beyond the truth of the statement itself. You can prove you know a private key without revealing the key. You can prove a transaction is valid without revealing the amounts. You can prove you computed a function correctly without revealing the inputs or the intermediate steps. The verifier learns one bit (true or false) and nothing else.
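The prover/verifier interaction can be made concrete with the classic Schnorr identification protocol, a simple zero-knowledge proof of knowledge of a discrete-log private key. This is a minimal sketch, not one of the production systems discussed below, and it uses toy parameters far too small for real security:

```python
import random

# Toy group parameters: g = 2 generates a subgroup of prime order q = 11
# inside Z_23*. Real deployments use ~256-bit groups; these numbers are
# for illustration only and offer no security.
p, q, g = 23, 11, 2

def keygen():
    x = random.randrange(1, q)   # private key (the secret being proven)
    y = pow(g, x, p)             # public key
    return x, y

def commit():
    # Step 1 (prover): commit to a fresh random nonce before seeing the challenge.
    r = random.randrange(q)
    return r, pow(g, r, p)

def respond(x, r, c):
    # Step 3 (prover): the random nonce r masks the private key x.
    return (r + c * x) % q

def verify(y, t, c, s):
    # Step 4 (verifier): accept iff g^s == t * y^c (mod p).
    # The verifier learns that the prover knows x, and nothing else.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
r, t = commit()                  # prover -> verifier: commitment t
c = random.randrange(1, q)       # verifier -> prover: random challenge c
s = respond(x, r, c)             # prover -> verifier: response s
print(verify(y, t, c, s))        # True: proof accepted
```

Making this non-interactive, so the whole proof is a single message as in SNARKs, is done by deriving the challenge c from a hash of the commitment (the Fiat-Shamir transform).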

The two main families of ZK proofs in production today are SNARKs (Succinct Non-interactive ARguments of Knowledge) and STARKs (Scalable Transparent ARguments of Knowledge). SNARK proofs are smaller and faster to verify, but most production SNARK systems require a trusted setup ceremony to generate the proving parameters. STARK proofs are larger but need no trusted setup and rely only on hash functions, which also makes them plausibly post-quantum secure. Different projects pick different tradeoffs: zkSync, Scroll, and Polygon zkEVM use SNARK variants; StarkNet uses STARKs.

In DeAI, ZK matters in two ways. First, ZK rollups (zkSync, Scroll, StarkNet) are L2 chains that batch many transactions into a single ZK proof, which Ethereum can verify cheaply. This raises throughput dramatically without sacrificing security. Second, “ZK ML” is an emerging area trying to use ZK proofs to verify that a specific AI model produced a specific output without revealing the model weights or the input. This would let a verifier trust an inference result without ever seeing the model itself, which is the holy grail of decentralised AI inference verification.

The honest limit today is that ZK ML at scale is still impractical. Generating a ZK proof for a single forward pass through a small neural network can take minutes; for a large language model it can take hours. Research projects (Modulus Labs, Giza, EZKL) are working to close the gap, but proving is still 10-1000x too slow for production AI inference. For the moment, DeAI privacy guarantees mostly rely on TEEs and confidential compute rather than ZK. ZK is the long-term goal; TEEs are the practical present.

Related terms