Attestation
A cryptographic proof that a piece of code is running on a specific hardware enclave in an unmodified state. Attestation lets remote users verify that a service is genuinely running what it claims to be running.
Also known as: remote attestation
Attestation is what makes TEEs useful for trustless services. Without attestation, a TEE is just a privacy feature: you must trust the operator's claim that the code and data inside the enclave can't be read or tampered with. With attestation, the TEE can prove to a remote user that the exact published code is running, on a specific hardware version, in a specific unmodified configuration. This converts a privacy promise into a verifiable commitment.
The mechanism works by having the hardware sign a measurement of the code that was loaded into the enclave. The signing key is fused into the chip at manufacture time, and the vendor (Intel, AMD, or NVIDIA) publishes a certificate chain so anyone can verify that the signature traces back to a genuine processor. The signed measurement includes the cryptographic hash of the code, the hardware version, and a few configuration parameters. A remote party who knows what code they expect can compare the attested hash against the expected hash and check that the two match.
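The flow above can be sketched in a few lines. This is a toy model, not a real vendor API: the `Quote` fields, function names, and key are all illustrative, and a stdlib HMAC stands in for the asymmetric signature (Ed25519/ECDSA with a vendor certificate chain) that real attestation uses.

```python
# Toy attestation sketch. HMAC stands in for the vendor's asymmetric
# signature; every name here is illustrative, not a real SGX/SEV API.
import hashlib
import hmac
from dataclasses import dataclass

@dataclass(frozen=True)
class Quote:
    """Simplified attestation quote: measurement fields + signature."""
    code_hash: bytes   # hash of the code loaded into the enclave
    hw_version: str    # hardware/microcode version string
    config: bytes      # serialized configuration parameters
    signature: bytes   # vendor signature over the fields above

VENDOR_KEY = b"stand-in-for-the-vendor-root-key"  # illustrative only

def _measurement(q: Quote) -> bytes:
    """Hash the fields the hardware commits to."""
    h = hashlib.sha256()
    h.update(q.code_hash)
    h.update(q.hw_version.encode())
    h.update(q.config)
    return h.digest()

def sign_quote(code_hash: bytes, hw_version: str, config: bytes) -> Quote:
    """What the hardware does: sign the measurement with its baked-in key."""
    unsigned = Quote(code_hash, hw_version, config, b"")
    sig = hmac.new(VENDOR_KEY, _measurement(unsigned), hashlib.sha256).digest()
    return Quote(code_hash, hw_version, config, sig)

def verify_quote(q: Quote, expected_code_hash: bytes) -> bool:
    """What the remote user does: check the signature, then compare hashes."""
    unsigned = Quote(q.code_hash, q.hw_version, q.config, b"")
    expected_sig = hmac.new(VENDOR_KEY, _measurement(unsigned), hashlib.sha256).digest()
    if not hmac.compare_digest(q.signature, expected_sig):
        return False  # quote was not produced by a genuine processor
    return hmac.compare_digest(q.code_hash, expected_code_hash)

# Usage: hash the published code yourself, then compare against the quote.
published = hashlib.sha256(b"model-server-v1.2").digest()
quote = sign_quote(published, "SGX-2.19", b"debug=off")
assert verify_quote(quote, published)          # genuine, unmodified

tampered = hashlib.sha256(b"model-server-evil").digest()
assert not verify_quote(quote, tampered)       # swapped code is detected
```

The key design point survives the simplification: the user never trusts the operator's word, only the vendor key and their own hash of the published code.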
In DeAI, attestation is the difference between “trust the operator that they’re running the model you think they’re running” and “verify that the operator is running the model you think they’re running.” Venice’s TEE inference stack uses attestation through Phala and NEAR AI Cloud. Morpheus’s confidential compute layer (added 2026 via SCRT Labs) uses attestation to prove that the TEE running the inference matches the published code. Without these attestation layers, a malicious operator could swap the model, log the prompts, or modify the responses without detection.
The honest limit is that attestation only proves what the hardware vendor says is true. If the chip itself has a flaw (Spectre, SGAxe, hardware backdoors), the attestation chain inherits the flaw. Attestation is not a substitute for cryptographic proofs like ZK or FHE, which don't require trusting any specific manufacturer. It's a pragmatic compromise that works because one alternative (no verifiability at all) is worse and the other (full cryptographic proofs of large model inference) is currently too slow to be practical.