Privacy & security

Confidential Compute

Hardware-enforced computation where data and code are encrypted in memory and only the authorised application can access them. The machine's operator cannot read what the application is doing even though they own the machine.

Also known as: confidential computing, secure compute

Confidential compute is the umbrella term for hardware-secured execution inside a trusted execution environment (TEE). It covers Intel SGX (older, smaller enclaves), Intel TDX (newer, VM-scale enclaves), AMD SEV-SNP (AMD’s equivalent), and NVIDIA Confidential Computing on H100 and H200 GPUs. The shared idea is that the CPU or GPU has hardware support for keeping certain memory regions encrypted at all times, and only the application running inside those regions can decrypt and use the contents. The operating system, the hypervisor, and the machine’s physical owner are all locked out by hardware. The hardware can also produce a signed attestation of exactly what code is running, so a remote client can verify the enclave before trusting it with data.
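To make the attestation step concrete, here is a minimal Python sketch of the check a client might run before sending any plaintext to an enclave. The AttestationQuote fields and the vendor_verify callback are simplified placeholders, not a real vendor API; actual quote formats (SGX DCAP, TDX, SEV-SNP reports) differ in detail but carry the same ingredients: a measurement of the loaded code, caller-supplied freshness data, and a signature rooted in the vendor’s hardware.

```python
import hashlib
from dataclasses import dataclass
from typing import Callable

# Hypothetical, simplified quote structure; real formats differ in detail.
@dataclass
class AttestationQuote:
    measurement: bytes   # hash of the code/VM image loaded into the enclave
    report_data: bytes   # echoes a caller-supplied nonce, proving freshness
    signature: bytes     # signed by a key rooted in the vendor's hardware

def verify_quote(quote: AttestationQuote,
                 expected_measurement: bytes,
                 nonce: bytes,
                 vendor_verify: Callable[[AttestationQuote], bool]) -> bool:
    """Client-side check before any plaintext is sent to the enclave."""
    if quote.measurement != expected_measurement:
        return False  # a different program is running, do not send data
    if quote.report_data != hashlib.sha256(nonce).digest():
        return False  # stale or replayed quote
    # vendor_verify stands in for the hardware vendor's certificate chain
    # check (e.g. against Intel's or AMD's attestation services).
    return vendor_verify(quote)
```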

The core use case in DeAI is running AI inference on user data without the inference operator seeing the data. Imagine a doctor uploading a patient case to an AI model for analysis. With normal cloud inference, the cloud provider’s servers see the case in plaintext. With confidential compute, the case is encrypted on the doctor’s machine, sent to the inference server in encrypted form, decrypted only inside the GPU enclave, run through the model, and the results are encrypted again before they leave the enclave. The operator of the GPU literally cannot read the case data even though their hardware did the work.
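Here is a minimal sketch of that round trip, assuming a session key has already been established with the attested enclave (in practice via a key exchange whose public key is bound into the attestation quote). The function names and the model callable are illustrative, not any project’s actual API; the encryption uses AES-GCM from the widely used cryptography package.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def client_encrypt_case(session_key: bytes, case_text: str) -> tuple[bytes, bytes]:
    """Runs on the doctor's machine; the operator only ever sees ciphertext."""
    nonce = os.urandom(12)
    return nonce, AESGCM(session_key).encrypt(nonce, case_text.encode(), None)

def enclave_inference(session_key: bytes, nonce: bytes, ciphertext: bytes,
                      model) -> tuple[bytes, bytes]:
    """Runs inside the enclave; the plaintext exists only in encrypted memory."""
    case_text = AESGCM(session_key).decrypt(nonce, ciphertext, None).decode()
    result = model(case_text)          # placeholder for the actual model call
    out_nonce = os.urandom(12)
    return out_nonce, AESGCM(session_key).encrypt(out_nonce, result.encode(), None)

def client_decrypt_result(session_key: bytes, nonce: bytes, ciphertext: bytes) -> str:
    return AESGCM(session_key).decrypt(nonce, ciphertext, None).decode()

# Demo with a stand-in key and model; in production the key would come
# from the attested key exchange sketched above.
key = os.urandom(32)
n, ct = client_encrypt_case(key, "patient case: ...")
rn, rct = enclave_inference(key, n, ct, model=lambda s: "analysis of " + s)
print(client_decrypt_result(key, rn, rct))
```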

The Confidential Computing Consortium (Intel, AMD, NVIDIA, ARM, IBM, Microsoft, Google) has been working on standards for this since 2019. The mature implementations are now in production at hyperscaler clouds (Azure Confidential Computing, GCP Confidential VMs) and in DeAI projects (Phala Network, NEAR AI Cloud, Venice’s TEE inference stack, Oasis Sapphire, Secret Network’s SecretVM). Each project picks a slightly different combination of hardware and orchestration, but the underlying primitive is the same.

The honest limit is that confidential compute requires trusting the hardware vendor. If Intel ships SGX with a flaw, every SGX-based service inherits that flaw until the hardware is patched (which sometimes means waiting for the next chip generation). It’s not as strong as purely cryptographic privacy techniques like fully homomorphic encryption (FHE) or zero-knowledge proofs (ZK), but it’s currently the only practical way to run large AI models on user data without exposing the data to the operator. The OYM Data Sovereignty dimension scores projects partly on whether they use confidential compute alone or in combination with other privacy primitives.

Related terms