Privacy & security

FHE

Fully Homomorphic Encryption. A cryptographic technique that lets you compute on encrypted data without decrypting it. The result is also encrypted, and only the data owner can read it. FHE is the strongest form of computational privacy.

Also known as: Fully Homomorphic Encryption, homomorphic encryption

FHE is the holy grail of privacy-preserving computation. With FHE, you can encrypt data on your machine, send the ciphertext to an untrusted server, have the server perform arbitrary computations on the ciphertext, and receive an encrypted result that only you can decrypt. The server never sees the plaintext data, never sees the intermediate values during computation, and never sees the final result. The privacy guarantee is mathematical, resting on cryptographic hardness assumptions rather than on secure hardware or trust in any third party.
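To make that workflow concrete, below is a minimal, self-contained sketch of the “compute on ciphertexts” idea using a toy Paillier-style additively homomorphic scheme. This is not full FHE (it supports only addition, and the demo parameters are far too small to be secure), but it shows the pattern just described: the data owner encrypts, an untrusted server combines ciphertexts it cannot read, and only the key holder can decrypt the result.

```python
# Toy Paillier-style additively homomorphic encryption (NOT full FHE).
# It only supports addition of plaintexts, and the primes are far too small
# for real security, but it illustrates the core pattern: a server combines
# ciphertexts it cannot read, and only the key holder can decrypt the result.
import math
import random

def keygen(p=2_147_483_647, q=2_147_483_629):   # tiny demo primes, insecure
    n = p * q
    lam = math.lcm(p - 1, q - 1)                # Carmichael function of n
    mu = pow(lam, -1, n)                        # valid because we use g = n + 1
    return n, (lam, mu, n)                      # public key, private key

def encrypt(n, m):
    while True:
        r = random.randrange(1, n)              # random blinding factor
        if math.gcd(r, n) == 1:
            break
    g = n + 1
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n              # L(x) = (x - 1) / n, then unblind

def add_encrypted(n, c1, c2):
    """Server side: multiplying ciphertexts adds the hidden plaintexts."""
    return (c1 * c2) % (n * n)

# Data owner encrypts locally; an untrusted server combines what it cannot read.
pub, priv = keygen()
c_a = encrypt(pub, 52_000)
c_b = encrypt(pub, 61_500)
c_sum = add_encrypted(pub, c_a, c_b)            # server never sees 52000 or 61500
assert decrypt(priv, c_sum) == 113_500          # only the key holder learns the total
print(decrypt(priv, c_sum))
```

Real FHE schemes (the TFHE, CKKS, and BFV families mentioned below) generalize this idea from addition-only to arbitrary computation on ciphertexts, which is exactly where the performance cost comes in.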

The catch is performance. FHE computations are 10,000 to 1,000,000 times slower than the equivalent computation on plaintext data. An operation that takes a microsecond on plaintext data takes tens of milliseconds to a second on encrypted data, and a computation built from millions of such operations stretches into minutes or hours. Until recently, FHE was a theoretical curiosity rather than a practical tool. Recent breakthroughs in FHE schemes (TFHE, CKKS, BFV) plus hardware acceleration have brought the overhead down to the point where some real applications are feasible, but it’s still impractical for high-throughput tasks like AI inference at scale.
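A rough back-of-the-envelope check makes the gap vivid. The sketch below uses the 10^4–10^6x overhead range quoted above; the plaintext timings (a microsecond-scale operation, a hypothetical 50 ms model forward pass) are illustrative assumptions, not benchmarks.

```python
# Back-of-the-envelope FHE overhead using the 10^4x-10^6x range quoted above.
# Plaintext timings are illustrative assumptions, not measurements.
plaintext_times_s = {
    "single arithmetic op": 1e-6,      # microsecond-scale plaintext operation
    "one model forward pass": 0.05,    # hypothetical 50 ms plaintext inference
}
for label, t in plaintext_times_s.items():
    low, high = t * 1e4, t * 1e6
    print(f"{label}: {low:g} s to {high:g} s under FHE")
# single arithmetic op: 0.01 s to 1 s under FHE
# one model forward pass: 500 s to 50000 s under FHE (~8 minutes to ~14 hours)
```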

Several DeAI projects use FHE for specific use cases where the privacy guarantee is more important than the performance overhead. Zama and Fhenix focus on FHE for general computation. Some Nillion use cases combine FHE with MPC. The general pattern is FHE for low-throughput, high-value privacy (private financial computations, encrypted voting, sensitive medical data analysis), and TEEs or MPC for higher-throughput use cases where some trust assumptions are acceptable.

The honest framing for FHE in DeAI is “powerful but not yet practical at scale.” A 100x speedup in FHE performance every few years would eventually make it competitive with TEEs for AI inference, but that day hasn’t arrived. For 2026 use cases, FHE is the right answer for narrow privacy-critical computations and the wrong answer for general AI inference. The OYM Data Sovereignty dimension gives high marks to projects that use FHE for the right workloads, while noting where TEEs or MPC are doing the actual work.

Related terms