What Is DePIN? The Infrastructure Layer Beneath Decentralised AI
DePIN turns idle hardware into coordinated infrastructure using token incentives. It is the physical layer that makes decentralised AI possible. Here is what it actually means, how it connects to AI, and where the gaps are.
The term and what it means
DePIN stands for Decentralised Physical Infrastructure Networks: protocols that use token incentives to coordinate real-world physical infrastructure like GPU compute, wireless networks, storage, mapping sensors, or bandwidth. Messari coined the term in November 2022 via a Twitter poll that beat out three alternatives (Proof of Physical Work, Token Incentivised Physical Networks, and EdgeFi). It stuck because it described something specific: networks that use token incentives to coordinate the deployment and operation of physical infrastructure.
The concept is older than the name. Helium was building a decentralised wireless network years before anyone called it DePIN. But the label created a category, and categories attract capital. DePIN startups raised roughly $1 billion in 2025, up from $698 million in 2024 (Messari State of DePIN 2025).
Three conditions separate DePIN from “blockchain project that mentions hardware”:
- Three-sided platform. Suppliers provide hardware. A protocol coordinates and verifies. Customers pay for services. The suppliers and the protocol team are distinct entities. Bitcoin mining fails this test because miners are the service.
- Token-based supply incentives. Tokens reward infrastructure providers, solving the cold-start problem. Without this, you're just a hosting company.
- Physical asset deployment. The incentivised action involves real hardware in real locations: GPUs, bandwidth, sensors, storage, vehicles. Pure software protocols don't qualify.
The three-sided DePIN model
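The three conditions above can be read as a simple checklist. A minimal sketch in Python, where the field names are invented for illustration and are not from any formal taxonomy:

```python
from dataclasses import dataclass

@dataclass
class Network:
    has_independent_suppliers: bool    # suppliers distinct from the protocol team
    has_token_supply_incentives: bool  # tokens reward infrastructure providers
    deploys_physical_assets: bool      # real hardware in real locations

def is_depin(n: Network) -> bool:
    """A network qualifies only if all three conditions hold."""
    return (n.has_independent_suppliers
            and n.has_token_supply_incentives
            and n.deploys_physical_assets)

# Bitcoin mining fails the first test: the miners *are* the service.
bitcoin_mining = Network(False, True, True)
print(is_depin(bitcoin_mining))  # False
```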
Why decentralised AI needs a physical layer
This site covers 34 projects building decentralised AI. Most of them need one of three physical resources at scale: compute (GPUs for inference and training), data (training datasets, real-time feeds, web scraping), or bandwidth (moving data between nodes). Without decentralised versions of these, "decentralised AI" is software running on someone else's cloud. The protocol might be permissionless, but the hardware underneath belongs to AWS, Google, or Microsoft.
That dependency is not theoretical. Centralised cloud providers enforce terms of service, comply with government orders, and occasionally go down entirely. If the compute layer is centralised, the AI layer inherits every centralised failure mode.
DePIN solves this by creating markets for physical resources. Instead of one company owning all the GPUs, thousands of independent providers contribute hardware and get paid in tokens. The result is infrastructure that no single entity controls, censors, or can shut down.
Two categories, one framework
DePIN splits into two broad categories based on whether location matters:
Digital Resource Networks (DRNs) provide fungible resources, such as compute, storage, and bandwidth, where the hardware's location is irrelevant. Physical Resource Networks (PRNs) deploy hardware in specific locations where geography is the point, such as wireless coverage or mapping sensors.
DePIN categories and OYM-reviewed projects
| Category | Type | OYM projects |
|---|---|---|
| GPU compute | DRN | Akash, Render, io.net, Aethir, Nosana |
| AI training | DRN | Gensyn |
| Confidential compute | DRN | Phala |
| Video / CDN | DRN | Theta, Flux |
| Bandwidth / data | DRN | Grass, Ocean, Vana |
| Machine economy | PRN | peaq, IoTeX |
The AI connection runs through DRNs. When you run a model on Akash or Render, you're using DePIN compute. When Grass scrapes the web through distributed bandwidth, that's DePIN data collection. When Venice routes your prompt to a GPU provider, the provider's hardware is DePIN infrastructure.
The economics: cheaper but rougher
DePIN compute networks price GPU access 45-60% below AWS. The cost advantage comes from aggregating globally idle resources: consumer GPUs, underused enterprise hardware, machines that would otherwise sit powered on but empty. The marginal cost of adding supply is near zero because the hardware already exists.
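As a back-of-envelope sketch of what that discount means per GPU, assuming hypothetical hourly rates (the figures below are placeholders for illustration, not quotes from AWS or any DePIN network):

```python
# All rates are hypothetical placeholders, not real provider prices.
cloud_hourly = 4.00    # $/GPU-hour on a centralised cloud (assumed)
discount = 0.45        # low end of the 45-60% range cited above

depin_hourly = cloud_hourly * (1 - discount)
monthly_hours = 24 * 30

monthly_savings = (cloud_hourly - depin_hourly) * monthly_hours
print(f"${depin_hourly:.2f}/hr on DePIN, ${monthly_savings:.0f} saved per GPU-month")
```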
DePIN vs traditional cloud
| | Traditional cloud | DePIN |
|---|---|---|
| GPU pricing | Market rate | 45-60% discount |
| SLAs | 99.99%, enforceable | None, variable uptime |
| Censorship risk | Subject to ToS and gov orders | Permissionless access |
| Single point of failure | Yes | Distributed across nodes |
| Best for | Synchronous training, production | Inference, burst, async tasks |
| Trust model | Trust the provider | Cryptographic verification |
The honest counter: cheaper doesn’t mean better. For production workloads where downtime costs money, the price discount can evaporate once you account for overprovisioning and engineering overhead.
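One way to see how the discount evaporates: if variable uptime forces you to double-provision and absorb extra engineering cost, the effective monthly price can exceed the cloud's. All numbers below are hypothetical assumptions, not measured figures:

```python
HOURS_PER_MONTH = 24 * 30

def effective_monthly_cost(hourly_rate, redundancy=1, overhead=0.0):
    """Monthly $ cost: rate x hours x redundant copies, plus fixed ops overhead."""
    return hourly_rate * HOURS_PER_MONTH * redundancy + overhead

cloud = effective_monthly_cost(4.00)               # one reliable instance (assumed rate)
depin = effective_monthly_cost(2.00,               # 50% discount (assumed rate)
                               redundancy=2,       # double-provision for uptime
                               overhead=500.0)     # extra ops/engineering, assumed
print(f"cloud: ${cloud:.0f}/mo, depin: ${depin:.0f}/mo")  # the discount is gone
```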
The supply-without-demand problem
Token incentives are brilliant at attracting hardware suppliers. Converting supply into paying customers is the hard part.
Compound VC's analysis puts it directly: "the demand side is almost always the more difficult of the two to prove out." The DePIN flywheel works in theory. In practice, many networks are stuck at step one.
The DePIN flywheel (theory vs reality)
io.net claimed 327,000 GPUs at peak. Roughly 6,700 were active. Grass reports millions of nodes. How many generate meaningful revenue for their operators? We’ve written about this gap before: the distance between reported supply and verified demand is the defining feature of this market.
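The scale of that gap is easy to quantify from the figures above:

```python
claimed_gpus = 327_000   # io.net's claimed peak supply (from the text)
active_gpus = 6_700      # roughly active at the same time (from the text)

utilisation = active_gpus / claimed_gpus
print(f"{utilisation:.1%} of claimed supply was active")  # about 2%
```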
This doesn’t mean DePIN is broken. It means the sector is early. The projects that survive will be the ones where customers pay for the service because it’s useful, not because token subsidies make it temporarily free.
Where DePIN is production-ready
Not all categories are equally mature.
DePIN maturity by category (editorial assessment)
Compute (inference and rendering) is the most developed. Akash has 69 active providers and generates verifiable on-chain revenue. Render processes real rendering jobs for real studios. These aren’t experiments.
Bandwidth and data work at scale but revenue per contributor is thin. Grass has massive node counts but individual earnings are measured in cents. The value accrues to the protocol, not the hardware provider.
Confidential compute is emerging. Phala provides TEE-based infrastructure that lets DePIN compute handle sensitive workloads. This is the missing piece for enterprise adoption: DePIN that can also guarantee privacy.
Machine economy infrastructure is early. peaq and IoTeX are building the identity and payment layers that machines need to participate in DePIN networks autonomously. When your car pays for its own charging and your drone negotiates its own airspace, that’s the machine economy. It’s being built. It’s not here yet.
How this connects to what we cover
Every project review on this site evaluates infrastructure through our Freedom Score and Returns Score. DePIN is the physical layer that determines whether a project’s freedom claims have substance. A project can score well on governance and token distribution, but if all its compute runs on three AWS instances, the infrastructure dimension drags the score down.
For a practical look at how to build your own sovereign infrastructure using DePIN components, see The Sovereignty Stack. For the case for running models on your own hardware instead, see Why Self-Host Your AI.
DePIN is not the whole picture. But it’s the part that turns “decentralised AI” from a protocol design into something that runs on hardware nobody controls. That’s worth understanding.