
How DeAI Actually Competes

DeAI won't beat OpenAI at frontier models. The edge is coordination, specialisation, and economics. Not ideology: game theory and market structure.

DeAI is not going to out-frontier the frontier labs. That’s not the play.

OpenAI, Anthropic, and Google have more GPUs, more researchers, and more money than the entire decentralised AI (DeAI) ecosystem combined. If DeAI tries to beat them at training GPT-6, it loses. That game is rigged.

The real competitive edge is different. It’s cost, coordination, neutrality, and infrastructure. Not ideological purity. Actual market structure advantages that centralised players cannot replicate without ceasing to be centralised.

This is where DeAI can actually win.

The cost inflection point

Something shifted in late 2024. Open-source models stopped being the budget option that admitted they were worse. They became the budget option that admitted they were almost as good.

DeepSeek vs GPT-5.4 cost: ~4%
Capability retained: ~90%
DeepSeek vs GPT-5.4 price gap: 25x

Let me be specific about the numbers, because the deltas are large. DeepSeek V3.2 (which now powers both their chat and reasoning endpoints) costs $0.28 per million input tokens and $0.42 per million output tokens (DeepSeek pricing). OpenAI’s current frontier model, GPT-5.4, costs $2.50 and $15 respectively (OpenAI pricing). Claude Opus 4.6 costs $5 and $25 (Anthropic pricing). For a million tokens in and a million tokens out, you are looking at $0.70 with DeepSeek versus $17.50 with GPT-5.4 versus $30 with Claude Opus 4.6. That is a 25x gap between DeepSeek and OpenAI’s frontier model.
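
The arithmetic is simple enough to sanity-check yourself. A short Python sketch using the list prices quoted above:

```python
# Sanity-checking the gap with the March 2026 list prices quoted above
# ($ per million tokens).
PRICING = {
    "DeepSeek V3.2":   {"input": 0.28, "output": 0.42},
    "OpenAI GPT-5.4":  {"input": 2.50, "output": 15.00},
    "Claude Opus 4.6": {"input": 5.00, "output": 25.00},
}

def cost(model: str, input_m: float = 1.0, output_m: float = 1.0) -> float:
    """Dollar cost for input_m / output_m millions of tokens."""
    p = PRICING[model]
    return p["input"] * input_m + p["output"] * output_m

for name in PRICING:
    print(f"{name}: ${cost(name):.2f}")  # $0.70, $17.50, $30.00

print(round(cost("OpenAI GPT-5.4") / cost("DeepSeek V3.2")))  # 25
```

Change the input/output split to match your workload; the gap barely moves, because every term scales by the same ratio.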

The quality gap is small and shrinking. DeepSeek V3.2 competes directly with GPT-5 level performance, beating it on some maths benchmarks while trailing on general knowledge (Artificial Analysis). You can run it locally. No API calls. No data leaving your infrastructure. No usage limits.

Model pricing comparison (March 2026, verified against provider pricing pages)

Model | Input $/1M | Output $/1M | Total (1M in + 1M out)
DeepSeek V3.2 | $0.28 | $0.42 | $0.70
OpenAI GPT-5.4 | $2.50 | $15.00 | $17.50
Claude Opus 4.6 | $5.00 | $25.00 | $30.00


Qwen3 Coder hits 69.6% on SWE-Bench Verified, beating older GPT models on key coding benchmarks (Index.dev review). The licence is MIT. You can fine-tune it. You can host it. You can build products on top of it without asking permission or paying API taxes.

Llama 4 Scout offers a 10 million token context window. Ten million. That is not a typo (Meta AI Blog). The context length alone opens workloads that would cost significantly more on frontier APIs.

This matters because it breaks the centralised AI business model. OpenAI and Anthropic charge premiums for frontier capability. If frontier-adjacent models cost 4-20% of frontier models and deliver 80-95% of the performance, the premium compresses. You do not need GPT-5.4 for most tasks. You need something good enough, cheap enough, and controllable.

The self-hosting option

When DeepSeek V3.2 costs $0.28 per million input tokens through their API, you might wonder why anyone would bother self-hosting. The answer is control, not cost.

Self-hosting means no rate limits. No usage tracking. No terms of service that change with a policy update. No possibility that the provider decides your use case violates their interpretation of safety guidelines.

Akash Network rents H100s at roughly $1.50-2 per GPU-hour. At 70% utilisation on inference workloads, that translates to competitive per-token costs at scale. The utilisation matters more than the hardware cost. Most centralised providers run at 30-50% utilisation and price accordingly. You can price more efficiently if you control your own utilisation.

The self-hosting economics work at scale. Below certain thresholds, API providers win on convenience. Above them, the maths tips toward ownership. The threshold is dropping as open-source models improve.
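
To see how GPU-hour rent converts into per-token cost, here is a sketch. The throughput figure is an assumption for illustration, not a measured benchmark:

```python
# Converting GPU-hour rent into per-token cost. The throughput figure
# is an assumption for illustration, not a measured benchmark.
def self_hosted_cost_per_m_tokens(gpu_hour_usd: float,
                                  tokens_per_second: float,
                                  utilisation: float) -> float:
    """Dollars per million tokens for a single rented GPU."""
    useful_tokens_per_hour = tokens_per_second * 3600 * utilisation
    return gpu_hour_usd / useful_tokens_per_hour * 1_000_000

# Assumed: $1.75/GPU-hr (mid-range of the Akash figure above),
# 2,500 tokens/s sustained batch throughput, 70% utilisation.
per_million = self_hosted_cost_per_m_tokens(1.75, 2500, 0.70)
print(f"${per_million:.2f} per 1M tokens")  # $0.28 per 1M tokens
```

Under those assumptions the per-token figure lands near DeepSeek’s own API price, which is why the self-hosting argument is control rather than cost.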

Small models, domain specialisation, and the decentralised advantage

Here’s something the frontier narrative obscures. Most AI workloads don’t need general intelligence. They need specific competence.

A model trained on medical literature doesn’t need to write poetry. A model optimised for legal contract review doesn’t need to generate images. A model built for code review doesn’t need to understand interpersonal dynamics. Specialisation beats generalisation in narrow domains.

This is where decentralised training and federated learning become genuinely interesting. FLock.io’s Web3 Agent Model beats GPT-4o, Gemini Flash 2.0, and DeepSeek-V3 on Web3-specific tasks (Messari Q1 2025 report). Not because it’s a better general model, but because it’s trained on Web3 data, for Web3 tasks, in a Web3 context.

Centralised labs have strong incentives to build general models. The economics of frontier training (hundreds of millions of dollars per run) demand broad applicability to justify the investment. You can’t spend $500 million training a model that only does one thing well.

But decentralised networks don’t face that constraint. A specialised model trained on 1,000 GPUs distributed across Bittensor subnets costs a fraction of a frontier run. The training can be continuous, not discrete. The model can be domain-specific without apologising for not being general.

Bittensor’s subnet architecture creates markets for specialised intelligence. One subnet handles text generation. Another handles image generation. Others handle translation, code review, data scraping, embeddings, financial analysis, medical diagnosis. Each subnet optimises for its specific task. The best miners earn the most TAO. Evolution, not engineering.

This is the real DeAI opportunity: not beating GPT-5 at being GPT-5, but building a network of specialised models that collectively exceed what any single generalist can do. Generalists will always exist for broad tasks. But for everything else, specialisation wins.

Coordination layers: where DeAI gets interesting

The cost advantage is real but not uniquely DeAI. Anyone can run DeepSeek V3.2 on AWS. The structural DeAI play is coordination: mechanisms that align incentives across distributed participants to produce AI capability.

Bittensor: the coordination experiment that actually shipped

Bittensor is the largest decentralised AI network by market cap, and it’s one of the few that has real workloads running through it.

128 active subnets. 8,000+ GPU nodes. $100M+ exchange-reported daily volume (unfiltered; may include wash trading). Chutes, a subnet handling serverless inference, processes billions of tokens daily. These are not hypothetical numbers (LinkedIn analysis).

The architecture works like this: subnets are specialised markets for specific AI tasks. Miners compete to produce the best outputs. Validators score them. The best miners earn the most TAO. Underperformers get pushed out. Darwinian selection applied to model inference and training.
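
The incentive loop can be sketched in a few lines. This is a toy version of the shape described above, not Bittensor’s actual Yuma consensus mechanism:

```python
# Toy version of the subnet incentive loop: validators score miner
# outputs, and emissions are split proportionally to the scores.
# Underperformers converge to zero reward and drop out.
def allocate_emissions(scores: dict[str, float], pool: float) -> dict[str, float]:
    total = sum(scores.values())
    if total == 0:
        return {miner: 0.0 for miner in scores}
    return {miner: pool * s / total for miner, s in scores.items()}

validator_scores = {"miner_a": 0.9, "miner_b": 0.6, "miner_c": 0.0}
rewards = allocate_emissions(validator_scores, pool=100.0)
print(rewards)  # miner_c earns nothing and is eventually pushed out
```

The real mechanism adds stake-weighted validators and consensus over scores, but the selection pressure works the same way: reward follows score, and zero score means zero emission.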

Dynamic TAO, launched February 2025, changed how emissions flow. Each subnet now has its own alpha token traded against TAO in an on-chain AMM. Stakers vote with their TAO by depositing into subnets they believe produce value. Emission allocation follows market signals, not validator votes. The first halving in December 2025 cut daily emissions from roughly 7,200 TAO to roughly 3,600 TAO. Blockworks Research
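
A toy version of the market signal, assuming a constant-product alpha/TAO pool and emission weighting proportional to alpha price. This is a simplification of the actual mechanism, used only to show the direction of the design:

```python
# Toy dynamic-TAO sketch: each subnet's alpha token trades against TAO
# in a constant-product (x*y=k) pool; the spot price acts as the
# market signal that steers emissions.
def alpha_price_in_tao(tao_reserve: float, alpha_reserve: float) -> float:
    return tao_reserve / alpha_reserve  # spot price in an x*y=k pool

pools = {
    "subnet_text":  {"tao": 9000.0, "alpha": 3000.0},  # heavily staked
    "subnet_image": {"tao": 3000.0, "alpha": 3000.0},
}
prices = {s: alpha_price_in_tao(p["tao"], p["alpha"]) for s, p in pools.items()}
total = sum(prices.values())
daily_emission = 3600.0  # roughly the post-halving daily figure cited above
allocation = {s: daily_emission * pr / total for s, pr in prices.items()}
print(allocation)  # the subnet the market believes in earns more
```

Depositing TAO into a subnet’s pool raises its alpha price, which raises its emission share: that is the “stakers vote with their TAO” loop in miniature.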

The honest assessment: Bittensor’s decentralisation is weaker than marketed. The Opentensor Foundation validates all blocks through Proof of Authority. They halted the entire network in July 2024 and intervened in subnet 28. No timeline for a Proof of Stake transition exists. The top 1% of wallets control roughly 90% of stake. Stake weight appears to correlate more strongly with rewards than AI output quality. My Bittensor review details the centralisation concerns.

But the subnet model is genuinely innovative. Anyone can spin up a subnet for any AI task. If it attracts miners and validators, it gets emissions. If it produces useful output, it grows. The market decides what intelligence gets produced, not a central lab.

FLock.io: technical traction, not just tokens

FLock.io is a different coordination play. It focuses on federated learning, training models across distributed datasets without centralising the data. Named partners include CIMG Inc. (Nasdaq-listed, using FLock for AI health products) and BGA, a global nonprofit running “AI for Good” initiatives. They made the CB Insights AI 100 list for 2025. CoinMarketCap

This is notable because most DeAI projects claim technical parity but cannot benchmark it. FLock published results.

The Bittensor integration (FLock OFF subnet) shows coordination layers can compose. Federated small-language-model training on a decentralised network. Not a pitch deck, actual running code.

Federated learning matters for privacy-sensitive domains. Medical data cannot leave hospitals. Financial data cannot cross jurisdictions. Personal data should not be centralised. FLock’s architecture trains models where the data lives, aggregating only model updates. The data never moves. The model improves anyway.
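
The aggregation step is the whole trick. A minimal federated-averaging sketch, using the standard FedAvg shape rather than FLock’s actual protocol:

```python
# Minimal federated-averaging sketch: each site trains locally and
# ships only its weight vector plus sample count; the aggregator
# returns the sample-weighted mean. The raw data never moves.
def fed_avg(updates: list[tuple[list[float], int]]) -> list[float]:
    """updates: (local_weights, num_local_samples) per site."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

# Two hospitals train locally; only weights leave the building.
hospital_a = ([0.2, 0.8], 1000)
hospital_b = ([0.4, 0.6], 3000)
print(fed_avg([hospital_a, hospital_b]))  # weighted toward the larger site
```

Production systems layer secure aggregation and update validation on top, but the privacy property comes from this basic shape: the aggregator sees model updates, never patient records.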

Sentient: the $85 million question mark

Sentient raised $85 million in seed funding in July 2024. Peter Thiel’s Founders Fund led. Pantera participated. Franklin Templeton made a strategic investment in July 2025. CoinDesk

They launched “The GRID” in August 2025, described as an open-source AGI network. The SENT token listed on Binance. Over 100 AI models/agents on the network. MEXC News

But usage metrics are not publicly disclosed. Mainnet is described as “pending” in most documentation. This is early-stage. The funding is real. The team is serious. Whether the coordination mechanism produces better AI than centralised alternatives remains an open question. More “pitch deck plus testnet” than production system.

I am sceptical of any project that raises $85 million before shipping. The history of crypto is littered with well-funded projects that never found product-market fit. Sentient might be different. The investors are credible. But until mainnet launches with measurable usage, it belongs in the “wait and see” category.

Coordination layer traction

Project | Status | Real Traction? | Key Signal
Bittensor | Production | Yes | 128 subnets, $100M+ daily volume, Chutes revenue
Sentient | Testnet | Early | $85M funding, Grid launched Aug 2025, Binance listing
FLock.io | Production | Yes | Web3 Agent Model benchmarks, CB Insights AI 100, Bittensor subnet

The geopolitical angle: neutrality as a feature

At some point, technology stops being about technology.

The US sees AI as an economic engine and national security pillar. China sees it as geopolitical infrastructure: centralised, sovereign, aligned with Belt and Road diplomacy. Atlantic Council

Neither is wrong. AI will define economic competitiveness, military capability, and information sovereignty for the next century. Both countries know this. Both are building AI stacks they control, that advance their interests, that can be weaponised if necessary.

Middle powers and the demand for neutral infrastructure

Middle powers (the EU, Southeast Asia, Latin America, the Gulf states) face a binary choice. US tech or Chinese tech. Neither is neutral. Both come with strings attached: data residency requirements, export controls, surveillance expectations, influence. The Chatham House analysis is blunt on this point.

So: demand for neutral infrastructure. Not US-controlled. Not Chinese-controlled. Decentralised. Nodes distributed across multiple jurisdictions, no single government able to shut it down, open-source code with no hidden backdoors, permissionless access that export controls cannot block. The network exists everywhere and nowhere.

Akash has 69 active providers across multiple countries. Bittensor has 8,000+ GPU nodes globally. Render has 15,670 node operators. Not theoretical distributions. Live now.

Sovereign AI is now a stated priority

AI infrastructure is becoming sovereign infrastructure. Countries want control over their data, their compute, their models. The EU’s AI sovereignty initiatives, India’s push for domestic AI capability, Saudi Arabia’s investment in national AI: these are not optional programmes. They are strategic priorities.

Centralised US providers cannot offer true sovereignty. AWS might have EU regions, but AWS is a US company subject to US law. Centralised Chinese providers cannot offer it either. The CLOUD Act, US export controls, Chinese national security laws: these make true data sovereignty impossible with centralised providers.

DeAI can offer it. A network where compute happens across diverse jurisdictions, where no single entity controls the infrastructure, where the code is open and auditable. That is sovereign infrastructure by design, not by promise.

Is this a large market today? No. Most AI buyers prioritise cost and capability over sovereignty. But sovereign AI is a stated priority for the EU, India, Saudi Arabia, Singapore, and others. The demand is emerging. Whether DeAI can capture it depends on whether the networks become reliable enough for government adoption.

The US-China AI race is also making decentralised alternatives valuable for a different reason: resilience. If US-China tensions escalate to technology decoupling, centralised providers in either jurisdiction become unreliable for users in the other. A truly decentralised network, with nodes across both jurisdictions and neutral third parties, provides continuity that neither US nor Chinese centralised providers can guarantee.

On-chain AI standards: the infrastructure layer

Two standards launched in late 2024 and early 2025 that matter more than their current usage suggests: ERC-8004 and x402. For a deeper look at what happens when AI agents hold wallets, see the companion essay.

ERC-8004: agent identity on-chain

ERC-8004 establishes on-chain registries for AI agents. Three registries: Identity, Reputation, and Validation. Deployed January 29, 2026 on 18+ EVM-compatible chains. Chainstack

Autonomous agents need persistent identity, reputation that cannot be forged, and validation other agents can verify without human intermediaries. ERC-8004 provides all three.

An agent registers with its capabilities, pricing, and interaction endpoints. Other agents can discover it, verify its reputation, and transact with it. The registry is immutable and transparent. Avalanche announced ERC-8004 live on their platform in February 2026. The Graph is maintaining dedicated subgraphs across 8 blockchains.
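
The register-discover-verify flow can be sketched as an in-memory model. Field and method names here are illustrative assumptions, not the actual ERC-8004 contract ABI:

```python
# In-memory sketch of the registry flow ERC-8004 describes: identity
# (who the agent is and what it offers) plus reputation (feedback
# signals); validation is left out for brevity. Names are illustrative
# assumptions, not the contract ABI.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    capabilities: list[str]
    endpoint: str
    price_per_call_usd: float
    feedback: list[int] = field(default_factory=list)  # reputation signals

class AgentRegistry:
    def __init__(self) -> None:
        self.agents: dict[str, AgentRecord] = {}

    def register(self, rec: AgentRecord) -> None:
        self.agents[rec.agent_id] = rec

    def discover(self, capability: str) -> list[AgentRecord]:
        return [a for a in self.agents.values() if capability in a.capabilities]

    def reputation(self, agent_id: str) -> float:
        fb = self.agents[agent_id].feedback
        return sum(fb) / len(fb) if fb else 0.0

reg = AgentRegistry()
reg.register(AgentRecord("tax-agent", ["tax", "accounting"],
                         "https://example.invalid/agent", 0.05))
reg.agents["tax-agent"].feedback += [5, 4, 5]
print([a.agent_id for a in reg.discover("tax")])  # ['tax-agent']
print(round(reg.reputation("tax-agent"), 2))      # 4.67
```

On-chain, the registry entries are immutable and the feedback comes from attested interactions; the point of the sketch is the shape of the lookup, not the trust model.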

The criticism is fragmentation. 18+ chains means 18+ registries, and network effects require consolidation on a single registry or cross-chain discoverability. 49% alternative-facilitator usage suggests concentration is still an issue (SmartContracts Tools). But the standard exists. The infrastructure is live.

x402: micropayments for AI-to-AI transactions

x402 turns HTTP 402 status codes (“Payment Required”) into functional micropayment rails. AI agents can pay for API calls and services with on-chain stablecoins. Pay-per-request infrastructure for machine-to-machine commerce. Chainstack x402

The use case is specific but important. An AI agent needs to query a database, run an inference, or access a service. Instead of a subscription or pre-paid API key, it sends a micropayment with the request. The service processes it and responds. No human involved. No account management. No billing department.
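
The handshake is simple. A sketch of the 402-then-retry flow, with header names that are illustrative rather than taken from the spec:

```python
# Sketch of the x402 request cycle: the first call returns HTTP 402
# with payment requirements; the retry carries a payment proof and is
# served. Header names and the proof format are illustrative.
def handle_request(headers: dict[str, str]) -> dict:
    price_usdc = 0.001  # per-request price in a stablecoin
    if "X-Payment" not in headers:
        return {"status": 402, "headers": {"Accept-Payment": f"usdc:{price_usdc}"}}
    # A real facilitator would verify the payment proof on-chain here.
    return {"status": 200, "body": "inference result"}

first = handle_request({})
print(first["status"], first["headers"]["Accept-Payment"])  # 402 usdc:0.001
retry = handle_request({"X-Payment": "signed-usdc-transfer-proof"})
print(retry["status"], retry["body"])  # 200 inference result
```

The whole cycle is two HTTP round trips, which is why it suits agents: no signup, no stored credentials, just a priced request.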

Austin Griffith on the Bankless podcast gave a concrete example: “If I put my node up as a service and I put it on X402 and say, I’ll help you do your taxes by giving you the P&L of every one of your transactions because I have a full node here. Then you can go to 8004 and discover me…” The standards compose. Agent discovery via ERC-8004. Payment via x402. Autonomous commerce.

Solana is dominant for production deployments, as speed and low cost matter for micropayments. The Graph integrated x402 for Subgraph queries. Implementations exist on Base, MultiversX, and Sei.

Why these matter

Neither standard has massive adoption today. But they are building blocks for an autonomous agent economy. If AI agents are going to transact without human intermediaries, they need identity, reputation, and payment infrastructure. ERC-8004 and x402 provide it.

The bet is that agent-to-agent commerce becomes meaningful. If that happens, these standards become the rails. If it doesn’t, they remain infrastructure in search of users. But compared to DeAI projects building speculative token economies without utility, standards work is infrastructure investment.

On-chain AI standards

Standard | Deployed | Chains | Purpose
ERC-8004 | Jan 29, 2026 | 18+ EVM | Agent identity, reputation, validation registries
x402 | Active | Solana, Base, MultiversX, Sei | Micropayments for API calls via HTTP 402

Cost benchmarks: decentralised compute at scale

The cost advantage for decentralised compute is real, but the details matter.

For a detailed breakdown of how these three networks compare on revenue and tokenomics, see RENDER vs AKT vs IO: the revenue question. The headline claim: “You can get industry-standard GPUs for less than 50% of AWS/Azure costs.” Capracap Substack This is directionally correct but hides important tradeoffs.

GPU compute cost comparison (A100 80GB)

Provider | Approx Cost/GPU-Hour | Notes
AWS On-Demand | ~$3-4 | Reference baseline with SLAs
Akash | ~$1.50-2 | Variable, spot-market, fewer guarantees
Render | ~$1-2 | Rendering-focused, AI pivot ongoing
io.net | ~$0.50-1.50 | But verify actual availability

Provider-by-provider

Akash is the most credible open-source compute marketplace. 69 active providers, 700-1,000 GPUs, roughly 60% utilisation. $3.15 million annual revenue, up 128% year-over-year. Real customers: Venice, ElizaOS, Morpheus, Gensyn. Permissionless provider onboarding. Fully open source. Freedom Score: 66/100, among the highest for DePIN GPU networks.

The tradeoffs are real. 69 providers is not a cloud marketplace. It’s a pilot programme. AWS has millions of GPUs. Akash has hundreds. You don’t get managed services. You don’t get SLAs. You get raw compute at a discount, and you accept that the provider might disappear. Provider count actually declined through 2025, despite usage growth.

io.net markets 327,000 “registered” GPUs. The daily average of verified, active GPUs in Q1 2025 was 6,720. That’s 2% utilisation of the registered base. Freedom Score: 38/100, F grade. Closed-source core. No governance. Founder departed under allegations. Revenue is growing ($5.7M Q1 2025, 82.6% QoQ growth), but the metrics are inflated and the decentralisation claims are weak. My io.net review details the concerns.

Render has genuine product-market fit for visual rendering. 15,670 node operators. 69 million frames rendered cumulatively. Freedom Score: 32/100, F grade. Permissioned network, closed-source. Late to the AI compute pivot. Real customers like Beeple and Apple Vision Pro. But the tokenomics work better than most. Returns Score: 70/100, B grade. Render analysis

The tradeoffs in plain terms

The honest assessment: decentralised compute is cheaper but smaller. You are trading reliability and scale for cost and sovereignty. That tradeoff makes sense for some workloads and not for others. If you need guaranteed uptime for production inference, centralised providers still win. If you need cheap batch processing and can tolerate interruptions, DePIN networks work.

The DePIN GPU landscape in 2026: io.net leads on claimed capacity (327,000 registered GPUs, 6,720 active), Render leads on rendering volume and mainstream customers, Akash leads on decentralisation and open-source credibility. Aethir (not reviewed here) claims $166.9M ARR with 430K GPUs (self-reported figures). The market is fragmented and the metrics are noisy. But the cost advantage is real for users who can tolerate the tradeoffs.

The quadrant model: where projects actually sit

This is the framework I use at OwnYourMind.ai. Two axes: Freedom Score (decentralisation, censorship resistance, sovereignty) and Returns Score (token utility, value accrual, supply dynamics). The threshold for “high” on either axis is 5.5/10. Full explanation of the quadrant model.
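
The quadrant assignment is a mechanical threshold rule. A minimal sketch, reading a score at or above 5.5/10 as "high" on that axis (the inclusive reading is what places a project scoring exactly 5.5 in the high half):

```python
# Minimal sketch of the quadrant rule: two axes, one threshold.
# A score at or above 5.5/10 counts as "high" on that axis.
def quadrant(freedom: float, returns: float, threshold: float = 5.5) -> str:
    high_freedom = freedom >= threshold
    high_returns = returns >= threshold
    if high_freedom and high_returns:
        return "A"  # best of both
    if high_freedom:
        return "B"  # sovereignty play
    if high_returns:
        return "C"  # centralised value
    return "D"      # avoid

# Scores taken from the project map in this section.
print(quadrant(6.6, 6.8))  # Akash Network  -> A
print(quadrant(6.9, 4.6))  # Golem Network  -> B
print(quadrant(3.2, 7.2))  # Render Network -> C
print(quadrant(3.8, 5.4))  # io.net         -> D
```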

Project quadrant map: Freedom Score (x) versus Returns Score (y)

[Interactive scatter chart. Quadrants: A "best of both", B "sovereignty play", C "centralised value", D "avoid". Threshold for "high" on each axis: 5.5/10.]

Quadrant A, best of both (9): Akash Network, Bittensor, FLock.io, Flux, Morpheus, NEAR Protocol, OriginTrail, Vana, Venice
Quadrant B, sovereignty play (9): Golem Network, IoTeX, Nillion, Nosana, Oasis Network, Ocean Protocol, Olas, Phala Network, Theta Network
Quadrant C, centralised value (5): Aethir, Fetch.ai / ASI Alliance, Grass, Render Network, Virtuals Protocol
Quadrant D, avoid (11): Allora Network, Cookie DAO, ElizaOS, Gensyn, Intelligent Internet, io.net, Ora Protocol, peaq, Sahara AI, Sentient, Warden Protocol

34 projects plotted on Freedom Score (decentralisation) versus Returns Score (token value capture).

Quadrant A (High Freedom, High Returns) used to be empty. It isn’t anymore. The original quadrant model assumed a hard trade-off: pick decentralisation or pick value capture. That trade-off has weakened. Akash is the cleanest example, a permissionless GPU marketplace running on revenue rather than emissions. Morpheus sits highest on the freedom axis. Bittensor, NEAR, and Venice fill out the cluster from different angles: subnet markets, an L1 with pivot speed, an inference product with a token that buys back its own emissions. None of these are clean. Most have at least one dimension that will keep score chasers up at night. But the simple “you can’t have both” framing no longer holds.

Quadrant B (High Freedom, Low Returns) is the classic sovereignty play: decentralised infrastructure with a token model that hasn’t caught up. Golem, IoTeX, Phala, Theta, and Olas are the canonical examples. The pattern is the same across the quadrant: real protocols, sometimes real users, but value accrual is weak. Returns come from network growth, not revenue distribution.

Quadrant C (Low Freedom, High Returns) is efficient but centralised. Render is the textbook case: a real product, real revenue, token buybacks, foundation control. Virtuals and Aethir follow the same pattern. Returns are visible. The decentralisation isn’t.

Quadrant D (Low Freedom, Low Returns) is the danger zone. io.net is the headline: 327,000 “registered” GPUs, ~6,720 actually verified. The rest of D follows similar patterns. Centralised operation, weak token model, often inflated metrics. The token has to do something the operating company couldn’t already do better. For most of these, that test fails.

The point of the model is that no project can be all things. The chart above gives the current placement; what matters in the long run is the structural trade-off. The question is whether a project’s trade-off is intentional and acknowledged, or obscured by marketing.

Where DeAI actually competes

DeAI competes on three dimensions that centralised players cannot match without ceasing to be centralised.

Cost: the commodity layer

Open-source models at 5-20% of frontier cost, running on decentralised compute at 50% of cloud cost. Centralised providers can lower prices, but they have revenue expectations and margin requirements. Decentralised providers are selling spare capacity. The cost floor is lower.

Coordination mechanisms

Bittensor’s subnet model, FLock’s federated learning, the emerging agent standards. These are experiments in aligning distributed participants toward AI production. They might fail. They also might produce capabilities that centralised labs cannot replicate because they lack the coordination mechanism, not the technology.

Sovereignty and neutrality

The geopolitical dimension. Governments and enterprises that want AI infrastructure they control, that no foreign power can influence or shut down. Centralised US and Chinese providers cannot offer true sovereignty. DeAI can.

What DeAI does not compete on

What DeAI does not compete on is frontier capability. GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro: these will be better than anything decentralised networks produce for the foreseeable future. The frontier labs have the GPUs, the researchers, and the data. That game is not winnable.

But most AI workloads don’t need frontier models. They need good enough, cheap enough, and controllable. That’s where DeAI plays. And that’s where the economics are shifting toward open-source and decentralised infrastructure.

The real question is not whether DeAI can beat OpenAI. It’s whether DeAI can capture the commodity layer of the AI stack while frontier labs fight for the premium. The commodity layer is larger. The margin is lower. The sovereignty premium is real.

That’s the actual competitive position. Not revolution, but adjacent competition on the dimensions that centralised players structurally cannot optimise.


The dual-score framework (Freedom Score + Returns Score) is my attempt to bring analytical rigour to a space full of marketing claims. If you want the methodology, it’s at Freedom Score Methodology and Returns Score Methodology. If you want project-by-project analysis, see Projects. Current token holdings are disclosed on our disclaimer page.
