
Bittensor Subnets: Where the Revenue Actually Is

Most Bittensor subnets farm emissions. A handful earn real revenue. Which TAO subnets are profitable, which are subsidised, and how to tell the difference. Updated April 2026 with Covenant-72B, Intel-Targon, Grayscale GTAO.

Active subnets: 129
Verified subnet revenue (annual): $3-15M
TAO emitted daily: ~3,600
TAO still in Root staking: 72.5%

The thesis

TAO is the AI narrative token most people buy. But TAO itself doesn’t generate revenue. It’s the coordination layer, the base currency, the staking asset. The revenue, the products, the actual AI workloads: those happen in the subnets.

129 active subnets on Bittensor. Most will fail. A handful generate real revenue from real customers. The gap between the two is where the opportunity and the risk both live.

This article examines the subnets that matter. Not all 129. The ones with verifiable traction, real products, or interesting economic models worth understanding. For how the subnet emission system works mechanically, see our dTAO subnet economics deep dive. For how TAO compares to other DeAI tokens, see MOR vs TAO vs FET. For how to actually buy and stake into subnets, see our practical staking guide.

April 2026 update. Four major developments since the original publication: Templar shipped Covenant-72B (the largest decentralised pre-training run published), Manifold Labs co-authored a confidential compute whitepaper with Intel, Grayscale filed an amended S-1 for the GTAO ETF on NYSE Arca, and on 10 April 2026 Covenant AI publicly withdrew from Bittensor, accusing founder Jacob Steeves of unilateral governance control. TAO dropped roughly 15% on the exit news. The first three reshape the institutional narrative. The fourth reshapes the governance narrative entirely. For the full story, see our Templar exit deep-dive. The underlying revenue picture remains what Pine Analytics put at $3-15M verified across the entire network.

Where the emissions flow

The top 10 subnets capture 56.46% of all TAO emissions. The remaining 119 subnets share the other 43.54%. Concentration is extreme.

Top 10 subnets by emission share

Chutes (SN64) 14.39%
TAOHash (SN14) 8.20%
Gradients (SN56) 6.66%
Targon (SN4) 5.73%
Templar (SN3) 5.62%
Celium (SN51) 4.44%
Prop Trading (SN8) 3.45%
Kaito (SN5) 3.13%
Nineteen (SN19) 2.71%
Pretrain (SN9) 2.13%

Under dTAO, emission allocation is market-driven. Stakers vote with their TAO by depositing into subnets they believe produce value. The more TAO staked into a subnet, the more emissions it receives. This is Darwinian selection applied to AI infrastructure: produce value or lose your emissions to someone who does.
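That stake-follows-value loop can be sketched in a few lines. This is a minimal, illustrative model assuming emissions split purely in proportion to staked TAO; the real dTAO mechanism prices subnet alpha tokens through AMM pools, which this ignores, and the stake figures below are hypothetical:

```python
def emission_shares(staked_tao: dict[str, float]) -> dict[str, float]:
    """Split emissions in proportion to TAO staked per subnet.

    Simplified: real dTAO weights by alpha-token price via AMM pools,
    not raw stake, but the stake-follows-value intuition is the same.
    """
    total = sum(staked_tao.values())
    return {name: stake / total for name, stake in staked_tao.items()}

def daily_emissions(staked_tao: dict[str, float],
                    daily_tao: float = 3_600) -> dict[str, float]:
    """Apply the shares to the ~3,600 TAO emitted network-wide per day."""
    shares = emission_shares(staked_tao)
    return {name: share * daily_tao for name, share in shares.items()}

# Hypothetical stake distribution across three subnets
stakes = {"SN64": 500_000, "SN56": 230_000, "SN3": 200_000}
print(daily_emissions(stakes))
```

The Darwinian pressure falls out of the proportionality: any TAO un-staked from one subnet and re-staked into another immediately shifts tomorrow's emission split.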

Worth noting in that staking split: 72.5% of staked TAO is still in Root (the legacy staking mechanism). Only 27.5% has moved to alpha (subnet) staking. Most TAO holders are still parking in Root for conservative yield rather than actively allocating to subnets. “Every holder is a VC” hasn’t materialised at scale.

Supply breakdown (7.35M TAO staked): Root (SN0) 72.5%, Subnet (Alpha) 27.5%

The Rayon Labs question

One team dominates the network. Rayon Labs operates three subnets: Chutes (SN64), Gradients (SN56), and Nineteen (SN19). Combined, they command 23.71% of all TAO emissions (Messari, OAK Research). Nearly a quarter of the network’s output flows to a single operator.

23.71% of TAO emissions flow to one team. Rayon Labs: Chutes + Gradients + Nineteen.

This is a centralisation concern worth naming. Bittensor’s thesis is Darwinian competition among independent teams. When one operator captures a quarter of emissions across three subnets, the “decentralised AI” framing needs honest qualification. Rayon Labs earns those emissions by building products people use. That’s the system working as designed. But it’s also concentration risk.

Subnet by subnet

Chutes (SN64): the revenue machine

Chutes is the subnet everyone points to when they say Bittensor has real revenue. A serverless inference platform where developers deploy AI models without managing infrastructure.

Emission share (#1): 14.39%
Models on OpenRouter: 18
Models on chutes.ai: 200+
Subscription tiers: $3-20/mo

The numbers tell a growth story. Rayon Labs reports 160 billion tokens processed per day and 9.1 trillion tokens cumulative across 400,000 users. These are self-reported figures. But the OpenRouter integration provides independent verification of real volume: Chutes is listed as a provider on OpenRouter with 18 models and visible daily throughput data showing billions of tokens processed.

The revenue model is real. Developers pay $3-20/month for platform access, or use the API with per-token pricing (from $0.22/M tokens for Mistral). Revenue auto-stakes into the alpha token, creating a flywheel where usage drives token demand. Not just emission farming. It’s a functioning business.

Chutes was the first Bittensor subnet to reach $100 million market cap. GitHub activity is substantial: 1,077 commits on the chutes-api repository.

The honest caveat: the growth from 5-6 billion tokens/day (reported in early 2026) to 160 billion/day (reported March 2026) is extraordinary. A 30x increase in roughly two months. That trajectory needed watching, and the watch was warranted.

April 2026 update

OpenRouter’s public provider data shows Chutes peaked at roughly 42 billion tokens per day on 7 February 2026 and has been declining through March, averaging 8-12 billion tokens per day. That’s a meaningful gap from Rayon Labs’ 160B/day claim and contradicts the narrative of explosive growth. Some industry analysts have cited “approaching $10M ARR” for Chutes (0xSammy, April 2026) but the figure is single-source and the OpenRouter throughput data points the other direction. The most recent independent verification remains Pine Analytics’ $1.3-2.4M from March 2026.

The subsidy question

Pine Analytics, an independent Web3 analytics firm, published a bear case analysis on 24 March 2026 that puts Chutes’ economics in a different light. At 14.4% of network emissions, Chutes receives approximately 518 TAO/day, which works out to roughly $142K/day or $52M annualised at current prices. Against estimated external revenue of $1.3-2.4M, that is a subsidy ratio of 22-40:1. Customer payments fund a small fraction of operations; TAO emissions fund the rest.

The pricing math follows directly. Pine Analytics calculates that if Chutes had to cover its costs from customer revenue alone, the break-even price would be approximately $1.41 per million tokens. Together.ai currently charges $0.88/M for Llama 70B. Unsubsidised Chutes would be 1.6-3.5x more expensive than a centralised alternative, not cheaper. The cost advantage that makes Chutes attractive to developers depends entirely on the emission subsidy continuing at its current rate.
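Those figures reduce to a few lines of arithmetic. A sketch reproducing the numbers quoted above; all inputs are this section's estimates, and the TAO price is back-derived from the ~$142K/day figure rather than a live quote:

```python
# Inputs quoted in this section (estimates, not live data)
emission_share = 0.144              # Chutes' share of network emissions
daily_tao = 3_600                   # TAO emitted network-wide per day
tao_price = 274                     # implied by the ~$142K/day figure
external_revenue = (1.3e6, 2.4e6)   # Pine Analytics' annual estimate (USD)

# Emission subsidy in dollar terms
daily_emission_usd = emission_share * daily_tao * tao_price
annual_emission_usd = daily_emission_usd * 365

# Subsidy ratio bounds: annual emissions vs. high and low revenue estimates
subsidy_ratio = tuple(annual_emission_usd / r for r in reversed(external_revenue))

print(f"daily emissions:  ${daily_emission_usd:,.0f}")        # ≈ $142K/day
print(f"annualised:       ${annual_emission_usd / 1e6:.0f}M")  # ≈ $52M
print(f"subsidy ratio:    {subsidy_ratio[0]:.0f}:1 to {subsidy_ratio[1]:.0f}:1")
```

Running the numbers this way makes the sensitivity obvious: halve the TAO price or the emission share and the subsidy ratio halves with it, while the break-even token price moves in the opposite direction.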

This doesn’t make Chutes worthless. It makes the sustainability question more pointed. Real revenue of $1.3-2.4M from real customers is meaningful for a crypto-native compute platform. The question is whether that revenue can grow fast enough to justify the emissions allocated to produce it, and whether a future halving that cuts those emissions further would trigger a pricing adjustment or a user exodus.

Gradients (SN56): training for everyone

Gradients makes AI model fine-tuning accessible without code. Upload your data, pick a base model, click train. The platform handles everything: text and image model training, RLHF, alignment tuning. Currently on version 5.0.

Cost advantage is the pitch. Multiple sources report training costs of $100-500 on Gradients versus $10,000+ on Google Vertex. Rayon Labs published a comparison showing Gradients v3 at $5/hour versus AWS at $67.50/hour and Google at $19.25/hour. These comparisons are self-reported and the exact workloads may differ, but the directional cost savings are consistent with other decentralised compute pricing we have seen on Akash and Render.

The important caveat: Gradients captures 6.66% of TAO emissions, meaning a significant portion of its pricing competitiveness is funded by TAO inflation rather than structural efficiency. As with Chutes, the “cheaper than Google Cloud” headline needs to be read alongside the emission subsidy question. The cost gap narrows considerably if you account for the TAO being issued to support the network.

Gradients captures 6.66% of TAO emissions (rank #3). The product is live at gradients.io with community-trained models visible on the platform. Subnet-specific code isn’t publicly available on GitHub, which limits independent technical verification.

Nineteen (SN19): speed as a feature

Nineteen is Rayon Labs’ inference subnet focused on ultra-low latency. Multiple sources describe it as holding a “world record for fastest LLM inference.” The product is live at nineteen.ai for text and image generation.

At 2.71% of emissions (rank #9), Nineteen is the smallest of the Rayon Labs trio. It serves a different market from Chutes: where Chutes optimises for scale and serverless deployment, Nineteen optimises for speed on latency-sensitive workloads.

Targon (SN4): confidential compute with Intel

Targon is the second-largest claimed-revenue subnet after Chutes. Manifold Labs runs it. The thesis is GPU compute and inference with Intel TDX confidential computing on the CPU side and NVIDIA Confidential Computing on the GPU side. Workloads run in encrypted enclaves where neither the operator nor the hardware host can inspect the data.

The headline news is the whitepaper Manifold Labs co-authored with Intel on 23 March 2026: “Decentralized Compute on Untrusted Hardware Using Intel TDX and Encrypted CVMs.” This is the first Bittensor subnet to publish a formal security architecture document with a major semiconductor manufacturer. Intel hosting it on their own community blog gives it more credibility than the typical crypto-native partnership announcement.

Manifold Labs raised $10.5M in a Series A to expand the confidential compute infrastructure. Targon claims $10.4M ARR, but the figure is self-reported and Pine Analytics flags it as “not an audited number” with “absence of a live revenue dashboard.” Targon also powers the Dippy AI app, which itself claims 4-8.6M users across platforms. The Dippy integration is the strongest external usage signal, though the customer-revenue split between Targon and Dippy isn’t publicly disclosed.

The interesting question for Targon isn’t whether it has revenue. It’s whether the Intel partnership translates into enterprise contracts that justify the valuation, or whether it remains a credibility marker on the marketing page. Confidential compute is a real enterprise need, particularly for healthcare and financial services. Targon is positioned for that market in a way no other Bittensor subnet is.

Templar (SN3): decentralised training breakthrough

Templar completed Covenant-72B on 10 March 2026: a 72 billion parameter model pre-trained on 1.1 trillion tokens across 70+ nodes using commodity internet connections. No centralised cluster. No whitelist. Anyone with GPUs could join or leave freely.

72B parameters trained on decentralised infrastructure. Covenant-72B outperformed LLaMA-2-70B on MMLU (67.1 vs 65.6).

This is genuinely significant. The technical innovation is SparseLoCo, a compression technique that combines top-k sparsification with 2-bit quantisation and error feedback to reduce communication overhead by 146x. Without this, the bandwidth requirements would make decentralised training over home internet connections impractical. The results are published on arXiv.
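To make the idea concrete, here is a toy sketch of the top-k-plus-error-feedback part of the recipe. The published SparseLoCo method additionally 2-bit-quantises the surviving values and aggregates them across nodes; this sketch only illustrates the core compression step:

```python
import numpy as np

def sparsify_with_feedback(grad: np.ndarray, residual: np.ndarray, k: int):
    """Keep only the k largest-magnitude gradient entries; carry the
    rest forward as a residual so no signal is permanently lost."""
    corrected = grad + residual               # error feedback: re-add what was dropped
    idx = np.argsort(np.abs(corrected))[-k:]  # indices of the top-k entries
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]              # only these values + indices cross the wire
    new_residual = corrected - sparse         # everything dropped this round
    return sparse, new_residual

rng = np.random.default_rng(0)
grad = rng.normal(size=1_000)
residual = np.zeros_like(grad)
sparse, residual = sparsify_with_feedback(grad, residual, k=10)
# 10 of 1,000 values transmitted: a 100x reduction before quantisation
```

The residual is what makes the scheme viable over slow links: entries dropped this round accumulate and eventually win a top-k slot in a later round, so convergence degrades far less than naive sparsification would suggest.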

Templar captured 5.62% of emissions at rank #5. Following Covenant-72B, the team launched a successor competition called Templar: Crusades, focused on optimising training efficiency. The Crusades repository lives in the one-covenant org.

April 10 update: Covenant AI has publicly withdrawn from Bittensor. The team that built Covenant-72B and ran SN3, SN39, and SN81 announced their exit on 10 April 2026, accusing OTF leadership of unilateral emission suspension, moderation removal, infrastructure deprecation, and timed token sales. TAO dropped 15% on the news. The technical achievement (Covenant-72B) survives the exit intact and the paper is unaffected, but anyone holding SN3 alpha tokens is now in a difficult position with the original team disowning the subnet. Full analysis in our Templar exit deep-dive.

Sportsensor (SN41): prediction meets Polymarket

Sportsensor runs a decentralised sports prediction network. Miners compete to build the best models for predicting outcomes across NFL, NBA, MLB, NHL, EPL, and MLS. The meta-model aggregates the best predictions and trades them on Polymarket through proxy wallets.

The revenue model is direct: Almanac (the front-end) charges a 1% fee on winning trades. Revenue buys back the SN41 token. This is a real, verifiable revenue loop tied to Polymarket trading.

ContangoDigital, a VC investing in DeAI, leads the team and also operates @AskBillyBets, a public-facing prediction agent powered by Sportsensor intelligence. The Polymarket integration is confirmed in the GitHub source code.

The concern: both sportsensor.io and sportsensor.ai domains are currently dead. For a product claiming real traction, having no live website is a red flag. GitHub engagement is low (4 stars, 116 commits). The 427% NBA trading profit cited by 0xJeff comes from Sportsensor’s own trading report and could not be independently verified.

Score (SN44): computer vision for sports

Score takes a different approach to sports AI. Rather than predicting outcomes from data, it uses decentralised computer vision to annotate football (soccer) video footage. Miners compete to annotate game footage accurately and quickly, and that annotated data feeds into player scoring models.

Big headline here: DKING (now rebranded to SIRE) secured a $300 million AUM deployment from a leading sports-betting hedge fund. Announced publicly at the Proof of Talk event and covered by crypto media, with ex-FIFA data scientists behind the meta-model. SN44 averages roughly 70% prediction accuracy.

Caveat: dking.bot has an expired SSL certificate, which isn’t a great look for a product managing $300M in strategies. The partnership announcement is verifiable, but ongoing performance isn’t transparent.

Zeus (SN18): weather forecasting

Zeus is a weather forecasting subnet using ERA5 reanalysis data from the Copernicus Climate Data Store. Miners compete to produce more accurate forecasts than traditional baselines. The product is live at zeussubnet.com with active development (v1.5.6, 97 commits).

0xJeff mentions the commodity trading application: hedge funds would pay for weather intelligence that helps forecast commodity prices. Zeus’s own website doesn’t make this claim directly. Weather forecasting is real and live. Commodity trading revenue? Plausible, but unsubstantiated.

Metanova Labs (SN68): drug discovery

Metanova runs a decentralised drug screening platform. Miners submit molecular candidates from the SAVI 2020 library (1.75 billion synthesisable compounds). Validators evaluate binding affinity using PSICHIC, a GNN-based oracle. A recent combinatorial update enables generation of billions of additional synthesisable molecules.

One of the most intellectually interesting subnets. The model turns drug discovery from a centralised, decade-long, billion-dollar process into a decentralised competition where participants screen compounds for therapeutic potential. Wet lab testing is described as “imminent” by Bittensor.ai.

GitHub activity is strong (451 commits, updated March 13 2026). The DeSci use case is genuine. The risk is the gap between virtual screening and actual drug candidates. Computer models identify promising molecules. Chemistry determines if they work.

What’s emerged since March

Three subnets that weren’t covered in the original analysis have grown enough to deserve attention.

Lium (SN51): the revenue-vs-emissions question

Lium (formerly Celium) runs a peer-to-peer GPU rental marketplace at lium.io. Anyone can supply or rent GPU resources without KYC, paying in crypto. The interesting claim is that Lium has reached the point where actual rental revenues outpace TAO emission incentives. If verified, this would make Lium the only subnet with documented economic self-sufficiency from external customers rather than emission subsidy.

The claim is repeated across Subnet Alpha and CoinGecko’s top subnets analysis, but every source traces back to internal Lium and Datura team reports. There’s no public on-chain revenue dashboard. Subnet Alpha itself notes that “much of Celium’s revenue data comes from internal reports, and community analysts have called for a live public dashboard to showcase real demand.”

This is the most consequential unverified claim in the Bittensor ecosystem right now. If Lium publishes verifiable numbers, it changes the subsidy story entirely. Until then, the claim stays in the “interesting if true” column.

Affine (SN120): the co-founder bet

Affine is a reinforcement learning subnet where miners compete to train and improve models on tasks like program synthesis, reasoning, and code generation. The mechanism is winner-takes-all: only the Pareto-optimal model gets rewarded. Sybil-proof, copy-proof by design.

What makes Affine notable isn’t the mechanism itself. It’s the team. Jacob Steeves, one of Bittensor’s co-founders (publicly known as “Const”), stepped down as CEO of the Opentensor Foundation to build Affine directly. When the founder of the protocol decides his most valuable contribution is building inside the protocol rather than running it, that’s a signal worth weighing.

No revenue. No external product. Pure research subnet at this stage. Worth tracking because of who’s behind it, not because of what’s shipped yet.

Ridges AI (SN62): autonomous coding agents

Ridges runs a marketplace of autonomous software engineering agents that compete to fix bugs, write tests, and generate code. The conceptual competitor is Devin, Cursor, GitHub Copilot Workspace. The differentiator is decentralisation: instead of one company building one agent, many teams compete on benchmarks and the best ones get rewarded.

Up 85% in March 2026 by alpha token price. No commercial product. No verified user metrics. Early enough that anything could happen, but the autonomous coding agent space is one of the few areas where Bittensor’s competitive marketplace structure has a plausible advantage over centralised alternatives.

The application layer: CreatorBid and Bitstarter

Two platforms are building infrastructure around subnets rather than running them directly.

CreatorBid is positioning itself as the distribution network for Bittensor. It functions like Virtuals Protocol but for the TAO network: an agent launchpad where teams build products on top of subnet intelligence. Notable agents include $AION (prediction vaults using SN6), $SIRE/DKING (sports via SN44), $SHOGUN (InfoFi), and $TAOLOR (research copilot). The TAO Council initiative provides integration, partnership, and incubation support for subnet teams. CreatorBid is launching v2 with Trade Compass (winner of the Endgame Hackathon) as the first project.

Bitstarter is a fundraising platform for subnets. It enables subnet owners to sell future emissions or OTC alpha tokens at a discount to raise operating capital. Recent raises include Quasar (SN24, 400 TAO for long-context LLM), Djinn (600 TAO for market intelligence), and Handshake58 (220 TAO for AI payment infrastructure). The model offers tiered discounts: Quasar gave 25% discount with 3-month lock or 40% discount with 6-month lock. Half the TAO raised goes to buyback to maintain alpha token price. Bitstarter’s team is anonymous, which is a risk factor.
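One way to compare those tiers is to spread the one-off entry discount over the lock period. This annualised framing is our own illustration, not Bitstarter's methodology; the tier terms are Quasar's as quoted above:

```python
def effective_annualised_discount(discount: float, lock_months: int) -> float:
    """Spread a one-off entry discount over the lock period so that
    tiers with different lock durations can be compared side by side."""
    return discount * (12 / lock_months)

# Quasar's published tiers
tier_a = effective_annualised_discount(0.25, 3)  # 25% off, 3-month lock
tier_b = effective_annualised_discount(0.40, 6)  # 40% off, 6-month lock
print(tier_a, tier_b)
```

On this framing the shorter lock offers the higher annualised edge even though its absolute discount is smaller; the longer lock trades annualised edge for a bigger headline discount and more exposure to alpha-token price risk during the lock.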

How to evaluate a subnet

Not all subnets are equal. Six signals separate real products from emission farms:

Subnet evaluation framework

Signal | Strong | Weak
Revenue | External customers paying for the product | Revenue only from TAO emissions
Product | Live product users can access today | Testnet, coming soon, or whitepaper only
Verification | Usage data on independent platforms (OpenRouter, Polymarket) | Self-reported metrics with no external validation
Team | Public identity, track record, active GitHub | Anonymous, minimal commits, forked code
Emission dependency | Revenue exceeds or approaches emission value | 100% dependent on emissions for survival
Website | Live, maintained, accurate | Dead domains, expired certs, placeholder pages

Run those signals honestly and most of the 129 subnets fall into the “weak” column on multiple counts. The handful that score “strong” across the board are where the genuine opportunity sits.
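If you want to apply the framework systematically, a checklist is enough. A sketch with illustrative field names and an arbitrary 5-of-6 threshold; this is not an established scoring methodology, and the example assessment is for illustration only:

```python
# The six signals from the evaluation framework above
SIGNALS = ("revenue", "product", "verification", "team",
           "emission_dependency", "website")

def score_subnet(assessment: dict[str, bool]) -> tuple[int, str]:
    """Count 'strong' signals (True); flag anything below 5/6."""
    strong = sum(assessment.get(s, False) for s in SIGNALS)
    verdict = ("worth a closer look" if strong >= 5
               else "treat as an emission farm until proven otherwise")
    return strong, verdict

# Illustrative assessment only, not a real rating: strong on most signals,
# weak on emission dependency per the subsidy analysis earlier in the article
chutes = {"revenue": True, "product": True, "verification": True,
          "team": True, "emission_dependency": False, "website": True}
print(score_subnet(chutes))
```

The threshold is the editorial choice: set it at 6/6 and almost nothing passes, set it at 4/6 and the "weak on multiple counts" majority starts slipping through.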

The risk framework

0xJeff’s DeAI Matrix is the most useful risk classification:

Subnet risk tiers (adapted from 0xJeff)

Tier | Category | Examples | Characteristics
Cash Cows | Decentralised compute | Chutes, Celium | Proven revenue, established market, slower growth
Stars | Decentralised inference | Nineteen, Targon | Growing revenue, growing market, high potential
Question Marks | Everything else | Prediction, DeSci, agents, security | Early stage, unproven, high risk/high reward

The cash cows (decentralised compute and inference) generate 8-9 figure annualised revenue according to 0xJeff. These are self-reported aggregate figures and should be treated as directional rather than precise. But the pattern is clear: compute and inference have product-market fit. Everything else is still proving itself.

For an independent view, Pine Analytics estimated $3-15M in identifiable external revenue across the entire network (bear case analysis, 24 March 2026). The lower bound reflects verified data for Chutes ($1.3-2.4M) and a handful of smaller subnets. The upper end incorporates Targon’s self-reported $10.4M projection, which Pine Analytics flags as unaudited and unverifiable on-chain because blockchain records token movements but not API calls. External revenue is structurally difficult to verify for inference subnets.

The institutional shift

Two institutional developments since March deserve mention because they change how outside capital can access TAO without addressing the underlying revenue questions.

Grayscale GTAO. Grayscale filed Amendment No. 1 to its Form S-1 for the Grayscale Bittensor Trust on 2 April 2026, seeking to list on NYSE Arca under the ticker GTAO. The filing describes creation/redemption in 10,000-share baskets, in-kind TAO or cash, via authorised participants. The trust currently trades on OTCQX. If the NYSE Arca listing is approved, GTAO becomes the first US-listed regulated wrapper for direct TAO exposure. That’s a meaningful access point for institutional capital that won’t custody crypto natively.

TAO Institute SRI. Industry analyst 0xSammy announced the Subnet Risk Index (SRI) in early April, positioned as institutional-grade risk evaluation for Bittensor subnets. The methodology hasn’t been publicly disclosed and the institute’s affiliations are unclear. Worth flagging because if it ships with credible methodology, it could displace Pine Analytics’ bear case as the reference benchmark for subnet risk.

Neither of these changes the verified revenue picture. Both expand the audience that will be looking at the verified revenue picture. The bear case from Pine Analytics ($3-15M demand-side revenue against multi-hundred-million emissions) becomes more uncomfortable when an ETF wrapper and an institutional risk index sit on top of it.

The honest assessment

Bittensor’s subnet model is the most interesting coordination mechanism in DeAI. It creates real incentives for teams to build AI products, compete on quality, and find customers. The Darwinian pressure is genuine: subnets that don’t produce value lose emissions and eventually get deregistered.

Chutes is the standout. A serverless inference platform processing billions of tokens daily through OpenRouter, with paid subscription tiers and a functioning revenue model. It’s the strongest evidence that the subnet model can produce real businesses, not just emission-farming operations.

But the concentration is concerning. One team (Rayon Labs) captures nearly a quarter of emissions. Several subnets with prominent claims have dead websites. Self-reported metrics are pervasive and independent verification is difficult. The 72.5% Root staking ratio suggests most TAO holders aren’t actively participating in the subnet economy that’s supposed to be the network’s core value proposition.

The subnets worth watching are the ones where revenue comes from external customers, not just from TAO emissions recycling through the system. Chutes has this with OpenRouter. Sportsensor has it with Polymarket (if the product survives). Score has it with the DKING hedge fund partnership (if the performance materialises). Metanova could have it if wet lab results validate the computational screening.

For a Bittensor holder, the subnet layer is where fundamental value either materialises or doesn’t. TAO is the bet on the ecosystem. Subnets are where you find out if the bet is paying off.
