Open Weights
An AI model whose trained parameters are publicly published and downloadable, so anyone can run, fine-tune, or modify it without the creator’s permission. Llama, Qwen, DeepSeek, Mistral, and Hermes are all open-weights model families.
Also known as: open-weight model; often (imprecisely) called an open-source AI model
Open-weights models are the foundation of the sovereign AI thesis. When a model’s weights are public, you can download them, run them on your own hardware, fine-tune them for your own needs, audit them for biases or backdoors, and never depend on the original creator for ongoing access. Closed-weights models (GPT, Claude, Gemini) can only be accessed through the owner’s API, leaving you dependent on the owner indefinitely for availability, pricing, and policy. That distinction captures most of what “sovereign AI” means in practice.
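In code, “running it yourself” amounts to loading the published weights locally. A minimal sketch using the Hugging Face `transformers` library (the model ID is illustrative; a real run downloads several GB of weights, and larger models need a GPU):

```python
# Sketch: local inference with an open-weights model via Hugging Face
# transformers. The model ID below is illustrative -- swap in any
# open-weights checkpoint you have access to.

def run_local(prompt: str, model_id: str = "meta-llama/Llama-3.1-8B-Instruct") -> str:
    """Generate text entirely on local hardware; no external API involved."""
    # Imports kept inside the function so the sketch reads without
    # transformers installed; a real script would import at module level.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)            # cached locally
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Because the weights sit on your own disk, nothing in this path touches an external API: the original creator cannot revoke access, change pricing, or filter outputs after the fact.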
Meta’s Llama family kicked off the modern open-weights era with Llama 2 in July 2023. Since then the field has exploded: Mistral’s open releases, Alibaba’s Qwen series, DeepSeek’s V3 and R1, NVIDIA’s Nemotron, and many community fine-tunes including Nous Research’s Hermes models. The performance gap between the best open-weights models and the best closed-weights models has narrowed dramatically. As of 2026, top open-weights models are competitive with frontier closed models on most benchmarks while being free to run and modify.
The terminology distinction matters. “Open weights” is not the same as “open source.” Open source in the traditional sense means the source code, training data, and training process are all public so anyone can fully reproduce the model. Most open-weights releases publish only the trained weights, not the training data or exact training code. You can run the model but you can’t reproduce or audit how it was made. Truly open-source AI (open weights + open data + open training code) is rare. OLMo, Pythia, and a few others come close.
In DeAI, open-weights models are the practical foundation of any decentralised inference service. Venice, Morpheus, Bittensor subnets, io.net, and most other DeAI inference projects run open-weights models because a decentralised operator must be able to load and serve the model itself, which closed-weights providers don’t permit. The OYM open-weight model comparison page tracks the major open-weights families with their parameter counts, hardware requirements, and benchmark performance, refreshed automatically from HuggingFace.
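As a sketch of how an automated refresh from HuggingFace might look (the function name and chosen fields are assumptions for illustration, not OYM’s actual pipeline), the `huggingface_hub` client exposes per-model metadata:

```python
# Sketch: pulling basic metadata for open-weights models from the
# Hugging Face Hub, roughly what an automated comparison-page refresh
# could do. Requires `pip install huggingface_hub`; IDs are illustrative.

def fetch_model_rows(model_ids):
    # Import inside the function so the sketch reads without the
    # dependency installed; a real job would import at module level.
    from huggingface_hub import HfApi

    api = HfApi()
    rows = []
    for model_id in model_ids:
        info = api.model_info(model_id)  # public metadata, no auth needed
        rows.append({
            "model": model_id,
            "downloads": info.downloads,
            "likes": info.likes,
            "tags": info.tags,
        })
    return rows
```

A real refresh job would also extract parameter counts and benchmark scores, which typically means parsing model cards or repo files rather than reading a single API field.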