Model
A trained neural network that takes inputs (text, images, audio) and produces outputs (more text, classifications, generated content). In DeAI the model is the thing that actually does the work.
Also known as: neural network, AI model, foundation model
A model is the end product of a training run. Take a neural network architecture (usually a transformer for modern language models), show it hundreds of billions of tokens of text, and adjust its internal weights so it gets progressively better at predicting the next word. What you’re left with is a frozen set of numbers — the model — that can be loaded onto hardware and queried.
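The "frozen set of numbers that can be queried" idea can be sketched with a deliberately tiny stand-in: a bigram model whose "training" is just counting which word follows which. Nothing here resembles a real transformer or a real training run; the function names and corpus are illustrative. It only shows the shape of the process: training adjusts the numbers, then the numbers are frozen and queried.

```python
# Toy illustration of "train, freeze, query" — NOT a neural network.
# "Training" counts word pairs; the resulting dict of counts plays the
# role of the model's frozen weights, which inference then queries.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """'Training': count which word follows which in the corpus."""
    weights = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        weights[prev][nxt] += 1
    return dict(weights)  # frozen: nothing changes after this point

def predict_next(weights: dict, word: str) -> str:
    """'Inference': query the frozen weights for the likeliest next word."""
    return weights[word].most_common(1)[0][0]

weights = train("the model takes inputs the model takes outputs the model produces")
print(predict_next(weights, "model"))  # "takes" follows "model" most often
```

A real model differs in scale, not in kind: billions of learned parameters instead of a handful of counts, but the same split between an expensive training phase and a cheap, repeatable query phase.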
Models come in many flavours. LLMs (large language models like GPT-5, Claude, Llama) are the ones most readers think about, but the same word covers image generation models (diffusion), speech recognition, embedding models that turn text into vectors for search, and classifiers that label inputs. When an OYM article says “Venice runs hundreds of open-weight models,” it means all of the above.
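One of those flavours, the embedding model, is easy to sketch in miniature. The snippet below is a bag-of-words stand-in, not a trained neural embedding model, and the vocabulary and documents are made up; it only demonstrates the contract such models fulfil: text in, fixed-length vector out, similar texts scoring closer under cosine similarity.

```python
# Toy sketch of what an embedding model is for: map text to a vector so
# that related texts land near each other. Real embedding models are
# trained neural networks; this one just counts vocabulary words.
import math

VOCAB = ["model", "weights", "hardware", "image", "audio", "search"]

def embed(text: str) -> list[float]:
    """Map text to a fixed-length vector: one count per vocabulary word."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

docs = ["model weights on hardware", "image and audio search"]
query = embed("run the model weights")
scores = [cosine(query, embed(d)) for d in docs]
# The first document shares vocabulary with the query, so it scores higher.
```

This is the mechanism behind vector search: embed the documents once, embed each query, rank by similarity.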
Size matters, but not as much as you’d think. A 70-billion-parameter model is bigger than a 7-billion-parameter one, but a well-trained 7B on a good dataset can outperform a badly trained 70B on many tasks. What the model was trained on (data quality), for how long (compute), and how it was tuned afterwards (fine-tuning, RLHF) matter at least as much as raw size. This is why “just use a bigger model” isn’t always the right answer.
The DeAI-relevant question is who controls the model. Open-weight models (Llama, Qwen, DeepSeek, Mistral) have their trained weights published, which means anyone can run them on their own hardware. Closed-weight models (GPT, Claude, Gemini) are available only via the owner’s API. The difference between those two positions is most of what “sovereign AI” means in practice. If you can’t run the model yourself, you’re renting access to someone else’s infrastructure, and they set the terms.