AI & machine learning

ML

Machine Learning. The branch of AI where systems learn patterns from data instead of being explicitly programmed with rules. Modern AI (LLMs, image generation, recommendation systems) is almost entirely machine learning.

Also known as: machine learning

Machine learning is the umbrella term for any technique that learns patterns from data rather than being hand-coded. The earliest ML systems (linear regression, decision trees) date back decades and were used for basic prediction tasks. Modern ML, dominated by deep neural networks, learns much more complex patterns from much larger datasets and is what powers everything from speech recognition to image generation to language models.
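To make “learns patterns from data instead of being explicitly programmed” concrete, here is a minimal sketch of one of those early techniques, linear regression, in plain Python. The data points and the hidden rule (y = 2x + 1) are made up for illustration; the point is that the slope and intercept are recovered from examples, not hand-coded.

```python
# Fit a line y = w*x + b to data by ordinary least squares:
# the "rule" is learned from examples rather than written by a programmer.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# Toy data generated by the hidden rule y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)   # recovers w = 2.0, b = 1.0
```

Modern deep learning replaces the closed-form solution with gradient descent and the line with a neural network, but the principle, adjusting parameters to fit observed data, is the same.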

The three main paradigms of ML are supervised learning (the model is shown labeled examples and learns to predict the labels), unsupervised learning (the model finds patterns in unlabeled data), and reinforcement learning (the model takes actions in an environment and gets rewards or penalties, learning a policy that maximises reward). Most modern LLMs use a combination: pre-training is self-supervised on raw text (a form of unsupervised learning where the next token serves as the label), fine-tuning is supervised on curated examples, and RLHF is reinforcement learning from human feedback to align the model’s outputs with what humans want.
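The reinforcement-learning paradigm can be sketched with a toy example: a two-armed bandit where one arm (with made-up payouts chosen for this illustration) pays more on average. The agent estimates each arm’s value from the rewards it receives and ends up preferring the better arm, which is the “policy that maximises reward” in miniature.

```python
import random

random.seed(0)
true_rewards = [0.2, 0.8]   # hidden average payout of each arm (toy values)
values = [0.0, 0.0]         # the agent's running value estimates
counts = [0, 0]

for step in range(500):
    # Epsilon-greedy policy: mostly exploit the current best estimate,
    # occasionally explore a random arm.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: values[a])
    # The environment returns a noisy reward for the chosen action.
    reward = true_rewards[arm] + random.gauss(0, 0.05)
    counts[arm] += 1
    # Incremental sample-average update of the chosen arm's value.
    values[arm] += (reward - values[arm]) / counts[arm]

best_arm = max(range(2), key=lambda a: values[a])   # converges to arm 1
```

Supervised learning would instead hand the agent labeled (input, answer) pairs, and unsupervised learning would give it neither labels nor rewards, only raw data to find structure in.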

Deep learning is a subset of ML that uses neural networks with many layers. The “deep” refers to depth (many layers stacked), which allows the model to learn hierarchical representations: early layers detect simple features, later layers combine them into more complex patterns. The transformer architecture (used in GPT, Claude, Llama, and almost every modern language model) is a specific deep learning design that turned out to be unexpectedly good at language.
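The idea of stacked layers can be shown in a few lines: each layer is a linear map followed by a nonlinearity, and the output of one layer feeds the next. The weights below are hand-picked toy values, not learned; a real network would have millions to billions of parameters set by training.

```python
# A minimal "deep" forward pass: two stacked layers, each a linear map
# followed by a ReLU nonlinearity.
def relu(v):
    return [max(0.0, x) for x in v]

def linear(weights, bias, v):
    # weights holds one row of input weights per output unit.
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def forward(x):
    # Layer 1: detects two simple "features" of the input.
    h = relu(linear([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0], x))
    # Layer 2: combines those features into a single output.
    return linear([[1.0, 2.0]], [0.0], h)[0]

y = forward([2.0, 1.0])   # layer 1 gives [1.0, 1.5]; layer 2 gives 4.0
```

Without the nonlinearity between layers, the stack would collapse into a single linear map; it is the depth-plus-nonlinearity combination that lets later layers build on the features detected earlier.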

The DeAI angle on ML is that running and training ML systems requires significant compute, and that compute has historically been provided by centralised cloud companies. DeAI is trying to decentralise both the compute and, where possible, the model weights. Decentralised compute marketplaces (Akash, Render, io.net) provide the GPU layer. Open-weight models (Llama, Qwen, DeepSeek) provide the model layer. The combination is what makes “sovereign AI” possible without depending on a single hyperscaler or AI lab.

Related terms