Julia Turc

The myth of 1-bit LLMs | Extreme Quantization (24:37)
Quantization: How LLMs survive in low precision (20:34)
Knowledge Distillation: How LLMs train each other (16:04)
Mixture of Experts: How LLMs get bigger without getting slower (26:42)
Llama 4 Explained: Architecture, Long Context, and Native Multimodality (24:02)
DeepSeek's GRPO (Group Relative Policy Optimization) | Reinforcement Learning for LLMs (23:16)
Proximal Policy Optimization (PPO) for LLMs Explained Intuitively (22:03)
Tülu 3 from AI2: Full open-source fine-tuning recipe for LLMs (13:49)
8 Timeless tips for training LLMs | Become a better ML engineer (15:53)
How does OpenAI Operator work under the hood? | Tech deep dive (9:55)
How does DeepSeek actually work? | Full technical review (15:29)
Discover How LLMs Work by Dissecting Llama (9:53)