Two tracks covering foundations through production LLM training. Start with Track 0 (free) or unlock everything with Premium.
Build the mental models that separate research engineers from ML practitioners.
Master LLM-specific architectural decisions — attention, KV cache, tokenization, and scaling.
Master the parallelism strategies for training models that don't fit on a single GPU.
Predict model performance from compute budget and plan training runs that maximize capability per dollar.
Master the post-training pipeline — SFT, RLHF, DPO, and Constitutional AI — that makes capable models useful and safe.
Master LLM serving — vLLM internals, batching strategies, speculative decoding, and production deployment patterns.
Understand what happens inside neural networks — probing, attention analysis, causal methods, and circuit discovery.
Track 0 is completely free. Sign in to save progress and unlock Track 1.