This course offers a deep dive into Large Language Models (LLMs), blending essential theory with hands-on labs to build both conceptual understanding and practical skills, preparing you for roles in LLM development and deployment.
The curriculum begins with a brief overview of key historical NLP techniques. It then moves to the transformer architecture at the core of modern LLMs, focusing on the attention mechanism and tokenization. Pre-training objectives are covered next, including masked/denoising language modeling and causal language modeling, which form the basis for models such as BERT, GPT, and T5. The course then examines post-training techniques used to refine pre-trained models: instruction tuning (SFT), reinforcement learning from human feedback (e.g., PPO/DPO), and reinforcement learning from verifiable rewards (e.g., GRPO). Finally, the course addresses LLM applications and future directions, including RAG, agents, multimodality, and alternative model architectures.