MC04: Intro to LLM – How It Works + Playground
Duration: 1.5–2 hours | Format: Live Demo + Guided Exploration
This module provides a beginner-friendly yet technically sound introduction to Large Language Models (LLMs). Participants will understand how LLMs work under the hood—from tokenization to inference—and gain hands-on experience using real LLMs through interactive tools.
✅ What You’ll Learn:
The architecture and training fundamentals of LLMs (e.g., Transformer, attention mechanism)
Key concepts: tokens, embeddings, context window, temperature, top-k/top-p sampling (see the sampling sketch after this list)
How LLMs "reason" and generate responses
Prompting techniques and how prompt structure affects output
Differences between API-based models (e.g., GPT-4, Claude) and open-source models (e.g., LLaMA)
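To make the sampling parameters above concrete, here is a minimal Python sketch of how temperature, top-k, and top-p reshape a toy next-token distribution before a token is drawn. The vocabulary and logits below are invented purely for illustration and do not come from any real model.

```python
# Toy sketch: how temperature and top-k/top-p reshape a next-token
# distribution before sampling. Vocabulary and logits are hypothetical.
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None):
    """Return the index of a token sampled from raw logits."""
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-6)  # temperature scaling
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                               # softmax

    order = np.argsort(probs)[::-1]                                    # most likely tokens first
    if top_k is not None:
        order = order[:top_k]                                          # keep only the k best tokens
    if top_p is not None:
        cumulative = np.cumsum(probs[order])
        order = order[: np.searchsorted(cumulative, top_p) + 1]        # smallest set covering top_p mass

    kept = probs[order] / probs[order].sum()                           # renormalize over kept tokens
    return int(np.random.choice(order, p=kept))

# Hypothetical next-token candidates for the prompt "The sky is"
vocab = ["blue", "clear", "falling", "green", "the"]
logits = [4.0, 2.5, 0.5, 0.2, -1.0]
print(vocab[sample_next_token(logits, temperature=0.7, top_k=3, top_p=0.9)])
```

Lowering the temperature sharpens the distribution toward the most likely token, while top-k and top-p trim the long tail of unlikely tokens; these are exactly the sliders you will experiment with in the playground session.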
🛠️ Hands-On: LLM Playground Experience
You'll explore:
OpenAI Playground / Hugging Face Spaces
Visualizing token flows and response generation
Experimenting with different prompts and parameters (a code preview follows this list)
Comparing model behaviors with live feedback
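As a preview of moving from Playground sliders to code, here is a minimal sketch that assumes the official `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name is only an example, and the parameters mirror the controls you will adjust interactively.

```python
# Minimal sketch of an API call with Playground-style parameters.
# Assumes the official `openai` Python package and OPENAI_API_KEY are set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; swap in any model available to you
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain what a context window is in one sentence."},
    ],
    temperature=0.7,   # higher values give more varied wording
    top_p=1.0,         # nucleus sampling cutoff
    max_tokens=100,    # cap on generated tokens
)

print(response.choices[0].message.content)
```

Changing the system message, temperature, or top_p and re-running the same prompt is the quickest way to see how prompt structure and sampling settings shape the output.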
🎯 Outcomes:
By the end of this module, residents will:
Understand how to interact with LLMs confidently
Grasp how model architecture impacts performance and limitations
Be ready to build on top of LLMs using prompts or APIs in future modules
Part of the AI Residency program. Apply for Cohort 7: https://academy.decodingdatascience.com/airesidencyfasttrack