


Sat, 07 Mar · Online
In MC08, you’ll learn how to turn real-world data into a reliable Retrieval-Augmented Generation (RAG) system using LlamaIndex. We’ll cover how to connect and ingest data from common enterprise sources (PDFs, docs, web pages, knowledge bases, databases), clean and structure it, and build an indexing + retrieval pipeline that consistently returns the right context at query time. You’ll implement chunking strategies, metadata design, embedding + vector store setup, and retrieval tuning (top-k, filters, hybrid search, reranking) so your assistant responds with grounded, source-backed answers instead of hallucinations.
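To make the indexing + retrieval flow concrete, here is a minimal, self-contained sketch of the core ideas (overlapping chunking, embedding, and top-k retrieval). This is a toy illustration only: the bag-of-words "embedding" and in-memory search are stand-ins for the real LlamaIndex readers, embedding models, and vector stores covered in the module, and all function names here are illustrative.

```python
from collections import Counter
import math

def chunk_text(text, chunk_size=40, overlap=10):
    """Split text into overlapping word-window chunks.
    Overlap helps preserve context that straddles chunk boundaries."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(words) - overlap, 1), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
    return chunks

def embed(text):
    """Toy embedding: a term-frequency vector (real systems use dense vectors)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=2):
    """Return the top_k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]
```

In the masterclass, each piece is swapped for a production component: real document loaders, a learned embedding model, a vector store, and retrieval tuning (filters, hybrid search, reranking) on top of this same chunk → embed → retrieve skeleton.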
Outcomes: what you’ll be able to do after this module
Who this is for
This masterclass is part of the AI Residency.
✅ Join the new AI Residency cohort to build this end-to-end with guided support, project feedback, and a production-ready workflow: data ingestion → indexing → retrieval → evaluation → deployment.
https://academy.decodingdatascience.com/airesidencyfasttrack


