This session is designed to give participants a dual perspective:
How to leverage Hugging Face for exploring, fine-tuning, and deploying open-source large language models (LLMs), and
How to build powerful Retrieval-Augmented Generation (RAG) applications using LlamaIndex, a leading framework for connecting LLMs to private or custom data sources.
Part 1: Hugging Face
Explore how to use the Hugging Face Transformers library, the Model Hub, and inference tools to experiment with open-source models such as LLaMA, Mistral, and Falcon. Learn about model selection, tokenizers, and inference strategies.
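To preview one of the topics above: in practice you would load a Hub model with something like `transformers.pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")`, and then choose an inference (decoding) strategy. The sketch below illustrates two common strategies, greedy decoding and temperature sampling, over a toy next-token distribution; the vocabulary and logit values are invented for illustration and are not from any real model.

```python
import math
import random

# Toy next-token logits. In a real workflow these scores come from a
# forward pass of a Hub model (e.g. via transformers.pipeline); the
# values here are made up purely to illustrate decoding.
logits = {"Paris": 4.2, "London": 2.1, "Berlin": 1.3, "banana": -1.0}

def softmax(scores, temperature=1.0):
    """Turn raw logits into a probability distribution.

    Lower temperature sharpens the distribution (closer to greedy);
    higher temperature flattens it (more random choices).
    """
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def greedy(scores):
    """Greedy decoding: always pick the single highest-scoring token."""
    return max(scores, key=scores.get)

def sample(scores, temperature=0.8, rng=random):
    """Temperature sampling: draw a token according to softmax probabilities."""
    probs = softmax(scores, temperature)
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(greedy(logits))  # deterministic: the top-scoring token
print(sample(logits))  # stochastic: usually the top token, sometimes not
```

Greedy decoding is reproducible but can be repetitive; sampling trades determinism for diversity, which is why generation APIs expose a temperature parameter.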
Part 2: LlamaIndex and RAG
Understand how to integrate LLMs with external data through LlamaIndex. Learn to ingest documents, build indexes, and create context-aware Q&A systems using the RAG pattern.
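The ingest-index-retrieve-answer loop described above is what LlamaIndex automates (typically with embeddings and a `VectorStoreIndex`). As a dependency-free sketch of the same RAG pattern, the example below uses a deliberately crude word-overlap score in place of vector similarity; the documents and function names are illustrative stand-ins, not LlamaIndex APIs.

```python
# Minimal sketch of the RAG pattern: ingest documents, retrieve the
# most relevant ones for a query, and build a context-augmented prompt
# for the LLM. LlamaIndex replaces the overlap scoring below with
# embedding-based vector search.

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available by email from 9am to 5pm on weekdays.",
    "The premium plan includes priority support and offline access.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents with the highest relevance score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the prompt the LLM would receive: context, then question."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("refund policy for returns", documents))
```

The key design point of RAG is visible even in this toy: the model never needs to be retrained on private data; relevant passages are retrieved at query time and injected into the prompt, so answers stay grounded in your documents.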
By the end of this session, you’ll be equipped to build intelligent, data-aware applications using the best of open-source AI and modern RAG techniques.
Part of the AI Residency Cohort 6, starting soon:
https://academy.decodingdatascience.com/airesidencyfasttrack