Join us for a fast, practical build-lab session where we break down MCP, Skills.md, and OpenClaw: what they mean, why they matter, and how to apply them when shipping reliable AI agent workflows. We’ll end with an open Challenge Q&A to unblock your project, tighten your demo, and improve submission quality.
What we’ll cover
- MCP (Model Context Protocol): how modern agents connect to tools, data, and context cleanly (minimal server sketch after this list)
- Skills.md: how to define, document, and reuse “agent skills” so your agent is structured, testable, and scalable (example skill file after this list)
- OpenClaw: what it is, why there’s hype, and where it fits vs other agent stacks
- Challenge Q&A: live troubleshooting for prompts, RAG pipelines, evaluation, reliability, deployment, and demo structure
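
To make the MCP item concrete before the session, here is a minimal sketch of an MCP server exposing a single tool, based on the official MCP Python SDK’s FastMCP quickstart. The server name and the `add` tool are illustrative, not part of any spec:

```python
from mcp.server.fastmcp import FastMCP

# Illustrative server name; any string works.
mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers. The type hints and docstring become the tool's schema."""
    return a + b

if __name__ == "__main__":
    # Runs over stdio by default, so any MCP-capable agent can connect to it.
    mcp.run()
```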
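And to preview the Skills.md discussion: there is no single fixed format, but a skill entry is typically a short markdown document with structured metadata plus step-by-step instructions the agent can follow. A hypothetical example, with all names and fields illustrative:

```markdown
---
name: summarize-ticket
description: Summarize a support ticket and extract action items.
tools: [tickets.fetch]
---

# summarize-ticket
1. Fetch the ticket body with `tickets.fetch`.
2. Write a three-bullet summary.
3. List action items, each with an owner and a due date.
4. Verify every action item quotes a line from the ticket.
```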
Outcomes (you’ll leave with)
- A clear mental model of MCP and when to use it
- A practical approach to writing Skills.md for your agent/app
- Clarity on OpenClaw vs alternatives and how to decide
- A working blueprint for Agentic RAG (retrieve → reason → act → verify), sketched in code after this list
- Actionable fixes to make your project more judge-ready
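
As a preview of that blueprint, here is a minimal sketch of the retrieve → reason → act → verify loop. All four callables are hypothetical stand-ins: wire in your own retriever, LLM call, tool executor, and checker.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgenticRAG:
    # All four components are stand-ins; plug in your own implementations.
    retrieve: Callable[[str], list[str]]     # query -> relevant context chunks
    reason: Callable[[str, list[str]], str]  # query + context -> planned action
    act: Callable[[str], str]                # planned action -> tool/LLM result
    verify: Callable[[str, str], bool]       # query + result -> good enough?

    def run(self, query: str, max_steps: int = 3) -> str:
        result = ""
        for _ in range(max_steps):
            context = self.retrieve(query)      # retrieve
            plan = self.reason(query, context)  # reason
            result = self.act(plan)             # act
            if self.verify(query, result):      # verify: stop once it checks out
                return result
        return result  # best effort after max_steps; flag for review in practice
```

The explicit verify step is what separates this loop from plain RAG: instead of returning the first unchecked answer, the agent retries until the result holds up or the step budget runs out.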
Who should attend
- Challenge participants (all tracks)
- Builders working on AI apps using RAG or tool-using agents
- Anyone who wants to improve reliability, structure, and demo quality