Building RAG Agents with LLMs (BRAL) – Outline

Detailed Course Outline

  • Introduction to the workshop and environment setup.
  • Exploration of LLM inference interfaces and microservices.
  • Designing LLM pipelines using LangChain, Gradio, and LangServe (see the pipeline sketch after this list).
  • Managing dialog states and integrating knowledge extraction.
  • Strategies for working with long-form documents.
  • Utilizing embeddings for semantic similarity and guardrailing.
  • Implementing vector stores for efficient document retrieval (see the retrieval sketch after this list).
  • Evaluation, assessment, and certification.
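
To make the pipeline-design bullet concrete, here is a minimal sketch of a LangChain Expression Language (LCEL) chain. It is an illustration only, not course material: it assumes the langchain-core and langchain-openai packages plus an OpenAI API key, and the model name and prompt text are placeholders.

```python
# Minimal LCEL pipeline sketch: prompt -> chat model -> string output.
# Assumes `pip install langchain-core langchain-openai` and an OPENAI_API_KEY;
# the model name and prompt are placeholders, not taken from the course.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer the question in one sentence:\n\n{question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")     # any chat model endpoint could be swapped in
chain = prompt | llm | StrOutputParser()  # LCEL composition via the | operator

print(chain.invoke({"question": "What does retrieval-augmented generation add to an LLM?"}))
```

The resulting `chain` is an ordinary LangChain runnable, which is also the kind of object LangServe can expose as a REST endpoint and a Gradio UI can call.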
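Likewise, the embedding and vector-store bullets come down to indexing embeddings and querying them by semantic similarity. The sketch below is an assumption-laden illustration: it presumes langchain-community, langchain-openai, and faiss-cpu are installed, and the sample texts, query, and choice of OpenAIEmbeddings are illustrative only.

```python
# Embedding + vector-store retrieval sketch.
# Assumes `pip install langchain-community langchain-openai faiss-cpu`;
# the texts, query, and embedding model are illustrative, not from the course.
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

texts = [
    "LLM pipelines compose prompts, models, and output parsers.",
    "LangServe exposes LangChain runnables as REST endpoints.",
    "Vector stores index embeddings for fast similarity search.",
]

# Embed each text and build an in-memory FAISS index.
vectorstore = FAISS.from_texts(texts, embedding=OpenAIEmbeddings())

# Return the two chunks most semantically similar to the query.
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
for doc in retriever.invoke("How can I serve a chain over HTTP?"):
    print(doc.page_content)
```

The same nearest-neighbor lookup over embeddings underlies embedding-based guardrailing, where an input is compared against embeddings of allowed or disallowed topics rather than against document chunks.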