GenAI: LLM Apps (RAG, Agents & Evaluation)
Build reliable LLM applications: RAG pipelines, evaluation, safety, and agent workflows for real delivery constraints.
Tags: GenAI · LLM · RAG · Agents · Evaluation · Safety
Duration: 3 days
Format: Remote
Level: Intermediate
Key outcomes
- Design a production-style RAG pipeline with effective chunking and retrieval
- Reduce hallucinations with grounding and citation patterns
- Evaluate answer quality and detect failure modes
- Implement safety basics: prompt injection awareness and guardrails
Syllabus
Day 1 — RAG foundations
- Chunking strategies and embeddings
- Retrieval: top-k search, reranking (conceptual overview), metadata filtering
- Prompting patterns for grounded answers + citations
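The chunking and top-k retrieval topics above can be sketched in a few lines. This is a minimal illustration, not course material: the character-window `chunk` function and the bag-of-words `embed` stand-in are assumptions; real pipelines use a trained embedding model and a vector store.

```python
# Toy chunking + top-k retrieval sketch (illustrative only).
from collections import Counter
import math

def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy 'embedding': a word-count vector. Real systems use a model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and keep the best k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```

The same shape carries over to production: swap `embed` for a real model, add reranking over the top-k candidates, and feed the surviving chunks into a grounded prompt with citations.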
Day 2 — Reliability & evaluation
- Test sets and automated evaluation patterns
- Failure modes: hallucination, missing context, bad retrieval
- Prompt injection: risks and mitigation patterns
Day 3 — Agents & orchestration
- Tool calling and multi-step workflows
- Latency/cost trade-offs and caching
- Monitoring, feedback loops, and iteration strategy
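Tool calling with caching, as covered in this module, can be sketched as a tiny dispatch loop. Everything here is hypothetical scaffolding: the `TOOLS` registry, the `(tool_name, argument)` step format, and the `lru_cache` reuse are assumptions standing in for a real agent SDK.

```python
# Toy multi-step tool-calling loop with response caching (illustrative).
from functools import lru_cache

# Hypothetical tool registry; real agents would wrap APIs, not lambdas.
TOOLS = {
    "search": lambda q: f"results for {q}",
    "calc": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

@lru_cache(maxsize=256)  # cache repeat calls to trim latency and cost
def call_tool(name: str, arg: str) -> str:
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    return TOOLS[name](arg)

def run_workflow(steps: list[tuple[str, str]]) -> list[str]:
    """Execute a multi-step plan of tool calls, reusing cached results."""
    return [call_tool(name, arg) for name, arg in steps]
```

The cache illustrates the latency/cost trade-off directly: identical tool calls within a workflow are answered from memory instead of re-invoking the tool.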