
Building LLM Apps for Production, Part 1

While setting up a naive RAG stack is straightforward, making an application production-ready requires software engineers to address a long tail of quality, evaluation, and scalability challenges.

This multi-part workshop series will guide you through using LlamaIndex and Ray to implement reliable and scalable RAG.
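As a taste of what a naive RAG stack looks like, here is a minimal sketch using LlamaIndex's high-level API: load documents, build an in-memory vector index, and query it. This is an illustrative assumption, not the workshop's exact code; import paths vary across LlamaIndex versions, and the `./data` directory and query string are placeholders. It also assumes default embedding/LLM backends (e.g., an OPENAI_API_KEY in the environment).

```python
# Minimal "naive" RAG sketch with LlamaIndex (assumed setup, not workshop code).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()   # parse local files into documents
index = VectorStoreIndex.from_documents(documents)        # chunk, embed, and store them

query_engine = index.as_query_engine(similarity_top_k=3)  # retrieve top-3 chunks per query
response = query_engine.query("What does this workshop cover?")
print(response)
```

Each step above (chunking, embedding, retrieval, synthesis) is a knob that the series treats as something to evaluate and tune, rather than accept at its default.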

You'll learn how to build a RAG application and approach its scalability challenges, design experiments to optimize key application components, and use scalable workflows to compare those components quantitatively.
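To make the "scalable workflows" idea concrete, the sketch below shows one way Ray tasks could fan experiments over a tunable parameter (chunk size here) and compare results in parallel. The scoring function is a stand-in for a real evaluation suite, and the parameter values are hypothetical; this is an assumed illustration of the pattern, not the workshop's evaluation code.

```python
import ray

ray.init()

def run_eval_suite(chunk_size: int) -> float:
    # Placeholder scoring logic for illustration only; a real experiment would
    # build an index at this chunk size and run retrieval/answer-quality metrics.
    return 1.0 / chunk_size

@ray.remote
def build_and_score(chunk_size: int) -> dict:
    # One experiment = one Ray task, so configurations run in parallel.
    return {"chunk_size": chunk_size, "score": run_eval_suite(chunk_size)}

# Fan the candidate configurations out across the Ray cluster, then compare.
futures = [build_and_score.remote(cs) for cs in (256, 512, 1024)]
results = ray.get(futures)
print(max(results, key=lambda r: r["score"]))
```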

Instructors:

Simon (co-founder/CTO, LlamaIndex)

Adam (Technical Trainer, Anyscale)
