Ray Training Program

Exclusive in-person or virtual training led by the creators of Ray. Connect with Ray experts to enhance your skills in Ray, GenAI, LLMs, and beyond.

Training Courses

Explore our carefully designed courses, tailored to ensure that every practitioner gains practical, hands-on knowledge.

Ray AI Libraries

Gain a comprehensive overview of Ray, scale common AI workloads, and empower your team to build with Ray.
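To give a flavor of the hands-on exercises, here is a minimal sketch (assuming only that Ray is installed) of turning an ordinary Python function into parallel tasks with Ray Core; the `score` function is a hypothetical stand-in for real per-record work:

```python
import ray

ray.init()  # start a local Ray cluster, or connect to an existing one

@ray.remote
def score(record: int) -> int:
    # Hypothetical stand-in for real work such as feature extraction or model scoring.
    return record * record

# Launch the tasks in parallel and gather the results.
futures = [score.remote(i) for i in range(10)]
print(ray.get(futures))
```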

Build and Optimize ML Platforms with Ray

Learn to leverage Ray to build your ML platform, reduce infra costs, and enable new workloads.

End-to-End LLM Workloads

Build end-to-end LLM workloads with Ray, including fine-tuning, deployment, batch inference, evaluations, and more.
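As an illustrative sketch of the deployment step, the snippet below serves a model class behind Ray Serve's HTTP endpoint; `EchoModel` is a hypothetical placeholder for a fine-tuned LLM:

```python
from ray import serve

@serve.deployment(num_replicas=2)
class EchoModel:
    # Hypothetical placeholder; a real exercise would load fine-tuned LLM weights here.
    async def __call__(self, request) -> str:
        prompt = (await request.json())["prompt"]
        return f"echo: {prompt}"

# Deploy behind Ray Serve's HTTP proxy (served at http://localhost:8000/ by default).
serve.run(EchoModel.bind())
```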

Retrieval Augmented Generation (RAG)

Build and deploy RAG applications while developing evaluation routines and applying Ray best practices.

Large-Scale Stable Diffusion Models

Enable large-scale data processing and distributed stable diffusion model training.
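For a rough idea of how the distributed training portion is structured, here is a minimal Ray Train sketch; the tiny linear module and random data are stand-ins for an actual diffusion model and dataset, purely for illustration:

```python
import torch
import torch.nn as nn
import ray.train.torch
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config):
    # Stand-in model and random data; the course substitutes a real diffusion U-Net and dataset.
    model = ray.train.torch.prepare_model(nn.Linear(8, 8))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(config["epochs"]):
        batch = torch.randn(32, 8)
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"epochs": 2},
    scaling_config=ScalingConfig(num_workers=2),  # scale out by raising num_workers
)
trainer.fit()
```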

LLM Inference with Ray and vLLM

Build, tune, and scale high-performance inference workloads with Ray and vLLM.
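As a taste of the material, here is a minimal offline-inference sketch with vLLM; the model name is just an illustrative choice, and the course scales this pattern out across a cluster with Ray:

```python
from vllm import LLM, SamplingParams

# Illustrative small model; swap in any model vLLM supports.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["Explain Ray in one sentence."], params)
for output in outputs:
    print(output.outputs[0].text)
```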


Ray is the standard for building and scaling high-performance AI applications.

40,000+ GitHub repo downloads
32.5k stars by the community
1,000+ contributors

“At OpenAI, Ray allows us to iterate at scale much faster than we could before. We use Ray to train our largest models, including ChatGPT.”
Greg Brockman, Co-Founder and President, OpenAI
“Ant Group has deployed Ray Serve on 240,000 cores for model serving. The peak throughput during Double 11, the largest online shopping day in the world, was 1.37 million transactions per second. Ray allowed us to scale elastically to handle this load and to deploy ensembles of models in a fault-tolerant manner.”
Tengwei Cai, Staff Engineer, Ant Group
“Ray has brought significant value to our business, and has enabled us to rapidly pretrain, fine-tune and evaluate our LLMs.”
Min Cai, Distinguished Engineer, Uber
“Ray enables us to run deep learning workloads 12x faster, to reduce costs by 8x, and to train our models on 100x more data.”
Haixun Wang, VP of Engineering, Instacart
“Ray has profoundly simplified the way we write scalable distributed programs for Cohere’s LLM pipelines.”
Siddhartha Kamalakara, ML Engineer, Cohere
“We use Ray to run a number of AI workloads at Samsara. Since implementing the platform, we’ve been able to scale the training of our deep learning models to hundreds of millions of inputs, and accelerate deployment while cutting inference costs by 50%.”
Evan Welbourne, Head of AI and Data, Samsara
“We were able to improve the scalability by an order of magnitude, reduce the latency by over 90%, and improve the cost efficiency by over 90%. It was financially infeasible for us to approach that problem with any other distributed compute framework.”
Patrick Ames, Principal Engineer, Amazon