Webinar

Fast and efficient hyperparameter tuning with Ray Tune

Wednesday, October 20, 4:00 PM UTC

Hyperparameter tuning, or optimization, is the process of finding the best-performing machine learning (ML) model by exploring and optimizing the model's hyperparameters (e.g., learning rate, tree depth). It is a compute-intensive problem that lends itself well to distributed execution.

Ray Tune is a Python library, built on Ray, that allows you to easily run distributed hyperparameter tuning at scale. Ray Tune is framework-agnostic and supports all the popular training frameworks, including PyTorch, TensorFlow, XGBoost, LightGBM, and Keras.

Join this webinar with Will Drevo, product manager for the Ray machine learning libraries, for an overview of Ray Tune and a demo of using it to tune a deep learning model.

We will showcase many Ray Tune highlights, including how to: 

  • Set up distributed hyperparameter search in under 10 lines of code (see the sketch after this list)

  • Scale from a single machine to a cluster with minimal code changes

  • Try leading search methods (ASHA, BOHB, PBT, etc.) through built-in integrations

  • Visualize results with TensorBoard or MLflow
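
As a taste of the first point, here is a minimal sketch of a search using Ray Tune's function-based API. The toy train_fn and its synthetic loss are illustrative placeholders, not the webinar's demo:

    from ray import tune
    from ray.tune.schedulers import ASHAScheduler

    # Placeholder objective: any function that takes a config dict
    # and reports a metric back to Tune can be tuned this way.
    def train_fn(config):
        for step in range(100):
            loss = (config["lr"] - 0.01) ** 2 + 0.1 / (step + 1)
            tune.report(mean_loss=loss)  # stream intermediate results to Tune

    analysis = tune.run(
        train_fn,
        config={"lr": tune.loguniform(1e-4, 1e-1)},  # search space
        num_samples=20,             # number of sampled configurations (trials)
        scheduler=ASHAScheduler(),  # early-stop unpromising trials
        metric="mean_loss",
        mode="min",
    )
    print("Best config found:", analysis.best_config)

Moving the same search from a laptop to a cluster typically just means calling ray.init(address="auto") from a cluster node before tune.run; Tune writes trial results to ~/ray_results by default, where TensorBoard can pick them up.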

We will also share stories from users who are finding the most performant models while saving compute costs and maximizing CPU/GPU utilization with Ray Tune.

View slides >>>

Q&A summary >>>

Speakers

Will Drevo

Product Manager, Anyscale

Will is a Product Manager for ML at Anyscale. Previously, he was the first ML Engineer at Coinbase and ran a couple of ML-related startups, one in the data labeling space and the other in the pharmaceutical space. He holds a BS in CS and Music Composition from MIT, where he also completed a master's thesis on machine learning systems. In his spare time, he produces electronic music, travels, and tries to find the best Ethiopian food in the Bay Area.