ML library for hyperparameter tuning. Anyscale supports and further optimizes Ray Tune for improved performance, reliability, and scale.
Ray Tune is a Python library for experiment execution and hyperparameter tuning at any scale.
With Ray Tune, you can tune models from your favorite machine learning framework (scikit-learn, XGBoost, StatsForecast, and more) using state-of-the-art algorithms such as Population Based Training (PBT) and HyperBand. Tune also integrates with a wide range of additional hyperparameter optimization tools, including Ax and Optuna.
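For illustration, here is a minimal sketch of driving a search with the Optuna integration. It assumes Ray 2.x with the optuna package installed, and the objective function is a toy stand-in for real model training; the parameter names and ranges are made up for the example.

```python
from ray import tune
from ray.tune.search.optuna import OptunaSearch

# Hypothetical objective: stands in for training a model with the framework
# of your choice (scikit-learn, XGBoost, etc.) and measuring its quality.
def objective(config):
    score = (config["lr"] - 0.01) ** 2 + 0.1 * config["momentum"]
    return {"score": score}  # a function trainable can simply return its final metrics

search_space = {
    "lr": tune.loguniform(1e-4, 1e-1),
    "momentum": tune.uniform(0.1, 0.9),
}

tuner = tune.Tuner(
    objective,
    param_space=search_space,
    tune_config=tune.TuneConfig(
        metric="score",
        mode="min",
        search_alg=OptunaSearch(),  # Bayesian-style search via the Optuna integration
        num_samples=20,             # number of hyperparameter configurations to try
    ),
)
results = tuner.fit()
print(results.get_best_result().config)
```

Swapping in a different search algorithm or scheduler (for example HyperBand or PBT) is a matter of changing the arguments to TuneConfig rather than rewriting the training code.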
Hyperparameter optimization helps you quickly improve your model's performance. Take advantage of cutting-edge optimization algorithms to reduce the cost of fine-tuning and optimize training schedules.
Run your existing code in parallel across a cluster with just a few code changes. Run and scale many trials in parallel as easily as you would on your laptop.
Ray Tune makes it easy to speed up hyperparameter searches by scaling out a Ray cluster. Speed up a single trial by running it across compute instances, or run more trials at a time by adding more nodes.
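As a sketch of what scaling out can look like, the snippet below reuses the hypothetical objective and search_space from the earlier example; the cluster address and per-trial resource numbers are assumptions for illustration.

```python
import ray
from ray import tune

# Connect to an existing Ray cluster; calling ray.init() with no address
# instead starts Ray locally on the current machine.
ray.init(address="auto")

# Each trial requests its own resource bundle. Tune runs as many trials
# concurrently as the cluster's total CPUs/GPUs allow, so adding nodes
# directly increases the number of trials running at the same time.
trainable = tune.with_resources(objective, {"cpu": 4, "gpu": 1})

tuner = tune.Tuner(
    trainable,
    param_space=search_space,
    tune_config=tune.TuneConfig(metric="score", mode="min", num_samples=100),
)
tuner.fit()
```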
Ray Tune removes boilerplate from your training workflow: it supports logging results to tools such as Weights & Biases, MLflow, and TensorBoard, offers multiple storage options for experiment results (NFS, cloud storage), and remains highly customizable.
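As one hedged example of this configuration, the snippet below mirrors results to MLflow and persists experiment output to cloud storage. The bucket path and experiment name are placeholders; in recent Ray releases RunConfig is exposed as tune.RunConfig, while older releases import it from ray.air or ray.train.

```python
from ray import tune
from ray.air.integrations.mlflow import MLflowLoggerCallback

tuner = tune.Tuner(
    objective,  # the toy objective from the earlier sketch
    param_space=search_space,
    run_config=tune.RunConfig(
        name="my_experiment",                        # placeholder experiment name
        storage_path="s3://my-bucket/tune-results",  # NFS path or cloud URI for results
        callbacks=[MLflowLoggerCallback(experiment_name="my_experiment")],
    ),
)
tuner.fit()
```

By default, Tune also writes TensorBoard-compatible logs for each trial, so the callback above only adds the MLflow integration on top.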
Manual hyperparameter tuning is slow and tedious. Automated hyperparameter tuning methods like grid search, random search, and Bayesian optimization can help us find good hyperparameters more quickly. Even so, tuning hyperparameters on a single computer can still take a long time: a single machine with limited resources (CPU and RAM) has to evaluate every hyperparameter combination the search algorithm produces one after another, which creates a major bottleneck, because each combination the search selects is independent of the previous and next selections and could just as well be evaluated in parallel.
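To make the contrast concrete, here is a small sketch (with made-up parameter names and ranges) of how a grid-search space and a random-search space are expressed in Ray Tune; either dictionary can be passed to a Tuner as its param_space.

```python
from ray import tune

# Grid search: every listed value is tried, so the number of trials is the
# product of the list lengths (3 x 3 = 9 combinations here).
grid_space = {
    "lr": tune.grid_search([1e-4, 1e-3, 1e-2]),
    "batch_size": tune.grid_search([32, 64, 128]),
}

# Random search: each trial samples independently from the distributions,
# and num_samples in TuneConfig controls how many configurations are drawn.
random_space = {
    "lr": tune.loguniform(1e-5, 1e-1),
    "batch_size": tune.choice([32, 64, 128]),
}
```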
To accelerate the search, Ray Tune runs different hyperparameter combinations in parallel across machines in the cloud. By distributing the hyperparameter tuning task among several computers, we can increase both the speed and the effectiveness of hyperparameter tuning.
In general, this isn't an easy task; it usually requires significant code refactoring. However, Ray provides the means to build distributed applications with few code changes, making it quick and easy to do hyperparameter tuning at scale.
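For instance, the typical change is just to wrap your existing training function so that it accepts a config dict and returns the metric to optimize. In this hypothetical sketch, train_model stands in for whatever single-machine training code you already have.

```python
# Existing single-machine training code (hypothetical placeholder).
def train_model(lr: float, num_epochs: int) -> float:
    accuracy = 1.0 - abs(lr - 0.01) - 0.001 * num_epochs  # toy stand-in
    return accuracy

# The Tune-facing wrapper: accept a config dict, return the metric.
# This wrapper can then be handed to tune.Tuner exactly as in the earlier
# snippets, with a param_space describing "lr" and "num_epochs".
def trainable(config):
    return {"accuracy": train_model(config["lr"], config["num_epochs"])}
```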