
Riot Games and deep reinforcement learning in gaming

By Erik Martinez   

At our recent Production RL Summit, Ben Kasper from Riot Games described how reinforcement learning (RL) has helped game designers improve game balance. Impressive strides have already been made in applying RL to both simple and complex games, from Atari to Dota 2. But Riot Games is doing more than training an agent to play a game: they're using deep RL to create a better experience for players. How?

It started when Riot Games wanted to be proactive about game balance with the help of RL. They trained an agent that could play against itself to surface issues that might ruin the game experience for players. The recipe was straightforward: explicitly account for future variations in gameplay, capture patterns in game states that unbalance the game, and distribute game-playing at scale so the agent learns from enough matches to be comprehensive.
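To make the self-play idea concrete, here is a minimal, purely illustrative sketch: a single policy plays both seats of a toy card game and learns from each match outcome. The environment, policy, and update rule below are hypothetical stand-ins, not Riot Games' actual setup.

```python
# Illustrative self-play loop (not Riot's code): one policy plays both seats
# of a toy card game and updates its action values from the results.
import random
from collections import defaultdict

class ToyCardGame:
    """Each player picks one of three 'cards'; the higher card wins the round."""
    def play_round(self, a1, a2):
        return 1 if a1 > a2 else (-1 if a1 < a2 else 0)

class EpsilonGreedyPolicy:
    def __init__(self, n_actions=3, epsilon=0.1, lr=0.1):
        self.q = defaultdict(float)          # running value estimate per action
        self.n_actions, self.epsilon, self.lr = n_actions, epsilon, lr

    def act(self):
        if random.random() < self.epsilon:   # explore occasionally
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[a])

    def update(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

env, policy = ToyCardGame(), EpsilonGreedyPolicy()
for episode in range(10_000):
    a1, a2 = policy.act(), policy.act()      # the same policy plays both seats
    r = env.play_round(a1, a2)
    policy.update(a1, r)                     # learn from player 1's outcome
    policy.update(a2, -r)                    # ...and from the mirrored outcome
```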

Starting with one Legends of Runeterra deck and eventually training with 10, Riot Games trained an agent to a 48% win rate against top human players. From there, the team could discover which decks were too strong or too weak by pitting the agent against itself. With a little digging, they drilled down to a particular card that was too strong; once it was tweaked, the deck was balanced.
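As a rough illustration of that kind of analysis, the sketch below runs simulated agent-vs-agent matches across deck pairings and flags any deck whose aggregate win rate drifts far from 50%. The deck names, match simulator, and thresholds are invented for the example.

```python
# Hypothetical deck-balance analysis via self-play statistics (illustrative only).
import itertools
import random
from collections import defaultdict

DECKS = [f"deck_{i}" for i in range(10)]

def simulate_match(deck_a, deck_b):
    """Stand-in for an agent-vs-agent game; returns True if deck_a wins."""
    strength = {d: 0.5 for d in DECKS}
    strength["deck_3"] = 0.65                # pretend deck_3 is overtuned
    p_a_wins = strength[deck_a] / (strength[deck_a] + strength[deck_b])
    return random.random() < p_a_wins

wins, games = defaultdict(int), defaultdict(int)
for deck_a, deck_b in itertools.permutations(DECKS, 2):
    for _ in range(200):                     # many self-play matches per pairing
        a_won = simulate_match(deck_a, deck_b)
        wins[deck_a] += a_won
        wins[deck_b] += not a_won
        games[deck_a] += 1
        games[deck_b] += 1

for deck in sorted(DECKS, key=lambda d: wins[d] / games[d], reverse=True):
    win_rate = wins[deck] / games[deck]
    flag = " <-- too strong?" if win_rate > 0.55 else (" <-- too weak?" if win_rate < 0.45 else "")
    print(f"{deck}: {win_rate:.2%}{flag}")
```

In a real pipeline, the simulated matches would be played by the trained agent rather than a hard-coded strength table, and a flagged deck would prompt the kind of card-level digging described above.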

The experiment proved out: the RL-generated metrics consistently matched design intuition, correctly predicted the strongest deck, and agreed directionally with live data from real-world games. The next step was to productionize the whole process with an API, an app, monitoring, and a dashboard for analysis. RL "balance" scores even became part of game designers' KPIs.
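To give a flavor of what such a service might look like from a designer's side, here is a purely hypothetical API call. The endpoint, payload, and response fields are invented for illustration and are not Riot Games' actual API.

```python
# Hypothetical client call to a balance-scoring service (endpoint and schema made up).
import requests

resp = requests.post(
    "https://balance-service.internal/api/v1/score",   # made-up endpoint
    json={"deck_ids": ["deck_3", "deck_7"], "num_simulated_games": 10_000},
    timeout=30,
)
resp.raise_for_status()
for deck, score in resp.json()["balance_scores"].items():
    print(f"{deck}: balance score {score:.2f}")         # e.g. fed into a designer dashboard
```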

And this is where things get interesting. Watch the talk to hear from Ben firsthand as he describes Riot Games' initial foray into RL, how it culminated in a tool that lets Legends of Runeterra designers analyze and iterate on game balance before content is released, and the recent updates to their tech stack to meet both current and future challenges. Check out the session replay to learn more!

Ready to try Anyscale?

Access Anyscale today to see how companies using Anyscale and Ray benefit from rapid time-to-market and faster iterations across the entire AI lifecycle.