In PvP games, bot AI is often one of the last things you’ll develop. We needed it to be one of the first.
Most game AI takes the form of an ‘expert system’ — a carefully crafted set of rules that give bots their operating orders based on the current state of the game. This scripted approach is incredibly complex, especially for deep strategy games (like the ones we love to play and make).
Imagine a hypothetical real-time strategy game: the number of competing objectives to consider can be overwhelming. Should I focus on my economy and generating more income? Should I send out scouts to understand the enemy’s position and strategy? Should I build up my infantry, or should I rush the enemy’s base with what I’ve got?
Now imagine building an expert system AI that takes into account all of those variables and player decision points in a game that’s in early production, with systems and mechanics changing dramatically from week to week. Any scripted AI systems built during early production will require constant upkeep to stay compatible with always-evolving game mechanics and systems.
But our Alpha-Driven Development philosophy puts a higher value on having effective bots from the earliest stages of development, for a few reasons:
- We put games in front of players very, very early on in the process compared to most game studios — getting real player feedback early is part of how we think we’ll build games players love. But with a small population of game testers and limited opportunities for playtests (ours are weekly at the moment), we still want to make it a great experience for players to queue up for a match at any time. Enter bots!
- We iterate extremely quickly. It’s crucial that we automate as much as possible to make it easier and quicker for developers to implement and test their changes. Being able to hop into a bot match to make sure you didn’t break anything, or to put your latest and greatest mechanic through its paces without needing a willing human opponent, is quite helpful.
- We automate our testing. Having competent AI allows us to automatically run bot vs. bot tests to ensure nothing’s seriously broken, which saves a ton of time and energy from human members of the development team.
But if expert systems (the most common AI framework for strategy games) are costly and difficult to implement early, how could we build a bot system that works well and scales? We brought in an AI programming expert, Martin Hesselborn, to help us figure it out.
Our answer was a less common approach to AI: a ‘searching system’ that, instead of relying on a heavily scripted decision-making framework, simulates the potential choices available in the current game state and measures their outcomes to determine the best course of action.
“One thing that struck me about the game OMG’s building is that you can figure out quite a bit about the game rules by simply trying out an ability and seeing what happens,” Martin recalls. “So instead of building out a complex script for every single ability in the game — which, of course, would be changing all of the time — I wondered if a searching system could do the trick. I believed it could.”
“The system ultimately has two components: an evaluation function, and search. Search runs a bunch of experiments based on all of the available choices, and then simulates the outcomes some amount of time into the future,” he explains. “Evaluation then looks at those outcomes, and decides based on some relatively straightforward criteria — like the health of the player’s units versus the AI’s — which outcome was best.”
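To make that concrete, here is a minimal sketch of that two-part loop in C#. Everything in it is an assumption for illustration: `IGameState`, `Clone`, `Apply`, `Advance`, and `Choice` are stand-in names for whatever the real simulation exposes, not OMG’s actual code.

```csharp
using System;
using System.Collections.Generic;

// Placeholder for whatever an "action" looks like in the real game.
public sealed class Choice { }

// Hypothetical interface over the simulation; these method names are
// stand-ins, not names from OMG's codebase.
public interface IGameState
{
    IGameState Clone();                      // perfect copy of the current state
    IEnumerable<Choice> AvailableChoices();  // every action the bot could take right now
    void Apply(Choice choice);               // perform one action
    void Advance(TimeSpan duration);         // simulate forward in time
}

public class SearchingBot
{
    private readonly Func<IGameState, double> _evaluate; // scores a simulated outcome
    private readonly TimeSpan _lookahead;                // how far into the future to simulate

    public SearchingBot(Func<IGameState, double> evaluate, TimeSpan lookahead)
    {
        _evaluate = evaluate;
        _lookahead = lookahead;
    }

    // Search: run one experiment per available choice, simulate it forward,
    // and let the evaluation function decide which outcome was best.
    public Choice? ChooseBest(IGameState current)
    {
        Choice? best = null;
        var bestScore = double.NegativeInfinity;

        foreach (var choice in current.AvailableChoices())
        {
            var experiment = current.Clone(); // never touches the live game
            experiment.Apply(choice);
            experiment.Advance(_lookahead);

            var score = _evaluate(experiment);
            if (score > bestScore)
            {
                bestScore = score;
                best = choice;
            }
        }

        return best;
    }
}
```

This is a single-ply search: each candidate choice is tried once, simulated forward, and scored. Looking further ahead (or chaining several choices per experiment) is where the difficulty scaling described below comes from.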
The evaluation criteria can be tweaked to enable different bot personalities, and bot difficulty scales surprisingly well with how far into the future the bot looks. The more potential future choices the AI simulates, the more accurate its decision-making (and the deadlier an opponent it becomes).
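For example, a tunable evaluation function might look like the following sketch; `SimulatedOutcome` and the weight names are assumptions for illustration, not the criteria the team actually uses.

```csharp
// Hypothetical outcome summary and tunable evaluation weights: different weight
// sets produce different bot "personalities", while the lookahead and number of
// simulated experiments control difficulty.
public record SimulatedOutcome(double FriendlyUnitHealth, double EnemyUnitHealth, double Income);

public class BotPersonality
{
    public double FriendlyHealthWeight { get; init; } = 1.0;
    public double EnemyHealthWeight { get; init; } = 1.0;
    public double IncomeWeight { get; init; } = 0.25;

    public double Evaluate(SimulatedOutcome outcome) =>
          FriendlyHealthWeight * outcome.FriendlyUnitHealth
        - EnemyHealthWeight * outcome.EnemyUnitHealth
        + IncomeWeight * outcome.Income;
}

// Example: an aggressive bot that values damaging the enemy over self-preservation.
// var aggressive = new BotPersonality { FriendlyHealthWeight = 0.5, EnemyHealthWeight = 2.0 };
```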
This approach rolls with the punches. Because all a searching AI system does is simulate the choices available to a player, with no hand-crafted rationale, and then measure their outcomes against easy-to-evaluate rules, it stays compatible with virtually any new game mechanic we introduce.
A few stars aligned to make this approach feasible for the game OMG is building:
- Because of the language we’re using for our game servers (C#), it’s incredibly easy to make perfect copies of the current game state to simulate forward (see the sketch after this list). In many other languages, duplicating the game state accurately would mean writing lots of custom code to walk through every data structure.
- We spent a lot of time up front building the foundation for a scalable online game, so our servers tend to run fast and cheap. Our game doesn’t use a “tick” system like many online games; our tickless architecture updates only as often as necessary. With a server running at a 144Hz tick rate, the cost of simulating forward would skyrocket.
- Our game is deterministic, so the simulation can predict the outcome of each action exactly. This further reduces the cost of simulating future game states, because the AI doesn’t also have to guess at non-deterministic outcomes (as it would if, say, a complicated physics simulation were involved).
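As a loose illustration of the first and third points, here is how a slice of game state might look when modeled as immutable C# records. `Unit`, `GameState`, `WithDamage`, and the `RngSeed` field are hypothetical names, and the immutable-record modeling is an assumption, not OMG’s actual implementation.

```csharp
using System.Collections.Immutable;
using System.Linq;

// Sketch only: with immutable records, "copy the world and simulate forward"
// is cheap to express, and the live game state is never touched.
public record Unit(int Id, double Health, double X, double Y);

public record GameState(
    long ElapsedMs,
    int RngSeed, // deterministic RNG seed: simulated futures are exactly reproducible
    ImmutableList<Unit> FriendlyUnits,
    ImmutableList<Unit> EnemyUnits)
{
    // "Copy and modify" via a with-expression: the original state is unchanged,
    // so a bot experiment can simply be thrown away when it's done.
    public GameState WithDamage(int unitId, double amount) => this with
    {
        EnemyUnits = EnemyUnits
            .Select(u => u.Id == unitId ? u with { Health = u.Health - amount } : u)
            .ToImmutableList()
    };
}
```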
—
We’re quite pleased with the effectiveness of the AI we’ve built. Big thanks to Martin for his expert guidance and contribution early on! As always, One More Game is hiring.