In the web-based Pokémon Showdown battle simulator, Sarantinos developed an algorithm to analyze a team of six Pokémon — predicting how they would perform based on all the possible battle scenarios ahead of them and their comparative strengths and weaknesses.
Microsoft, which owns the popular Minecraft game franchise as well as the Xbox game system, has tasked AI agents with a variety of activities — from steering clear of lava to chopping trees and making furnaces. Researchers hope some of their learnings could eventually play a role in real-world technology, such as how to get a home robot to take on certain chores without having to program it to do so.
While it “goes without saying” that real humans behave quite differently from fictional video game creatures, “the core ideas can still be used,” Sarantinos said. “If you use psychology tests, you can take this information to conclude how well they can work together.”
Amy Hoover, an assistant professor of informatics at the New Jersey Institute of Technology who’s built algorithms for the digital card game Hearthstone, said “there really is a reason for studying games” but it is not always easy to explain.
“People aren’t always understanding that the point is about the optimization method rather than the game,” she said.
Games also offer a useful testbed for AI — including for some real-world applications in robotics or health care — that’s safer to try in a virtual world, said Vanessa Volz, an AI researcher at the Danish startup Modl.ai, which builds AI systems for game development.
But, she added, “it can get overhyped.”
“It’s probably not going to be one big breakthrough and that everything is going to be shifted to the real world,” Volz said.
Japanese electronics giant Sony launched its own AI research division in 2020 with entertainment in mind, but its work has nonetheless attracted broader academic attention. Its research paper introducing Sophy last year made the cover of the prestigious science journal Nature, which said the work could have implications for other applications such as drones and self-driving vehicles.
The technology behind Sophy is based on an algorithmic method known as reinforcement learning, which trains the system by rewarding it when it gets something right as it runs virtual races thousands of times.
“The reward is going to tell you that, ‘You’re making progress. This is good,’ or, ‘You’re off the track. Well, that’s not good,’” Spranger said.
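The reward signal described above can be illustrated with a toy sketch. This is not Sony's actual GT Sophy system (which uses deep reinforcement learning at far greater scale); it is a minimal tabular Q-learning example on a hypothetical three-lane "track," where staying in the center lane earns a positive reward and drifting off earns a negative one. All names and parameters here are illustrative assumptions.

```python
import random

# Toy setup (illustrative, not Sony's system): three lanes, lane 1 is "on track".
TRACK_LANES = 3
ON_TRACK = 1          # hypothetical: the single lane that counts as on track
ACTIONS = [-1, 0, 1]  # steer left, keep straight, steer right

def reward(lane):
    """+1 for being on the track, -1 for being off it."""
    return 1.0 if lane == ON_TRACK else -1.0

def train(episodes=2000, steps=20, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: q[lane][action_index] estimates expected return."""
    q = [[0.0] * len(ACTIONS) for _ in range(TRACK_LANES)]
    rng = random.Random(0)
    for _ in range(episodes):
        lane = rng.randrange(TRACK_LANES)
        for _ in range(steps):
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            if rng.random() < epsilon:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[lane][i])
            nxt = min(max(lane + ACTIONS[a], 0), TRACK_LANES - 1)
            r = reward(nxt)
            # Standard Q-learning update: nudge the estimate toward reward plus
            # the discounted value of the best next action.
            q[lane][a] += alpha * (r + gamma * max(q[nxt]) - q[lane][a])
            lane = nxt
    return q

q = train()
# Read off the learned policy: the best steering action for each lane.
policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[l][i])]
          for l in range(TRACK_LANES)]
```

After training, the policy steers back toward the track from either side (right from lane 0, straight in lane 1, left from lane 2), exactly because the reward repeatedly told the agent which states were "good" and which were not.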
The world’s best Gran Turismo players are still finishing ahead of Sophy at tournaments, but average players will find it hard to beat — and can adjust difficulty settings depending on how much they want to be challenged.
PlayStation players can race against Sophy only until March 31, and only on a limited number of circuits, so the team can gather feedback and return the agent to testing. Peter Wurman, director of Sony AI America and project lead on GT Sophy, said it takes about two weeks for AI agents to train on 20 PlayStations.
“To get it spread throughout the whole game, it takes some more breakthroughs and some more time before we’re ready for that,” he said.
And to get it onto real streets or Formula One tracks? That could take a lot longer.
Self-driving car companies adopt similar machine-learning techniques, but “they don’t hand over complete control of the car the way we are able to,” Wurman said. “In a simulated world, there’s nobody’s life at risk. You know exactly the kinds of things you’re going to see in the environment. There’s no people crossing the road or anything like that.”