Artificial Intelligence (AI) has been said by many to bring us a utopia and, now more frequently, a dystopia. Regardless of where AI research takes us, we'll be seeing the benefits in games in multiple ways. AI is nothing new to games; it has powered them for a long time. Conversely, games make an excellent testbed for AI.
In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, which was impressive at the time. A chess AI can be programmed relatively easily because the game lends itself to brute-force search: explore the tree of possible moves and pick the one that leads to the best position.
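That search idea is usually formalized as minimax. Here is a minimal sketch; the move generator, move-application function, and evaluation function are hypothetical hooks that a real engine would supply for its particular game, not anything from Deep Blue itself.

```python
# Minimal minimax sketch: recursively explore the game tree, score the
# leaves with an evaluation function, and pick the move whose
# worst-case outcome is best. All three callbacks are hypothetical
# stand-ins for game-specific logic.

def minimax(state, depth, maximizing, get_moves, apply_move, evaluate):
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best_score = float("-inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, False,
                               get_moves, apply_move, evaluate)
            if score > best_score:
                best_score, best_move = score, move
    else:
        best_score = float("inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, True,
                               get_moves, apply_move, evaluate)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move
```

Real chess engines add enormous refinements on top of this skeleton (alpha-beta pruning, opening books, hand-tuned evaluation), but the core loop is the same.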
A game like Go is much harder to program for; the number of possible positions is so vast that writing a program that can beat a human was long considered a grand challenge. Last month, Google's DeepMind beat a top-tier European Go player.
Rather than hand-coding chess knowledge the way Deep Blue's creators did, Google let its program learn on its own. “AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns, in a similar way to how a DeepMind program learned to play 49 different arcade games.” This is striking because it’s a leap in how we make AIs that play games. We simply set the AI loose on the game and hope it learns what to do, just as we do with human players.
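The learning-by-playing idea can be illustrated with a toy tabular Q-learning loop. To be clear, this is only a sketch of the reinforcement-learning concept: DeepMind's agents use deep neural networks rather than a lookup table, and the corridor environment and all parameters here are invented for illustration.

```python
import random

# Toy tabular Q-learning: an agent in a made-up corridor (states 0..4)
# can step left (-1) or right (+1) and only gets a reward for reaching
# the right end. It starts knowing nothing and learns a policy purely
# from trial and error.

def step(state, action):
    next_state = max(0, min(4, state + action))
    reward = 1.0 if next_state == 4 else 0.0
    done = next_state == 4
    return next_state, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    # Q-values: expected discounted reward for taking an action in a state.
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Mostly act greedily, sometimes explore at random.
            if random.random() < epsilon:
                action = random.choice((-1, 1))
            else:
                action = max((-1, 1), key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            best_next = max(q[(next_state, a)] for a in (-1, 1))
            # Standard Q-learning update rule.
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = next_state
    return q
```

After training, the greedy policy at every state is "move right", even though nothing about the goal was ever programmed in; the agent discovered it from the reward signal alone.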
To hear more about the future of DeepMind, watch this lecture by Demis Hassabis (founder of DeepMind) on the future and capabilities of artificial intelligence.
Challenges for DeepMind’s Artificial Intelligence
Does DeepMind seem too good to be true? That's probably because the announcement around its victory over the Go player makes a big claim. Gary Marcus deconstructs the advancement and looks at the challenges AlphaGo (and AI in general) still needs to overcome.
But not so fast. If you read the fine print (or really just the abstract) of DeepMind’s Nature article, AlphaGo isn’t a pure neural net at all — it’s a hybrid, melding deep reinforcement learning with one of the foundational techniques of classical AI — tree-search, invented by Minsky’s colleague Claude Shannon a few years before neural networks were ever invented (albeit in more modern form), and part and parcel of much of his students’ early work.
What's more, AI still hasn't reached the level of knowledge and reasoning needed to handle questions that require multiple contexts. Indeed, a recent test concluded that present AIs can't beat an 8th grader.
The Allen Institute’s science test includes more than just trivia. It asks that machines understand basic ideas, serving up not only questions like “Which part of the eye does light hit first?” but more complex questions that revolve around concepts like evolutionary adaptation. “Some types of fish live most of their adult lives in salt water but lay their eggs in freshwater,” one question read. “The ability of these fish to survive in these different environments is an example of [what]?”
These were multiple-choice questions—and the machines still couldn’t pass, despite using state-of-the-art techniques, including deep neural nets. “Natural language processing, reasoning, picking up a science textbook and understanding—this presents a host of more difficult challenges,” Etzioni says. “To get these questions right requires a lot more reasoning.”
It's only a matter of time before AI teams progress from the 8th-grade level to high school and then to the university level.
How does this relate to games, though? Smarter AI means better bots in games, and building NPCs will become easier.
Developing a Unified AI Framework
This month Firas Safadi, Raphael Fonteneau, and Damien Ernst published a paper in the International Journal of Computer Games Technology about how we ought to think about AI in games. They argue that we need a unified framework for dealing with AI development and deployment in games.
Their paper, Artificial Intelligence in Video Games: Towards a Unified Framework, is worth a read and will undoubtedly shape how we think about AI in games for years to come. Think about the possibility that game engines will ship with a suite of default AI behaviours that can be easily modified by non-coders.
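As a purely hypothetical illustration of that idea (none of these class or field names come from the paper or from any shipping engine), such a suite might expose behaviours behind a small game-agnostic interface, so a designer tunes thresholds rather than writing code:

```python
from dataclasses import dataclass

# Hypothetical sketch of an engine-shipped AI behaviour suite: the
# behaviour acts on a conceptual view of the game world instead of
# game-specific data, so the same behaviour could be reused across
# titles. All names here are illustrative assumptions.

@dataclass
class ConceptualView:
    """Game-agnostic snapshot of what an agent perceives."""
    own_health: float        # 0.0 (dead) to 1.0 (full)
    threat_level: float      # 0.0 (safe) to 1.0 (overwhelming)
    distance_to_target: float

class FleeWhenWeak:
    """A default behaviour a non-coder could tune via thresholds."""
    def __init__(self, health_threshold=0.3, threat_threshold=0.5):
        self.health_threshold = health_threshold
        self.threat_threshold = threat_threshold

    def decide(self, view: ConceptualView) -> str:
        if (view.own_health < self.health_threshold
                and view.threat_level > self.threat_threshold):
            return "flee"
        return "engage"
```

The point of the sketch is the separation of concerns: the engine supplies the behaviour and the mapping from game state to the conceptual view, while the designer only adjusts parameters.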
Here’s the abstract:
With modern video games frequently featuring sophisticated and realistic environments, the need for smart and comprehensive agents that understand the various aspects of complex environments is pressing. Since video game AI is often specifically designed for each game, video game AI tools currently focus on allowing video game developers to quickly and efficiently create specific AI. One issue with this approach is that it does not efficiently exploit the numerous similarities that exist between video games not only of the same genre, but of different genres too, resulting in a difficulty to handle the many aspects of a complex environment independently for each video game. Inspired by the human ability to detect analogies between games and apply similar behavior on a conceptual level, this paper suggests an approach based on the use of a unified conceptual framework to enable the development of conceptual AI which relies on conceptual views and actions to define basic yet reasonable and robust behavior. The approach is illustrated using two video games, Raven and StarCraft: Brood War.