Reality is a Game

Game thinking from Adam Clare

Tag: AI

Artificial Intelligence in Relation to Games

Artificial Intelligence (AI) has been said by many to promise us a utopia and, now more frequently, a dystopia. Regardless of where research into AI takes us, we'll see the benefits in games in multiple ways. AI is not new to games; it has been used in them for a long time, and games themselves are a good way to test AI.

In the 90s IBM's Deep Blue beat world champion Garry Kasparov at chess, which was impressive at the time. A chess AI can be programmed relatively easily since there's a set way to play: essentially, search the tree of possible moves and pick the best one.
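That "search the moves and pick the best one" idea is the classic minimax algorithm. Here's a minimal sketch on a toy number game (the game and the `moves`/`apply_move`/`score` interface are hypothetical illustrations, not how Deep Blue was actually written):

```python
# Minimal minimax sketch for a two-player, perfect-information game.
# The game here is a toy: players alternately add 1 or 2 to a total;
# the maximizer wants it high, the minimizer wants it low.

def minimax(state, depth, maximizing, moves, apply_move, score):
    """Search all moves to `depth` plies and return the best achievable value."""
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, score) for m in options)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, score) for m in options)

# Hypothetical toy game definition: state is the running total.
moves = lambda s: [1, 2] if s < 10 else []
apply_move = lambda s, m: s + m
score = lambda s: s

best = minimax(0, 4, True, moves, apply_move, score)  # best reachable total
```

Real chess engines add alpha-beta pruning and handcrafted evaluation functions on top of this skeleton, since the full game tree is far too large to search exhaustively.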

DeepMind

A game like Go is harder to program for and was long deemed a grand challenge for programmers: writing a program that can beat a human is hard because the space of possible moves is enormous. Last month, Google's DeepMind beat a top-tier European Go player.

Instead of programming for every possible move, as Deep Blue did, Google let its program learn on its own. “AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns, in a similar way to how a DeepMind program learned to play 49 different arcade games.” This is striking because it’s a leap in how we make AIs that play games. We just toss the AI at the game and hope it learns what to do – just like we do with human players.
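The "toss the AI at the game" approach is reinforcement learning: the agent plays, gets rewards, and improves. A tiny tabular Q-learning sketch on a made-up corridor game shows the flavour (this is a toy illustration of learning by playing, not AlphaGo's actual algorithm, which combines deep neural networks with tree search):

```python
import random

# Tabular Q-learning on a toy 5-cell corridor: the agent starts at cell 0
# and earns +1 for reaching cell 4. No strategy is hand-coded; the
# policy emerges purely from trial, error, and reward.

N = 5                 # corridor cells 0..4; cell 4 is the goal
ACTIONS = (-1, +1)    # step left or right
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
random.seed(0)

for episode in range(500):
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # standard Q-learning update toward reward plus discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy: which way to step from each non-goal cell.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)}
```

After training, the policy steps right from every cell, even though "right is good" was never programmed in. That is the same principle, scaled up enormously, behind learning 49 arcade games from raw play.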

To hear more about the future of DeepMind, watch this lecture by Demis Hassabis (founder of DeepMind) on the future and capabilities of artificial intelligence.

Challenges for DeepMind’s Artificial Intelligence

Does DeepMind seem too good to be true to you? That’s probably because the announcement around how it beat the Go player is a big claim. Gary Marcus deconstructs the advancement and looks at the challenges AlphaGo (and AI in general) still needs to overcome.

But not so fast. If you read the fine print (or really just the abstract) of DeepMind’s Nature article, AlphaGo isn’t a pure neural net at all — it’s a hybrid, melding deep reinforcement learning with one of the foundational techniques of classical AI — tree-search, invented by Minsky’s colleague Claude Shannon a few years before neural networks were ever invented (albeit in more modern form), and part and parcel of much of his students’ early work.

What’s more, AI still hasn’t reached the level of knowledge and reasoning needed to deal with questions that require multiple contexts. Indeed, a recent test concluded that present AIs can’t beat an 8th grader.

The Allen Institute’s science test includes more than just trivia. It asks that machines understand basic ideas, serving up not only questions like “Which part of the eye does light hit first?” but more complex questions that revolve around concepts like evolutionary adaptation. “Some types of fish live most of their adult lives in salt water but lay their eggs in freshwater,” one question read. “The ability of these fish to survive in these different environments is an example of [what]?”

These were multiple-choice questions—and the machines still couldn’t pass, despite using state-of-the-art techniques, including deep neural nets. “Natural language processing, reasoning, picking up a science textbook and understanding—this presents a host of more difficult challenges,” Etzioni says. “To get these questions right requires a lot more reasoning.”

It’s only a matter of time until the AI teams get from the 8th grade to high school then to the university level.

How does this relate to games, though? With smarter AI we will get better bots in games, and making NPCs will get easier.

Developing a Unified AI Framework

This month Firas Safadi, Raphael Fonteneau, and Damien Ernst published a paper in the International Journal of Computer Games Technology about how we ought to think about AI in games. They argue that we need a unified framework for dealing with AI development and deployment in games.

Their paper, Artificial Intelligence in Video Games: Towards a Unified Framework, is worth a read and will undoubtedly shape how we think about AI in games for years to come. Think about the possibility that game engines will ship with a suite of default AI behaviours that can be easily modified by non-coders.
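One common pattern for that kind of shippable, designer-friendly AI is the behavior tree, where non-coders compose reusable nodes instead of writing logic from scratch. A minimal sketch (the node classes and the guard example are hypothetical illustrations, not taken from the paper):

```python
# Minimal behavior-tree sketch: reusable AI building blocks that a game
# engine could ship by default. Designers rearrange nodes; coders only
# write the small Condition/Action leaves.

SUCCESS, FAILURE = "success", "failure"

class Selector:
    """Tries children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, npc):
        for child in self.children:
            if child.tick(npc) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, *children): self.children = children
    def tick(self, npc):
        for child in self.children:
            if child.tick(npc) == FAILURE:
                return FAILURE
        return SUCCESS

class Condition:
    """Leaf node: succeeds when its predicate holds for the NPC."""
    def __init__(self, predicate): self.predicate = predicate
    def tick(self, npc):
        return SUCCESS if self.predicate(npc) else FAILURE

class Action:
    """Leaf node: applies an effect to the NPC and succeeds."""
    def __init__(self, effect): self.effect = effect
    def tick(self, npc):
        self.effect(npc)
        return SUCCESS

# Hypothetical guard NPC: attack if an enemy is visible, otherwise patrol.
guard = Selector(
    Sequence(Condition(lambda npc: npc["enemy_visible"]),
             Action(lambda npc: npc.update(state="attack"))),
    Action(lambda npc: npc.update(state="patrol")),
)

npc = {"enemy_visible": False, "state": "idle"}
guard.tick(npc)  # no enemy in sight, so the guard patrols
```

Because the tree is just data, swapping "patrol" for "flee" or reordering priorities needs no new code, which is exactly the kind of modifiability the unified-framework idea points toward.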

Here’s the abstract:

With modern video games frequently featuring sophisticated and realistic environments, the need for smart and comprehensive agents that understand the various aspects of complex environments is pressing. Since video game AI is often specifically designed for each game, video game AI tools currently focus on allowing video game developers to quickly and efficiently create specific AI. One issue with this approach is that it does not efficiently exploit the numerous similarities that exist between video games not only of the same genre, but of different genres too, resulting in a difficulty to handle the many aspects of a complex environment independently for each video game. Inspired by the human ability to detect analogies between games and apply similar behavior on a conceptual level, this paper suggests an approach based on the use of a unified conceptual framework to enable the development of conceptual AI which relies on conceptual views and actions to define basic yet reasonable and robust behavior. The approach is illustrated using two video games, Raven and StarCraft: Brood War.

Another Quick Glance At Artificial Intelligence

Artificial Intelligence (AI) is becoming better with every passing year and thus more interesting. I have no idea what the state of AI will be in years to come, but for now, here is some noteworthy stuff for game makers.

Emergent AI

“You work for it” when you create emergent AI. In this talk, Ben Sunshine-Hill explores what it’s like to create and work with emergent artificial intelligence.

To hear Sunshine-Hill tell it, you should aim to design AI that behaves just like people and creatures in real life do, and that means you shouldn’t rely on “emergence” as a crutch; you should know exactly why your AI does what it does. At best, players should find your AI believable — not surprising.

Artificial Intelligence Research in Games

The first in a multi-part series of public lectures on AI in games. Recorded on 20th October 2014 at the University of Derby.

In this first video, we detail some of the most interesting work in using video games as benchmarks within the AI research community. This is largely focussed on four competition benchmarks:

– The Ms. Pac-Man Competition
– The Mario AI Competition
– The 2K BotPrize
– The Starcraft AI Competition

What if AIs are more trustworthy than humans?

In this keynote session, Bitcoin developer Mike Hearn talks on the topic ”Fighting for the right to be ruled by machines”. He outlines a possible scenario over the next 50 years, in which an ever worsening political situation results in some people deciding that only computers/robots can be trusted to control the critical infrastructure of society (cars, planes, mobile networks, legal systems etc) and therefore that the people currently in charge of them need to be evicted from those positions of power.

If all of this talk about artificial intelligence gets you thinking then you should check out the Experimental AI in Games workshop at AIIDE 2015 which is just a few months away. Their accepted papers include Would You Look At That! Vision-Driven Procedural Level Design and An Algorithmic Approach to Decorative Content Placement.

Previously I posted about other conferences about artificial intelligence.

We Need to Rethink How We Approach Artificial Intelligence

Hector Levesque, a computer scientist at the University of Toronto, is calling on fellow researchers in the field of artificial intelligence (AI) to drastically re-evaluate what they are doing. If we are going to continue AI research, we need to change how we measure success. The most popular way AI software is evaluated is the Turing test, which is essentially a test of whether an AI can convince a human that it, too, is a human being.

Last year Wired wrote about how to pass the Turing test.

Using the Turing test as the sole measurement is a narrow view of what intelligence means, which is one of my criticisms of the field. Levesque sees huge problems with this too, and adds that people are now developing software solely to pass the Turing test without creating anything remotely intelligent. For an example of this, just check out Cleverbot.


There are other problems with the field too, such as computer scientists trying to program an AI when there is no clear understanding of what intelligence is. People working in psychology, linguistics, and philosophy would probably find the way programmed AI functions (and is built) problematic on various levels.

Levesque’s critique is covered by the New Yorker, which shows some other problems with the current state of computer science AI research. Interestingly, there are some simple questions that Levesque and others have come up with that can better discern the quality of an AI. These questions are based around the complexity of the English language and what some may call “common sense” (I take issue with how they approach this idea of common sense).

It’s worth a read and provides a good synopsis of what’s going on in the field of AI research.

Levesque saves his most damning criticism for the end of his paper. It’s not just that contemporary A.I. hasn’t solved these kinds of problems yet; it’s that contemporary A.I. has largely forgotten about them. In Levesque’s view, the field of artificial intelligence has fallen into a trap of “serial silver bulletism,” always looking to the next big thing, whether it’s expert systems or Big Data, but never painstakingly analyzing all of the subtle and deep knowledge that ordinary human beings possess. That’s a gargantuan task— “more like scaling a mountain than shoveling a driveway,” as Levesque writes. But it’s what the field needs to do.

You can read the full text of Levesque’s paper here.

Visualizing Pathfinding Algorithms


Pathfinding is used in games to determine how AIs (and/or non-player characters) navigate the environment. At its core, it emulates wayfinding. When working on a board game it’s easy to see and modify how characters and whatnot move around the board. In video games it can be hard to figure out exactly why a character is moving in a particular way.

For non-programmers, understanding the algorithms at work behind the scenes can be difficult. On GitHub there is a PathFinding visualization project which allows you to play with different algorithms.
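The best-known of those algorithms is A*, which expands grid cells in order of "steps taken so far plus estimated steps remaining". A minimal sketch on a tiny hand-made grid (the grid and the Manhattan-distance heuristic are toy choices for illustration):

```python
import heapq

# Minimal A* pathfinding on a small grid (0 = open cell, 1 = wall).
# Cells are expanded cheapest-first by cost-so-far plus a heuristic
# estimate of the remaining distance, which keeps the search focused
# toward the goal instead of flooding outward like plain breadth-first.

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    frontier = [(h(start), 0, start, [start])]  # (priority, cost, cell, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

# A wall forces the path to detour around the bottom of the grid.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (0, 2))
```

Watching this expansion order animate cell by cell is exactly what the visualization project linked above lets you do, for A* and several alternatives.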


