A computer scientist from the University of Toronto, Hector Levesque, is calling on fellow researchers in the field of artificial intelligence (AI) to drastically re-evaluate what they are doing. If we are going to continue AI research, we need to change how we measure success. The most popular way AI software is evaluated is the Turing test, which is essentially a test of whether an AI can convince a human that it, too, is a human being.

Last year, Wired wrote about how to pass the Turing test.

Using the Turing test as the sole measurement reflects a narrow view of what intelligence means, which is one of my criticisms of the field. Levesque sees huge problems with this too, and goes further, pointing out that people are now developing software designed only to pass the Turing test without creating anything remotely intelligent. For an example, just check out Cleverbot.
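To make that point concrete, here is a minimal, hypothetical sketch in Python of the trick such chatbots lean on: ELIZA-style pattern matching and deflection, with no model of meaning behind it. (Cleverbot itself works differently, reportedly recycling lines from past human conversations, but the principle is the same; all the patterns below are made up for illustration.)

```python
import re
import random

# A few canned pattern -> response rules, in the spirit of ELIZA-style
# chatbots. Match the surface text, emit a plausible reply, understand nothing.
RULES = [
    (r"\bI am (.+)", ["Why are you {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.+)", ["Why do you feel {0}?"]),
    (r"\byou\b", ["We were talking about you, not me."]),
]
DEFLECTIONS = ["Interesting. Tell me more.", "Why do you say that?"]

def reply(message: str) -> str:
    for pattern, responses in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    # No pattern matched: deflect, as chatbots do when out of their depth.
    return random.choice(DEFLECTIONS)

print(reply("I am worried about AI research"))
# e.g. "Why are you worried about AI research?"
print(reply("Could a crocodile run a steeplechase?"))
# a deflection -- the program has no idea what a crocodile is
```

A program like this can keep a casual conversation going indefinitely, which is exactly why passing a Turing-style chat is such a poor proxy for intelligence.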

There are other problems with the field too, like how computer scientists are trying to program an AI when there is no clear understanding of what intelligence is. People working in psychology, linguistics, and philosophy would probably find how programmed AI functions (and is built) problematic on various levels.

Levesque’s critique is covered by the New Yorker, and the piece lays out some other problems with the current state of AI research in computer science. Interestingly, there are some simple questions that Levesque and others have come up with that can better discern the quality of an AI. These are questions based around the complexity of the English language and what some may call “common sense” (I take issue with how they approach this idea of common sense). One well-known example from Levesque’s work asks: “The trophy would not fit in the brown suitcase because it was too big. What was too big?” Answering correctly requires knowing how objects and containers behave, not just how words tend to co-occur.
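These questions are the basis of what Levesque calls the Winograd Schema Challenge. As a rough sketch of how such a test could be scored (the harness below is hypothetical; only the trophy/suitcase pair comes from Levesque’s work), note that each question comes in a matched pair where changing one word flips the answer, so surface statistics alone can’t resolve the pronoun:

```python
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    """One Winograd-style question: flipping a single word flips the answer."""
    sentence: str
    question: str
    choices: tuple  # the two candidate referents
    answer: str     # the correct referent

# The trophy/suitcase pair from Levesque's Winograd Schema work.
SCHEMAS = [
    WinogradSchema(
        sentence="The trophy would not fit in the brown suitcase because it was too big.",
        question="What was too big?",
        choices=("the trophy", "the suitcase"),
        answer="the trophy",
    ),
    WinogradSchema(
        sentence="The trophy would not fit in the brown suitcase because it was too small.",
        question="What was too small?",
        choices=("the trophy", "the suitcase"),
        answer="the suitcase",
    ),
]

def score(system) -> float:
    """Fraction of schemas a candidate system resolves correctly.
    `system` is any callable taking (sentence, question, choices) -> choice."""
    correct = sum(
        system(s.sentence, s.question, s.choices) == s.answer for s in SCHEMAS
    )
    return correct / len(SCHEMAS)

# A guesser that always picks the first-mentioned noun gets exactly
# half of each matched pair right -- chance performance.
first_mention = lambda sentence, question, choices: choices[0]
print(score(first_mention))  # 0.5
```

Because the pairs are matched, a system guessing from word statistics scores at chance; only something with real common-sense knowledge does better, which is exactly the property Levesque wants from a test.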

The New Yorker piece is worth a read and provides a good synopsis of what’s going on in the field of AI research.

From the New Yorker piece:

Levesque saves his most damning criticism for the end of his paper. It’s not just that contemporary A.I. hasn’t solved these kinds of problems yet; it’s that contemporary A.I. has largely forgotten about them. In Levesque’s view, the field of artificial intelligence has fallen into a trap of “serial silver bulletism,” always looking to the next big thing, whether it’s expert systems or Big Data, but never painstakingly analyzing all of the subtle and deep knowledge that ordinary human beings possess. That’s a gargantuan task— “more like scaling a mountain than shoveling a driveway,” as Levesque writes. But it’s what the field needs to do.

You can read the full text of Levesque’s paper here.