If one buys into Daniel Dennett’s proposed use of “the intentional stance” to generate explanations and predictions of human behavior (say, in an AI program that observes a person and tries to find ways of helping), then accounting for human error is a tough problem, because the stance assumes rationality and errors aren’t rational. That’s one reason I’m interested in errors.
Game AI faces a similar problem: some games, like chess and pool/billiards, allow a computer player to make very good predictions many steps ahead, often beyond the ability of human players. Such near-optimal skill makes computer players not much fun to play against. One has to find ways of making the computer appear to play at a skill level similar to that of whatever human it is playing against.
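The obvious first attempt at skill matching is to compute the best shot and then blur it with noise scaled to a target skill level. Here’s a minimal Python sketch of that idea; the function name, parameters, and Gaussian noise model are my own illustrative assumptions, not anything from the article I discuss below:

```python
import random

def perturbed_shot(angle, power, skill):
    """Blur a computed 'perfect' shot to mimic a target skill level.

    skill runs from 0.0 (novice) to 1.0 (expert). The names and the
    Gaussian noise model here are illustrative assumptions.
    """
    angle_noise = random.gauss(0.0, (1.0 - skill) * 5.0)   # degrees of aim error
    power_noise = random.gauss(0.0, (1.0 - skill) * 0.15)  # relative power error
    return angle + angle_noise, power * (1.0 + power_noise)
```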
I just came across a very interesting article on how to give computer players plausibly non-optimal skill. Here’s a good summarizing quote:
In order to provide an exciting and dynamic game, the AI needs to manipulate the gameplay to create situations that the player can exploit.
In pool this could mean, instead of blindly taking a shot and not caring where the cue ball ends up, the AI should deliberately fail to pot the ball and ensure that the cue ball ends up in a place where the player can make a good shot.
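To make that concrete, here is a minimal sketch of what “deliberately fail and leave the player a good shot” might look like as shot selection: instead of maximizing its own chance of potting, the AI scores candidate shots by the opportunity they leave the human. The simulation and scoring functions are hypothetical stand-ins for a real physics engine and evaluation code, not the article’s actual implementation:

```python
def choose_generous_shot(candidate_shots, simulate, player_opportunity):
    """Pick the miss that leaves the human the best follow-up shot.

    simulate(shot) -> predicted table state after the shot;
    player_opportunity(state) -> how good a shot the human gets next.
    Both are hypothetical stand-ins for real physics and evaluation code.
    """
    best_shot, best_score = None, float("-inf")
    for shot in candidate_shots:
        outcome = simulate(shot)
        if outcome.ai_potted_ball:
            continue  # deliberately reject shots that would actually pot a ball
        score = player_opportunity(outcome)
        if score > best_score:
            best_shot, best_score = shot, score
    return best_shot
```

The design point is that the error is chosen, not random: a purely noise-based miss (like the sketch earlier) lands the cue ball anywhere, while this kind of deliberate failure shapes the situation the player faces next.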
An interesting anecdote from the article: the author created a pool-playing program that understood the physics of the simulated table and balls so well that it could unfailingly knock any ball it wanted into a pocket. The program made no attempt, however, to have the cue ball stop at any particular position after the target ball was pocketed. Yet naive human players interpreted the computer’s plays as attempts to optimize the cue ball’s final position, apparently because they projected human abilities onto the program: humans cannot unfailingly pocket any ball, but they seem to be pretty good at getting the cue ball to stop where they want.