I've been thinking about some of Dr. Lilly's research on the ways in which humans understand language. He posited that speech isn't so much about its component parts (words) as it is about reference (context) and patterns (idioms, common repetitions).
A lot of our communication may be about recognizing patterns and extrapolating what we expect to be the complete message.
How do I relate this to videogames?
I'd like to know more about how games analyze players. Many fighting games are known to get "smarter" as people play, taking note of common moves and deciding on the best counters. Bots in first-person shooters are able to exhibit some surprising behaviors, but most of these are hard-coded reactions; they are not observing their environment.
It would be interesting if, every time a bot encountered a player-controlled character, it began 'taking notes' on sequences of behaviors: does the player attempt to face the bot and provide cover fire in an attempt to gain distance? Does the player favor melee attacks? Or grenades?
Most behaviors in first-person shooters have been defined by a scripting language specifically written to control bots, so why can't we design bots that translate player actions into that scripting language and devise strategies to counter those behaviors? If we then allowed these bots to communicate what they've learned and added a dash of game theory, we might begin to see various levels of competition and cooperation in matches.
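A minimal sketch of what this 'note-taking' could look like — the action names and counter-mappings here are entirely made up for illustration, not from any real bot scripting language:

```python
from collections import Counter

# Hypothetical action tags a bot might log while observing a player.
observed = ["melee", "grenade", "melee", "retreat", "melee", "grenade", "melee"]

# Illustrative mapping from an observed tactic to a counter-tactic.
COUNTERS = {
    "melee": "keep_distance",
    "grenade": "spread_out",
    "retreat": "pursue",
}

def pick_counter(observations):
    """Counter the player's most frequently observed tactic."""
    if not observations:
        return "patrol"  # default scripted behavior when the bot knows nothing
    most_common, _ = Counter(observations).most_common(1)[0]
    return COUNTERS.get(most_common, "patrol")

print(pick_counter(observed))  # the player favors melee, so: keep_distance
```

In a real game the "translation" step would emit statements in the bot scripting language rather than strings, but the principle is the same: observations feed a model, and the model selects a counter-strategy.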
Here's a paper I'd like to read.
This paper has a lot of good concepts similar to what I'm talking about.
1 comment:
1) I'm not overly concerned with 'realistic' behaviors in games, but with ones that lead to interesting gameplay choices; e.g., Pikmin does not contain 'realistic' behaviors, but its behaviors are integral to the gameplay and the player must interact accordingly. See my post on realism for clarification.
2) Giving the AI more tools at its disposal does not necessarily mean making it perfect - sometimes adding layers and tools makes things more prone to errors and/or inventive decisions, since there are more options to choose from. I personally know way too many indecisive people.
3) I agree that Turn-Based Strategy games could benefit from some learning algorithms. Instead of a computer enemy using a limited number of programmed tactics, it would note your tendencies and attempt to reconfigure its tactics-set in order to anticipate your moves.
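One simple way to 'note tendencies' is to count which move tends to follow which — a first-order transition model. The move labels below are hypothetical, just to show the shape of the idea:

```python
from collections import defaultdict, Counter

# Hypothetical move labels logged from a turn-based strategy player.
history = ["rush", "rush", "turtle", "rush", "turtle", "rush"]

# Count which move tends to follow which (first-order transitions).
transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

def anticipate(last_move):
    """Predict the player's next move from what usually followed it."""
    followers = transitions.get(last_move)
    if not followers:
        return None  # no data yet; fall back to scripted tactics
    return followers.most_common(1)[0][0]

print(anticipate("rush"))  # "turtle" - after a rush, this player usually turtles
```

The AI could then pick whichever of its scripted tactic sets best punishes the anticipated move, rather than cycling through them blindly.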
4) Real-Time Strategy games could definitely stand some improvement. They tend to suffer from the same problem as the Unreal bot that owns everyone - since a computer can keep track of every unit at once, making it smarter just means allowing it more access to the data, whereas a human opponent cannot simultaneously control all aspects of the playing field.
5) One of the first ideas in my post has to do with pattern-processing leading to possibilities. This means that the reactions will be to expected outcomes, which don't always match reality. Thus we could expect behaviors that are neither too random nor too exact.
Let's say a bot observes that a player, when caught in a sniping position, tends to break left. The next time the bot catches the player sniping it would aim left - but the player breaks right this time. Now the bot has to decide whether to scrap its pattern or discard the anomaly and the player has to consider a change of tactics.
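That scrap-or-discard decision can be softened: instead of an all-or-nothing choice, the bot can decay old evidence so one anomaly shifts its confidence without destroying the pattern. A toy sketch, with made-up thresholds and decay rate:

```python
class BreakPredictor:
    """Track which way a sniping player breaks, with exponential decay
    so old observations fade rather than being scrapped outright."""

    def __init__(self, decay=0.8):
        self.decay = decay
        self.weights = {"left": 0.0, "right": 0.0}

    def observe(self, direction):
        for d in self.weights:
            self.weights[d] *= self.decay  # older evidence counts for less
        self.weights[direction] += 1.0

    def predict(self):
        # Commit to an aim direction only when one side clearly dominates;
        # otherwise hedge and cover both escape routes.
        left, right = self.weights["left"], self.weights["right"]
        total = left + right
        if total == 0 or abs(left - right) / total < 0.3:
            return "hedge"
        return "left" if left > right else "right"

p = BreakPredictor()
for d in ["left", "left", "left"]:
    p.observe(d)
print(p.predict())  # "left"
p.observe("right")  # the anomaly
print(p.predict())  # still "left" - one surprise doesn't outweigh the pattern
```

A few more "right" breaks would tip the prediction to "hedge" and then "right", which is exactly the mind game described above: the player can invest in changing the bot's model, but not flip it with a single feint.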
6) With multiple bots and game theory, we can create a model that ensures that the bots will not always team up against you, but that at times they might decide it's in their interest to help you out.
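At its crudest, that model could be a payoff rule where each bot allies against whichever opponent is currently the bigger threat — the scores and stance names below are invented for illustration:

```python
def choose_stance(player_score, rival_score):
    """Toy game-theory rule: team up against the current leader,
    so alliances shift as the match evolves."""
    if player_score > rival_score:
        return "ally_with_rival"   # the human is winning: gang up on them
    if rival_score > player_score:
        return "ally_with_player"  # the rival bot is winning: help the human
    return "every_bot_for_itself"

print(choose_stance(player_score=12, rival_score=7))  # ally_with_rival
print(choose_stance(player_score=3, rival_score=9))   # ally_with_player
```

Even this simple rule produces the promised behavior: the bots don't always dogpile the player, and sometimes it's in their interest to help you out.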
7) Obviously, if your AI is simply 'pinpointing' the player, they are going to feel the squeeze and will probably begin to hate the game (imagine Ninja Gaiden with enemies that got more exact as you played - I shudder to think). This is probably something, as a game developer, you should not do. Unless, y'know, that's your aim.