Shantanu Pandey

DECISION MAKING

4/14/2019

In video games, decision making is a character's ability to decide what to do. Decision making is typically a small part of a larger game AI. Decision-making techniques include decision trees, behavior trees, state machines, fuzzy logic, and decision learning. In this write-up I will implement some of these techniques, using C++ for my implementation.

To make a decision, the character processes a set of information and uses it to generate an action that it wants to carry out. The input to the decision-making system is the knowledge the character possesses, and the output is an action request. External knowledge is what the character knows about the game environment around it; internal knowledge is information about the character's own state or thought processes.
[Figure: Decision Making Schematic]
World Manager

For external knowledge, I have created a global static class WorldManager which stores information about the world and the other AI agents. My world manager stores the player character (similar to UE4, so that we can get the player character from anywhere), all the NPC characters, and the world map. We can extend this world manager with any other data the decision making might require.
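As a rough sketch of the idea, the world manager can be a class with only static members. The names below (GameObject's fields, GetPlayerCharacter, RegisterNPC) are my assumptions for illustration, not the post's actual code:

```cpp
#include <string>
#include <vector>

// Minimal stand-in for the game's entity type (assumed shape).
struct GameObject {
    std::string name;
    float x = 0.0f, y = 0.0f;
};

// Global static holder for external knowledge shared by all AI agents.
class WorldManager {
public:
    // Mirrors how UE4 exposes the player character to any system.
    static GameObject* GetPlayerCharacter() { return player; }
    static void SetPlayerCharacter(GameObject* p) { player = p; }

    static std::vector<GameObject*>& GetNPCs() { return npcs; }
    static void RegisterNPC(GameObject* npc) { npcs.push_back(npc); }

private:
    static GameObject* player;
    static std::vector<GameObject*> npcs;
};

GameObject* WorldManager::player = nullptr;
std::vector<GameObject*> WorldManager::npcs;
```

Because every member is static, any decision-making code can query the world without holding a reference to it.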

[Figure: Game Representation]

The hierarchy of my game is as follows:
  • Game Object: Represents an entity in the world.
  • Renderer: Draws the game object.
  • Kinematic: Represents the physics of the game object; movement is handled here.
  • AIController: The AI controller of the game object; all AI-based decisions are made here.
We simply call GameObject.Update() in our program; updates to every other system are handled by the game object.
The AI controller holds a reference to its owner (the game object), a reference to the decision-making technique it uses, and an action manager. All the decision-making techniques discussed here derive from the same base class. The job of a decision-making technique is to return an action, which is added to the action manager. The action manager maintains two queues, active and pending, and it is the action manager's job to carry out these actions.
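The controller/action-manager split described above can be sketched roughly as follows. The class and method names (DecisionMaker, Schedule, Decide) are assumptions for illustration:

```cpp
#include <deque>
#include <functional>
#include <memory>

struct GameObject { float x = 0.0f, y = 0.0f; };  // assumed minimal owner type

// An action request: the work the AI wants carried out.
struct Action {
    std::function<void()> execute;
};

// Maintains the pending and active queues and runs the actions.
class ActionManager {
public:
    void Schedule(Action a) { pending.push_back(std::move(a)); }

    void Update() {
        // Promote pending actions into the active queue...
        while (!pending.empty()) {
            active.push_back(std::move(pending.front()));
            pending.pop_front();
        }
        // ...then carry out everything that is active.
        while (!active.empty()) {
            active.front().execute();
            active.pop_front();
        }
    }
private:
    std::deque<Action> active;
    std::deque<Action> pending;
};

// Common base class every decision-making technique derives from.
class DecisionMaker {
public:
    virtual ~DecisionMaker() = default;
    virtual Action Decide() = 0;
};

class AIController {
public:
    AIController(GameObject* owner, std::unique_ptr<DecisionMaker> dm)
        : owner(owner), brain(std::move(dm)) {}

    void Update() {
        actions.Schedule(brain->Decide());  // ask the technique for an action
        actions.Update();                   // let the manager carry it out
    }
private:
    GameObject* owner;
    std::unique_ptr<DecisionMaker> brain;
    ActionManager actions;
};
```

Keeping the technique behind a single Decide() interface is what lets decision trees, behavior trees, and learned trees be swapped without touching the controller.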

[Figure: Game Architecture]
Decision Trees

A decision tree is one of the simplest decision-making techniques. Decision trees are modular and easy to maintain. A decision tree is made up of connected decision points; the starting decision is called the root of the tree.

There are two types of nodes in a decision tree: action and decision. Actions are the leaf nodes and contain the logic the AI agent carries out. Decision nodes contain the branching the AI agent follows based on some game/world condition. The algorithm continues along the tree, making a choice at each decision node, until the decision process has no more decisions to consider. An action is attached at each leaf of the tree; when the algorithm arrives at an action, that action is carried out immediately.
[Figure: Decision Tree Class]
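The pictured classes can be sketched roughly as below. The names (DTNode, DTDecision, DTAction, MakeDecision) are my assumptions; conditions and actions are wrapped in std::function so the sketch stays self-contained:

```cpp
#include <functional>
#include <memory>

// Base node: walking the tree returns the leaf action to carry out.
class DTNode {
public:
    virtual ~DTNode() = default;
    virtual DTNode* MakeDecision() = 0;
};

// Leaf node: the logic the AI agent carries out.
class DTAction : public DTNode {
public:
    explicit DTAction(std::function<void()> run) : run(std::move(run)) {}
    DTNode* MakeDecision() override { return this; }
    void Execute() { run(); }
private:
    std::function<void()> run;
};

// Interior node: a yes/no test on some game/world condition.
class DTDecision : public DTNode {
public:
    DTDecision(std::function<bool()> test,
               std::unique_ptr<DTNode> yes, std::unique_ptr<DTNode> no)
        : test(std::move(test)), yes(std::move(yes)), no(std::move(no)) {}
    DTNode* MakeDecision() override {
        // Recurse down the chosen branch until a leaf is reached.
        return (test() ? yes : no)->MakeDecision();
    }
private:
    std::function<bool()> test;
    std::unique_ptr<DTNode> yes, no;
};
```

With these two node types, the chase-or-wander tree used in the demo is a single DTDecision (distance check) with two DTAction leaves.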
For the demo, I am implementing the decision tree below.
[Figure: Sample Decision Tree implemented in Code]
It is a simple decision check with two actions. When the player is within 100 units of the enemy character, the enemy chases the player; otherwise, the bot aimlessly wanders around the screen. The video below shows the demo. The tree is simple, but with the semantics implemented in this code we can build much more powerful decision trees.

[Video: Decision Tree Demo]

Behavior Tree

Behavior Trees are similar to hierarchical state machines, but instead of a state, the main building block of a behavior tree is a task. These tasks are composed into sub-trees to represent more complex actions. In turn, these complex actions can again be composed into higher level behaviors. Because all tasks have a common interface and are largely self-contained, they can be easily built up into hierarchies (i.e., behavior trees) without having to worry about the details of how each sub-task in the hierarchy is implemented. This also means that arbitrary tasks and groups can be combined together without any of them needing to know what else is in the behavior tree.
[Figure: Behavior Tree Class]
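The task interface at the heart of the tree can be sketched as below, assuming a simple status enum and the OnExecute() naming from the screenshots; everything else is an illustrative guess:

```cpp
#include <memory>
#include <vector>

// Status codes every task reports back to the tree.
enum class Status { Success, Failure, Running };

// Common interface shared by conditions, actions, and composites.
class Task {
public:
    virtual ~Task() = default;
    virtual Status OnExecute() = 0;
};

// Composite tasks keep track of child tasks.
class Composite : public Task {
public:
    void AddChild(std::unique_ptr<Task> child) {
        children.push_back(std::move(child));
    }
protected:
    std::vector<std::unique_ptr<Task>> children;
};
```

Because every task, from a one-line condition to a whole sub-tree, exposes the same OnExecute() interface, sub-trees compose freely without knowing each other's internals.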
Tasks
All tasks in a behavior tree return a status (indicating their success, failure, etc.) which the behavior tree uses. These tasks can be classified into three categories:
  • Condition: Tests some property of the game. Conditions sit at the leaves of the tree.
  • Action: Alters the state of the game. Actions also sit at the leaves of the tree.
  • Composite: Keeps track of other child tasks (conditions, actions, or other composites).
Now we will take a look at some commonly used composite tasks.

Selector
A Selector will return immediately with a success status code when one of its children runs successfully. As long as its children are failing, it will keep on trying. If it runs out of children completely, it will return a failure status code. Selectors are used to choose the first of a set of possible actions that is successful.
[Figure: Selector OnExecute() Method]
Sequencer
A Sequencer will return immediately with a failure status code when one of its children fails. As long as its children are succeeding, it will keep going. Sequences represent a series of tasks that need to be undertaken.
[Figure: Sequencer OnExecute() Method]
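The selector and sequencer described above can be sketched as follows. The minimal Task/Composite interface is repeated here so the sketch is self-contained; names are assumptions matching the screenshots:

```cpp
#include <memory>
#include <vector>

enum class Status { Success, Failure };

class Task {
public:
    virtual ~Task() = default;
    virtual Status OnExecute() = 0;
};

class Composite : public Task {
public:
    void AddChild(std::unique_ptr<Task> c) { children.push_back(std::move(c)); }
protected:
    std::vector<std::unique_ptr<Task>> children;
};

// Succeeds as soon as one child succeeds; fails only if every child fails.
class Selector : public Composite {
public:
    Status OnExecute() override {
        for (auto& child : children)
            if (child->OnExecute() == Status::Success)
                return Status::Success;
        return Status::Failure;
    }
};

// Fails as soon as one child fails; succeeds only if every child succeeds.
class Sequencer : public Composite {
public:
    Status OnExecute() override {
        for (auto& child : children)
            if (child->OnExecute() == Status::Failure)
                return Status::Failure;
        return Status::Success;
    }
};
```

The two are mirror images: a selector is an "or" over its children, a sequencer is an "and".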
Decorators
The decorator pattern refers to a class that wraps another class and modifies its behavior. A decorator is a task that has a single child task and modifies that child's behavior in some way; in other words, it is a composite task with exactly one child. The two types of decorator I have implemented in this program are:
  • Invertor: Returns the inverted result of its child.
  • Until Fail: Repeatedly runs its child task until the child fails.
[Figure: Invertor OnExecute() Method]
[Figure: Until Fail OnExecute() Method]
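The two decorators can be sketched as below (again self-contained, with the Task interface repeated and the class names taken from the screenshots):

```cpp
#include <memory>

enum class Status { Success, Failure };

class Task {
public:
    virtual ~Task() = default;
    virtual Status OnExecute() = 0;
};

// A decorator is a composite with exactly one child.
class Decorator : public Task {
public:
    explicit Decorator(std::unique_ptr<Task> child) : child(std::move(child)) {}
protected:
    std::unique_ptr<Task> child;
};

// Invertor: flips the child's result.
class Invertor : public Decorator {
public:
    using Decorator::Decorator;
    Status OnExecute() override {
        return child->OnExecute() == Status::Success ? Status::Failure
                                                     : Status::Success;
    }
};

// Until Fail: re-runs the child until it fails, then reports success.
class UntilFail : public Decorator {
public:
    using Decorator::Decorator;
    Status OnExecute() override {
        while (child->OnExecute() == Status::Success) {}
        return Status::Success;
    }
};
```

Note that Until Fail only terminates if its child eventually fails; with a Running status and per-frame ticking, the loop would instead return Running each frame.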
Below is the behavior tree I am using in my code. With a collection of the tasks mentioned above, powerful behaviors can be achieved.
[Figure: Creation of Behavior Tree in Code]
[Figure: Sample Behavior Tree Used in Demo]
The video below shows a demo of the code with the above behavior tree. The player's goal is to reach the green coin on the screen. When the player reaches the green coin, the game restarts with +1 win for the player; if the player dies before reaching it, +1 loss is added instead. If the player is within the chase range (which can be set explicitly when creating the tasks), the enemy chases the player, and it damages the player when the player is within damage range. Wins, losses, and health are shown on screen. It is a simple behavior tree in which the enemy patrols the green coin, but the composite nodes give the enemy layered behavior, which makes for interesting AI. The structure we have implemented can be extended to accommodate more complex behaviors.

[Video: Behavior Tree Demo]

Decision Learning

Earlier we looked at decision trees: a series of decisions that generate an action based on a set of observations. Decision trees can be learned efficiently: constructed dynamically from sets of observations and actions provided through strong supervision. The constructed trees can then be used in the normal way to make decisions during gameplay. One algorithm that can be used for decision tree learning is ID3.
 
ID3
The basic ID3 algorithm uses a set of observation–action examples. Observations in ID3 are usually called “attributes.” The algorithm starts with a single leaf node of a decision tree and assigns it the full set of examples. It then splits the current node (initially the single start node) so that it divides the examples into two groups. The division is chosen based on an attribute, and the division chosen is the one likely to produce the most efficient tree. When the division is made, each of the two new nodes is given the subset of examples that applies to it, and the algorithm repeats for each of them. The algorithm is thus recursive: starting from a single node, it keeps replacing leaves with decisions until the whole decision tree has been created. The split process looks at each attribute in turn (i.e., each possible way to make a decision), calculates the information gain of each possible division, and chooses the division with the highest information gain as the decision for the node.
Applied to the behavior tree we used previously, the example set looks something like this:
[Figure: Truth Table used for Decision Tree Learning from the previously created Behavior Tree]
For our set of possible actions, the entropy of a set of examples is given by:

E = - (p(eat) log2 p(eat) + p(patrol) log2 p(patrol) + p(chase) log2 p(chase))  (with one more term for each additional action),

where p(action) is the proportion of that action in the example set. An important thing to notice here is that we aren’t interested in finding the exact information gain; we are only interested in finding the greatest information gain. Because logarithms in any base preserve order (i.e., if log2 x > log2 y, then loge x > loge y), we can simply use the natural log in place of log2 and save the floating-point division of the change of base.
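A minimal sketch of the entropy and information-gain calculation, assuming examples are summarized as per-action counts (the function names are mine, not the post's class):

```cpp
#include <cmath>
#include <vector>

// Entropy of a set of examples, given the count of each action in the set.
// Uses the natural log, since only the ordering of gains matters.
double Entropy(const std::vector<int>& actionCounts) {
    int total = 0;
    for (int c : actionCounts) total += c;
    if (total == 0) return 0.0;
    double e = 0.0;
    for (int c : actionCounts) {
        if (c == 0) continue;              // p log p -> 0 as p -> 0
        double p = static_cast<double>(c) / total;
        e -= p * std::log(p);
    }
    return e;
}

// Information gain of dividing a set (with entropy eStart) into two
// subsets, weighted by each subset's share of the examples.
double InformationGain(double eStart,
                       const std::vector<int>& left,
                       const std::vector<int>& right) {
    int nl = 0, nr = 0;
    for (int c : left)  nl += c;
    for (int c : right) nr += c;
    double n = nl + nr;
    return eStart - (nl / n) * Entropy(left) - (nr / n) * Entropy(right);
}
```

ID3's split step simply evaluates InformationGain for every candidate attribute and keeps the division that scores highest.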


Entropy and Information Gain

Entropy is a measure of the information in a set of examples: it measures the degree to which the actions in the set agree with each other. Information in a set is the degree to which membership of the set determines the output. Information gain is simply the reduction in overall entropy. The code below shows the entropy class’s definition.
[Figure: Entropy Class Definition as Implemented]
This ID3 algorithm can be used in place of designing the behavior tree by hand. Though not without its faults (we still have to decide on the input example table), it is a step toward the machine learning side of AI in games: we can make our AI learn from a previously provided set of inputs. ID3 is a rudimentary machine learning algorithm, and many improvements on it exist today. As game programmers, we have to earn the complexity of our AI, and it is up to us to use whichever algorithm justifies the complexity of our program.