
wodinoneeye

Posted 23 October 2012 - 02:02 AM

  • Any experience with how to unit test AI stuff? Any best practices for how to test it in general?


Visualization ...

I was building an FSM-driven behavior system for a simulation, and because of the complexity of how the objects interacted, a VERY important part of composing and correcting the AI logic was being able to visualize what all the objects were actually doing in the simulation (graphically, instead of reams of numbers). Beyond that, I needed a way to show selected state data (visually and immediately, instead of stop-motion poking in the debugger), and when all else failed I fell back on some kind of ad hoc logging of the critical calculations (to spot why an object wasn't behaving the way I wanted it to).

The FSM (the high-level logic) was part of a system that used proximity scanning to build target lists and pathfinding to sort target priorities and validity, with multiple tasks/target types all being considered in parallel (priority picking). It included the ability for an object to abandon its current task (a multi-turn sequence of actions) when a better opportunity was newly detected. Each object would potentially consider dozens of objects in its proximity. (An object's internal state also changed its goals, which shifted priorities.)

So visualization was how I spotted something that looked like it wasn't acting right; then the internal data presentation could be turned on to trace how/what decision was being made (with logging for very complex cases).
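
As a rough illustration of the kind of per-object state readout I mean (all names here are hypothetical, and plain console output stands in for the graphical overlay):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical per-object AI snapshot -- the real system drew this
// graphically over each object instead of printing it.
struct AIDebugInfo {
    int         id;
    std::string state;      // current FSM state name
    int         targetId;   // chosen target (-1 if none)
    float       priority;   // score of the chosen task
};

// Dump the selected state data for every object; in-game this would be
// toggled on/off and rendered next to each object's sprite/model.
void DrawAIDebugOverlay(const std::vector<AIDebugInfo>& objects) {
    for (const AIDebugInfo& o : objects) {
        std::printf("obj %3d  state=%-12s target=%3d prio=%.2f\n",
                    o.id, o.state.c_str(), o.targetId, o.priority);
    }
}

int main() {
    std::vector<AIDebugInfo> objs = {
        {1, "Forage", 7, 0.82f},
        {2, "Flee",   1, 0.95f},
    };
    DrawAIDebugOverlay(objs);
}
```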

---

Unit testing would mean having an 'arena' setup that places the required mix of objects in a canned situation to force them to interact. Usually you do it in increasing complexity: first test proper behavior with no distractions, then try cases where objects conflict, to make sure the decision priorities are correct and the transitions (like abandoning a previous 'task') are done correctly.
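
A minimal sketch of what such an arena-style test can look like, assuming a hypothetical priority rule (value discounted by distance) standing in for the real decision logic:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal stand-ins for the real simulation types (hypothetical).
struct Target { std::string kind; float value; float dist; };

// Hypothetical priority rule: prefer higher value, discounted by distance.
std::string PickTask(const std::vector<Target>& targets) {
    const Target* best = nullptr;
    float bestScore = -1.0f;
    for (const Target& t : targets) {
        float score = t.value / (1.0f + t.dist);
        if (score > bestScore) { bestScore = score; best = &t; }
    }
    return best ? "eat_" + best->kind : "idle";
}

int main() {
    // Stage 1: undistracted behavior -- one target, it should be chosen.
    std::vector<Target> arena = {{"small", 1.0f, 2.0f}};
    assert(PickTask(arena) == "eat_small");

    // Stage 2: a better opportunity appears; the agent should abandon
    // the old task in favor of the higher-priority one.
    arena.push_back({"large", 5.0f, 3.0f});
    assert(PickTask(arena) == "eat_large");
}
```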

Unit testing the tools used by the AI logic was fairly straightforward: by displaying target lists, priority orderings, and A* paths, it was easy to see whether they were behaving properly (for a particular simulation situation).
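
For the pathfinding tool, property checks work well even before you look at the display: whatever algorithm is under test, the returned path must have the right endpoints and take only single-step moves. This sketch uses a trivial stand-in pathfinder where the real A* would go:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct Cell { int x, y; };

// Hypothetical stand-in pathfinder: walks along x, then along y.
// In the real system this would be the A* implementation under test.
std::vector<Cell> FindPath(Cell start, Cell goal) {
    std::vector<Cell> path{start};
    Cell c = start;
    while (c.x != goal.x) { c.x += (goal.x > c.x) ? 1 : -1; path.push_back(c); }
    while (c.y != goal.y) { c.y += (goal.y > c.y) ? 1 : -1; path.push_back(c); }
    return path;
}

// Properties any returned path must satisfy, regardless of algorithm:
// correct endpoints, and only single-step moves between adjacent cells.
void CheckPath(Cell start, Cell goal) {
    std::vector<Cell> p = FindPath(start, goal);
    assert(!p.empty());
    assert(p.front().x == start.x && p.front().y == start.y);
    assert(p.back().x == goal.x && p.back().y == goal.y);
    for (size_t i = 1; i < p.size(); ++i)
        assert(std::abs(p[i].x - p[i-1].x) + std::abs(p[i].y - p[i-1].y) == 1);
}

int main() { CheckPath({0, 0}, {3, 2}); }
```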

---

For your AI structure, having things set up (from the start) to facilitate testing is necessary, like piping logging info both to the screen and to files.

Present the AI logic (the code itself) so it is easily readable, for when you are tracing the logic to figure out why something didn't act the way you think it should. I created a high-level language for the 'scripting' that was actually a macro expansion system (the project was in C/C++); the high-level form was much simpler to read and understand (and went far beyond the 'high-level subroutine calls' method in what it simplified).
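
To give a flavor of the macro-expansion idea (my own illustrative reconstruction, not the original system's macros): C preprocessor macros can make a state table read almost declaratively while still expanding to plain switch/case code.

```cpp
#include <cstdio>

enum State { S_IDLE, S_FORAGE, S_FLEE };

// Hypothetical macros: each STATE/ON pair expands into a case in the
// transition switch, so the 'script' reads as condition -> next state.
#define FSM_BEGIN(var)       State _cur = (var); switch (_cur) {
#define STATE(s)             case s:
#define ON(cond, next, act)  if (cond) { act; return next; }
#define END_STATE            break;
#define FSM_END              } return _cur;

State Step(State s, bool hungry, bool threat) {
    FSM_BEGIN(s)
        STATE(S_IDLE)
            ON(threat,  S_FLEE,   std::puts("run!"))
            ON(hungry,  S_FORAGE, std::puts("eat"))
        END_STATE
        STATE(S_FORAGE)
            ON(threat,  S_FLEE,   std::puts("drop food, run!"))
            ON(!hungry, S_IDLE,   (void)0)
        END_STATE
        STATE(S_FLEE)
            ON(!threat, S_IDLE,   (void)0)
        END_STATE
    FSM_END
}

int main() {
    State s = Step(S_IDLE, /*hungry=*/true, /*threat=*/false); // -> S_FORAGE
    std::printf("state=%d\n", s);
}
```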


---

One thing that complicated matters was that the AI logic for object behavior was broken up into 'phases' of grouped processing (I used a lock-step calculate/act/resolve cycle to get rid of the problem of the AI having to handle world data that changes mid-decision). The macro system allowed each 'state' to have its associated logic visually grouped, even though it executed in different places (phases) of the 'turn' logic.

Organizing the execution framework that way also allowed multi-threading, as each object's logic was independent, with the input 'world state' data static for each 'turn'. They could all be executed independently, with no 'data change' interlocking needed (such locking can be a huge source of overhead).
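
A minimal sketch of that double-buffered, phase-locked scheme (the names and the toy 'decision' are hypothetical; a real system would use a thread pool rather than spawning threads each phase):

```cpp
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct ObjectState { float x, dx; };

int main() {
    // Two copies of the world: agents READ 'current' and WRITE 'next'.
    // 'current' is immutable during a turn, so per-object updates can
    // run on any number of threads with no locking.
    std::vector<ObjectState> current = {{0, 0}, {10, 0}};
    std::vector<ObjectState> next = current;

    // Phase 1: calculate (each object decides from the static snapshot).
    std::function<void(size_t)> calc = [&](size_t i) {
        next[i].dx = (current[i].x < 5.0f) ? 1.0f : -1.0f;
    };
    // Phase 2: act (apply the decisions made in phase 1).
    std::function<void(size_t)> act = [&](size_t i) {
        next[i].x = current[i].x + next[i].dx;
    };

    for (int turn = 0; turn < 3; ++turn) {
        for (const auto& phase : {calc, act}) {
            std::vector<std::thread> pool;
            for (size_t i = 0; i < current.size(); ++i)
                pool.emplace_back(phase, i);
            for (auto& t : pool) t.join();   // barrier between phases
        }
        current = next;                      // resolve: publish new snapshot
    }
    std::printf("x0=%.1f x1=%.1f\n", current[0].x, current[1].x);
}
```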


--------------------

Other data constructs for your all-in-one system:

Target lists

Priority assignment, sorting, and picking (for both tasks and targets)

Flexible temporary state data used for persistent decisions

Map searching (and building relevant symbolized maps from the simulation's map data)

Object attribute retrieval/scanning

Visualization-oriented data (symbols for visual presentations)

Logging data templates (to facilitate tools for log playback; see the sketch after this list)
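
For that last item, here is a minimal sketch of a fixed-layout decision record (the fields and names are hypothetical); writing one binary record per decision makes it easy to build playback and inspection tools later:

```cpp
#include <cstdio>

// Hypothetical fixed-layout log record, one per AI decision.
struct DecisionLogRecord {
    unsigned turn;
    int      objectId;
    int      stateId;     // FSM state at decision time
    int      targetId;    // chosen target (-1 = none)
    float    score;       // winning priority score
};

void LogDecision(std::FILE* f, const DecisionLogRecord& r) {
    std::fwrite(&r, sizeof r, 1, f);
}

int main() {
    std::FILE* f = std::fopen("ai_log.bin", "wb");
    if (!f) return 1;
    LogDecision(f, {/*turn=*/1, /*objectId=*/42, /*stateId=*/2,
                    /*targetId=*/7, /*score=*/0.82f});
    std::fclose(f);
    // A playback tool would fread the same struct and drive the
    // visualization from it, turn by turn.
}
```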

My project was still fairly simple in the structuring of its AI logic, and I hadn't yet gotten into 'planners' and hierarchical goals/solutions/tasks (but I was headed that way because of the limitations of the simpler system).

If your AI processing requirements are such that you exceed what can be done on one multi-core CPU, then data replication/update systems will be needed for your AI engine (and that gets ugly).

----

I'm not sure how you would structure this, but I found that because of the huge amount of processing AI takes (my simulation was running in real time), you need methods of culling unneeded processing (i.e. 'quantum' priorities). The data driving that culling is usually embedded throughout the main AI logic (both the engine code and the AI 'script' data), and the culling itself can happen at many points (in my system it happened in each processing phase, in different ways).
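
One common way to realize that kind of culling (my own sketch; the numbers are made up) is to give each object an update period based on its priority and skip the expensive evaluation on turns when it isn't due:

```cpp
#include <cstdio>
#include <vector>

struct Agent {
    int id;
    int period;   // re-evaluate every 'period' turns (higher = lower priority)
};

// Only agents whose slot comes up this turn pay the full AI cost;
// the rest keep executing their previously chosen task.
void RunTurn(int turn, const std::vector<Agent>& agents) {
    for (const Agent& a : agents) {
        if (turn % a.period == a.id % a.period) {   // staggered by id
            std::printf("turn %d: full AI update for agent %d\n", turn, a.id);
        }
        // else: cheap tick only (continue the current action)
    }
}

int main() {
    std::vector<Agent> agents = {{0, 1}, {1, 4}, {2, 4}};  // periods by priority
    for (int t = 0; t < 4; ++t) RunTurn(t, agents);
}
```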


I had to have many flags to block certain types of AI processing depending on the current state/actions of the object (which included being in the middle of atomic animations, which could include locking out another object being interacted with). So that's another consideration: how closely the AI needs to interact with other game-engine functions (versus being largely independent).
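
A typical way to represent those blocking flags is a bitmask per object (again just a sketch; the flag names are invented):

```cpp
#include <cstdio>

// Hypothetical blocking flags; any set bit suppresses a class of AI work.
enum AIBlockFlags : unsigned {
    BLOCK_NONE         = 0,
    BLOCK_RETARGETING  = 1u << 0,  // mid atomic animation: can't switch tasks
    BLOCK_MOVEMENT     = 1u << 1,  // locked in an interaction with another object
    BLOCK_ALL_THINKING = 1u << 2,  // e.g. stunned
};

inline bool CanRetarget(unsigned flags) {
    return (flags & (BLOCK_RETARGETING | BLOCK_ALL_THINKING)) == 0;
}

int main() {
    unsigned flags = BLOCK_RETARGETING;    // animation in progress
    std::printf("can retarget: %d\n", CanRetarget(flags));   // 0
    flags = BLOCK_NONE;                    // animation finished
    std::printf("can retarget: %d\n", CanRetarget(flags));   // 1
}
```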
