  • 09/07/99 04:18 PM

    Hierarchical AI

    Artificial Intelligence

    Courtesy of Amit Patel

    Newsgroup: comp.ai.games
    From: andrew@cs.uct.ac.za (Andrew Luppnow)
    Date: Fri, 2 Dec 1994 10:10:50 +0200 (SAT)

    This document proposes an approach to the problem of designing the AI routines for intelligent computer wargame opponents. It is hoped that the scheme will allow the efficient, or at least feasible, implementation of opponents which are capable of formulating strategy, rather than behaving predictably according to fixed sets of simple rules.

    In the text below, "DMS" is an abbreviation for "decision-making-system". I use the term very loosely to denote any programming subsystem which accepts, as input, a "situation" and which generates, as output, a "response". The DMS may be a simple neural network, a collection of hard-coded rules, a set of fuzzy logic rules, a simple lookup table, or whatever you want it to be! Its most important feature is that it must be SIMPLE and TRACTABLE - in particular, it must accept input from a small, finite set of possible inputs and generate output which belongs in a similarly small, finite set of possible outputs.
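The post deliberately leaves the DMS abstract. As a concrete illustration, here is a minimal Python sketch of the simplest variant mentioned above - a lookup table. All class, rule, and situation names are illustrative inventions, not from the original post:

```python
# A minimal sketch of a "DMS" as described above: any subsystem that
# maps a situation drawn from a small finite set to a response drawn
# from another small finite set. Here it is just a lookup table.
class TableDMS:
    def __init__(self, rules, default):
        self.rules = rules      # dict: situation -> response
        self.default = default  # response for any unlisted situation

    def decide(self, situation):
        return self.rules.get(situation, self.default)

# Hypothetical rules for a squad-level DMS.
squad_dms = TableDMS(
    rules={
        ("enemy_near", "outnumbered"): "retreat",
        ("enemy_near", "even_odds"): "engage",
        ("no_contact", "even_odds"): "advance",
    },
    default="hold_position",
)

print(squad_dms.decide(("enemy_near", "outnumbered")))  # retreat
```

The point is only that both the key space and the value space are small and finite, which is what keeps each individual DMS tractable.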

    Some time ago I asked myself how a programmer might begin to implement the AI of a wargame which requires the computer opponent to develop a sensible military strategy. I eventually realized that simply feeding a SINGLE decision-making system with information concerning the position and status of each friendly and enemy soldier is hopelessly inefficient - it would be akin to presenting a general with such information and expecting him to dictate the movement of each soldier!

    But in reality a general doesn't make that type of decision, and neither does he receive information about the precise location of each soldier on the battlefield. Instead, he receives strategic information from his commanders, makes strategic decisions and presents the chosen strategy to the commanders. The commanders, in turn, receive tactical information and make tactical decisions based on (1) that information and (2) the strategy provided by the general.

    And so the process continues until, at the very bottom level, each soldier receives precise orders about what he and his immediate comrades are expected to accomplish.

    The important point is that the whole process can be envisaged in terms of several 'levels'. Each level receives information from the level immediately below it, 'summarises' or 'generalises' that information and presents the result to the level immediately above it. In return, it receives a set of objectives from the level above it and uses (1) this set of objectives and (2) the information from the lower level to compute a more precise set of objectives. This latter set of objectives then becomes the 'input from above' of the next lower level, and so on. In summary: information filters UP through the levels, becoming progressively more general, while commands and objectives filter DOWN through the levels, becoming progressively more detailed and precise.

    I decided that this paradigm might represent a good conceptual model for the implementation of the AI procedures in a complex strategy-based game: a "tree of DMS's" can be used to mimic the chain of command in a military hierarchy. Specifically, one might use one or more small, relatively simple DMS's for each level. The inputs for a DMS of level 'k' would be the outputs of a level (k+1) DMS and the information obtained by 'summarising' level (k-1) information. The outputs of the level k DMS would, in turn, serve as inputs for one or more level (k-1) DMS's. Outputs of the level zero DMS's would be used to update the battlefield.

                                            "Top brass" - fewer,
                                            MORE GENERAL options
                                            allow lookahead and
    Level 3        ^              o         "what-if reasoning."
                  /|\            / \
    Level 2      / | \          o   o
                   |           /|\  |\
    Level 1        |          o o o o o
                 \ | /       /| | | | |\
    Level 0       \|/       o o o o o o o   Individual soldiers -
                   V                        many options, but
                                            decision-making is
             As information                 simple and doesn't
             filters UP the                 attempt "lookahead",
             tree, it becomes               "what-if reasoning",
             more general. As               etc.
             objectives filter
             DOWN the tree,
             they become more
             detailed and precise.
    The main advantage of this scheme is that it allows the "higher levels" of the hierarchy to formulate strategy, without being overwhelmed by the immense and intractably large number of possibilities which the computer AI would have to consider if it possessed only information about individual soldiers. Indeed, at the topmost level, decisions would involve rather abstract options such as

    • "direct all military activity towards seizing territory X", or
    • "conduct wars of attrition in territories X, Y, and Z", or
    • "buy time - stick to diplomacy for the time being", or
    • "avoid direct military engagement - concentrate on disrupting enemy supply routes",
    • etc.

    Under these circumstances, it would be feasible for the computer to attempt a certain amount of "lookahead", or to consider "what-if" scenarios - something which would be out of the question if options were presented in terms of the actions of individual soldiers.
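To make the feasibility point concrete: with only a handful of abstract options, even a single ply of "what-if" reasoning is just a loop over projected outcomes. The sketch below invents a trivial outcome model (every cost and gain figure is hypothetical) purely to show the scale of the search:

```python
# With four abstract options, one ply of lookahead is four evaluations -
# trivially cheap. Per-soldier options would make the same loop explode
# combinatorially. The outcome model here is entirely made up.
STRATEGIES = [
    "seize territory X",
    "war of attrition in X, Y, Z",
    "diplomacy - buy time",
    "disrupt enemy supply routes",
]

# Hypothetical cost and gain of each strategy.
COST = {"seize territory X": 4, "war of attrition in X, Y, Z": 6,
        "diplomacy - buy time": 1, "disrupt enemy supply routes": 2}
GAIN = {"seize territory X": 7, "war of attrition in X, Y, Z": 5,
        "diplomacy - buy time": 2, "disrupt enemy supply routes": 4}

def project_outcome(state, strategy):
    """Hypothetical model of how the strategic position evolves."""
    return state + GAIN[strategy] - COST[strategy]

def best_strategy(state):
    # One ply of "what-if" lookahead over the abstract options.
    return max(STRATEGIES, key=lambda s: project_outcome(state, s))

print(best_strategy(0))  # seize territory X
```

Deeper lookahead (two or three plies, alternating with a model of the enemy's abstract options) stays tractable for exactly the same reason: the branching factor at the top of the hierarchy is tiny.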

    At the time of writing this, I haven't yet had the opportunity to explore an implementation of these ideas in a working game, but if anybody DOES enjoy some practical success with these ideas, I'd be interested in hearing from him/her!

    --- Andrew Luppnow

