Pathfinding Navigation Mesh: Wall Collision Avoidance

thecheeselover



Introduction

In our 3D game (miimii1205), we use a dynamically created navigation mesh to navigate on procedurally generated terrain. Until our last agile sprint, we relied only on the A* and string pulling algorithms. We recently added two new behaviors to the pathfinding: steering and wall collision avoidance. In this article, I will describe a simple way I found to keep agents from walking into walls.

Configuration

  • A 3D or 2D navigation mesh, as long as the 3D one is not cyclic.
  • Navigation cells and their polygonal edges, connections (to other cells), shared edges (the line segment where two connected cells meet), centroids and normals.
  • A path generated by A* and string pulling (not tested without string pulling), consisting of waypoints on the navigation mesh.
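For reference, here is a minimal Java sketch of the navigation cell data listed above. The field names are illustrative only, not our actual API; Vector3f comes from jMonkeyEngine.

import com.jme3.math.Vector3f;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the per-cell data described above (not our actual API).
public class NavigationCell {
    public final List<Vector3f> vertices = new ArrayList<>();          // Polygon vertices in order; edges run between consecutive ones
    public final List<NavigationCell> connections = new ArrayList<>(); // Connected cells
    public final List<Vector3f[]> sharedEdges = new ArrayList<>();     // For each connection, the two endpoints of the shared edge
    public Vector3f centroid;                                          // Average of the vertices
    public Vector3f normal;                                            // Normal of the cell's plane
}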

The Algorithm

The agent is the pink low-poly humanoid and the final destination is the flag.


The ideal (yet unoptimized) algorithm would be to cast an oriented rectangle between each pair of consecutive waypoints, where the rectangle's width is two times the agent's radius. Think of the agent's center position as being in the middle of that width. However, this algorithm is too complicated and too long to develop for our game, and overkill for our needs, so I came up with another one. It has its drawbacks, but it is better suited to our game.

Psst, check out this article if you haven't seen it yet.

The algorithm is the following:

  1. For each waypoint, take the current one and the next one, until the next one is the last.
  2. Iterate over the edges of the current navigation cell, which is the cell the agent's 3D position lies in. To do that, you need a fast, spatially partitioned way to determine the cell closest to a 3D point (see the sketch after this list). Our way to do it is to first partition the navigation mesh with an octree. After that, get all the cells that are close to the point, plus an extra buffer. To find the cell closest to the point, project the position onto each candidate cell using its centroid and normal. Don't forget to snap the projected position onto the cell. After that, compute the length of the resulting vector and pick the cell with the smallest one.
  3. Convert each edge to a 2D line by discarding the Y component (the UP vector).
  4. For each side, left and right (defined by the agent's position and its direction towards the next waypoint), cast a 2D line that starts from the agent's position, goes towards one of the two directions perpendicular to the direction to the next waypoint, and has a length equal to the agent's radius.
  5. If there's an intersection with a connection, and it lies on the shared part of the connection, then continue with the connected cell's edges.
  6. If there is any intersection other than the one in #5, create a new waypoint before the next waypoint. The new waypoint's position is the intersection's position translated by a length of two times the radius along the agent's current direction, treated as a 2D line. The same translation is applied to the next waypoint.
  7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going in the same direction as the agent, with a length equal to the distance between the current and next waypoint. Apply check #5. If there is an intersection with a connection and it lies on the unshared part of the connection, then apply #6 unconditionally. If there's an intersection with a simple edge, then translate the next waypoint as described in #6.
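To make step 2 concrete, here is a minimal Java sketch of the closest-cell lookup. It assumes jMonkeyEngine's Vector3f plus two hypothetical helpers, Octree.getCellsNear and NavigationCell.snapToPolygon; those names are illustrative, not our actual API.

import com.jme3.math.Vector3f;
import java.util.List;

// Sketch of step 2: find the navigation cell closest to a 3D position.
// Octree.getCellsNear and NavigationCell.snapToPolygon are hypothetical helpers.
public NavigationCell findClosestCell(Vector3f position, Octree octree, float buffer) {
    List<NavigationCell> candidates = octree.getCellsNear(position, buffer);
    NavigationCell closest = null;
    float smallestDistance = Float.MAX_VALUE;
    for (NavigationCell cell : candidates) {
        // Project the position onto the cell's plane using its centroid and normal.
        float planeDistance = position.subtract(cell.centroid).dot(cell.normal);
        Vector3f projected = position.subtract(cell.normal.mult(planeDistance));
        // Snap the projection onto the cell's polygon so points past an edge are clamped.
        Vector3f snapped = cell.snapToPolygon(projected);
        float distance = position.distance(snapped);
        if (distance < smallestDistance) {
            smallestDistance = distance;
            closest = cell;
        }
    }
    return closest;
}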


Here's a video of the algorithm in action:

 



Comments

I feel like there has to be a simpler way to accomplish this. Can you make the corners have non-zero radius during string pulling? Or treat the corners as circles, and move the string-pulled lines to be tangent to the circles (plus an additional connecting chamfer)?

13 hours ago, swiftcoder said:

I feel like there has to be a simpler way to accomplish this. Can you make the corners have non-zero radius during string pulling? Or treat the corners as circles, and move the string-pulled lines to be tangent to the circles (plus an additional connecting chamfer)?

@miimii1205 did the string pulling algorithm, so I'll tag him. As far as I know, the second option seems really complex. The easiest one would actually be the first option you proposed.

 

I'm sure there are static approaches that are better than what I just described, static in the sense that they are computed before the path generation algorithm runs. However, my algorithm is dynamic: it runs at run time and can be limited to the next two waypoints.

 

Also, the algorithm that I use to intersect two 2D lines is performant.
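For reference, a standard 2D segment intersection routine (not necessarily the exact one used here) can be written like this, using jMonkeyEngine's Vector2f:

import com.jme3.math.Vector2f;

// Standard 2D segment-segment intersection (a common textbook formulation,
// not necessarily the author's exact routine). Returns the intersection
// point of segments (p1, p2) and (p3, p4), or null if there is none.
public static Vector2f intersectSegments(Vector2f p1, Vector2f p2, Vector2f p3, Vector2f p4) {
    float d1x = p2.x - p1.x, d1y = p2.y - p1.y;
    float d2x = p4.x - p3.x, d2y = p4.y - p3.y;
    float denominator = d1x * d2y - d1y * d2x;
    if (Math.abs(denominator) < 1e-6f) {
        return null; // Parallel or collinear: treat as no single intersection point
    }
    // Solve p1 + t * d1 = p3 + u * d2 for the two segment parameters.
    float t = ((p3.x - p1.x) * d2y - (p3.y - p1.y) * d2x) / denominator;
    float u = ((p3.x - p1.x) * d1y - (p3.y - p1.y) * d1x) / denominator;
    if (t < 0f || t > 1f || u < 0f || u > 1f) {
        return null; // The infinite lines cross outside one of the segments
    }
    return new Vector2f(p1.x + t * d1x, p1.y + t * d1y);
}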

Edited by thecheeselover


Curious why you didn't use the typical nav mesh implementation where the edges of the navmesh are retracted from the walls by the agent radius, so you don't have to do runtime radius compensation? It also nicely handles a set of other problems; for example, areas too small for the agent to fit are culled away from the resulting navigation.

3 hours ago, DrEvil said:

Curious why you didn't use the typical nav mesh implementation where the edges of the navmesh are retracted from the walls by the agent radius, so you don't have to do runtime radius compensation? It also nicely handles a set of other problems; for example, areas too small for the agent to fit are culled away from the resulting navigation.

There are two reasons [that I can remember]:

  1. Our navigation meshes are generated in-house, and originally we weren't supposed to add the wall collision avoidance module. We don't want the navigation meshes to depend on each agent's radius. There is actually a way to do what you're talking about with multilayered navigation meshes.
  2. Each agent can have a different radius.

DrEvil

Posted (edited)

I understand, but typically that means layers are built for each agent radius. It's a trade-off of course, but it solves enough of the issues that it is usually worth it for most games out there, especially when there are freely usable navigation mesh libraries like Recast.

I'm mainly curious what your reasoning is, though. I have spent considerable time in the past on my hobby project experimenting with a manually built, un-retracted navigation mesh, but it was annoying enough to try to handle the agent radius at runtime that I shelved it. That was many years ago though. For various reasons I've felt like trying to revive it recently. For my particular use case (a bot in older Quake-based FPS engines), although I am parsing the collision data of the maps to generate the navigation, there is a lot of noise and trash in the output that takes considerable effort to trim out, and still more manual effort to provide contextual information (team-specific markup, etc.).

Here's a really old video showing the tile-based, Recast-driven traditional navigation mesh.

https://www.youtube.com/watch?v=5qYgRA5oINs

The problem is that although I can trim some amount of data from these maps (like brushes marked as sky boxes, etc.), the resulting data set still contains enough collision data that I get, for instance, a lot of navigation on "top" of the world, where the brush flags can't automatically be considered relevant or not. Even though the navigation inside the world is reasonable, there is a lot of garbage navigation as well.

https://imgur.com/a/r3BbBxn (etf_2fort Recast auto navmesh; without hiding a lot of the top brush faces you don't even really see the important navigation of the world)

This is another old video of the non-retracted navmesh I was experimenting with way back, once I got dynamic obstacle support working. There was a lot to like about where this was heading, even though the navmesh was manually built, but I ultimately burned out on compensating for the radius stuff at runtime. There are a lot of edge cases in a real-world 3D game map.

https://www.youtube.com/watch?v=UhSNwaTV3II

I've recently gotten re-interested in my bot and am thinking about giving this approach another go. After spending a lot of time on the automatic Recast-driven building, I'm finding that I still have to spend considerable time fixing up the data: trimming out navigation where I don't want it, marking team-specific regions, and so on, so the process doesn't save much time overall. The manual navmesh creation process is not only very fast with the way I have it set up, but it's also very precise and results in far fewer regions in the final data set. Plus, I have various shortcuts that take advantage of my problem space, such as the tendency for symmetry in the game maps the bot supports, so I can lay out half the map and then mirror all the sectors over to the other half.

Here's a map where only the green areas have been laid out, and the cyan sectors have been mirrored for the other half of the map:

https://imgur.com/a/1r1qlYS

 

Edited by DrEvil

DrEvil

Posted (edited)

There is something super appealing about the manual approach. The quality of the output is way more controllable and optimal in terms of the number and size of regions, and it even allows a number of manual "special" sectors, like floating "ledge" sectors that I use to cast downwards to make drop connections and such.

https://imgur.com/a/rTYiRTB

https://imgur.com/a/iUtnE9A

Just gotta solve that pesky radius stuff.

Edited by DrEvil

On 5/12/2018 at 12:17 PM, DrEvil said:

I understand, but typically that means layers are built for each agent radius. It's a trade-off of course, but it solves enough of the issues that it is usually worth it for most games out there, especially when there are freely usable navigation mesh libraries like Recast.

Initially, I tried what is described in this tutorial, because our project is in Java and uses jMonkeyEngine 3.1. It was such a hassle just to configure the pathfinding libraries because CritterAI is so obscure. I had to modify the source code.

What is CritterAI? According to the mentioned tutorial:

Quote
Parameter Insight

The jme3AI system uses CritterAI, which is based off Recast and Detour navigation. The author of Recast lays out a few specific rules for NavMesh creation in this blog post, which logically apply to jme3AI. Below is a translation of this post as it pertains to jme3AI.

Also, the navmesh generator just didn't work and is extremely old. Every time I fixed something related to it, something else would break. So I just rage quit and decided that we would make our own navmesh module. That also gives us more control over the AI.

Another thing to note is that our game has different worlds made of randomly generated levels, and each world's geometry is quite different from the others. For example, we use a navigation mesh for the jungle level because of its organic nature, but we can use A* directly in the savannah because it's made of voxels polygonized with the marching cubes algorithm.

Making our own library comes with downsides too; we cannot directly use the awesome features of the libGDX AI library, for example.

Furthermore, we have a lot of units with different radii: humanoids, snakes, bigger snakes, tigers, mini-bosses, bosses, legendary bosses and so on. That would make a lot of layers for the navigation mesh.

 

Now, for your problem and hobby project, more specifically the navigation mesh cells on top of buildings: what you can do is either check that each cell is directly or indirectly connected to a root cell, or check that each cell has at least one connection.

My second proposal is easier but will likely still sometimes output garbage. My first proposal is more expensive in terms of performance and also requires a human to flag the root cell, or a certain root position if you can get the closest cell to a point.
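The first proposal amounts to a flood fill over cell connections starting at the flagged root cell. A minimal sketch, reusing the illustrative NavigationCell from the article:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Sketch of the first proposal: keep only the cells reachable from a flagged root cell.
public static Set<NavigationCell> findReachableCells(NavigationCell root) {
    Set<NavigationCell> reachable = new HashSet<>();
    Deque<NavigationCell> toVisit = new ArrayDeque<>();
    reachable.add(root);
    toVisit.push(root);
    while (!toVisit.isEmpty()) {
        NavigationCell cell = toVisit.pop();
        for (NavigationCell neighbor : cell.connections) {
            if (reachable.add(neighbor)) { // add() returns false when already visited
                toVisit.push(neighbor);
            }
        }
    }
    return reachable; // Cells outside this set (e.g. rooftop islands) can be culled
}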

It's also cool that you have obstacles!

 

Our needs and problems are actually different: you work on a prebuilt level (mesh), whereas we work with a completely generated level. The best designer for our generated world is actually the generator itself, which is why it is the generator that creates the navigation mesh for our game. The only things that are not in the generator are the octree, the automatic connections (which are a lot of work, by the way) and the cell metadata.

 

What do you want to do with your navmesh generator? Is its purpose only to generate the navmeshes of several prebuilt maps, or is it meant to serve a community of modders? Do you plan to add a lot of new maps?


My day job is on http://www.creativersegame.com/ so I'm also familiar with navigating procedural data. Given the voxel nature of our game, we haven't had the need or desire to generate any navmesh data; instead we just path on the grid, since much of the pathing looks at metadata associated with the specific block types being pathed through, and that would be lost by abstracting it up to a navmesh. Some blocks allow a mob to move over them, others don't, etc. I recently added support for pathfinding for larger-than-1-block units. It ended up being relatively simple to get good results while still ultimately pathfinding at the grid level. The core change involved doing a loop-based check along the horizontal axes to confirm spacing for the desired radius, both for clearance and for whether the node is 'grounded'. It's relatively simple and allows me to keep the per-block logic we use for 1-block pathing. What is the scale of your game that you need to build nav meshes? Even pathfinding within the visible chunk range, we haven't been tempted to generate any higher-level pathing abstraction on top of the voxel data. If we did, I would probably start with a higher-level chunk-based graph, but still keep the path following and such at the block level.
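A rough sketch of the clearance check described above, in Java for consistency with the rest of the article; World.isBlockedAt and World.isSolidAt are hypothetical accessors, not the game's actual API:

// Scan a (2r + 1) x (2r + 1) horizontal footprint around a grid node to confirm
// the unit fits (clearance) and stands on solid ground (grounded).
public static boolean fitsAt(World world, int x, int y, int z, int radius, int height) {
    boolean grounded = false;
    for (int dx = -radius; dx <= radius; dx++) {
        for (int dz = -radius; dz <= radius; dz++) {
            // Every column in the footprint must be clear for the unit's full height...
            for (int dy = 0; dy < height; dy++) {
                if (world.isBlockedAt(x + dx, y + dy, z + dz)) {
                    return false;
                }
            }
            // ...and at least one column must rest on a solid block below.
            if (world.isSolidAt(x + dx, y - 1, z + dz)) {
                grounded = true;
            }
        }
    }
    return grounded;
}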

12 hours ago, DrEvil said:

My day job is on http://www.creativersegame.com/ so I'm also familiar with navigating procedural data. Given the voxel nature of our game, we haven't had the need or desire to generate any navmesh data; instead we just path on the grid, since much of the pathing looks at metadata associated with the specific block types being pathed through, and that would be lost by abstracting it up to a navmesh. Some blocks allow a mob to move over them, others don't, etc. I recently added support for pathfinding for larger-than-1-block units. It ended up being relatively simple to get good results while still ultimately pathfinding at the grid level. The core change involved doing a loop-based check along the horizontal axes to confirm spacing for the desired radius, both for clearance and for whether the node is 'grounded'. It's relatively simple and allows me to keep the per-block logic we use for 1-block pathing. What is the scale of your game that you need to build nav meshes? Even pathfinding within the visible chunk range, we haven't been tempted to generate any higher-level pathing abstraction on top of the voxel data. If we did, I would probably start with a higher-level chunk-based graph, but still keep the path following and such at the block level.

We have different worlds in our game and everything is still a work in progress and can change. Until yesterday, these were the planned worlds:

  1. Savannah - zones made up of marching cubes voxels applied like a heightmap
  2. Jungle - organic brute-force dungeon generation
  3. Goblin cave - marching cubes voxels with a lot of tunnels and with layers
  4. Sky - extremely organic and relying a lot on assets

The world we are currently working on is the jungle, which needs generated navigation meshes.

We put a lot of effort into the navmesh, and yesterday we changed the design of the jungle level generator to a classic 2D-array dungeon digger. Reading your comment made me think that maybe we wasted our time on the navmesh, because we could have just used the newly designed 2D array for the A* and string pulling. We could still use our work for the sky world, but we will probably work on that in about six months... Damn it! We try to follow Agile methodologies and still fail to avoid wasting our time. It just proves that I'm more of a bottom-up, backend / engine programmer than what I should be for our game.

 

Nice game by the way :)

 

Share this comment


Link to comment

I definitely understand what it's like to follow rabbit holes. They can feel like the right decisions in the moment (not saying it isn't in your case). My hobby AI bot represents over a decade of hobby experimentation: multiple restarts on various systems, lots of experimentation, ~4 different navigation systems. Bots are one of the few areas of modding where an aspiring A.I. developer can do solo work to get noticed in the industry, and it got me my first few jobs at Pandemic and id Software. It can be counterproductive when an actual product is your end goal, though. Sounds like you have a pretty big scope planned out. Not sure what your team size is, but best of luck with it.

1 hour ago, DrEvil said:

I definitely understand what it's like to follow rabbit holes. They can feel like the right decisions in the moment (not saying it isn't in your case). My hobby AI bot represents over a decade of hobby experimentation: multiple restarts on various systems, lots of experimentation, ~4 different navigation systems. Bots are one of the few areas of modding where an aspiring A.I. developer can do solo work to get noticed in the industry, and it got me my first few jobs at Pandemic and id Software. It can be counterproductive when an actual product is your end goal, though. Sounds like you have a pretty big scope planned out. Not sure what your team size is, but best of luck with it.

We are a small team of two. I'm the "lead" programmer and my friend is the lead designer; whilst I believe I'm a talented junior programmer, the word lead really only means that I do the most programming. I still have a lot of work to do on the maturity / professionalism side, however :/

Congrats on those jobs! My experiments with voxels and procedural generation won me all the internships I really wanted at university, and I chose one at Ludia, which is, naturally, a video game development company.

It's been an extremely constructive discussion with you, so thank you very much :)  I hope everything you have planned goes well for you too!

