
09/17/18 01:59 AM

    Effect: Area Light Shadows Part 1: PCSS

    Graphics and GPU Programming

    Vilem Otte

Welcome to the first part of a series of articles about soft shadow effects. In recent days I've been working on area light support in my own game engine, which is critical for one of the game concepts I'd like to eventually build (if time allows). For each area light, it is crucial to have proper soft shadows with a proper penumbra. For motivation, here is a screenshot with 3 area lights of various sizes:

    FirstImage.png

    Fig. 01 - PCSS variant that allows for perfectly smooth, large-area light shadows

     

Let's start the article by comparing the following 2 screenshots - one without shadows and one with:

    pcss01.png pcss05.png

    Fig. 02 - Scene from default viewpoint lit with light without any shadows (left) and with shadows (right)

     

This is the scene we're going to work with, and for the sake of simplicity all of the comparison screenshots will be taken from this exact viewpoint, with 2 different scene configurations. Let's start with a definition of how shadows are formed. Given a scene and a light we're viewing it under: the shadow umbra is present at every position that has no direct visibility to any point on the light. The shadow penumbra is present at every position that has direct visibility to some points on the light, but not all of them. There is no shadow wherever there is full direct visibility between the position and every point on the light.

Most games tend to simplify this: instead of defining a light as an area or volume, it gets defined as an infinitely small point. This gives us a few advantages:

• For a single point, visibility can be defined in a binary way - a position is either in shadow or not
• From a single point, a projection of the scene can easily be constructed such that the definition of shadow becomes trivial (a position either is or isn't occluded by other objects in the scene from the light's point of view)

From here, one can follow the idea of shadow mapping - the basic technique on which all the others used here build.

     

    Standard Shadow Mapping

Trivial, yet it should be mentioned here.

    inline float ShadowMap(Texture2D<float2> shadowMap, SamplerState shadowSamplerState, float3 coord)
    {
    	return shadowMap.SampleLevel(shadowSamplerState, coord.xy, 0.0f).x < coord.z ? 0.0f : 1.0f;
    }

Fig. 03 - Code snippet for standard shadow mapping: the depth map (the stored 'distance' from the light's point of view) is compared against the calculated 'distance' between the point currently being shaded and the light's position. 'Distance' may mean actual distance or, more likely, just the value on the z-axis in the light's view basis.
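
For context, here is a minimal sketch of how the coord parameter above can be produced. The GetShadowCoord name, the lightViewProjection matrix and the bias constant are my assumptions for illustration, not part of the snippet above:

// Minimal sketch (assumed helper, not the engine's actual code):
// transforms a world-space position into shadow-map UV + receiver depth.
float3 GetShadowCoord(float3 worldPosition, float4x4 lightViewProjection)
{
	// Transform into the light's clip space
	float4 clipPos = mul(float4(worldPosition, 1.0f), lightViewProjection);

	// Perspective divide into normalized device coordinates
	float3 ndc = clipPos.xyz / clipPos.w;

	// Remap XY from [-1, 1] to [0, 1] UV space (Y is flipped under D3D conventions)
	float2 uv = float2(ndc.x * 0.5f + 0.5f, ndc.y * -0.5f + 0.5f);

	// A small constant depth bias helps avoid self-shadowing ('shadow acne');
	// the value here is an assumption and should be tuned per scene
	return float3(uv, ndc.z - 0.001f);
}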

     

This technique is well known to everyone here, and gives us the familiar basic result:

    fig4.png

    Fig. 04 - Standard Shadow Mapping

     

    This can be simply explained with the following image:

    fig5.png

Fig. 05 - Each rendered pixel tests whether its 'depth' from the light's position (represented as the yellow dot) is greater than the value written in the 'depth' map from the light's point of view; the white lines represent the computation for each pixel.

     

Percentage-Closer Filtering (PCF)

To make shadows more visually appealing, adding soft edges is a must. This is done by simply performing NxN tests with offsets. For the sake of improved visual quality I've used shadow mapping with a bilinear filter (which requires resolving 4 samples per tap), along with 5x5 PCF filtering:

    fig6_1.png fig6_2.png

Fig. 06 - Percentage-closer filtering (PCF) results in nice soft-edged shadows; sadly, the shadow is uniformly soft everywhere

     

Clearly, neither of the above techniques does any penumbra/umbra calculation, and therefore they're not really useful for area lights. For the sake of completeness, I'm adding basic PCF source code (it isn't heavily optimized, so feel free to improve it for your own uses):

inline float ShadowMapPCF(Texture2D<float2> tex, SamplerState state, float3 projCoord, float resolution, float pixelSize, int filterSize)
{
	float shadow = 0.0f;

	// Fractional position within the texel, used for bilinear weighting
	float2 grad = frac(projCoord.xy * resolution + 0.5f);

	for (int i = -filterSize; i <= filterSize; i++)
	{
		for (int j = -filterSize; j <= filterSize; j++)
		{
			// Gather the 4 neighboring depth samples at once
			float4 tmp = tex.Gather(state, projCoord.xy + float2(i, j) * float2(pixelSize, pixelSize));

			// Compare each sample against the receiver depth (0 = in shadow, 1 = lit)
			tmp.x = tmp.x < projCoord.z ? 0.0f : 1.0f;
			tmp.y = tmp.y < projCoord.z ? 0.0f : 1.0f;
			tmp.z = tmp.z < projCoord.z ? 0.0f : 1.0f;
			tmp.w = tmp.w < projCoord.z ? 0.0f : 1.0f;

			// Bilinearly filter the 4 comparison results
			shadow += lerp(lerp(tmp.w, tmp.z, grad.x), lerp(tmp.x, tmp.y, grad.x), grad.y);
		}
	}

	// Normalize by the number of taps in the (2N+1) x (2N+1) kernel
	return shadow / (float)((2 * filterSize + 1) * (2 * filterSize + 1));
}

    Fig. 07 - PCF filtering source code

     

Representing this with an image:

    fig8.png

Fig. 08 - Image representing PCF: the pixel marked with the straight line ending in a star also calculates the shadow at neighboring pixels (i.e. it performs additional samples). The resulting shadow is then the weighted sum of the results of all the samples for the given pixel.

     

While the idea is quite basic, it is clear that larger kernels quickly become expensive. There are ways to perform separable filtering of shadow maps using a different approach to resolving the shadow test (Variance Shadow Mapping, for example), but they introduce additional problems of their own.
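
As an aside, here is a minimal sketch of the Variance Shadow Mapping test mentioned above, assuming a shadow map that stores depth and squared depth in its two channels and has been pre-filtered (e.g. by a separable blur). The function name and the variance floor are my assumptions, not code from the engine:

// Minimal VSM sketch (assumed, not the engine's actual implementation)
inline float ShadowMapVSM(Texture2D<float2> tex, SamplerState state, float3 projCoord)
{
	// First and second depth moments (depth, depth^2), pre-filtered
	float2 moments = tex.SampleLevel(state, projCoord.xy, 0.0f).xy;

	// Receiver in front of the mean stored depth - fully lit
	if (projCoord.z <= moments.x)
	{
		return 1.0f;
	}

	// Variance from the two moments, with a small floor against numerical issues
	float variance = max(moments.y - moments.x * moments.x, 0.00001f);

	// Chebyshev's inequality gives an upper bound on the lit fraction
	float d = projCoord.z - moments.x;
	return variance / (variance + d * d);
}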

     

    Percentage-Closer Soft Shadows

To understand the problem with both previous techniques, let's replace the point light with an area light in our sketch image.

    fig9.png

Fig. 09 - Using an area light introduces a penumbra and an umbra. The size of the penumbra depends on multiple factors: the distance between receiver and light, the distance between blocker and light, and the light's size (and shape).

     

To calculate plausible shadows like those in the schematic image, we need the distance between receiver and blocker, and the distance between receiver and light. PCSS is a 2-pass algorithm: it first calculates the average blocker distance, uses this value to estimate the penumbra size, and then performs some kind of filtering (often PCF, or jittered PCF, for example). In short, a PCSS computation will look similar to this:

float ShadowMapPCSS(...)
{
	// First step - average blocker depth, and the number of blockers found
	float2 blocker = PCSS_BlockerDistance(...);

	// If no blockers were found at all, the point is fully lit
	if (blocker.y < 1.0f)
	{
		return 1.0f;
	}
	else
	{
		float penumbraSize = estimatePenumbraSize(blocker.x, ...);
		float shadow = ShadowPCF(..., penumbraSize);
		return shadow;
	}
}

    Fig. 10 - Pseudo-code of PCSS shadow mapping

     

The first problem is determining the average blocker distance correctly. Since we want to limit the search area for blockers, we simply pass in an additional parameter that determines the search size. The average blocker distance is calculated by searching the shadow map for samples whose depth value is smaller than the receiver's. In my case I used the following estimation of blocker distance:

// Input parameters are:
// tex - input shadow depth map
// state - sampler state for the shadow depth map
// projCoord - projection UV coordinates and receiver depth (the depth is compared against the shadow depth map)
// searchUV - input size for the blocker search
// rotationTrig - input parameter for random rotation of the kernel samples
inline float2 PCSS_BlockerDistance(Texture2D<float2> tex, SamplerState state, float3 projCoord, float searchUV, float2 rotationTrig)
{
	// Perform N samples with pre-defined offsets and a random rotation, scaled by the input search size
	int blockers = 0;
	float avgBlocker = 0.0f;
	for (int i = 0; i < (int)PCSS_SampleCount; i++)
	{
		// Calculate sample offset (technically anything can be used here - a standard NxN kernel, random samples with scale, etc.)
		float2 offset = PCSS_Samples[i] * searchUV;
		offset = PCSS_Rotate(offset, rotationTrig);

		// Compare the sample depth with the receiver depth; if it puts the receiver into shadow, this sample is a blocker
		float z = tex.SampleLevel(state, projCoord.xy + offset, 0.0f).x;
		if (z < projCoord.z)
		{
			blockers++;
			avgBlocker += z;
		}
	}

	// Calculate average blocker depth (guarding against division by zero when no blockers were found)
	if (blockers > 0)
	{
		avgBlocker /= (float)blockers;
	}

	// To handle cases where there are no blockers, we output 2 values - the average blocker depth and the number of blockers
	return float2(avgBlocker, (float)blockers);
}

    Fig. 11 - Average blocker estimation for PCSS shadow mapping
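
The snippet above relies on a few helpers not shown in the article - PCSS_SampleCount, PCSS_Samples, PCSS_Rotate and the rotationTrig input. Here is a minimal sketch of what these might look like, purely as an assumption on my part (any disk-shaped kernel and any stable per-pixel random angle will work):

// Assumed sample kernel - a small pre-computed disk of offsets in roughly [-1, 1]
static const int PCSS_SampleCount = 8;
static const float2 PCSS_Samples[8] =
{
	float2(-0.7071f,  0.7071f), float2( 0.0000f, -0.8750f),
	float2( 0.5303f,  0.5303f), float2(-0.6250f,  0.0000f),
	float2( 0.3536f, -0.3536f), float2( 0.0000f,  0.3750f),
	float2(-0.1768f, -0.1768f), float2( 0.1250f,  0.0000f)
};

// Rotate a 2D offset by a pre-computed (cos, sin) pair
inline float2 PCSS_Rotate(float2 offset, float2 rotationTrig)
{
	return float2(offset.x * rotationTrig.x - offset.y * rotationTrig.y,
	              offset.x * rotationTrig.y + offset.y * rotationTrig.x);
}

// One possible way to build rotationTrig per pixel - interleaved gradient noise
// over the screen position (an assumption; blue noise or a rotation texture also works)
inline float2 PCSS_RotationTrig(float2 screenPos)
{
	float angle = 6.2831853f * frac(52.9829189f * frac(dot(screenPos, float2(0.06711056f, 0.00583715f))));
	return float2(cos(angle), sin(angle));
}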

     

For the penumbra size calculation we first assume that blocker and receiver are planar and parallel. The penumbra size is then based on similar triangles, determined as:

penumbraSize = lightSize * (receiverDepth - averageBlockerDepth) / averageBlockerDepth

For example, with receiverDepth = 0.8, averageBlockerDepth = 0.4 and lightSize = 0.05 (all in light-space units), the penumbra size is 0.05 * (0.8 - 0.4) / 0.4 = 0.05. This size is then used as the input kernel size for a PCF (or similar) filter. In my case I again used rotated kernel samples. Note: depending on the sample positioning, one can achieve different area light shapes. The result gives quite correct shadows, with the downside of requiring a lot of processing power for noise-free shadows (many samples) and large kernel sizes (which also require a large blocker search size). Generally this is a very good technique for small to mid-sized area lights, yet large area lights will cause problems.

    fig12_1.png fig12_2.png

    Fig. 12 - PCSS shadow mapping in practice
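
Putting the pieces together, here is a minimal sketch of how the blocker search, the penumbra estimate and the final filtering can be combined. The parameter names are mine, not the engine's exact code; the light size is used directly as the blocker search size (a simplification), and the final loop re-uses the same rotated disk samples as the blocker search instead of the bilinear PCF shown earlier:

// lightSize is the light's size expressed in shadow-map UV units (an assumption)
inline float ShadowMapPCSS(Texture2D<float2> tex, SamplerState state, float3 projCoord,
	float lightSize, float2 rotationTrig)
{
	// Step 1 - average blocker depth within a search region scaled by the light size
	float2 blocker = PCSS_BlockerDistance(tex, state, projCoord, lightSize, rotationTrig);

	// No blockers found - the receiver is fully lit
	if (blocker.y < 1.0f)
	{
		return 1.0f;
	}

	// Step 2 - penumbra size from similar triangles (planar, parallel assumption)
	float penumbraSize = lightSize * (projCoord.z - blocker.x) / blocker.x;

	// Step 3 - filter the shadow map with a kernel scaled to the penumbra size
	float shadow = 0.0f;
	for (int i = 0; i < (int)PCSS_SampleCount; i++)
	{
		float2 offset = PCSS_Rotate(PCSS_Samples[i] * penumbraSize, rotationTrig);
		shadow += tex.SampleLevel(state, projCoord.xy + offset, 0.0f).x < projCoord.z ? 0.0f : 1.0f;
	}
	return shadow / (float)PCSS_SampleCount;
}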

     

As the article is already quite long, I will leave the 2 other techniques available in my current game engine build (the first is a variant of PCSS that utilizes mip maps and allows for a slightly larger light size without impacting performance that much; the second is a sort of back-projection technique) for another article, which may eventually come out. For now, allow me to at least show a short video of the first technique in action:

     

    Note: This article was originally published as a blog entry right here at GameDev.net, and has been reproduced here as a featured article with the kind permission of the author.
    You might also be interested in our recently featured article on Contact-hardening Soft Shadows Made Fast.


