
# Calculating scissor region for deferred lighting

### #1
Members - Reputation: **432**

Posted 19 September 2011 - 10:46 AM

I'd like to limit the rendering of each light using a scissor region, but I'm having trouble understanding how to calculate the screen-space region affected by the light. I've been trying to follow this tutorial, which is the only source I can find that talks about using scissor testing for deferred shading, but it isn't the easiest to understand.

Does anyone have better references or explanations on how to calculate the region to scissor for a given light? I'm also open to other techniques besides scissor tests, but since I'm already using scissor testing, it seemed a good fit.

### #2
Members - Reputation: **325**

Posted 19 September 2011 - 11:38 AM

```glsl
vec4 top_right_light_position = projection_matrix * view_matrix *
    vec4(light_position.x + light_radius, light_position.y + light_radius,
         light_position.z + light_radius, light_position.w);

vec4 bottom_left_light_position = projection_matrix * view_matrix *
    vec4(light_position.x - light_radius, light_position.y - light_radius,
         light_position.z - light_radius, light_position.w);
```

You then know the bounds for the rectangle by using the x and y of each corner and can define a scissor rectangle. I'm sure there's a much better way to do this, though, and I haven't tested it yet. Just figured I'd give you some inspiration to get the wheels turning.
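To make that concrete, here is a minimal self-contained sketch of the idea (this is not glm; `Vec3`, `projectToNdc`, and `lightScissorNdc` are made-up helpers, and it assumes the light's bounding box lies entirely in front of the camera): project the corners of the light's bounding box and take the min/max in normalized device coordinates.

```cpp
#include <algorithm>
#include <cmath>

// Minimal stand-in vector type (not glm).
struct Vec3 { float x, y, z; };

// Project a view-space point to normalized device coordinates with a
// standard perspective projection (fovY in radians, aspect = width/height).
// Assumes the point is in front of the camera (z < 0, OpenGL convention).
Vec3 projectToNdc(const Vec3& p, float fovY, float aspect) {
    float f = 1.0f / std::tan(fovY * 0.5f);
    float w = -p.z;  // perspective divide term
    return { (f / aspect) * p.x / w, f * p.y / w, 0.0f };
}

struct Rect { float minX, minY, maxX, maxY; };

// Conservative scissor rectangle in NDC: project all 8 corners of the
// light's bounding box and take the min/max in x and y.
Rect lightScissorNdc(const Vec3& light, float radius, float fovY, float aspect) {
    Rect r = { 1.0f, 1.0f, -1.0f, -1.0f };
    for (int i = 0; i < 8; ++i) {
        Vec3 corner = { light.x + ((i & 1) ? radius : -radius),
                        light.y + ((i & 2) ? radius : -radius),
                        light.z + ((i & 4) ? radius : -radius) };
        Vec3 ndc = projectToNdc(corner, fovY, aspect);
        r.minX = std::min(r.minX, ndc.x); r.maxX = std::max(r.maxX, ndc.x);
        r.minY = std::min(r.minY, ndc.y); r.maxY = std::max(r.maxY, ndc.y);
    }
    // Clamp to the screen.
    r.minX = std::max(r.minX, -1.0f); r.maxX = std::min(r.maxX, 1.0f);
    r.minY = std::max(r.minY, -1.0f); r.maxY = std::min(r.maxY, 1.0f);
    return r;
}
```

Note this rectangle is conservative (a box around a sphere), and it breaks down when the camera is inside or very close to the light volume, which is why the tangent-plane math discussed later in this thread gives a tighter result.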

### #3
Members - Reputation: **461**

Posted 19 September 2011 - 11:50 AM

### #4
Members - Reputation: **325**

Posted 19 September 2011 - 11:55 AM

But with the radius check and what you're saying, the vertex and fragment shaders still touch every pixel for lighting.

The scissor op removes those pixels that are not needed.

It's significantly faster, I think.

### #5
Members - Reputation: **1199**

Posted 19 September 2011 - 04:35 PM

For their method, though, do what the other guy said and just draw a 3D box around the x, y, z of your light, sized big enough to cover how far the light shines, and it will project onto the screen as the quad you want. You still won't need scissor testing, as the pixels that the box projects to on screen are the only pixels that will be drawn and shaded anyway.

### #6
Members - Reputation: **117**

Posted 20 September 2011 - 02:10 AM

Basically you draw an enclosed model that completely surrounds the light (it can be a cube or a rough sphere). Then, while rendering this volume, you use stencil operators to find which pixels will be illuminated (or shadowed, for shadow volumes).

There are 3 possibilities for a pixel, depending on which sides of the volume (front side or back side relative to the camera) are rendered:

- If both sides are rendered (passed the depth test), then the pixel is behind the light volume and too far away. It won't be illuminated.

- If the back side is not rendered while the front side is rendered, the pixel is inside the light volume, so it will be illuminated.

- If neither side is rendered (they couldn't pass the depth test, or the pixel isn't covered by them), then it is not illuminated either.

Alternatively, you can use two-sided stencil tests: increase the stencil buffer while rendering the front faces and decrease it while rendering the back faces. If a stencil value is 1 at the end, it means only the front side was rendered, and that pixel will be illuminated. Finally, render a full-screen quad to render the light.
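The three cases can be sketched as a pure function with hypothetical names; this only illustrates the classification logic, not real stencil-buffer code (depth here grows away from the camera):

```cpp
// A pixel is inside the light volume (and thus illuminated) when the scene
// surface lies between the volume's front and back faces: the front face
// passes the depth test (front < scene) but the back face fails.
bool pixelIlluminated(float sceneDepth, float frontFaceDepth, float backFaceDepth) {
    bool frontRendered = frontFaceDepth < sceneDepth; // front face passes depth test
    bool backRendered  = backFaceDepth  < sceneDepth; // back face passes depth test
    // Both rendered: surface is behind the whole volume -> not lit.
    // Neither rendered: surface is in front of the volume -> not lit.
    // Only front rendered: surface is inside the volume -> lit.
    return frontRendered && !backRendered;
}
```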

### #7
Members - Reputation: **432**

Posted 21 September 2011 - 09:34 PM

I've started trying to read the "Scissor Optimization" chapter from the Mechanics of Robust Stencil Shadows article on Gamasutra, which deals exactly with the problem I'm having of using scissor regions to limit light rendering. However, I so far haven't been able to follow along very well. The article makes assumptions and pulls in equations that I don't know about, and doesn't really explain what it's doing or why the equations are important.

I really want to understand the "why" of what I'm doing, and not just copy/paste math equations, so I'm still searching for solutions...

### #8
Members - Reputation: **1199**

Posted 22 September 2011 - 03:58 PM

Drawing spheres as we suggested **IS** scissor testing. Just draw a sphere where your light is, and the only pixels that will be lit are the ones that the sphere projects to on screen.

> However, as expected, performance is terrible when the number of lights increases, as every light is currently being rendered as a full-screen quad.

I think you missed the deferred rendering memo. You draw a sphere for each light. When you really need to optimize, you draw portions of your screen with multiple lights at once using scissor tests:

Assuming 8 lights in your scene:

1. Draw the sphere; the sphere projected on screen basically gives you the "scissor pixels," the only ones the light will affect.

2. Light those pixels and ADD blend them to the scene.

3. Step 1 for next light.

8 Blend operations.

Optimized:

1. Figure out (like that mechanics article, or basic ray tracing) what lights are in what part of your screen image.

2. Assuming your screen is cut in 4, say 2 lights are in each quadrant. You draw 4 quads (upper left, upper right, etc.) and send your pixel shader the 2 light locations for each quad you draw (the lights that actually project onto, and only influence, the pixels in that specific quadrant).

3. Light those pixels and ADD blend them to scene.

4. Step 1 for next quadrant.

4 Blend operations, because you calculated lighting for 2 lights at once.

For now, just settle for the first version's 8 blends.
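The optimized variant boils down to a binning step. Here is a hedged sketch (the types and function name are made up, and how you obtain each light's screen-space rectangle is up to you): collect, per screen quadrant, the lights whose rectangle overlaps it, then draw one quad per quadrant shading just those lights.

```cpp
#include <vector>

// A light's screen-space extent in NDC ([-1, 1] on both axes).
struct NdcRect { float minX, minY, maxX, maxY; };

// Quadrants: 0 = lower-left, 1 = lower-right, 2 = upper-left, 3 = upper-right.
// Returns, for each quadrant, the indices of the lights whose rectangle
// overlaps it. Each quadrant is then lit with one quad draw that loops
// over just its own lights, cutting the number of blend operations.
std::vector<std::vector<int>> binLightsIntoQuadrants(const std::vector<NdcRect>& lights) {
    std::vector<std::vector<int>> bins(4);
    for (int i = 0; i < (int)lights.size(); ++i) {
        const NdcRect& r = lights[i];
        bool left = r.minX < 0.0f, right = r.maxX > 0.0f;
        bool down = r.minY < 0.0f, up    = r.maxY > 0.0f;
        if (left  && down) bins[0].push_back(i);
        if (right && down) bins[1].push_back(i);
        if (left  && up)   bins[2].push_back(i);
        if (right && up)   bins[3].push_back(i);
    }
    return bins;
}
```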

### #9
Members - Reputation: **432**

Posted 22 September 2011 - 04:20 PM

> Drawing spheres as we suggested **IS** scissor testing. Just draw a sphere where your light is, and the only pixels that will be lit are the ones that the sphere projects to on screen.

What you're describing is using stencil testing, right? Correct me if I'm wrong, but isn't that just another method of doing the same thing as screen-space scissor testing? I don't understand why you would need both. A stencil test would show you exactly where to render based on the "shadow" created by the sphere geometry, but requires an extra pass to render the sphere geometry. A scissor test limits rendering to a particular rectangle on the screen, with no extra pass, but requires some fancy math to calculate the extent of the light on the screen. The end result should be roughly the same either way.

### #10
Crossbones+ - Reputation: **2670**

Posted 22 September 2011 - 09:01 PM

> I've started trying to read the "Scissor Optimization" chapter from the [Mechanics of Robust Stencil Shadows](http://www.gamasutra.com/view/feature/2942/the_mechanics_of_robust_stencil_.php?page=6) article on Gamasutra, which deals exactly with the problem I'm having of using scissor regions to limit light rendering. However, I so far haven't been able to follow along very well. The article makes assumptions and pulls in equations that I don't know about, and doesn't really explain what it's doing or why the equations are important.
>
> I really want to understand the "why" of what I'm doing, and not just copy/paste math equations, so I'm still searching for solutions...

That article is self-contained. The only assumption it makes is that you know what a plane is and what a dot product is, and it does not pull in equations from out of nowhere. If you really want to understand how to calculate the proper scissor rectangle for a light source, then that article is the right place to look.

I work on this stuff: C4 Engine | The 31st | Mathematics for 3D Game Programming and Computer Graphics | Game Engine Gems | OpenGEX

**Kickstarter in progress** for our Halloween FPS game, *The 31st*.

### #11
Members - Reputation: **432**

Posted 22 September 2011 - 10:16 PM

> That article is self-contained. The only assumption it makes is that you know what a plane is and what a dot product is, and it does not pull in equations from out of nowhere. If you really want to understand how to calculate the proper scissor rectangle for a light source, then that article is the right place to look.

I have no doubt that the article is sound, I'm just having a hard time understanding it. For example, it mentions that, in order to calculate the coordinates for the planes bounding the light, the following two equations must be satisfied: **T** · **L** = r and Nx^2 + Nz^2 = 1.

But so far I don't have the slightest idea where these two conditions come from. I did some reading on dot products today to see if they had some other property I didn't know of, and found the geometric form, which might be what the first equation is using, but other than that I'm still swimming in the dark. Again, no fault of the article, just a lack in my understanding. I just haven't yet found what I'm missing for it to make sense.

### #12
Crossbones+ - Reputation: **2670**

Posted 22 September 2011 - 10:40 PM

The dot product between a normalized plane and a point gives you the signed perpendicular distance from the point to the plane. The equation **T** · **L** = r means that the distance between the tangent plane **T** and the light position **L** is equal to the radius r of the light source. The equation Nx^2 + Nz^2 = 1 just means that the normal to the plane is unit length, where we left out the y coordinate because it is known to be zero.
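Putting the two constraints together, you can solve for the tangent-plane normal directly. A small sketch following the same idea (the function name is made up; **L** = (Lx, Ly, Lz) is the light position in view space, and we work in the x-z plane, so Ly is not needed and Lz is assumed nonzero):

```cpp
#include <cmath>

// Solve for plane normals N = (Nx, 0, Nz) satisfying both
//   N . L = r          (the plane is tangent to the light sphere)
//   Nx^2 + Nz^2 = 1    (the normal is unit length; Ny = 0)
// Substituting Nz = (r - Nx*Lx) / Lz into the unit-length constraint gives
// a quadratic in Nx:
//   (Lx^2 + Lz^2) Nx^2 - 2 r Lx Nx + (r^2 - Lz^2) = 0
// The two roots correspond to the two tangent planes of the light.
// Returns false when the discriminant is negative (no real tangent planes,
// e.g. the camera is inside the light sphere).
bool tangentPlaneNormalsX(float Lx, float Lz, float r,
                          float outNx[2], float outNz[2]) {
    float a = Lx * Lx + Lz * Lz;
    float b = -2.0f * r * Lx;
    float c = r * r - Lz * Lz;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return false;
    float s = std::sqrt(disc);
    for (int i = 0; i < 2; ++i) {
        outNx[i] = (-b + (i ? -s : s)) / (2.0f * a);
        outNz[i] = (r - outNx[i] * Lx) / Lz;
    }
    return true;
}
```

Each returned normal defines a plane through the camera position that just grazes the light sphere; intersecting those planes with the near plane gives the left and right scissor bounds.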


### #13
Members - Reputation: **432**

Posted 22 September 2011 - 11:03 PM

Wow... awesome, thank you for that explanation! That is the first I've read of a dot product doing that. I will start through the article again with that in mind.

### #14
Members - Reputation: **1199**

Posted 23 September 2011 - 01:06 AM

> What you're describing is using stencil testing, right?

No. I think deferred is a bit over your head. No offense, but the reason you're not getting a lot of this stuff is that you're skipping from beginner to advanced with no intermediate.

As I said, the whole optimization that everyone here, including you, is talking about is: for each light, I don't want to run a pixel shader on pixels that are not affected by the light. How do you do that? Simplest way: know how big of a 3D area in the world your light affects. So draw a sphere in your world; wherever surfaces collide with that sphere (use GL_DEPTH_TEST as GL_GREATER), the sphere ("light") is actually hitting a surface. Since the sphere is drawn right over the screen, those are the exact pixels you need to light. So while you draw your sphere, you aren't drawing the sphere itself; you still run your pixel shader and pass the light position etc., but the pixel shader only shades the fragments being sent (the ones from the light's sphere that collide with and hit surfaces).

Look up some more tutorials; I swear every one I read talked about that. Your scissor test is going to do NOTHING faster than the first method I posted UNLESS you actually figure out and do method 2 and compute lighting in grid regions for each light in 1 pass for multiple lights. Just do the method we told you and you're fine. All you have to do is draw a 3D sphere at the light's position and scale it big enough. If it's a small desk lamp, draw the sphere at like 3 units; a streetlight, 50 units, etc.

### #15
Members - Reputation: **325**

Posted 23 September 2011 - 08:23 AM

Which do you think would be faster: drawing a 16-triangle sphere where each vertex has to be multiplied by a set of matrices?

Or computing the edges of the scissor region by taking the light position and adding the light's radius in the strafe and up directions, to get the top and right edges of the light's volume of effect projected into 2D space?

I agree that drawing spheres to set a stencil region is fairly easy... perhaps I'll do a test. Here's the code I was thinking of:

```cpp
glm::mat4 view_matrix = camera->GetViewMatrix();
glm::mat4 projection_matrix = camera->GetProjectionMatrix();
// Camera right/up vectors, pulled from the view matrix
glm::vec3 right_vector = glm::normalize(glm::vec3(view_matrix[0][0], view_matrix[1][0], view_matrix[2][0]));
glm::vec3 up_vector = glm::normalize(glm::vec3(view_matrix[0][1], view_matrix[1][1], view_matrix[2][1]));

for (unsigned int i = 0; i < light_count; i++)
{
    auto light = lights[i];
    glm::vec3 light_position = light->GetPosition();
    float light_radius = light->GetRadius();

    // Project the right and top extents of the light's sphere. Note the
    // vec4 with w = 1 and the perspective divide: a mat4 can't multiply a
    // vec3 directly, and clip coordinates must be divided by w.
    glm::vec4 right_side = projection_matrix * view_matrix * glm::vec4(light_position + (right_vector * light_radius), 1.0f);
    right_side /= right_side.w;
    glm::vec4 up_side = projection_matrix * view_matrix * glm::vec4(light_position + (up_vector * light_radius), 1.0f);
    up_side /= up_side.w;

    // Bounds in NDC; mirror the offsets to get the left/bottom edges.
    float scissor_right = right_side.x;
    float scissor_up = up_side.y;
    float scissor_left = up_side.x - (right_side.x - up_side.x);
    float scissor_down = right_side.y - (up_side.y - right_side.y);
}
```

If someone could double-check my math: I believe that will get you the scissor region; you could then just draw a rectangle that size in ortho mode into a light FBO and you should be set.

### #16
Members - Reputation: **432**

Posted 23 September 2011 - 09:15 AM

The reason for my confusion is that I've already tried restricting light rendering to small regions of the screen using scissor testing, and achieved better performance. This article on deferred shading uses scissor testing, but never mentions rendering spheres around the light sources. All lights are rendered as fullscreen quads, but the scissor test limits shader execution to the portion of the screen where the light is located.

I'm certainly not opposed to trying the sphere method, as it sounds fairly simple, I just don't understand why you think both are required for deferred shading to work.

You mention that the method you describe doesn't use stencil testing. How else would you limit shader execution to an area of the screen masked by the rendered sphere? I've tried looking up more tutorials, as you suggest, but the only alternative I've found to scissor regions is using stencil testing, using an extra pass to render the light sphere to the stencil buffer, and then doing stencil failure tests to determine which pixels to run the shaders on. I'd love to see the tutorials you refer to.

### #17
Members - Reputation: **1199**

Posted 23 September 2011 - 03:26 PM

> Which do you think would be faster, drawing a 16 triangle sphere where each vertex has to be multiplied by a set of matrices?

> I'm certainly not opposed to trying the sphere method, as it sounds fairly simple,

No, what I'm getting at is that it's easier from his perspective; I use what I know vs. copying a function that doesn't make any sense to me. But in the theory of deferred shading, yes, they will take the EXACT same time, unless you're doing 1,000+ disco lights. Also, Pablo, the original method (the one I'm proposing) is faster IF IF IF you're doing your stencil region 1 light at a time. How many operations does it take to project a low-poly sphere of, say, 100 polys? 100 * 32 instructions = 3200 instructions. Let's say his lighting shader is 32 instructions per pixel, and that includes pixels in his scissor test that are still NOT even receiving light. So if your scissor region has more than 100 pixels that still won't receive light (which is going to happen), then you're still running your pixel shader on pixels that amount to nothing, AND still wasting the blending operation, which is the biggest concern for performance.

As to your question about the sphere: well, I have nothing to do at work atm, so damn, if you don't get this then I give up.

### #18
Members - Reputation: **1199**

Posted 23 September 2011 - 03:32 PM

### #20
Members - Reputation: **432**

Posted 26 September 2011 - 11:00 PM

After this explanation and a bit more reading, I am having a much easier time understanding the math used in the article. The only question I have at this point is more abstract.

Why is a quadratic equation being used to determine the extents of the light in screen space? I know what a quadratic equation looks like, and I understand how the shape makes sense for finding the two equation solutions that define the two sides of the light source. What I'm looking for is the proof or explanation that shows why the quadratic equation gives valid results for this use. I want to actually understand how this works as much as possible. Most of the quadratic equations I've seen are curved, but I would assume this calculation to require straight lines from the camera to the light edges. I'm guessing there is a variation on quadratic equations that mimics this?

I've been reading what I can find on quadratic equations all evening, but haven't found much beyond the typical curve examples from math class. Any information or links to explain how/why they are being used here would be awesome.
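For what it's worth, the quadratic has nothing to do with curved lines; it falls out of intersecting the two straight-line constraints from the article. A sketch of the algebra, using the article's symbols:

```latex
% Tangent plane N = (N_x, 0, N_z), light position L = (L_x, L_y, L_z), radius r.
\begin{aligned}
\text{Tangency:}\quad    & N_x L_x + N_z L_z = r \\
\text{Unit normal:}\quad & N_x^2 + N_z^2 = 1 \\
\text{Substitute } N_z = \frac{r - N_x L_x}{L_z} \text{ into the second equation:}\quad
                         & N_x^2 + \frac{(r - N_x L_x)^2}{L_z^2} = 1 \\
\text{Multiply by } L_z^2 \text{ and collect terms:}\quad
                         & (L_x^2 + L_z^2)\,N_x^2 - 2 r L_x N_x + (r^2 - L_z^2) = 0
\end{aligned}
```

The quadratic formula then yields exactly two roots for Nx, one for each straight tangent plane (the left and right edges of the light on screen); no curve is ever drawn.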