
# OpenGL shadow sampler comparison

## 5 posts in this topic

I'm doing cascaded shadow mapping and need some confirmation that I'm doing the right thing. The shadow mapping works (mostly), but at a few angles the shadows vanish.

The sampler is set up like this:

GLCALL(glGenSamplers(1, &mTextureSampler));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE));
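// GL_TEXTURE_COMPARE_FUNC is left at its default, GL_LEQUAL, so a lookup yields 1.0 where the reference depth <= the stored depth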
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE));
GLCALL(glBindSampler(OpenGLTexture::TEXTURE_UNIT_SHADOW_DIRECTIONAL, mTextureSampler));

I do depth comparison rather than distance comparison; my depth textures thus contain depth values.

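// for a sampler2DArrayShadow lookup, texture() expects P.xy = texcoords, P.z = array layer, P.w = reference depth for the comparison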
vec4 projCoords = UnifDirLightPass.mVPMatrix[index] * vec4(worldPos, 1.0);
projCoords.w    = projCoords.z - DEPTH_BIAS;
projCoords.z    = float(index);
float visibility = texture(unifShadowTexture, projCoords);

float angleNormal = clamp(dot(normal, UnifDirLightPass.mLightDir.xyz), 0.0, 1.0);

fragColor = vec4(diffuse, 1.0) * visibility * angleNormal * UnifDirLightPass.mLightColor;

"unifShadowTexture" is of type "sampler2DArrayShadow", and "UnifDirLightPass.mVPMatrix[]" contains the bias * lightProj * lightView matrix for each split.

1. With MIN/MAG filtering set to GL_LINEAR, shouldn't I also get visibility = 0.5 for some samples? All I see is either 0.0 or 1.0.

2. Is the comparison in texture() valid? I believe I read somewhere that depth values aren't stored linearly; does this create a problem when I compare the stored depth against bias * lightProj * lightView * worldPosition?

3. For what reason would you use distance comparison (i.e. writing the squared distance to the depth texture) over depth comparison for shadow mapping? It just seems like extra computation.

4. I am using shadow samplers, which give me 0.0 or 1.0 values. How can I accomplish soft shadows for directional and point lights with such binary values to work with?

5. Why would you not use shadow samplers over normal samplers in shadow mapping?

Thanks

Edited by KaiserJohan

##### Share on other sites

1. With MIN/MAG filtering set to GL_LINEAR, shouldn't I also get visibility = 0.5 for some samples? All I see is either 0.0 or 1.0.

The result of the texture sampler (you have a shadow sampler, which does a depth comparison) is either in shadow or not in shadow (use PCF or other filtering to get soft shadows). The linear filtering is used to get the right depth value for the comparison, but the final result is only the binary result of comparing the filtered depth value against your reference value.

4. I am using shadow samplers, which give me 0.0 or 1.0 values. How can I accomplish soft shadows for directional and point lights with such binary values to work with?

To create soft shadows you can't use standard texture filtering methods (e.g. linear filtering); they will not really work. An often-used approach is to take multiple shadow samples and calculate the shadow value yourself (e.g. PCF); I use around 12-24 samples per rendered pixel to soften the shadows in my engine. Look out for hardware-supported filtering too (again, PCF).
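To make that concrete, here is a minimal GLSL sketch of manual PCF over a sampler2DArrayShadow, assuming the unifShadowTexture/projCoords setup from the first post; the 3x3 kernel and the shadowMapSize parameter are illustrative assumptions, not a tuned filter:

float pcfShadow(sampler2DArrayShadow shadowMap, vec4 projCoords, float shadowMapSize)
{
    vec2 texelSize = vec2(1.0 / shadowMapSize);
    float sum = 0.0;
    // 3x3 kernel: shift the texcoords by whole texels and let the shadow sampler
    // do the depth comparison (plus its own 2x2 filtering) for every tap
    for (int y = -1; y <= 1; ++y)
    {
        for (int x = -1; x <= 1; ++x)
        {
            vec4 coords = projCoords;
            coords.xy += vec2(x, y) * texelSize;
            sum += texture(shadowMap, coords);
        }
    }
    return sum / 9.0;   // average of the nine comparison results
}

You would call it as visibility = pcfShadow(unifShadowTexture, projCoords, ...) in place of the single texture() lookup, passing whatever resolution your shadow map actually has.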

5. Why would you not use shadow samplers over normal samplers in shadow mapping?

Because the hardware might be optimized to do the comparison of the shadow values, and the hardware may already support some degree of PCF to soften your shadows. It all comes down to better out-of-the-box hardware support.
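For contrast, a small sketch of what the lookup looks like without a shadow sampler, i.e. through a plain sampler2DArray whose compare mode is GL_NONE; unifShadowDepth and the helper name are hypothetical, and you get no hardware comparison or PCF for free this way:

uniform sampler2DArray unifShadowDepth;   // same depth texture, but sampled without GL_COMPARE_REF_TO_TEXTURE

float manualCompare(vec3 texCoordAndLayer, float refDepth)
{
    // fetch the stored depth and do the comparison yourself in the shader
    float storedDepth = texture(unifShadowDepth, texCoordAndLayer).r;
    return refDepth <= storedDepth ? 1.0 : 0.0;
}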

Edited by Ashaman73

##### Share on other sites

1. With MIN/MAG filtering set to GL_LINEAR, shouldn't I also get visibility = 0.5 for some samples? All I see is either 0.0 or 1.0.

The result of the texture sampler (you have a shadow sampler, which does a depth comparison) is either in shadow or not in shadow (use PCF or other filtering to get soft shadows). The linear filtering is used to get the right depth value for the comparison, but the final result is only the binary result of comparing the filtered depth value against your reference value.

I'm not sure about GL, but this is actually how you enable hardware-accelerated PCF in D3D: the comparison happens first, on 4 unfiltered depth values, and then those 0/1 comparison results are linearly filtered.
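Conceptually, one GL_LINEAR lookup through a shadow sampler then behaves like the sketch below (not actual driver code): d00..d11 are the four nearest stored depths and f the bilinear fractions, and it is the 0/1 comparison results that get blended, never the depths themselves:

float hwPcf(float refDepth, float d00, float d10, float d01, float d11, vec2 f)
{
    // compare first...
    float c00 = refDepth <= d00 ? 1.0 : 0.0;
    float c10 = refDepth <= d10 ? 1.0 : 0.0;
    float c01 = refDepth <= d01 ? 1.0 : 0.0;
    float c11 = refDepth <= d11 ? 1.0 : 0.0;
    // ...then bilinearly filter the comparison results
    return mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
}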


##### Share on other sites

Is there no way to increase the number of samples used for hardware-accelerated PCF? It still looks very blocky with GL_LINEAR in OpenGL. Doing 12 samples software-style, does that mean 12 texture lookups? That sounds expensive.


##### Share on other sites
Yep. It's probably good to have a few different shaders for low-end vs. high-end GPUs.
But 12 HW-PCF samples are actually 48 texels' worth of data (each hardware-filtered lookup compares a 2x2 footprint), so it's pretty good value ;)

##### Share on other sites

Doing 12 samples software-style, does that mean 12 texture lookups?

You can use branching to your benefit. First do a quick test with 4 lookups; if all of them are either inside or outside the shadow, you can mark the pixel as shadowed/unshadowed, and if you get a mixed result, look up X more shadow texels. The benefit comes from how GPUs work (at least some of them; it depends on the hardware). GPUs often group their processing units into wavefronts (or warps, cells, whatever), which process multiple inputs as a single unit (they do the same work at the same time). This is one reason branching sometimes hurts: if some units of the group need to do something else, the rest of the group has to wait, and the total processing time of the whole group increases.

Nevertheless, in our case we want to optimize from, let's say, 20 texel accesses down to just 4 (sometimes). That is, if even one unit of a wavefront hits a mixed shadow result, all pixels need the same (worst-case) time; but if all units need only 4 texel accesses, you suddenly save a lot of processing time for that wavefront. A pixel wavefront is e.g. an 8x8 block, though this really depends on the hardware architecture, and the probability that an 8x8 block is completely inside or outside the shadow is quite high. Instead of, say, 20 texel accesses in the worst case, you end up with an average of ~12 texel accesses (assuming a 50% hit rate => 4 + 16*50%). A sketch of the idea follows below.
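A rough sketch of that early-out idea, reusing the pcfShadow() sketch from earlier as the full kernel; the four corner offsets and the shadowMapSize parameter are illustrative assumptions:

float earlyOutShadow(sampler2DArrayShadow shadowMap, vec4 projCoords, float shadowMapSize)
{
    vec2 texelSize = vec2(1.0 / shadowMapSize);

    // quick test: four spread-out comparisons around the receiver
    float quick = 0.0;
    quick += texture(shadowMap, vec4(projCoords.xy + vec2(-1.5, -1.5) * texelSize, projCoords.zw));
    quick += texture(shadowMap, vec4(projCoords.xy + vec2( 1.5, -1.5) * texelSize, projCoords.zw));
    quick += texture(shadowMap, vec4(projCoords.xy + vec2(-1.5,  1.5) * texelSize, projCoords.zw));
    quick += texture(shadowMap, vec4(projCoords.xy + vec2( 1.5,  1.5) * texelSize, projCoords.zw));

    // all four agree: treat the pixel as fully lit or fully shadowed and stop here
    if (quick <= 0.0 || quick >= 4.0)
        return quick * 0.25;

    // mixed result: fall back to the full kernel
    return pcfShadow(shadowMap, projCoords, shadowMapSize);
}

Whether this actually wins anything depends on how coherent the shadow edges are within a wavefront, exactly as described above.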

The practical upshot:

If you work on a console, you have access to detailed GPU architecture information and can tune for it accordingly. If you work on the PC with unknown hardware, just build in an option that lets the user choose the shadow smoothness (from 1 sample up to 48 samples) and decide for themselves what is best (performance vs. quality).

Edited by Ashaman73
