Paraboloid mapping in view space


Viik    252
I can't get paraboloid mapping to work in view space. It works as it should when both the paraboloid's translation-rotation matrix and the vertex positions are in world space, but not when I convert both into view space. The code is basically the same as in Jason's great article. Thank you Jason! The vertex shader looks like this, and it works just fine when everything is in world space:
float4 toWorld  = mul(float4(input.pos.xyz, 1), object_to_world);
float4 clip_pos = mul(toWorld, mWVP); // mWVP is the paraboloid's inverse translation-rotation matrix

clip_pos = clip_pos / clip_pos.w;
clip_pos.z = clip_pos.z * direction;  // direction = +1/-1 selects the front or back hemisphere

float L = length(clip_pos.xyz);       // distance from the paraboloid origin
clip_pos = clip_pos / L;              // normalize onto the unit sphere

output.z_value = clip_pos.z;          // used for culling geometry behind the paraboloid

clip_pos.z = clip_pos.z + 1;          // paraboloid projection: (x, y) / (z + 1)
clip_pos.xy = clip_pos.xy / clip_pos.z;

clip_pos.z = L / 500;                 // depth normalized by the far distance (500)
clip_pos.w = 1;

output.clip_pos = clip_pos;

return output;


Now I want to do it in view space so it goes like this at the beginning:
float4 toWorld  = mul(float4(input.pos.xyz, 1), object_to_world);
float4 toView   = mul(toWorld, viewM);  // viewM is the project's view matrix
float4 clip_pos = mul(toView, mWVPv);   // mWVPv = paraboloid's inverse translation-rotation matrix multiplied by the view matrix


It does rasterize as a paraboloid, but the content of the rasterization changes as I move the camera. I don't understand why, as both the vertices and the paraboloid should be in view space at this point. Does anybody have an idea why this happens and how it can be solved? Maybe I need to first move the paraboloid's position and direction into view space, then build a matrix from them and use it as a basis. I'm not sure that will work, and it would basically be similar to what I have right now, if I'm not missing something here.

Jason Z    6436
What are you trying to do in view space? I'm not entirely clear what the actual symptoms of the issue are - can you further describe what isn't working?

About the article: I'm happy you were able to use it! Did you use the stand-alone article, or the section from the D3D10 book?

Viik    252
Quote:
 What are you trying to do in view space?

Shadow mapping in a light pre-pass renderer. I'm restoring the view-space position of vertices later in the process; right now I'm just checking whether it will work in view space at all.

Quote:
 I'm not entirely clear what the actual symptoms of the issue are - can you further describe what isn't working?

The rasterized image of the scene moves as I move the camera, while it should remain static since neither the vertices nor the paraboloid basis is moving.

Quote:
 Did you use the stand-alone article, or the section from the D3D10 book?

I think it was the stand-alone article here on gamedev.net; I implemented shadow mapping for omni lights using it about a year ago.

I think I really need to take the paraboloid's position and rotation separately, move them into view space, then construct a matrix from them and use it as a basis, not just multiply the paraboloid's matrix by the view matrix as I'm doing now.

Viik    252
I tried moving both the position and direction of the paraboloid into view space and building a kind of LookAt matrix from them. It behaves differently, but the image still moves as the camera moves.

I have a suspicion that in view space the translation of vertices into the paraboloid's space doesn't work as it should. By the idea of the algorithm, the translation turns the paraboloid's position into the new origin of the scene, and vertices are positioned relative to it. But in view space the axes are different, and the translation doesn't move a vertex properly in relation to the paraboloid's position.

Jason Z    6436
I haven't done much with deferred rendering/light pre-pass rendering, so unfortunately I can't comment too much about it. However, from what I remember, the paraboloid basis matrix basically generates a view space from the location of the paraboloid and looking out from the orientation of that paraboloid. So the geometry that gets rendered into the paraboloid map is already in its own 'paraboloid view space'.

Is this the view space that you are talking about? If not, perhaps you could post a screen shot or two to illustrate the problem.

Viik    252
By view space I mean the view space of the project's camera, i.e. the view matrix of the current camera, which is stored separately. In the G-buffer, normals are stored in view space, and fragment-position reconstruction from depth is done in view space as well.

Basically, when I render the shadow map using the paraboloid projection I can use the world-space position of vertices, but during sampling of the shadow map I only have the view-space position. I could restore the fragment position in world space, but that would lead to some small overhead here and there that I'm trying to avoid.

1) is what I get when the vertices and the paraboloid's position and rotation are in world space

2) vertices and paraboloid matrix multiplied by the camera's view matrix

3) the same as 2, but the camera orientation was changed

Jason Z    6436
Ahhh - now I understand. I think what you want to do is to transform your view-space position (from the G-buffer) back to world space, and then to paraboloid space. So you would want to multiply by the inverse of the view matrix (back to world space) and then multiply by the paraboloid matrix used to generate the paraboloid map.
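A minimal sketch of that chain, assuming row-vector HLSL conventions as in the code above; the constant names mInvView and mParaboloid are hypothetical:

```hlsl
// View space -> world space -> paraboloid space.
// In practice the two matrices can be concatenated once on the CPU
// so the shader does a single mul().
float4 world_pos = mul(float4(view_pos, 1), mInvView);  // undo the camera's view transform
float4 para_pos  = mul(world_pos, mParaboloid);         // same matrix used to render the map
```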

Have you tried this yet?

Viik    252
That will work for sure; I just didn't want to do it, simply to avoid the inverse-view matrix multiplication and save some performance. I just found a small trick where I only need to do this once per fragment, not once for every fragment that every light can affect.

Jason Z    6436
If you are doing a paraboloid map lookup, you must calculate the fragment's location relative to the paraboloid itself. Without this, you wouldn't know which part of the map to look in - so you are already doing at least one matrix multiplication per paraboloid shadow/environment map.

Concatenating that matrix with the inverse of the camera's view matrix should not be a performance issue - it is small potatoes compared to the rendering that is going on. How many lights are you targeting for this operation?

patw    223
In matrix multiplication A * B != B * A in general, so order matters. However, for a rigid-body matrix like a view matrix you don't need a general matrix inversion at all: the inverse is just the transposed rotation part with the translation pushed back through it.

So you can save yourself a full matrix inversion, provided you know the input data is going to be kosher (which it should be, for a view matrix).
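A sketch of that cheap inverse in HLSL-style row-vector notation (in practice this would be done once on the CPU; viewM is the view matrix from the shader above):

```hlsl
// V = [ R 0 ; t 1 ] with row vectors: p_view = mul(p_world, V).
// Its inverse is [ R^T 0 ; -t*R^T 1 ] -- just a transpose and one small mul,
// no general 4x4 inversion needed.
float3x3 Rt   = transpose((float3x3)viewM);
float3   tInv = -mul(viewM[3].xyz, Rt);
// invView's rotation block is Rt and its translation row is tInv.
```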

Viik    252
That's actually a good idea; indeed it's faster to just provide the matrix with the inverse view already baked in.
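With that, the lookup might look like the following sketch; the names are hypothetical, and mViewToParaboloid is assumed to be inverse(view) * paraboloid basis, concatenated per light on the CPU:

```hlsl
// Map a view-space position into the paraboloid map and compare depths.
float4 para_pos = mul(float4(view_pos, 1), mViewToParaboloid);
float3 dir = para_pos.xyz / para_pos.w;
dir.z *= direction;                       // +1/-1 selects the hemisphere
float L = length(dir);                    // distance from the paraboloid origin
dir /= L;
float2 uv = dir.xy / (dir.z + 1);         // paraboloid projection, [-1, 1]
uv = float2(uv.x, -uv.y) * 0.5 + 0.5;     // to [0, 1] texture coordinates
float stored = tex2D(shadow_map, uv).r;
float shadow = (L / 500 <= stored + epsilon) ? 1.0 : 0.0;
```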

Quote:
 How many lights are you targetting for this operation?

Viik    252
I'm trying to do another thing with paraboloids. Before, I used them with VSM; right now that's not an option, and the most I can afford is 2x2 PCF. Another limitation is the resolution of the shadow maps, which I need to keep quite small.

The issue that I'm having with small-resolution shadow maps and paraboloid mapping looks like this:

On the left side is 2x2 PCF and non-PCF is on the right. Both have very ugly shadow acne. Using a bigger "epsilon" during the depth comparison doesn't help that much; I still have shadow acne.

To my mind it's an issue with how depth values are stored in the texture and how they are measured at run time. The texture is R32F, so precision should not be a problem.

During the shadow-map render, in the vertex shader, a vector from the light's position to the vertex is calculated and sent to the pixel shader, where the length of this light-to-vertex vector is taken and written into the shadow map.
During the shadow-term calculation, in a similar way, a vector from the fragment's position to the light's position is built and its length is compared to the one fetched from the shadow map.
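The store side of that, sketched in DX9-style HLSL (the function name is my own; the far distance 500 matches the vertex shader earlier in the thread):

```hlsl
// Shadow-map pass: the pixel shader writes the normalized light-to-fragment
// distance into the R32F target instead of post-projective depth.
float4 ps_store_depth(float3 light_to_vertex : TEXCOORD0) : COLOR0
{
    return float4(length(light_to_vertex) / 500, 0, 0, 0);
}
```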

Jason Z    6436
If you consider what you are doing with paraboloid mapping, I think you will understand why there is such surface acne. You are taking the depth information for an entire scene and stuffing it into two texture maps. This is very good for things that need a low resolution (like diffuse environment map lookup for reflections) and not so good for things that need high resolution (like standard shadow maps).

Have you tried to use a regular rectangular shadow map with comparable resolutions to see if the problem is really due to paraboloid mapping or if it is a fundamental shadow mapping limitation with the given resolution? I would venture to guess that the curved nature of the paraboloid, along with some interpolation issues with rasterizing, are exacerbating the resolution requirements for shadow mapping.

Just for testing, try using a super high resolution paraboloid map and see if the situation improves (i.e. is more responsive to the offset).

Viik    252
Quote:
 Just for testing, try using a super high resolution paraboloid map and see if the situation improves (i.e. is more responsive to the offset).

Using a higher resolution does improve it. Basically, if I set the shadow-map size to at least 1024x1024, the artifacts are almost gone for the current scene. But the issue here is that my shadow maps will have a resolution of only about 128x128 or 256x256.

One of the reasons for such strong shadow acne could be that during the shadow-map render, vertices are mapped into paraboloid space, but during the shadow-term calculation fragments are used, and they map much closer to a "perfect" paraboloid than the vertices do. Maybe, somehow, I could map rasterized fragments to the paraboloid during the shadow-map render instead of vertices. Another way to solve this is hardware tessellation, but unfortunately I'm limited to DX9.0.

Or maybe I'll just use VSM as I did before; it does hide most of the shadow acne.

Jason Z    6436
Trying to output depth manually at the fragment level will only work part of the time, since the triangles are rasterized linearly... there are inevitably some fragments that will be left out, and that won't work well with shadow mapping.

Have you considered doing software tessellation? All you are trying to do is to have a shadow map generation technique, so you only need to interpolate the positions within the triangle, which should be fairly quick. You would only need to create the higher resolution geometry once for static objects, so it could be a realtime solution. What do you think?

Viik    252
Quote:
 Have you considered doing software tessellation? All you are trying to do is to have a shadow map generation technique, so you only need to interpolate the positions within the triangle, which should be fairly quick. You would only need to create the higher resolution geometry once for static objects, so it could be a realtime solution. What do you think?

Fortunately I'll be using paraboloid imperfect shadow maps, so linear rasterization of triangles won't be an issue. I just wanted to be sure it's indeed because of the rasterization and not the way I store/measure depth in the shadow maps.
I just checked it with a very small resolution (128x128) and exponential shadow maps. It works quite well: the shadow acne is mostly gone, and adding a small prefilter (fetch 4 texels of depth and divide the sum by 4) already gives a very smooth shadow term. In comparison, 2x2 PCF was looking quite ugly.
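A sketch of that exponential comparison; the sharpness constant k and the sampler/size names are my own, not from the thread:

```hlsl
// Exponential shadow maps: the store pass writes exp(k * occluder_depth).
// The lookup averages four texels (the small prefilter described above) and
// compares in exponential space, which hides acne far better than a hard test.
float2 t = 1.0 / sm_size;   // one texel
float s = 0.25 * (tex2D(sm, uv).r
                + tex2D(sm, uv + float2(t.x, 0)).r
                + tex2D(sm, uv + float2(0, t.y)).r
                + tex2D(sm, uv + t).r);
float shadow = saturate(s * exp(-k * receiver_depth)); // ~ exp(k*(d_occ - d_recv))
```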

venzon    256
Quote:
 Original post by Viik: during the shadow-map render, vertices are mapped into paraboloid space, but during the shadow-term calculation fragments are used and they are mapped much closer to a "perfect" paraboloid than vertices

So why couldn't you just (during shadow term calculation) calculate the paraboloid mapping per vertex and use the interpolated value in your fragment shader for the shadow map lookup and depth comparison?

Jason Z    6436
Quote:
Original post by venzon
Quote:
 Original post by Viik: during the shadow-map render, vertices are mapped into paraboloid space, but during the shadow-term calculation fragments are used and they are mapped much closer to a "perfect" paraboloid than vertices

So why couldn't you just (during shadow term calculation) calculate the paraboloid mapping per vertex and use the interpolated value in your fragment shader for the shadow map lookup and depth comparison?

That is an interesting suggestion - I think it would work, and it would most likely be more efficient as well since you are calculating the paraboloid position in the vertex shader instead of the pixel shader.

Have you tried something like this before?

venzon    256
Quote:
 Original post by Jason Z: That is an interesting suggestion - I think it would work, and it would most likely be more efficient as well since you are calculating the paraboloid position in the vertex shader instead of the pixel shader. Have you tried something like this before?

No, just a guess -- it seems most similar to the conventional shadow mapping way (i.e. do the matrix multiplication to get into light space in the vertex shader), and also since he was doing his shadow map generation pass that way it seems like matching it as much as possible during the lookup pass would yield fewer artifacts.

Viik    252
Because I'm using a kind of light pre-pass renderer and the fragment position is restored from its depth, it's not interpolated from the positions of vertices.