
DaJudge

Depth shadow mapping problem


Hi all, I am still working on my depth shadow mapping test app and I think I'm almost there. I have the scene rendered from the perspective of the light source onto a texture, and when I do projective texture mapping with the backbuffer (the render target of the light) it looks pretty much like it's supposed to: all objects get projected onto other objects the way you'd expect their shadow to fall.

Now my problem is that when I want to use the Z-buffer instead of the backbuffer (as done in the Hardware Shadowmapping demo that comes with the NVSDK), it seems like all texture lookups into the Z-buffer texture return black as the color value. The result is that I end up with a completely black scene being drawn. Am I missing something? Maybe one of you can help me with that...

Thanks,
Alex

What API are you using ? In any case, make sure to use a depth texture as a render target, not an RGB one. If you try to directly visualize the contents of a depth texture as RGB, the result can very well be all black. That depends on the z value range in the texture, which gets remapped from 24 bit to 8 bit - quite a dynamic range mismatch: 65,536 depth values collapse onto each single colour, so large parts of the buffer come out as one flat shade, typically black.

But depth textures aren''t meant to be directly rendered anyway. Just add the depth compare operation, and check the results of the shadowing.
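
In OpenGL, for example, turning the compare on is just two texture parameters (a sketch, assuming the ARB_depth_texture / ARB_shadow extensions and a depth texture handle called shadowMapTex):

// tell the sampler to return the compare result instead of the raw depth value
glBindTexture(GL_TEXTURE_2D, shadowMapTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);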

Hi,

quote:

What API are you using ?



DirectX 8

quote:

In any case, make sure to use a depth-texture as a render target, not an RGB one.



Hm, I can't see how I'd get that working. To use a texture as a render target you have to specify D3DUSAGE_RENDERTARGET, but for a depth buffer you have to specify D3DUSAGE_DEPTHSTENCIL. As far as I can tell, specifying both flags always results in an error saying "Invalid format specified for texture". So does the attempt to create a texture with only D3DUSAGE_RENDERTARGET and a depth format.
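
In other words, both of these fail for me with that error (just a sketch of what I'm trying; the size and pool are what I use elsewhere):

IDirect3DTexture8* texture = NULL;

// both usage flags on one texture -> "Invalid format specified for texture"
d3d->CreateTexture(1024, 1024, 1, D3DUSAGE_RENDERTARGET | D3DUSAGE_DEPTHSTENCIL,
                   D3DFMT_D24S8, D3DPOOL_DEFAULT, &texture);

// render target usage with a depth format -> same error
d3d->CreateTexture(1024, 1024, 1, D3DUSAGE_RENDERTARGET,
                   D3DFMT_D24S8, D3DPOOL_DEFAULT, &texture);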

I just don't get this right. I DO have a completely working sample in the NVSDK. However, contrary to what you said, it DOES set an RGB texture as the render target and a depth texture as the Z-buffer. It then selects the depth texture into texture stage one and does projective texture mapping with it, plus a little fragment program that combines the result with lighting information from the vertex shader.

As far as I can see I am doing exactly the same (with a few code-structural differences, of course) but I only get a black scene. So there HAS to be some difference I just can't seem to find.


Thanks in advance,
Alex

Oh well, I can't help you very much with D3D. AFAIK, D3D8 was very limited when it came to advanced effects such as shadowmaps. You might want to switch over to D3D9. But I could also be wrong - I've never used D3D.

Fact is, from a hardware point of view, any more or less modern GPU (GF3+) can select pretty much any combination of colour/depth/stencil planes as a render target. The API has to support that functionality, though. In OpenGL, you can simply select the planes you need, and you will get the appropriate pbuffer. For shadowmapping, you don't need an RGB part, nor do you need a stencil part (although some hardware might enforce one, or add 8-bit padding). A z-texture is totally sufficient; everything else is wasted memory.

Now, if D3D forces you to allocate an RGB part in addition to the depth component, then a colour readback will obviously return the RGB part of that texture. Basically, the GPU needs to know which component layer it has to bind onto a texture sampler. For shadowmapping, it is the z-layer. In OpenGL, that's simply a GL_DEPTH_COMPONENT16/24/32 internal format. In D3D, I don't know, sorry.
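
(For reference, allocating such a depth texture on the GL side is a single call - a sketch, using the ARB_depth_texture token names:

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

which can then be filled from the depth buffer with glCopyTexSubImage2D, or bound directly as a render target where the driver supports it.)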

It HAS to work with DX8. The working sample is in DX8 as well...

Back on topic: let me just explain the render target / Z-buffer mechanism in DX, for clarity.

When you want to render to a texture, you create a texture with the D3DUSAGE_RENDERTARGET flag. If you additionally want a depth buffer, you create another texture with the D3DUSAGE_DEPTHSTENCIL flag. Then you select both by basically calling


device->SetRenderTarget(rgb_surface, zbuffer_surface);
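
(where rgb_surface and zbuffer_surface are the level-0 surfaces of the two textures - call them rgb_texture and zbuffer_texture - obtained beforehand with something like:)

rgb_texture->GetSurfaceLevel(0, &rgb_surface);
zbuffer_texture->GetSurfaceLevel(0, &zbuffer_surface);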


For verification I then select the rgb_texture into texture stage 0 and render it using projective texture mapping. That works perfectly well. All geometry gets projected as desired, so the transformation / texture coordinate generation part apparently works well. When I then select the depth texture into texture stage 0 all I get is a black image.

Any ideas?

Thanks,
Alex

quote:
Original post by DaJudge
For verification I then select the rgb_texture into texture stage 0 and render it using projective texture mapping. That works perfectly well. All geometry gets projected as desired, so the transformation / texture coordinate generation part apparently works well. When I then select the depth texture into texture stage 0 all I get is a black image.


That's perfectly normal from the point of view of the GPU, as I explained in my post above. A depth component texture has no colour, it has a depth value. If the GPU binds a depth texture to a texture unit, then there is no colour defined. Most GPUs will perform a rough range remapping, for convenience, but you're likely to get black over a large range of depth values.

The shadowmapping algorithm requires you to do a compare between the depth component of the texture and the third texture coordinate. The result of that comparison defines a colour, but not the depth component on its own.
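
In C-style pseudocode, the per-texel operation is roughly this (just a sketch of the idea, not an actual API):

// what the depth-compare hardware does for each lookup into the shadow map
float ShadowCompare(float storedDepth,    // value read from the depth texture at (s/q, t/q)
                    float fragmentDepth)  // r/q, the fragment's depth as seen from the light
{
    // the texture unit returns this comparison result instead of a colour
    return (fragmentDepth <= storedDepth) ? 1.0f : 0.0f;   // 1 = lit, 0 = in shadow
}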

Now, there are two possible explanations for your problem. Either D3D requires you to manually activate the comparison operation, as in OpenGL. In that case, do so, otherwise you'll get a black image.

Or, D3D automatically enables the comparison as soon as a depth texture is bound. In that case, there is most probably an error with the way the third texture coordinate is computed, i.e. the shadow comparison fails.

Consult your D3D reference.

quote:

device->SetRenderTarget(rgb_surface, zbuffer_surface);


You mean, D3D8 requires you to define an RGB buffer, even if not needed ? Ouch.

Can't you simply say: device->SetRenderTarget(NULL, zbuffer_surface) ? The shadowmapping algorithm doesn't need an RGB surface, so basically, you're wasting a lot of memory there.


[edited by - Yann L on October 14, 2003 11:55:16 AM]

No, I don't think you can set a null render target. I believe in DX9 one can set the render target and Z-buffer independently, but I still don't think the render target can be null. But that's not really such a bad thing, since you can disable color writes and reuse that texture somewhere else (I intend to do fullscreen post-render effects anyway...).
So I will just look around in the sample source code to see if there is a single line setting some render state, but I don't think I'll find anything.
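
Disabling the color writes around the shadow pass is just a render state, by the way - something like:

// before rendering from the light: don't waste fillrate on the dummy color target
d3d->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
// ... render the shadow map ...
// afterwards: re-enable all channels
d3d->SetRenderState(D3DRS_COLORWRITEENABLE,
                    D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
                    D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);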

Thanks for the moment,
Alex

OK, when you create your shadow map texture, you use CreateTexture() for your depth surface (as well as for your RGB surface, of course). At this point you will have an RGBA texture and a depth texture. Do your rendering into the shadow map as normal. When you want to use the shadow map, set the DEPTH texture on the texture stage, not the RGB texture.

//Setup (CreateTexture parameter order: width, height, levels, usage, format, pool, out pointer):
m_pDev->CreateTexture(..., D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &m_pRGBA);
m_pDev->CreateTexture(..., D3DUSAGE_DEPTHSTENCIL, D3DFMT_D24X8, D3DPOOL_DEFAULT, &m_pDepth);

//Render ShadowMap (SetRenderTarget takes the level-0 surfaces of the two textures):
IDirect3DSurface8 *pColorSurf, *pDepthSurf;
m_pRGBA->GetSurfaceLevel(0, &pColorSurf);
m_pDepth->GetSurfaceLevel(0, &pDepthSurf);
m_pDev->SetRenderTarget(pColorSurf, pDepthSurf);
...Render Here....

//Use ShadowMap (bind the DEPTH texture, not the RGB texture):
m_pDev->SetTexture( dwStage, m_pDepth );

Then in the shader, assuming you have your texture matrix set up correctly, the comparison is done automatically, giving you a value from 0 to 1 in all channels.
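
For reference, the texture matrix for this is usually the light's view * projection followed by a scale/bias matrix along these lines (a sketch; SHADOW_MAP_SIZE and the bias factor are placeholders you have to tune):

const int SHADOW_MAP_SIZE = 1024;
float offset = 0.5f + (0.5f / (float)SHADOW_MAP_SIZE);   // half-texel offset into the map
float range  = (float)((1 << 24) - 1);                   // depth range of a 24-bit shadow map
float bias   = -0.001f * range;                          // small offset to avoid self-shadowing
D3DXMATRIX texScaleBias( 0.5f,    0.0f,    0.0f,  0.0f,
                         0.0f,   -0.5f,    0.0f,  0.0f,
                         0.0f,    0.0f,    range, 0.0f,
                         offset,  offset,  bias,  1.0f );
// shadow texcoord = worldPos * lightView * lightProj * texScaleBias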

Hope this clears it up a bit.


[edited by - blue_knight on October 14, 2003 12:55:41 PM]

That is exactly what I am doing. For verification I tried to render the depth texture as a fullscreen quad, which should give me, as far as I understand it, a greyscale image showing the depth values of the pixels in the scene.
Well, it doesn't. The Z-buffer DOES work during the rendering process (all intersecting / non z-ordered geometry ends up correct), but it doesn't seem to me that the Z-buffer is correctly used as a texture. Things start with a B/W image that appears to be uninitialised memory (more or less random rectangular noise areas etc.) and after a couple of seconds (completely random) it ends up as a white image.

I just can't figure out what I am doing wrong.


Thanks for your help so far, and I hope someone can give me a hint how to get this thing right.

Thanks,
Alex

Hmm, well, my opinion on this might not really be relevant, as this is all very D3D specific. So I can only speak from a hardware point of view. Static pixel noise generally means that either the memory was not included in the primary pbuffer target surface, or that the buffer you bound was never assigned to a valid render target before. The former is not probable, as you say your faces are correctly sorted when rendering the RGB portion. So it's probably the latter.

quote:

For verification I tried to render the depth texture as a fullscreen quad which should give me, as far as I understand it, a greyscale image showing the depth values of the pixels in the scene.


Under OpenGL, yes. But not under Direct3D. I just checked it, and D3D will automatically enable the depth compare as soon as a depth target is bound to the texture unit. Therefore, you'll never get a greyscale image, but the comparison result between the depth texture and the third texture coordinate. When rendering a fullscreen quad, that result will be pretty much meaningless.

There surely is a way in D3D to copy the depth component of a depth texture into the RGB components, so that you could easily check it. It would also help if you posted your code. People more used to D3D might be able to quickly spot an error.

There is no way to copy depth to a color in d3d.

Instead, you can render to another texture, mapping your screen-space Z to some sort of color ramp texture, and visualize that.
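
A sketch of that idea: build a small greyscale ramp texture once, then have the vertex shader output z divided by the far plane distance as the texture coordinate into it (the names below are made up; the D3D calls are the standard ones):

// 256x1 ramp texture: black (near) to white (far)
IDirect3DTexture8* ramp = NULL;
device->CreateTexture(256, 1, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &ramp);
D3DLOCKED_RECT lr;
ramp->LockRect(0, &lr, NULL, 0);
DWORD* texel = (DWORD*)lr.pBits;
for (int i = 0; i < 256; ++i)
    texel[i] = D3DCOLOR_XRGB(i, i, i);
ramp->UnlockRect(0);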

quote:
Original post by SimmerD
There is no way to copy depth to a color in d3d.


Hmm. What about a copy over system memory, doing the remapping (perhaps to a false colour palette) on the CPU ? It's going to be slow as hell, but probably OK for some quick troubleshooting. Does D3D support texture or pbuffer readbacks of the depth component ?

* Note to self: refresh my totally outdated D3D knowledge someday *

quote:

I just checked it, and D3D will automatically enable the depth compare as soon as a depth target is bound to the texture unit.



Do you have a link for that? Might be a worthwhile read...

quote:

Therefore, you'll never get a greyscale image, but the comparison result between the depth texture and the third texture coordinate. When rendering a fullscreen quad, that result will be pretty much meaningless.



Well, doesn't that mean that if the fullscreen quad is pretty close to the near clipping plane, the comparison (is that something like ABS(zbuffer_z - pixel_z)?) WILL result in a greyscale Z-image?

Thanks,
Alex

For illustration I put up a screenshot of my project. I can't post the code since it's way too big (4 separate projects, hundreds of files) for now.

http://www.dajudge.com/dmap0.jpg

The top left one is the scene rendered completely normally.
Bottom right is the scene rendered from the light's view (quad).
Bottom left SHOULD be the depth map (quad).
Top right is the scene with the light's-view rendering (bottom right) applied as a projective texture.

And honestly... Bottom left does NOT look like a reasonable depth rendering, does it?

Cheers,
Alex

[edited by - dajudge on October 15, 2003 2:53:11 AM]

I had this problem when using the Direct3D debug runtime. Switching over to the retail version (via Control Panel -> DirectX) made it work. I've heard of others having this problem with NVIDIA shadow buffers in the debug runtime - everything goes to shadow. I don't know what's causing the problem. Sometimes the debug runtime works for me, but hardly ever - the retail runtime always works.

In DirectX 9 you need to set a non-null colour render target.

Hm, I tried that but it didn't help. Considering that the NVSDK sample works with both the debug and retail runtimes, that's not really a surprise. I just can't see what I'm doing wrong...

Okay, I read some more and now have some insight into the depth comparison the driver performs in the texture pipeline when a depth texture is selected into a texture stage. I am following everything to the letter but I can't seem to get this right. I just don't know where to start looking for errors anymore, since I have been examining that code for over a day without ANY progress now...

For illustration, here is some code...

This is my final scene composition:

// Update the texture maps
shadow_map->Update(d3d, rinfo, 0);
light_mask->Update(d3d, rinfo, 0);

// Upload it
shadow_map->Set(d3d, 0);
light_mask->Set(d3d, 1);

// Enable material override
bool pshader_backup = rinfo.info->override_pshader;
RCPTR vshader_backup = rinfo.info->override_vshader;
bool upload_backup = rinfo.info->upload_textures;
rinfo.info->override_pshader = true;
rinfo.info->upload_textures = false;
rinfo.info->override_vshader = vshader;

// Upload pixel shader
HRESULT res = d3d->SetPixelShader(pshader->GetHandle());
EH_ASSERT(SUCCEEDED(res), "cannot set pixel shader");

//set special texture matrix for shadow mapping
RCPTR camera = rinfo.engine->GetLight(0)->CastCamera();
camera->UpdateMatrices();
float fOffsetX = 0.5f + (0.5f / (float)1024);
float fOffsetY = 0.5f + (0.5f / (float)1024);
int range = 0xFFFFFFFF >> (32 - 24);
float fBias = -0.001f * (float)range;
D3DXMATRIX texScaleBiasMat( 0.5f, 0.0f, 0.0f, 0.0f,
0.0f, -0.5f, 0.0f, 0.0f,
0.0f, 0.0f, (float)range, 0.0f,
fOffsetX, fOffsetY, fBias, 1.0f );
rinfo.engine->SetProjectiveMatrix(camera->GetViewMatrix() * camera->GetProjMatrix() * texScaleBiasMat);

// Render standard cam
CamTexture::Update(d3d,rinfo,entity);

// Restore material override
rinfo.info->override_pshader = pshader_backup;
rinfo.info->override_vshader = vshader_backup;
rinfo.info->upload_textures = upload_backup;
res = d3d->SetPixelShader(0);
EH_ASSERT(SUCCEEDED(res), "cannot unset pixel shader");


light_mask->Update() does nothing, since it's a greyscale image loaded from a bitmap.
CamTexture::Update() renders the scene using a cam specified by the game - nothing special here.
shadow_map->Update() updates the shadow map (uh...). Here is the code:


{
if(rendering)
return;

rendering=true;

// Backup the old render info
RCPTR backbuffer_backup, zbuffer_backup;
RCPTR cam_backup;
RCPTR mstack_backup;
RCPTR cstack_backup;
D3DVIEWPORT8 viewport_backup;

d3d->GetViewport(&viewport_backup);
d3d->GetRenderTarget(backbuffer_backup.AddressOfPointer());
d3d->GetDepthStencilSurface(zbuffer_backup.AddressOfPointer());
cam_backup = rinfo.engine->GetCamera();
mstack_backup = rinfo.matrix_stack;
cstack_backup = rinfo.color_stack;

// Get the camera if not already done and set it up
RCPTR camera = rinfo.engine->GetLight(light_id)->CastCamera();
EH_ASSERT(camera, stringf("cannot find light #%d", light_id));
camera->UpdateMatrices();
rinfo.engine->SetCamera(camera);

// Disable color writes
//d3d->SetRenderState(D3DRS_COLORWRITEENABLE, 0);

// Setup fresh scene stacks
rinfo.color_stack = ColorStack::New();
rinfo.matrix_stack = MatrixStack::New();

// Get the render target level
RCPTR backbuffer;
Texture::GetTexture()->GetSurfaceLevel(0,backbuffer.AddressOfPointer());
EH_ASSERT(backbuffer, "cannot get render to texture surface");

// Get the Z-buffer surface
RCPTR zbuffer_surface;
zbuffer->GetSurfaceLevel(0,zbuffer_surface.AddressOfPointer());
EH_ASSERT(zbuffer_surface, "cannot get ZBuffer surface");

// Set render targets
HRESULT res = d3d->SetRenderTarget(backbuffer.getobject(), zbuffer_surface.getobject());
EH_ASSERT(SUCCEEDED(res), "cannot set render target");

// Set new viewport
D3DVIEWPORT8 viewport;
viewport.X = 0;
viewport.Y = 0;
viewport.Width = 1024;
viewport.Height = 1024;
viewport.MinZ = 0;
viewport.MaxZ = 1;
d3d->SetViewport(&viewport);

// Clear
res = d3d->Clear(0, 0, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
EH_ASSERT(SUCCEEDED(res), "cannot clear render target");

// Render the scene
rinfo.game->Frame(rinfo);

// Restore old render config
rinfo.engine->SetCamera(cam_backup);
d3d->SetRenderTarget(backbuffer_backup.getobject(), zbuffer_backup.getobject());
d3d->SetViewport(&viewport_backup);
rinfo.matrix_stack = mstack_backup;
rinfo.color_stack = cstack_backup;

// Enable color writes
d3d->SetRenderState(D3DRS_COLORWRITEENABLE, D3DCOLORWRITEENABLE_BLUE |
D3DCOLORWRITEENABLE_GREEN |
D3DCOLORWRITEENABLE_RED |
D3DCOLORWRITEENABLE_ALPHA);

rendering=false;
}


I hope this wasn't too much for a single posting, but maybe one of you can spot the problem...

[edited by - dajudge on October 15, 2003 10:24:48 AM]

There's a paper that mentions that D3D automagically enables the depth compare: http://developer.nvidia.com/object/hwshadowmap_paper.html

When reading the shadow buffer you don't do the projective texturing yourself - the auto-enabled depth compare does that for you, i.e. it automatically compares (r/q) with shadowmap[(s/q),(t/q)].

Another point is that the shadow buffer and the dummy colour render texture must be compatible (CheckDepthStencilMatch - usually means same bit-width, 16 or 32). When I didn't do this the shadow went all strange and garbage-like - that could explain the noise pattern you're getting (i.e. just uninitialised video memory where the shadow buffer texture is).
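
The check itself is a single call on the IDirect3D8 interface - something like this (sketch):

if (FAILED(pD3D->CheckDepthStencilMatch(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                        D3DFMT_X8R8G8B8,    // display mode format
                                        D3DFMT_A8R8G8B8,    // dummy colour render texture
                                        D3DFMT_D24S8)))     // shadow buffer format
{
    // pick a matching pair instead, e.g. 16-bit colour with D3DFMT_D16
}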

Well, I am using A8R8G8B8 for the backbuffer and D24S8 for the Z-buffer (both used in the working NVSDK sample), and I use vertex and pixel shaders written using the NVSDK's shaders as examples.

Vertex shader:

#include "constants.vsi"

struct VS_IN
{
float3 pos : POSITION;
float3 normal : NORMAL;
float2 tex0 : TEXCOORD0;
};

struct VS_OUT
{
float4 pos : POSITION;
float4 light0 : TEXCOORD0;
float4 color : COLOR0;
};

VS_OUT main(VS_IN IN,
uniform float4x4 MATRIX_WORLD,
uniform float4x4 MATRIX_VIEW,
uniform float4x4 MATRIX_PROJ,
uniform float4 LIGHTPOS[10],
uniform float4 LIGHTCOLOR[10],
uniform float4 CONSTCOLOR,
uniform float4x4 MATRIX_PROJECTIVE)
{
// Things one might want to make programmable
float3 ambient = float3(.4,.4,.4);


VS_OUT OUT;

// Generate a 3x3 world & view space matrix
float3x3 MATRIX_WORLD3;
MATRIX_WORLD3[0] = MATRIX_WORLD[0].xyz;
MATRIX_WORLD3[1] = MATRIX_WORLD[1].xyz;
MATRIX_WORLD3[2] = MATRIX_WORLD[2].xyz;
float3x3 MATRIX_VIEW3;
MATRIX_VIEW3[0] = MATRIX_VIEW[0].xyz;
MATRIX_VIEW3[1] = MATRIX_VIEW[1].xyz;
MATRIX_VIEW3[2] = MATRIX_VIEW[2].xyz;

// Transform pos
float4 pos;
pos.xyz = IN.pos;
pos.w = 1;
pos = mul(MATRIX_WORLD, pos); float4 tempPos = pos;
pos = mul(MATRIX_VIEW, pos);
pos = mul(MATRIX_PROJ, pos);
OUT.pos = pos;

// Lighting
float3 normal = normalize(mul(MATRIX_WORLD3, IN.normal)); // Transform normal to world space

float3 lightpos = LIGHTPOS[0].xyz;
float3 ldir = normalize(pos.xyz-lightpos);
float lvalue = max(-dot(normal,ldir),0);
float4 lcolor = LIGHTCOLOR[0]*lvalue;
OUT.color.rgb = lcolor.rgb * CONSTCOLOR.rgb + ambient.xxx;
OUT.color.a = CONSTCOLOR.a;

// Output texcoords
OUT.light0 = mul(MATRIX_PROJECTIVE, tempPos);

// Done
return OUT;
}


Pixel shader:

#include "constants.vsi"

struct v2f
{
float4 pos : POSITION;
float4 light0 : TEXCOORD0;
float4 color : COLOR0;
};

float4 main(
v2f IN,
uniform sampler2D ShadowMap,
uniform sampler2D SpotLight) : COLOR
{
float4 OUT;

float4 shadow = tex2D(ShadowMap, IN.light0);
float4 spotlight = tex2D(SpotLight, IN.light0);
float4 lighting = IN.color;

OUT = shadow * lighting;


return OUT;
}


Anyone?

I now found another sample from NVIDIA themselves (the shadow mapping sample from the book 'The Cg Tutorial') that suffers from the same problem.
Maybe a driver issue?

Hi DaJudge,
I have got the same problems you have. I am not able to create a renderable depth texture. It does not depend on the format; it is always the same result. When I first implemented the standard shadow mapping algorithm I was using a floating point target (Radeon 9700 btw), which was very slow. And because I was using a color target I had to do all the comparisons of the depth values in the pixel shader, and I did not even know that D3D would do this for me if I had a depth target, because I expected D3D to have a render state to enable this feature, just like OpenGL. And the filtering wasn't that nice either.
Even Paul's shadow mapping demo (http://www.paulsprojects.net/direct3d/shadowmap/shadowmap.html) doesn't work here.

This thing really fu**ed me up (and other things in d3d too) and that's why I decided to use OpenGL, because one really cannot do without shadow mapping these days and I dislike shadow volumes.
In GL I had no problems with shadow mapping, but there are still problems.

cu,
hWnd

[edited by - hWnd on October 16, 2003 8:15:33 AM]

Hi hWnd,

Well, since I intend to port my program to the XBOX, I don't really want to go for OpenGL, although shadow mapping would really be worth the effort. I do dislike shadow volumes as well (silhouette generation, non-closed meshes, shadow volume generation, clipping plane problems etc. etc.). Depth shadow mapping seems to solve pretty much all of these problems at once.

You're right, Paul's project does not work here either (it would be interesting to find out which configuration he is using, though). But here it's not the problem I have (everything in shadow) but exactly the opposite (everything lit => no shadows, just diffuse). So maybe he's doing things some reverse way - I didn't have a look at his source (since it was defunct).

Any ideas?

Thanks,
Alex

Are you using DirectX 9? The depth range was changed from (0, MAX INTEGER) to (0, 1). This means that all DirectX 8 shadowmapping samples converted to DX9 will not work out of the box. But if you try a DirectX 8 sample without converting the interface to DX9, it should work fine. Anyway, you need to change your shadow matrix for DX9 (I've got this to work but my code isn't with me; to start, try changing the range from 0xFFFFFFFF >> (32 - 24) to 1.0f). This is documented somewhere, but you have to look for it to find it.
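
In other words, where the DX8 code scales the third column of the texture matrix by the integer depth range, the DX9 version scales by 1 - roughly like this (the DX9 bias value here is only a guess to start from):

// DX8: compare happens against the integer range of a 24-bit map
float rangeDX8 = (float)(0xFFFFFFFF >> (32 - 24));   // 16777215
// DX9: the depth range is (0, 1), so the scale becomes 1 and the bias shrinks accordingly
float rangeDX9 = 1.0f;
float biasDX9  = -0.0005f;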

Sorry I can't help you more, my D3D knowledge is too sparse for that.

Want me to move this thread to the DirectX forum ? Perhaps you'll get better responses there.
