
I think I understand the idea behind Shadow Mapping, however I'm having problems with implementation details.

In VS I need light position - but I don't have one! I only have light direction, what light position should I use?

I have working camera class, with Projection and View matrices and all - how can I reuse this? I should put camera position, but how to calculate "lookAt" parameter?

Is this supposed to be an orthographic or a perspective camera?

And one more thing - when in the 3D pipeline does the actual write to the depth buffer happen? In the PS, or somewhere earlier?

Br., BB

37 minutes ago, Bartosz Boczula said:

In VS I need light position - but I don't have one! I only have light direction, what light position should I use?

Unless the light represents the sun, in which case the direction alone is enough, you will need the light position. But you didn't say if the light is a spotlight, point light, etc. I'll assume spotlight.

53 minutes ago, Bartosz Boczula said:

I have working camera class, with Projection and View matrices and all - how can I reuse this? I should put camera position, but how to calculate "lookAt" parameter?

Create a new camera for the light if you want. Though it's easy enough to build the ortho/perspective projection matrix to pass to the shader straight from the position/direction. I don't know what programming language or graphics API you're using, so I'm keeping this generic. You shouldn't need to calculate the lookAt: you already have the direction, so just set the camera direction to the light direction.
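To make the "no lookAt needed" point concrete, here is a minimal plain-C++ sketch (hypothetical `Vec3`/`Mat4` types, no DirectX dependency) of building a row-major, left-handed view matrix straight from a position and a direction - the same idea as DirectXMath's `XMMatrixLookToLH`:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Row-major, left-handed view matrix built straight from a position and
// a *direction* -- no lookAt point needed.
Mat4 viewFromPositionDirection(Vec3 pos, Vec3 dir, Vec3 up) {
    Vec3 f = normalize(dir);          // forward = the light/camera direction
    Vec3 r = normalize(cross(up, f)); // right
    Vec3 u = cross(f, r);             // recomputed orthogonal up
    return {{
        { r.x, u.x, f.x, 0.0f },
        { r.y, u.y, f.y, 0.0f },
        { r.z, u.z, f.z, 0.0f },
        { -dot(r, pos), -dot(u, pos), -dot(f, pos), 1.0f },
    }};
}

// Row-vector convention, as DirectXMath uses: p' = p * M
Vec3 transformPoint(Vec3 p, const Mat4& m) {
    return { p.x * m.m[0][0] + p.y * m.m[1][0] + p.z * m.m[2][0] + m.m[3][0],
             p.x * m.m[0][1] + p.y * m.m[1][1] + p.z * m.m[2][1] + m.m[3][1],
             p.x * m.m[0][2] + p.y * m.m[1][2] + p.z * m.m[2][2] + m.m[3][2] };
}
```

With a camera at (0, 0, -5) facing +z, the world origin lands at view-space (0, 0, 5), as expected.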

55 minutes ago, Bartosz Boczula said:

Is this supposed to be an orthographic or a perspective camera?

A spotlight would use a perspective camera. A point light might render into a cubemap. A sun would use an orthographic camera. It depends completely on the type of light source.

57 minutes ago, Bartosz Boczula said:

And one more thing - when in the 3D pipeline does the actual write to the depth buffer happen? In the PS, or somewhere earlier?

Strictly speaking, neither shader writes it directly. The vertex shader multiplies the vertex by the MVP matrix you pass in; the rasterizer then interpolates a depth value per pixel, and the depth test and write happen in the output-merger stage - often even before the pixel shader runs (early-Z). The pixel shader only writes depth itself if it explicitly outputs SV_Depth.

1 hour ago, Bartosz Boczula said:

I think I understand the idea behind Shadow Mapping

I'm not trying to be mean, but I think you were using poor resources, as all of your questions should have been answered by whatever you were reading/watching to learn from.

These are OpenGL resources but the concepts are the same for D3D, the video is Java + OpenGL but again translating the concepts to C++/C# should be quite simple:

http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/

The last video is one of like 8 that show different techniques.


Thanks @Mike2343. Just for clarification, I'm writing in C++ with DX11. My current version has only Phong lighting, with one "sunlight" component, no point lights or spotlights. My camera class doesn't have a "direction" property, only LookAt, Up and Position. For the lighting calculations I only needed the sunlight direction, and that was ok. So my question was: what sunlight position should I use? I mean, should I just pick an arbitrary one, or are there some "guidelines"?

P.S. This was my first post, thank you for not dissing it and explaining instead :)


I have only limited experience with this, so I honestly don't know if there are any good tricks (I'm sure more experts will be along soon), but I believe the general idea is to try and include all the objects in your frame, plus those outside it that might be casting shadows into the frame. Roughly speaking, you could start by taking a point in the 'centre' of your scene as seen by the camera, use that as your lookAt point, then subtract the light direction from it to get the 'shadow camera' position.
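That recipe as a tiny sketch, assuming hypothetical inputs: the centre of what the view camera sees, and a unit light direction pointing *from* the light *towards* the scene:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Stepping *against* the light direction puts the shadow camera
// "upstream" of the scene; sceneCentre itself is then the lookAt point.
// distance should be big enough to get every shadow caster in front
// of the camera.
Vec3 shadowCameraPosition(Vec3 sceneCentre, Vec3 lightDir, float distance) {
    return { sceneCentre.x - lightDir.x * distance,
             sceneCentre.y - lightDir.y * distance,
             sceneCentre.z - lightDir.z * distance };
}
```

For a sun shining straight down (direction (0, -1, 0)) on a scene centred at the origin, this places the shadow camera 10 units above it.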

You can then widen the field of view of the shadow matrix / move shadow camera further back to try and get everything in. Of course the trick is to try and get everything in without losing shadow map resolution, and this will depend on the game. There are techniques to try and do this plus decrease the shadow map resolution further from the view camera. See:

https://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf

10 hours ago, Bartosz Boczula said:

So my question was: what sunlight position should I use? I mean, should I just pick an arbitrary one, or are there some "guidelines"?

The first video I posted is actually about supporting only sunlight. Java is not that far from C++, and it's more about the concepts anyway, so it should be a good video to watch. You want large values for the sun position anyway, and an orthographic projection. He explains and even shows the math for getting a bounding volume for the light, which is handy. I haven't used DX11, but again, this is fairly simple; I'm sure you can figure out the vertex/pixel shaders from the GLSL ones he makes, as they're ~8 lines each.
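For reference, this is roughly what a centred, left-handed orthographic projection looks like in row-major form - the same parameter meanings that `XMMatrixOrthographicLH(width, height, nearZ, farZ)` documents - sketched in plain C++ so the mapping is visible:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

// Row-major, left-handed, depth mapped to [0,1].
Mat4 orthographicLH(float width, float height, float nearZ, float farZ) {
    float range = 1.0f / (farZ - nearZ);
    Mat4 p = {};
    p.m[0][0] = 2.0f / width;   // x scaled only by the view width
    p.m[1][1] = 2.0f / height;  // y scaled only by the view height
    p.m[2][2] = range;          // z remapped into [0,1]
    p.m[3][2] = -range * nearZ;
    p.m[3][3] = 1.0f;
    return p;
}

// Project a view-space point (row-vector convention). Note that the
// projected x/y never depend on z -- the defining property of an
// orthographic camera.
Vec3 project(Vec3 v, const Mat4& p) {
    return { v.x * p.m[0][0], v.y * p.m[1][1], v.z * p.m[2][2] + p.m[3][2] };
}
```

With width 128, a view-space x of 64 lands exactly on the right edge of the image, regardless of how deep the point is.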

As for your camera not having a direction, you might want to add that; it's a fairly useful feature, but not essential. I think he covers using direction in the video too - it's been a while since I watched it, and I was using it more for background noise than a lesson. Let us know if you still have issues after watching/reading the links above.

10 hours ago, Bartosz Boczula said:

P.S. This was my first post, thank you for not dissing it and explaining instead

No worries we all start someplace, no point in discouraging new blood into the game dev scene.  Nor did you copy and paste your non-functioning code and say, "fix this for me" :)


Thanks guys, I'm making some good progress here. One question though - I changed my projection calculation from Perspective to Orthographic, but when I did that, it seems the camera position has no impact at all on the final image. I set (100.0f, -100.0f, 100.0f) and then (1.0f, -1.0f, 1.0f) and the result was exactly the same. The only way to change the output is to change the viewWidth and viewHeight of this function:

DirectX::XMMatrixOrthographicLH(128.0f, 128.0f, 0.1f, 100.0f);

So when I set it to 1024x1024 the object gets really small, and when I set it to 128x128 the object gets bigger. Why is that?


That is exactly what an orthographic camera does. If you hold the direction same, changing the position of the camera does not change the 'image', only where the view is centred. This is the closest I could find to an ortho depth map pulled off google.

If direction is the same and you move the camera position, you would just be scrolling up and down through a portion of e.g. the following image. If you move the camera further away, there is no effect, as the rays are parallel in an ortho camera.

I would encourage you to try moving an ortho camera in e.g. Blender to see how it works, before trying to use it in shadow mapping. I'm guessing the 128x128 figure you are quoting is the scale of the camera, which determines how much is fitted into the view. Fit more in, and each object gets smaller.
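A toy numeric check of the two statements above. The setup is a deliberate simplification: a camera looking straight down +z with no rotation, so the view transform is just subtracting the (hypothetical) camera position:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// View transform for an axis-aligned camera: plain subtraction. Moving
// the camera along its own direction (here, z) only changes view-space
// depth...
Vec3 toViewSpace(Vec3 p, Vec3 cameraPos) {
    return { p.x - cameraPos.x, p.y - cameraPos.y, p.z - cameraPos.z };
}

// ...and the orthographic projection never feeds depth into x/y, so the
// image is identical; only width/height (the "scale") change apparent size.
Vec3 orthoProject(Vec3 v, float width, float height) {
    return { v.x * 2.0f / width, v.y * 2.0f / height, v.z };
}
```

Projecting the same point from a camera 1 unit away and 100 units away gives identical screen coordinates, while widening the ortho view from 128 to 1024 shrinks it - exactly the behaviour observed with XMMatrixOrthographicLH.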

[attached image: example orthographic depth map]


@lawnjelly Ok, I decided to take a step back - I'm using my Perspective camera, the one I know works. I created a blitter render pass, which should render my already-filled depth buffer to the screen, however the result is all red. I'm guessing this has something to do with the formats. My depth texture is in R16_TYPELESS format, the Shader Resource View is R16_UNORM, but my Render Target is R8G8B8A8_UNORM - how would a 16-bit value be written to a 32-bit slot?

 


Hello Bartosz,

I am not a DX god, but this is how I understood it in the end. You are wondering what the position of the light is because you want to position the camera correctly for the depth pass, right? In my case I had to either assume the bounds of my scene or pre-calculate them. Knowing the bounds of my scene, I was able to pick a position for my light that makes sure no 3D objects are behind the projection. Knowing the bounds, we can also calculate the minimum range of near/far values for the projection, to keep optimal depth precision while having all the scene objects inside it, and we can figure out the left/top/right/bottom of the ortho projection to cover the whole scene. Of course it requires some math.
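A sketch of that bounds calculation. The input is hypothetical: the eight corners of the scene's bounding box, already transformed into the light's view space (the transform itself is omitted here):

```cpp
#include <cassert>
#include <algorithm>
#include <cfloat>
#include <vector>

struct Vec3 { float x, y, z; };
struct OrthoBounds { float left, right, bottom, top, nearZ, farZ; };

// Min/max on each axis gives the tightest ortho extents that still
// contain every object, which keeps near/far as close together as
// possible for depth precision.
OrthoBounds fitOrthoToScene(const std::vector<Vec3>& lightSpaceCorners) {
    OrthoBounds b = { FLT_MAX, -FLT_MAX, FLT_MAX, -FLT_MAX, FLT_MAX, -FLT_MAX };
    for (const Vec3& p : lightSpaceCorners) {
        b.left   = std::min(b.left,   p.x);  b.right = std::max(b.right, p.x);
        b.bottom = std::min(b.bottom, p.y);  b.top   = std::max(b.top,   p.y);
        b.nearZ  = std::min(b.nearZ,  p.z);  b.farZ  = std::max(b.farZ,  p.z);
    }
    return b;
}
```

The resulting extents can then feed an off-centre orthographic projection (e.g. XMMatrixOrthographicOffCenterLH in DirectXMath).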

 

Your lookAt can safely be the position that you gave to the light minus the light direction. LookAt is always a position that your camera points toward from its current position.

 

Also, as mentioned above, an orthographic projection should be used for directional lights, as it is supposed to mimic a light that is so far away that all the rays appear to travel in the same direction. The orthographic projection does exactly that.

 

On 1/29/2018 at 8:09 AM, Bartosz Boczula said:

@lawnjelly Ok, I decided to take a step back - I'm using my Perspective camera, the one I know works. I created a blitter render pass, which should render my already-filled depth buffer to the screen, however the result is all red. I'm guessing this has something to do with the formats. My depth texture is in R16_TYPELESS format, the Shader Resource View is R16_UNORM, but my Render Target is R8G8B8A8_UNORM - how would a 16-bit value be written to a 32-bit slot?

 

 

First of all, a 16-bit depth format (D16_UNORM) exists, but I've never used it. I either use DXGI_FORMAT_R32_FLOAT for full depth precision, or DXGI_FORMAT_R24_UNORM_X8_TYPELESS in case I am using depth + stencil.

See this link : https://msdn.microsoft.com/en-us/library/windows/desktop/ff476464(v=vs.85).aspx

When you set your render targets, you specify both your render target AND the depth buffer to use. So your render target may be an R8G8B8A8_UNORM texture, but your depth-stencil view must also be bound - in your case a D16_UNORM view created over that R16_TYPELESS texture. The depth is written to the depth buffer while your pixel shader writes colors to the R8G8B8A8_UNORM texture. Those are two completely different textures.

 

It is possible to omit the render target and specify only a depth buffer, in case you only want to do a depth-only pass with the vertex shader - which is probably what you should do when generating the depth map for your shadows.


Hey guys, thanks for all the support, thanks to that I made some progress! My blitter shader looks like this:

float4 main(PS_INPUT input) : SV_TARGET
{
    // Sample the R16_UNORM view of the depth texture
    // (for a single-channel format only .r is populated; .g/.b read as 0)
    float4 val = shaderTexture.Sample(sampleType, input.textureCoordinates);
    // Remap the depth into visible colour bands
    return float4(val.b, frac(val.g * 10), frac(val.r * 100), 1.0);
}

This seems to work - I'm now able to see my D16 texture as an R8G8B8A8 Render Target. But of course, this can't be too easy, can it :)

This is my result:

[attached screenshot: the rendered result]

I might add that this looks normal with a perspective camera. @ChuckNovice, did you have such an issue?

