Sk8ash

Question about cubic shadow mapping


Hi, I'm currently implementing cubic shadow mapping in a university project of mine, and I'm a little confused about the algorithm.

I know that in the cube map you render the scene from the light's position and store the length of the vector between the pixel's world-space position and the light's position, and you then compare against that same vector when rendering the scene. The only thing I don't quite understand is that surely a pixel's world-space position will be the same no matter where you render it from, since it's world space, so how is comparing the two going to work? Is it something to do with clipping from the light's projection matrix?

Cheers

There are plenty of articles that describe how shadow mapping works, e.g. this, or this. Just google for "introduction shadow mapping", or "shadow mapping basics", etc.

Without trying to rewrite what a hundred websites have already written, here's the quick summary.

1. Render the scene from the light's point of view. For each rendered fragment, store the distance of that fragment from the light source. This distance value tells how far the light reaches in that direction.

2. Render the scene from the main camera's point of view. For each rendered fragment, compute the distance of that fragment from the light source. Because the main camera and the light source "see" different objects at different distances, you can compare the distance of the rendered fragment to the value in the shadow map. If the fragment rendered from the camera's point of view is farther from the light than the value in the shadow map, it must be hidden from the viewpoint of the light, since there was some other object that was closer. If the distances are equal, it means that the light also sees that fragment.

Try googling around, there are articles that have good images, e.g. the wikipedia one I linked to.

The articles you just posted are about standard post-projection z/w shadow mapping, which is a technique I am very familiar with. However, I asked about cubic shadow mapping, which is the common technique used for omnidirectional light sources like point lights. When we do cubic shadow mapping, instead of storing a post-projection depth we store a 3D vector length (pixel position in world space - light position). I'm talking about this.

Only my implementation has some errors, and as it's a university project I can't actually post code, but I thought I might be missing something to do with the algorithm. Even if we are rendering the scene from different views, the world-space position of a pixel is not view-dependent. Because of this I'm transforming my pixels using a world-view matrix, which I think I've seen done in some of the examples, and then getting the vector from the light; however, it's still not right, so I'm guessing this isn't correct.

Cheers

With a cube map, nothing changes. You can store the post-projection depth, or you can store the actual 3D vector length (or better, the squared length). The logic is still exactly the same.

I'm not really understanding the issue you're having either. It's identical functionality to standard spotlight-style shadow mapping, just with a cube depth map. You're not calculating distances for the same point in space; you're calculating them for two points that lie along an identical vector from the light source.

Imagine this basic ASCII diagram:

(Light Source) ---------light--------->(point a)- - - - - - - - - ->(point b)

So in your depth-map rendering, point a will be rendered to your depth map with the distance of the solid line. Then, when rendering point b with your camera, you will multiply the world position of b first by the camera view matrix to get the correct screen position of the point, then you multiply it by the light view matrix to get the position of the point in the cube map. You will sample the cube map at that point and read the depth of the solid line. You will compare that to the depth of b from the light source (solid line + dotted line) and realise it's longer; therefore it must be in shadow.

I'm not sure I quite understand you there, Darg. You say that when rendering point B with my main camera, I first use the view matrix to put the position into camera space, but then also multiply it by the light's view matrix in the same pass? I wouldn't have thought you'd use the camera's view matrix, because then the shadows would differ and move when you move the camera around.

I've pretty much implemented the algorithm MJP describes in the second post here.

I've created my six camera world-view matrices and my projection matrix. I then use the geometry shader to render to all six faces of the cube map in one draw call, and store the vector between the light's world position and the pixel's world-view position from the camera.

Then in my main shader, where I sample the cube map, I compute the vector between the pixel's world-space position and the light's world position. I use this vector to sample my cube map, then compare the two lengths: if the distance in the cube map is shorter than the distance I'm calculating in my main shader, there must be some sort of occlusion and the pixel does not receive light.

Basically my shadowing is looking like this:


[Image: cubeshadowmap.png]

OK, the only thing that matters here is the Z test, nothing else.

As you know, shadow mapping does two passes.

First pass: render to the texture. This is a full render, so the output was Z-tested.

Second pass: no Z test, because you're still in the pixel shader (or the equivalent).

So there might be a "difference" between the first pass and the second pass, i.e. there's an obstacle polygon between the light and the current pixel, etc.

Then you just shadow it, that's all.

The key is the Z test in the first pass.
