Shadow mapping dilemma

quote:Original post by Yann L
An idea: if you make sure that all (translucent) depth tag entries are sorted with respect to the light, then you can actually add coloured shadows (if your transparent surfaces are tinted). That can look absolutely awesome. If, for example, you have sheets of coloured translucent plastic moving around in front of a light source, imagine the interesting shadows you can get.

Sorted with respect to the light? As in distance from the light, right? Because that's my intention anyway.

Coloured shadows and lighting - ooooooooh, I can see it now. It's like heaven. Thanks for that idea, Yann.

Yann - if you're holding back a screenshot... well... you gotta show us. I saw that one with the two apples and grass. It was stunning - gotta see more screens like that, it's inspirational, you know.
quote:
Sorted with respect to the light? As in distance from the light, right? Because that's my intention anyway.

Well, then all you have to do is keep track of three coverage factors (RGB) instead of a single monochrome one. Granted, it takes more memory for the z-buffer, but I think it's worth it.
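
To make that concrete, here is a minimal sketch of how the per-texel coverage could be composited, assuming the translucent depth tag entries for a texel are already sorted by distance from the light. The DepthTag struct and shadowColour function are hypothetical names for illustration, not anyone's actual engine code:

// Minimal sketch, assuming a per-texel list of translucent entries
// already sorted by distance from the light. Names are hypothetical.
struct DepthTag {
    float depth;        // distance from the light
    float r, g, b;      // per-channel transmittance of this surface (0..1)
};

// How much light of each colour reaches a receiver at 'receiverDepth'.
void shadowColour(const DepthTag* tags, int count, float receiverDepth,
                  float& outR, float& outG, float& outB)
{
    outR = outG = outB = 1.0f;                          // start fully lit
    for (int i = 0; i < count && tags[i].depth < receiverDepth; ++i) {
        outR *= tags[i].r;                              // each tinted sheet
        outG *= tags[i].g;                              // filters the light
        outB *= tags[i].b;                              // multiplicatively
    }
    // An opaque occluder is simply an entry with zero transmittance.
}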

Can you post a screenshot when you're done? It'd be interesting to see the results.

quote:
Yann - if you're holding back a screenshot... well... you gotta show us.

Heh, it doesn't really fit into this topic. Perhaps in another thread someday, when we discuss, uh... dunno, pixel shader stuff.

/ Yann
quote:Original post by Yann L
Can you post a screenshot when you're done? It'd be interesting to see the results.

Yeah, of course. But as I said, it's just in the design stage right now, with no coding just yet. I will begin coding when I get back to university (in three weeks' time). That gives me some time to get my head around some of the coding issues arising from this and the other things I want the engine to have. I'm relatively new to this, so it's not a quick process for me.

So expect a screenshot in months, rather than weeks or days.


[edited by - Hybrid on August 28, 2002 11:43:02 AM]

There is something you can do in real time or quasi-real time. Well, I haven't built an engine yet, so someone please correct me if I am wrong.

If you shoot several images per frame and take the average, there are several effects you can get:
- anti-aliasing: move the image a little bit so that you take several samples per pixel.
- depth of field: model the eye as a little circle instead of a point; use different points on that circle so that things very near and very far can be out of focus.
- motion-blur: shoot at different times, so that moving things appear blurry.
- soft shadows: model the lights as spheres instead of points; use different points on each sphere to get soft shadows.

The bad news is that averaging eight renders takes about eight times as much time as doing one. The good news is that you get all of the effects combined for that same price. I guess most people in this forum don't need further explanation.
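
For the curious, a rough sketch of the accumulation loop, assuming a renderPass() routine (purely hypothetical here) that renders one sub-frame, with the sub-pixel offset, lens point, time sample and light-sphere point all chosen from the pass index:

#include <cstddef>
#include <vector>

static const int WIDTH = 640, HEIGHT = 480, PASSES = 8;

// Hypothetical: render one pass into 'img' (RGB floats); the pass index
// selects the sub-pixel jitter, lens point, time sample and point on the
// light sphere.
void renderPass(int pass, std::vector<float>& img);

void renderAccumulatedFrame(std::vector<float>& result)
{
    result.assign(WIDTH * HEIGHT * 3, 0.0f);   // RGB accumulator
    std::vector<float> img(WIDTH * HEIGHT * 3);
    for (int pass = 0; pass < PASSES; ++pass) {
        renderPass(pass, img);                 // anti-aliasing, depth of field,
                                               // motion blur and soft shadows
                                               // all come from varying 'pass'
        for (std::size_t i = 0; i < result.size(); ++i)
            result[i] += img[i] / PASSES;      // simple average of the passes
    }
}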

Well, I have an addition to the list:
- alpha shadows: use a different alpha threshold in each pass.

I think that can do the trick.
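
In code, the per-pass threshold could look something like this (a minimal sketch; passesAlphaTest and the 0..1 textureAlpha value are hypothetical names, not a real API):

// Minimal sketch of a per-pass alpha test. 'textureAlpha' is the 0..1
// alpha read from the translucent surface; names are hypothetical.
bool passesAlphaTest(float textureAlpha, int pass, int totalPasses)
{
    // Each pass uses a different threshold, so a 50%-transparent texel
    // blocks the light in roughly half of the passes; averaging the
    // passes then produces a 50%-dark shadow.
    float threshold = (pass + 0.5f) / totalPasses;
    return textureAlpha > threshold;
}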

quote:Original post by alvaro
Well, I have an addition to the list:
- alpha shadows: use a different alpha threshold in each pass.

Alpha shadows are what we've been discussing above. The slightly modified 'deep shadow mapping' method does not require rendering each frame multiple times with different alpha thresholds. It's done in one render, with full floating-point alpha values, so there is no shadow banding from a threshold system.

I'm going to tackle anti-aliasing in my software engine by supersampling the image at double resolution and then shrinking it down at the end. Despite my engine being non-realtime, I will still include a LOT of optimisations, like bounding-box culling for objects, quadtrees, etc. That way the extra time needed to render things like environment maps, supersampling and shadow maps will (hopefully) be brought back down to normal rendering times by the extra optimisations.
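
For what it's worth, the shrink step is just a box filter: average each 2x2 block of high-resolution samples into one output pixel. A minimal sketch (greyscale buffer for brevity; the function name is made up):

#include <vector>

// Downsample a 2x-supersampled greyscale image with a 2x2 box filter.
void downsample2x(const std::vector<float>& src, int srcW, int srcH,
                  std::vector<float>& dst)
{
    int dstW = srcW / 2, dstH = srcH / 2;
    dst.resize(dstW * dstH);
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x) {
            // Average the four high-resolution samples under this pixel.
            float sum = src[(2 * y    ) * srcW + 2 * x    ]
                      + src[(2 * y    ) * srcW + 2 * x + 1]
                      + src[(2 * y + 1) * srcW + 2 * x    ]
                      + src[(2 * y + 1) * srcW + 2 * x + 1];
            dst[y * dstW + x] = sum * 0.25f;
        }
}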

[edited by - Hybrid on August 29, 2002 10:54:36 AM]
I understand the difference between alpha shadows and deep shadow mapping. My point was that you can get the effect for free if you are already rendering each frame multiple times to get any of the other effects.

I thought implementing deep shadow mapping would be hard, as you lose hardware support. But if you can implement it and make it reasonably fast, I agree it's better.

Well, I haven't seen these suggestions, so I will take a stab. Keep in mind I do shadow volumes, so any of these ideas would have to be adapted for shadow mapping.

The stencil buffer is attached to the Z-buffer and can be used to test whether or not a pixel belongs to an alpha-enabled (translucent) object.

When you are rendering your alpha-enabled objects, set the stencil write to flip a bit in the stencil buffer indicating that the object is alpha-enabled. You could also turn off Z-writes for the alpha stuff, although personally I would render the scene, render the shadows, then add in the alpha-enabled geometry.

Then you could test the stencil buffer to see whether or not you are rendering onto an alpha surface.

You could even do this:
Render the entire scene (minus the alpha geometry) with the lights turned off and Z-writing enabled. Do this to set up your Z-buffer and clear your stencil.

Turn off Z-writing and turn on the stencil write. Render all of your alpha geometry (again with lights off) and increment your stencil buffer.

Then render the entire scene where the stencil value equals zero, with all of your lighting effects (shadows of all the geometry).

Now render the entire scene with a stencil test of greater than zero. Only this time, render it with a shader that distorts the pixels behind it, the way an alpha-enabled object normally would, and multiplies them into the alpha texture you are using.
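
Roughly, in OpenGL-flavoured code (drawOpaque(), drawAlphaGeometry() and drawLitSceneWithShadows() are placeholders for your own drawing routines, not real functions):

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

// 1. Lay down the Z-buffer with lights off; the stencil stays cleared to 0.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaque();                              // lights off

// 2. Mark every pixel covered by alpha geometry: Z-writes off, stencil INCR.
glDepthMask(GL_FALSE);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
drawAlphaGeometry();                       // lights still off

// 3. Where the stencil is still zero, draw the fully lit and shadowed scene.
glStencilFunc(GL_EQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawLitSceneWithShadows();

// 4. Where the stencil is non-zero, an alpha surface covers the pixel:
//    draw with the distortion/alpha shader instead.
glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
drawLitSceneWithShadows();                 // with the distortion shader bound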

-James

