
## Recommended Posts

I'm writing a software graphics engine (well, designing it right now rather than actually coding it). I want to have shadows, and will use the shadow mapping algorithm of creating z-buffers from each light source I have. They will be point sources, so I will be using cube maps. It's not real-time, so there are no speed issues at hand.

The problem is this: it's easy to create z-buffers from the light sources if the object is solid... but what if the object is slightly transparent due to an alpha map being applied to it? Does anyone know of a modified shadow mapping method that allows for transparent textures? Basically, if I have a 50% alpha polygon, then the resulting shadow 'darkness' should only be 50%. Even more problems arise when you have two or more semi-transparent polygons in front of each other from the light's point of view. You can't simply add the results together with a z-buffer. Do I just have to have a threshold shadow value, say 50% alpha being the cut-off for producing a shadow or not?

P.S. Yes, I did try searching the forums; some results came close, but nothing covered transparency in shadow mapping. Thanks in advance!
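For intuition (not from the thread itself), a minimal sketch of why the contributions can't simply be added: light passing through stacked translucent surfaces attenuates multiplicatively, each surface letting `(1 - alpha)` of the light through. The `transmittance` helper here is a hypothetical name:

```c
/* Hypothetical sketch: each surface with opacity alpha passes
 * (1 - alpha) of the light; stacked surfaces multiply. */
static float transmittance(const float *alphas, int n)
{
    float t = 1.0f;
    for (int i = 0; i < n; ++i)
        t *= 1.0f - alphas[i];  /* fraction of light passing surface i */
    return t;
}
/* Example: two 50%-alpha sheets pass 0.5 * 0.5 = 0.25 of the light,
 * so the combined shadow should be 75% dark, not 100%. */
```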

##### Share on other sites
I guess there isn't a solution then? I did read in a computer graphics book that the z-buffer does not support transparency, and it failed to mention any potential solutions.

This is annoying. I want to include both shadows and transparency. The only way I can think of to show off both is to keep all objects solid and save transparency for special effects like lens flares, which can be applied afterwards.

##### Share on other sites

You could use a raytracer or a photon mapper for your lighting. They support both transparency and shadowing.

I don't think you're going to get shadow maps to do what you want in the general case.

You *can* do some tricks using multiple depth buffers per light combined with an 'alpha' buffer, but it's slow, memory intensive, and fails when you stack up too many translucent objects.

##### Share on other sites
That's very problematic. The standard shadow mapping algorithm does not support transparency, because (as you said) the z-buffer compare is binary: either 'in shadow' or 'not in shadow'.

The approach that professional rendering packages use is raytracing. E.g., if you have a light source (set to use shadow maps) in 3DS Max and you have semi-transparent objects in your scene, you will notice that Max treats them like opaque objects. The only solution here is to use shadow mapping on opaque geometry and a different algorithm (e.g. raytracing) on transparent surfaces. That can be a pain.

But there are other solutions. A common approach is a multilayer depth buffer: instead of storing just the nearest pixel, it stores the 2 or 3 (or more) nearest pixels, sorted by depth and including their coverage factors. As you can imagine, this is very slow, but for non-realtime renderers it might be interesting.

The concept of multilayered z-buffers in shadow mapping has been extended into 'deep shadow maps'. This algorithm was developed by Pixar for use in their animated movies. It can handle semi-transparent surfaces (such as smoke), and also very fine antialiased objects (hair, fur, etc.). There are lots of papers about how to implement this; just run a Google search on the term 'deep shadow maps'.

/ Yann

##### Share on other sites
Thanks for the ideas. However, I am unable to use raytracing in this project, as it's for a specific university module (the ray tracing module comes later). The features of this software engine are chosen by me, though, so I'm aiming fairly high: shadows, environment cube mapping, bump mapping, specular mapping, etc. Trying to implement as many visual effects as I can.

Shadows with transparency would be the 'holy grail' for this project, in my opinion. But I think I'll have to choose the safe option I mentioned above: leave transparency to effects applied afterwards (lens flares, etc.).

##### Share on other sites
quote:
Original post by Yann L
The concept of multilayered z-buffers in shadow mapping has been extended into 'deep shadow maps'. This algorithm was developed by Pixar for use in their animated movies. It can handle semi-transparent surfaces (such as smoke), and also very fine antialiased objects (hair, fur, etc.). There are lots of papers about how to implement this; just run a Google search on the term 'deep shadow maps'.

Cool, thanks for this idea. I was completely unaware of such a method. I'll look through some sites about it now; hopefully it's not so complicated that it goes over my head.

##### Share on other sites
Okay, I read through the deep shadow mapping .pdf. Very interesting read; it seems more of a raytracing thing, though. However, I have adapted it slightly for my software engine. Basically, I will use the following structure to make up my z-buffer array/image:

```c
typedef struct zDepthTag {
    float            zValue;
    float            alphaValue;
    struct zDepthTag *next;
} zDepth;
```

When rendering from the light, I will record the alpha values and z positions, rather than just z. I can add new zDepth nodes onto the list for each pixel.

Once an alpha of 0.0 is reached (solid, opaque) I don't have to record any more values after that, which will keep memory usage to a minimum. I could also free parts of the list if an opaque polygon pops in front of the other ones.
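A minimal sketch of how that per-pixel list might be built and queried, using the struct from the post (repeated so the snippet is self-contained). `insert_hit` and `light_reaching` are hypothetical helper names, and `alphaValue` is read as the fraction of light the surface transmits, so 0.0 means fully opaque, matching the convention above:

```c
#include <stdlib.h>

typedef struct zDepthTag {
    float zValue;             /* depth of the hit, from the light      */
    float alphaValue;         /* fraction of light transmitted         */
    struct zDepthTag *next;   /* next hit, farther from the light      */
} zDepth;

/* Insert a hit, keeping the list sorted front-to-back from the light. */
static zDepth *insert_hit(zDepth *head, float z, float alpha)
{
    zDepth *node = malloc(sizeof *node);
    node->zValue = z;
    node->alphaValue = alpha;
    if (head == NULL || z < head->zValue) {
        node->next = head;
        return node;
    }
    zDepth *p = head;
    while (p->next != NULL && p->next->zValue < z)
        p = p->next;
    node->next = p->next;
    p->next = node;
    return head;
}

/* Fraction of light reaching a receiver at depth receiver_z: multiply
 * the transmittance of every surface in front of the receiver. */
static float light_reaching(const zDepth *head, float receiver_z)
{
    float t = 1.0f;
    for (const zDepth *e = head; e != NULL && e->zValue < receiver_z; e = e->next)
        t *= e->alphaValue;
    return t;
}
```

Truncating the list at an opaque entry, as described above, would simply mean refusing to insert behind a node whose `alphaValue` is 0.0 and freeing anything already there.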

The scene will be made up of mainly solid objects anyway, so I predict low memory usage. For a 512x512 z-buffer image I anticipate around 2-4 MB of z-buffer usage, obviously rapidly increasing if (a) there are more lights and (b) the lights are point lights, which require cube maps (6 z-buffers each).

What do you think? Is it a good solution? Any obvious flaws?

Thanks guys. Appreciate it.

##### Share on other sites
Looks good. At every pixel, you would then iterate through the appropriate (projected) zDepthTag list (from the starting depth on towards the light source) and accumulate the coverage factor.

An idea: if you make sure that all (translucent) depth tag entries are sorted with respect to the light, then you can actually add coloured shadows (if your transparent surfaces are tinted). That can look absolutely awesome. If, e.g., you have sheets of coloured translucent plastic moving around in front of a light source, imagine the interesting shadows you can get.
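A minimal sketch of the coloured-shadow idea, with hypothetical names: each translucent entry stores a per-channel transmittance, and the filters are multiplied front-to-back instead of a single scalar coverage factor.

```c
typedef struct { float r, g, b; } Filter;  /* per-channel transmittance */

/* Multiply two tinted filters: the light keeps only what BOTH pass. */
static Filter combine(Filter a, Filter b)
{
    Filter f = { a.r * b.r, a.g * b.g, a.b * b.b };
    return f;
}
/* E.g. red glass {0.9, 0.2, 0.2} followed by blue glass {0.2, 0.2, 0.9}
 * yields a dim purple shadow: green is blocked by both sheets. */
```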

/ Yann

##### Share on other sites
Wow, that would be an amazing scene in realtime, but I doubt it will be possible for a while: a church with stained glass windows down both sides, coloured shadows shining in on the pews, and a highly polished, reflective marble floor, the whole scene bump mapped and pixel shaded. Wow.

Agreed that that would be completely awesome?

##### Share on other sites
Agreed... and it is possible.

Using other techniques, though, like projective texturing and lots of pixel shaders. Deep shadow maps are not possible on current hardware, since they would require looping and conditional jumps in pixel shaders.

[I will not post a screenshot ... I will not post a screenshot ... I will not post a screenshot ... aargh, self-control can be so hard]

/ Yann

[edited by - Yann L on August 27, 2002 5:04:47 PM]

##### Share on other sites
quote:
Original post by Yann L
An idea: if you make sure that all (translucent) depth tag entries are sorted with respect to the light, then you can actually add coloured shadows (if your transparent surfaces are tinted). That can look absolutely awesome. If, e.g., you have sheets of coloured translucent plastic moving around in front of a light source, imagine the interesting shadows you can get.

Sorted with respect to the light? As in distance from the light, right? Because that's my intention anyway.

Coloured shadows and lighting - ooooooooh, I can see it now. It's like heaven. Thanks for that idea, Yann.

Yann - if you're holding back a screenshot... well... you gotta show us. I saw that one with the two apples and grass. It was stunning - gotta see more screens like that, it's inspirational, you know.

##### Share on other sites
quote:

Sorted with respect to the light? As in distance from the light, right? Because that's my intention anyway.

Well, then all you have to do is keep track of 3 coverage factors (RGB) instead of a single monochrome one. Granted, it takes more memory for the z-buffer, but I think it's worth it.

Can you post a screenshot when you're done? It'd be interesting to see the results.

quote:

Yann - if you're holding back a screenshot... well... you gotta show us.

Heh, it doesn't really fit into this topic. Perhaps in another thread someday, when we discuss, uh... dunno, pixel shader stuff.

/ Yann

##### Share on other sites
quote:
Original post by Yann L
Can you post a screenshot when you're done? It'd be interesting to see the results.

Yeah, of course. But as I said, it's just in the design stage right now, no coding just yet. I will begin coding when I get back to university (in three weeks' time). That gives me some time to get my head around some of the coding issues due to this and the other things I want the engine to have. I'm relatively new to this, so it's not a quick process for me.

So expect a screenshot in months, rather than weeks or days.

[edited by - Hybrid on August 28, 2002 11:43:02 AM]

##### Share on other sites

There is something you can do in real-time or quasi-real-time. Well, I haven't built an engine yet, so someone correct me if I am wrong, please.

If you shoot several images per frame and take the average, there are several effects you can get:
- anti-aliasing: offset the image a little so that you take several samples per pixel.
- depth of field: model the eye as a little circle instead of a point; use different points in the eye so that things very near and very far can be out of focus.
- motion blur: shoot at different times, so that moving things appear blurry.
- soft shadows: model the lights as spheres instead of points; use different points on the spheres to get soft shadows.

The bad news is that averaging eight renders takes about eight times as long as shooting one. The good news is that you get all the effects combined for the same price. I guess most people in this forum don't need further explanation.

Well, I have an addition to the list:
- alpha shadows: use a different alpha threshold in each shot.

I think that can do the trick.

##### Share on other sites
quote:
Original post by alvaro
Well, I have an addition to the list:
- alpha shadows: use a different alpha threshold in each shot.

Alpha shadows are what we've been discussing above. The slightly modified 'deep shadow mapping' method does not require rendering each frame multiple times with different alpha thresholds. It's done in one render, with full floating-point alpha values, so there will be no shadow banding due to the threshold system.

I'm going to tackle anti-aliasing in my software engine by supersampling the image at double resolution and then shrink down the image at the end. Despite my engine being non-realtime, I will still include a LOT of optimisations like bounding box culling for objects, quadtrees, etc... That way the extra time needed to render things like environment maps, supersampling and shadow maps, will be brought down to normal rendering times (hopefully) by the extra optimisations.

[edited by - Hybrid on August 29, 2002 10:54:36 AM]

##### Share on other sites
I understand the difference between alpha shadows and deep shadow mapping. My point was that you can get the effect for free if you are already rendering each frame multiple times to get any of the other effects.

I thought implementing deep shadow mapping would be hard, as you lose hardware support. But if you can implement it and make it reasonably fast, I agree it's better.

##### Share on other sites
Well, I haven't seen these suggestions, so I will take a stab. Keep in mind I do shadow volumes, so any of these ideas would have to be adapted for shadow mapping.

The stencil buffer is attached to the z-buffer and can be used to test whether or not a pixel is covered by alpha (clear) geometry.

When you are rendering your alpha-enabled objects, set the stencil write to flip a bit in the stencil buffer indicating that the object is alpha-enabled. Also, you could probably turn off z-writes for the alpha stuff, although personally I would render the scene, render the shadows, then add in the alpha-enabled geometry.

Then you could test the stencil buffer to see if you are rendering on an alpha surface or not.

You could even do this:
Render the entire scene (minus alpha geometry) with the lights turned off and z-writing enabled. Do this to set up your z-buffer and clear your stencil.

Turn off z-writing and turn on the stencil write. Render all of your alpha geometry (again with lights off) and increment your stencil buffer.

Then render everything that has a stencil value equal to zero with all of your lighting effects (shadows from all of the geometry).

Now render everything with a stencil test of greater than zero; only, render it with a shader that distorts the pixels behind it, like an alpha-enabled object normally would, and multiply them into the alpha'd texture you are using.

-James
