#### Archived

This topic is now archived and is closed to further replies.


## Recommended Posts

I'm writing a software graphics engine (well, designing it right now rather than actually coding it). I want to have shadows and will use the shadow mapping algorithm of creating z-buffers from each light source I have. They will be point sources, so I will be using cube maps. It's not real-time, so there are no speed issues at hand.

The problem is this: it's easy to create z-buffers from the light sources if the object is solid... but what if the object is slightly transparent due to an alpha map being applied to it? Does anyone know of a modified shadow mapping method that allows for transparent textures? Basically, if I have a 50% alpha polygon, then the resulting shadow 'darkness' should only be 50%. Even more problems arise when you have two or more semi-transparent polygons in front of each other from the light's point of view. You can't simply add the results together with a z-buffer. Do I just have to use a threshold shadow value, say 50% alpha being the cut-off for producing a shadow or not?

P.S. Yes, I did try searching the forums; some results came close, but nothing covered transparency in shadow mapping. Thanks in advance!

##### Share on other sites
I guess there isn't a solution then? I did read in a computer graphics book that the z-buffer does not support transparency, and it failed to mention any potential solutions.

This is annoying. I want to include both shadows and transparency. The only way I can think of to show off both is to keep all objects solid and save transparency for special effects like lens flares, which can be applied afterwards.

##### Share on other sites

You could use a raytracer or a photon mapper for your lighting. They support both transparency and shadowing.

I don't think you're going to get shadow maps to do what you want in the general case.

You *can* do some tricks using multiple depth buffers per light combined with an 'alpha' buffer, but it's slow, memory intensive, and fails when you stack up too many translucent objects.

##### Share on other sites
That's very problematic. The standard shadow mapping algorithm does not support transparency, because (as you said) the z-buffer compare is either 'in shadow' or 'not in shadow'.

The approach that professional rendering packages use is raytracing. E.g. if you have a light source (set to use shadow maps) in 3DS Max and you have semi-transparent objects in your scene, you will notice that Max treats them like opaque objects. The only solution here is to use shadow mapping on opaque geometry and a different algorithm (e.g. raytracing) on transparent surfaces. That can be a pain.

But there are other solutions. A common approach is to use a multilayer depth buffer. Instead of storing just the nearest pixel, it stores the 2 or 3 (or more) nearest pixels, sorted by depth and including their coverage factor. As you can imagine, this algorithm is very slow, but for non-realtime renderers it might be interesting.

The concept of multilayered z-buffers in shadow mapping has been extended to 'deep shadow maps'. This algorithm was developed by Pixar for use in their animated movies. It can handle semi-transparent surfaces (such as smoke), and also very fine antialiased objects (hair, fur, etc.). There are lots of papers about how to implement this; just run a Google search on the term 'deep shadow maps'.

/ Yann

##### Share on other sites
Thanks for the ideas. However, I am unable to use raytracing in this project, as it's for a specific university module (the ray tracing module comes later). The features of this software engine are chosen by me, though, so I'm aiming fairly high: shadows, environment cube mapping, bump mapping, specular mapping, etc. Trying to implement as many visual effects as I can.

Shadows with transparency would be the 'holy grail' for this project, in my opinion. But I think I'll have to choose the safe option I mentioned above: leave transparency to effects applied afterwards (lens flares, etc.).

##### Share on other sites
quote:
Original post by Yann L
The concept of multilayered z-buffers in shadow mapping has been extended to 'deep shadow maps'. This algorithm was developed by Pixar for use in their animated movies. It can handle semi-transparent surfaces (such as smoke), and also very fine antialiased objects (hair, fur, etc.). There are lots of papers about how to implement this; just run a Google search on the term 'deep shadow maps'.

Cool, thanks for this idea. I was completely unaware of such a method. I'll look through some sites about it now; hopefully it's not so complicated that it goes over my head.

##### Share on other sites
Okay, I read through the Deep Shadow Mapping .pdf. Very interesting read; it seems more of a raytracing thing, though. However, I have adapted it slightly for my software engine. Basically, I will use the following structure to make up my z-buffer array/image:

```c
typedef struct zDepthTag {
    float             zValue;
    float             alphaValue;
    struct zDepthTag *next;
} zDepth;
```

When rendering from the light, I will record the alpha values and z position rather than just z. I can add new zDepth nodes onto the list for each pixel.

Once an alpha of 0.0 (solid, opaque) is reached, I don't have to record any more values after that, which will keep memory usage to a minimum. I could also free parts of the list if an opaque polygon pops in front of the others.
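A minimal C sketch of the insertion step described above, assuming the zDepth struct from the previous post and the thread's convention that an alphaValue of 0.0 means fully opaque. The function name `insertSample` and the list-management details are my own illustration, not taken from the deep shadow maps paper:

```c
#include <stdlib.h>

/* Per-pixel sample node, as in the struct posted above.
 * Convention in this thread: alphaValue 1.0 = fully transparent,
 * alphaValue 0.0 = fully opaque. */
typedef struct zDepthTag {
    float             zValue;
    float             alphaValue;
    struct zDepthTag *next;
} zDepth;

/* Free every node from 'head' onward. */
static void freeList(zDepth *head) {
    while (head) {
        zDepth *next = head->next;
        free(head);
        head = next;
    }
}

/* Insert a new sample into one pixel's list, keeping it sorted by
 * depth (nearest to the light first). Returns the (possibly new)
 * list head. */
zDepth *insertSample(zDepth *head, float z, float alpha) {
    zDepth **link = &head;

    /* Walk to the insertion point; stop early if an opaque sample
     * already sits in front of the new one - it can never be lit. */
    while (*link && (*link)->zValue < z) {
        if ((*link)->alphaValue == 0.0f)
            return head;           /* already fully shadowed here */
        link = &(*link)->next;
    }

    zDepth *node = malloc(sizeof *node);
    node->zValue     = z;
    node->alphaValue = alpha;
    node->next       = *link;
    *link = node;

    /* If the new sample is opaque, everything behind it is useless,
     * so free it - the "pops in front" case mentioned above. */
    if (alpha == 0.0f) {
        freeList(node->next);
        node->next = NULL;
    }
    return head;
}
```

Keeping the list depth-sorted at insertion time is what makes the later shadow lookup a simple front-to-back walk.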

The scene will be made up of mainly solid objects anyway, so I predict low memory usage. For a 512x512 z-buffer image I anticipate around 2-4 MB of z-buffer usage, obviously rapidly increasing if (a) there are more lights and (b) the lights are point lights, which require cube maps (6 z-buffers each).

What do you think? Is it a good solution? Any obvious flaws?

Thanks guys. Appreciate it.

##### Share on other sites
Looks good. At every pixel, you would then iterate through the appropriate (projected) zDepthTag list (from the starting depth on towards the light source) and accumulate the coverage factor.

An idea: if you make sure that all (translucent) depth tag entries are sorted with respect to the light, then you can actually add coloured shadows (if your transparent surfaces are tinted). That can look absolutely awesome. If, e.g., you have sheets of coloured translucent plastic moving around in front of a light source, imagine the interesting shadows you can get.
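The accumulation step described here can be sketched like this, again assuming the zDepth struct from the earlier post and the thread's convention that alphaValue is the fraction of light a surface lets through (`lightReaching` is a hypothetical helper name, not from any paper):

```c
#include <stddef.h>

/* Per-pixel sample node, as posted earlier in the thread.
 * alphaValue 1.0 = fully transparent, 0.0 = fully opaque. */
typedef struct zDepthTag {
    float             zValue;
    float             alphaValue;
    struct zDepthTag *next;
} zDepth;

/* Walk one shadow-map texel's depth-sorted list and multiply the
 * transmittance of every surface in front of depth 'z'. The result
 * is the fraction of light reaching that depth: 1.0 = fully lit,
 * 0.0 = fully shadowed, and a single 50%-alpha occluder gives 0.5,
 * exactly the behaviour asked for in the original question. */
float lightReaching(const zDepth *head, float z) {
    float transmittance = 1.0f;
    for (const zDepth *n = head; n && n->zValue < z; n = n->next) {
        transmittance *= n->alphaValue;
        if (transmittance == 0.0f)
            break;                 /* opaque occluder: fully shadowed */
    }
    return transmittance;
}
```

For the coloured-shadow idea, you would store an RGB tint per node and keep three running products instead of one, so each stacked pane of coloured plastic filters the light independently per channel.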

/ Yann

##### Share on other sites
Wow, that would be an amazing scene in realtime, but I doubt it will be possible for a while: a church with stained glass windows down both sides, coloured shadows shining in on the pews, and a highly polished, reflective marble floor, the whole scene bump mapped and pixel shaded. Wow.

Don't you agree that would be completely awesome?

##### Share on other sites
Agreed... and it is possible

Using other techniques, though, like projective texturing and lots of pixel shaders. Deep shadow maps are not possible on current hardware, since they would require looping and conditional jumps in pixel shaders.

[I will not post a screenshot ... I will not post a screenshot ... I will not post a screenshot ... aargh, self-control can be so hard ]

/ Yann

[edited by - Yann L on August 27, 2002 5:04:47 PM]
