Quality shadows in large environments

Hi everybody, does anyone have experience with realtime dynamic shadowing of large environments? I'm currently on a visualization job, trying to add high quality sun shadows to the 3D system. The system already has a precalculated (static) radiosity solution (via lightmapping). Now I would like to include a fully dynamic sun: you specify the time of day, and the lighting (including all shadows) moves into the appropriate positions. The shadows should also be fully interactive (e.g. project onto moving objects).

I thought about the best shadow solution for this particular situation, and I guess I'm more or less stuck with shadow maps. The geometry is very complex, so extruding and rendering shadow volumes would be incredibly costly (and thus is no option). Shadow maps are very interactive (and rather fast, since the target HW is a GF4), but the quality isn't that good: they are just too fuzzy. I already use 2048²/24-bit depth maps, so I can't go much higher. And since the environment is huge, I constantly need to regenerate new depth maps, which isn't good for the framerate either.

I recently found a paper about "perspective shadow maps", and it's rather interesting. They distort the shadow map using the camera projection, in order to squeeze the maximum amount of resolution for the current view out of it. Has anyone tried something similar? Or does anyone have another idea for managing large sets of shadow maps, perhaps through image based rendering of depth maps? I'm just looking for new ideas, and I'd be interested to hear how other people have handled this particular problem.

/ Yann
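For what it's worth, the per-frame depth map regeneration I'm talking about boils down to something like this (a simplified OpenGL sketch assuming the ARB_depth_texture / ARB_shadow path; SetSunViewAndProjection and RenderSceneGeometry are made-up placeholders for the app-specific parts):

// One-time setup: a 2048x2048 24-bit depth texture for the sun.
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, 2048, 2048, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                GL_COMPARE_R_TO_TEXTURE_ARB);

// Whenever the sun or the visible region changes:
glViewport(0, 0, 2048, 2048);
SetSunViewAndProjection();                            // placeholder: set up light matrices
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // depth only
glClear(GL_DEPTH_BUFFER_BIT);
RenderSceneGeometry();                                // placeholder: draw the occluders
glBindTexture(GL_TEXTURE_2D, shadowTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 2048, 2048);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

With a huge environment that whole block runs far too often, which is exactly the cost I'd like to avoid.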
Have you seen last year's SIGGRAPH paper about adaptive shadowmaps? It solves the resolution issue, but I don't think it was implemented in real time. Maybe you can adapt it somehow...
quote:
Have you seen last year's SIGGRAPH paper about adaptive shadowmaps? It solves the resolution issue, but I don't think it was implemented in real time. Maybe you can adapt it somehow...

No, I haven''t seen it. It sounds interesting. Do you have a reference or a link ?
You can also take a look at Perspective Shadow Maps, which will appear at this year's SIGGRAPH (2002).

The paper can be found at http://www-sop.inria.fr/reves/publications/data/2002/SD02/
Adaptive shadow maps
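As far as I understand the perspective shadow map trick, it's basically a change of space before the usual shadow map setup, roughly along these lines (untested sketch only; Matrix4/Vector4 and all the helpers are made-up names, not from the paper):

// Rough idea only: build the shadow map in the camera's post-projective
// space instead of world space, so the resolution follows the view.
Matrix4 camVP = cameraProjection * cameraView;       // world -> post-projective space

// The directional sun becomes a direction (or point) in that space; fit an
// ordinary light frustum around the unit cube the camera sees.
Vector4 lightPP   = camVP * Vector4(sunDirection, 0.0f);
Matrix4 lightView = BuildLightViewTowards(lightPP);      // placeholder
Matrix4 lightProj = FitFrustumToUnitCube(lightView);     // placeholder

// Used both when rendering the depth map and when projecting it later:
Matrix4 shadowMatrix = lightProj * lightView * camVP;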
Thanks sjelkjd, I'll take a look at it.

Digicube: this is the paper I was talking about in my post
If you come up with anything, let me know =)
The paper sounded really interesting, and I walked out of there thinking "I'm sure you could get this working on programmable hardware somehow..."
A little off topic, but when are graphics cards going to get higher resolution depth buffers? Trying to cram 80 bits of data into a 24-bit depth buffer is not exactly a picnic. We already get 32 bits on the RGBA channels, so why do we have to squeeze depth and stencil together into 32 bits? Ideally they would give us a float for the depth channel and another float for the stencil channel. When do you think we're going to see that happen?
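Just to put numbers on it, here's a quick back-of-the-envelope calculation of how much eye-space distance one 24-bit depth step covers for a standard perspective projection (near = 1 and far = 10000 are just assumed values for a large scene):

#include <cstdio>

int main()
{
    const double n = 1.0, f = 10000.0;   // assumed near/far planes
    const double steps = 16777216.0;     // 2^24 discrete depth values

    // Window-space depth of a standard perspective projection, mapped to [0,1]:
    //   d(z) = (f / (f - n)) * (1 - n / z)
    // so the eye-space size of one depth step at distance z is
    //   dz = (1 / steps) / d'(z) = (f - n) * z * z / (f * n * steps)
    for (double z = 10.0; z <= f; z *= 10.0)
    {
        double dz = (f - n) * z * z / (f * n * steps);
        printf("z = %8.0f  ->  one depth step spans about %g units\n", z, dz);
    }
    return 0;
}

Near the far plane a single step is already several world units wide, which is why the fixed-point 24-bit format hurts so much.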
I'm guessing we'll see 32 bit depth on nvidia hardware when they _REALLY_ want to compete with 3dlabs.

And what do you need more than 8 bits of stencil for?
Because the stencil buffer can be used as an accumulation buffer, so you can put it to whatever use you have.
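The classic example of that idea is counting overdraw per pixel, which only needs plain OpenGL (RenderSceneGeometry is a placeholder for your own draw calls):

// Minimal sketch: use the stencil buffer as a per-pixel counter.
glEnable(GL_STENCIL_TEST);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 0, ~0u);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);   // bump the count for every fragment that passes the depth test
RenderSceneGeometry();                    // placeholder: draw the scene
// Each stencil value now holds how many fragments landed on that pixel;
// with only 8 bits it saturates at 255, which is where more bits would help.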
