
i need a favor re: imperfect shadow maps



imperfect shadow map reference:
[url="http://www.youtube.com/watch?v=Pdp3rfyFF14"]http://www.youtube.com/watch?v=Pdp3rfyFF14[/url]
[url="http://www.mpi-inf.mpg.de/resources/ImperfectShadowMaps/"]http://www.mpi-inf.mpg.de/resources/ImperfectShadowMaps/[/url]
[url="http://levelofdetail.wordpress.com/2008/12/19/imperfect-shadow-maps/"]http://levelofdetail.wordpress.com/2008/12/19/imperfect-shadow-maps/[/url]

point-based GI code sample:
[url="http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter14.html"]http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter14.html[/url]

i want GI and radiosity done cheaply in real-time. accuracy is not crucial, but [i]look[/i] and function are. imperfect shadow maps seem an ideal solution, but i've never seen them used in practice in a real-world context.

that brings me to the favor i have to ask: does anyone have a modest game up and running they could transplant some of that sample code into? i'd love to see this stuff up and working in a game i could actually play, or at least watch. it'd be a hugely appreciated effort.

I'm in the process of making a point based GI/lighting technique.

It's pretty similar to this:
[url="http://graphics.pixar.com/library/PointBasedGlobalIlluminationForMovieProduction/Slides.pdf"]http://graphics.pixar.com/library/PointBasedGlobalIlluminationForMovieProduction/Slides.pdf[/url]

Which I guess is pretty similar to the NVidia doc you linked.

Mine instead works with a constant buffer filled with the vertices most relevant to whatever is being rendered.

Instead of turning models into disks, I just work with vertices.

Mine is all a vertex shader, so it's vertex lighting only. As a plus though, I can free up the pixel shader for other effects.

You could make some fairly dense vertex models to get better lighting.

I'll probably share my shader when I'm done with it; I think it still needs some polish.
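For anyone curious, the per-vertex gather I'm describing boils down to something like the sketch below. This is just a rough CPU-side illustration in C++, not my actual shader; the struct names, the emitter list, and the plain inverse-square falloff are placeholders, and there's no visibility term at all.

[code]
#include <cmath>
#include <vector>

struct Float3 { float x, y, z; };

static Float3 add(Float3 a, Float3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Float3 sub(Float3 a, Float3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Float3 mul(Float3 a, float s)  { return { a.x * s, a.y * s, a.z * s }; }
static float  dot(Float3 a, Float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One entry of the "constant buffer": a vertex that has already been lit
// directly and now acts as a tiny secondary emitter.
struct EmitterVertex
{
    Float3 position;
    Float3 normal;
    Float3 radiosity; // light this vertex reflects back into the scene
};

// Per-vertex gather: accumulate every emitter's contribution onto the
// receiving vertex with an unshadowed point-to-point form factor.
Float3 GatherIndirect(Float3 recvPos, Float3 recvNormal,
                      const std::vector<EmitterVertex>& emitters)
{
    Float3 result = { 0.0f, 0.0f, 0.0f };
    for (const EmitterVertex& e : emitters)
    {
        Float3 toEmitter = sub(e.position, recvPos);
        float  distSq    = dot(toEmitter, toEmitter) + 1e-4f; // avoid divide-by-zero
        Float3 dir       = mul(toEmitter, 1.0f / std::sqrt(distSq));

        // Cosine terms at the receiver and at the emitter, clamped to zero.
        float cosRecv = std::fmax(dot(recvNormal, dir), 0.0f);
        float cosEmit = std::fmax(-dot(e.normal, dir), 0.0f);

        result = add(result, mul(e.radiosity, cosRecv * cosEmit / distSq));
    }
    return result; // added on top of the vertex's direct lighting, scaled to taste
}
[/code]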

Something I just thought about. My vertex shader is primarily for doing GI/Shadows. Direct diffuse light could be done in the pixel shader mixed with the vertex shading. So normal maps could still be used.
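In plain C++ terms the split looks something like this (just a sketch standing in for the pixel shader; the names are placeholders): the GI term arrives interpolated from the vertex shader, the direct term is evaluated per pixel, so the normal map still contributes.

[code]
struct Float3 { float x, y, z; };

// interpolatedVertexGI : indirect light computed per vertex and interpolated
// normalMappedDirect   : per-pixel N.L * lightColor using the normal map
Float3 ShadePixel(Float3 albedo, Float3 interpolatedVertexGI, Float3 normalMappedDirect)
{
    return { albedo.x * (normalMappedDirect.x + interpolatedVertexGI.x),
             albedo.y * (normalMappedDirect.y + interpolatedVertexGI.y),
             albedo.z * (normalMappedDirect.z + interpolatedVertexGI.z) };
}
[/code]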

Ruled out light propagation volumes? They've shipped in games, and as long as you have sufficient voxel resolution you can get pretty nice results.

not ruled it out so much as i've never seen convincing results. and i don't know much about the technique. but what games use it besides crysis 2 and a handful of upcoming ce3 games? even crysis 3's indirect lighting is remarkably flat, but i imagine their tradeoffs are steep.

if i experiment with the sdk can i hope to see results like what i posted?

looks like LPVs are an optimized variant of point-based/fuzzy GI? i was convinced they were doing general light transfer without considering the shadow problem, but i guess that's kind of impossible, given that with enough bounces in any solution you're gonna get some indirect shadow, right? though that might depend on how you handle your ambient light.

i don't yet have enough programming experience in this area to properly make sense of these techniques. i mean, i realize crytek is dealing with vast geometry and effect overhead, so whatever the effect, its use is gonna be limited in their games, but it seems i was pretty mistaken about their methods.

^edited this a bunch. sorry if it's a mess for anyone reading. Edited by inlimbo

[quote name='inlimbo' timestamp='1350700425' post='4992024']
imperfect shadow map reference:
[/quote]
Notice the severe Peter Panning effect at 3 minutes. The result is that it looks as if the ring is flying above the table, never touching it.

Edit: The paper actually states that "While ISMs may contain incorrect depth values, the resulting errors in the indirect illumination are small but the computational gains...". Inaccurate depth values lead to the need for a large depth-test bias, which leads to Peter Panning. But I suppose the purpose is to use this for indirect illumination, which is indeed what is stated, in which case it may be ok? There are some cases that have to be avoided, like thin geometry. Edited by larspensjo
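To make the trade-off concrete, the depth test behind this looks roughly like the following (just a sketch, not engine code). The bias has to exceed the worst-case depth error in the map, and every bit of extra bias removes shadowing near contact points, which is exactly the Peter Panning visible on the ring.

[code]
// receiverDepth : depth of the shaded point as seen from the light
// occluderDepth : value read from the (imperfect) shadow map
// bias          : must be larger than the worst-case depth error in the map
bool InShadow(float receiverDepth, float occluderDepth, float bias)
{
    return receiverDepth - bias > occluderDepth;
}
// With an accurate shadow map a tiny bias is enough. With the noisy depths of
// an ISM the bias has to grow, and the shadow visibly detaches from the object.
[/code]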

from what i can tell the ring isn't touching that surface at all, so it's not merely an illusion caused by the reduced shadow accuracy, since it looks like they're using both direct and indirect shadow maps there. i mean, you can see a mild peter-panning effect during the camel's run, for example, but that's acceptable as far as i'm concerned. Edited by inlimbo

Whoops, forgot to get back to you! LPVs aren't *quite* the same thing as point-cloud GI. You're basically chopping up the world into a voxel grid and figuring out how much light is flowing through each voxel; you inject some source light via reflective shadow maps or just plain point emitters (as I understand it, this was experimented with in Crysis, though I guess it was dropped. Apparently Crytek don't know how to do specular with it or something?) and then diffuse everything out across the volume.

In practice, you can get [url="http://www.youtube.com/watch?v=QQfQMNMFGmg"]pretty fantastic results[/url], though there are still a few areas that could stand to be improved over what shipped in Crysis/possibly the video. The big one, like you point out, is the lack of specular highlights -- this isn't so much a shortcoming of the technique as something Crytek never seem to have put the effort into doing. The idea has been around since around the turn of the millennium: you can project BRDFs into the spherical harmonic basis and compute reflectance by convolving them with the sampled LPV result. [url="http://www.bungie.net/images/Games/Reach/images/screenshots/ReachCampaign_m10_01.jpg"]Halo 3/Reach do something fairly similar[/url], though they sample the lightmap at a character's feet -- things are also a little smoothed over since you're using the same lighting data for the whole model.
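For the diffuse case that convolution boils down to a handful of constants. Here's a hedged C++ sketch using two-band (4-coefficient) SH, which is what an LPV cell typically stores per color channel, assuming the cell holds incoming radiance; the band factors are the standard clamped-cosine ones and everything else is placeholder naming:

[code]
#include <cmath>

// Two-band SH: Y00 = 0.282095, Y1(-1,0,1) = 0.488603 * (y, z, x).
struct SH4 { float c[4]; };

// Evaluate the SH basis for a unit direction.
SH4 ShBasis(float x, float y, float z)
{
    return { { 0.282095f, 0.488603f * y, 0.488603f * z, 0.488603f * x } };
}

// Convolve the radiance stored in a cell with a clamped cosine lobe around the
// surface normal; pi and 2*pi/3 are the usual clamped-cosine band factors.
float DiffuseFromCell(const SH4& cellRadiance, float nx, float ny, float nz)
{
    const float pi = 3.14159265f;
    SH4 lobe = ShBasis(nx, ny, nz);
    lobe.c[0] *= pi;
    lobe.c[1] *= 2.0f * pi / 3.0f;
    lobe.c[2] *= 2.0f * pi / 3.0f;
    lobe.c[3] *= 2.0f * pi / 3.0f;

    float e = 0.0f;
    for (int i = 0; i < 4; ++i)
        e += cellRadiance.c[i] * lobe.c[i];
    return e > 0.0f ? e : 0.0f; // irradiance; multiply by albedo/pi for Lambert
}
[/code]

A glossier BRDF just means projecting a sharper lobe instead of the cosine one, which is where the specular story would start.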

The other issue is that light can bleed around corners somewhat, since you can't actually just linearly blend between SH samples! The Dangerous Curves breakdown by the ATI demo team goes into detail on this; you really need to use irradiance gradients for proper blending.

but crytek doesn't do any preprocess for LPVs, right? so they're generating that voxel grid in real-time by deconstructing/resampling geometry?

[quote name='inlimbo' timestamp='1351227615' post='4994036']
but crytek doesn't do any preprocess for LPVs, right? so they're generating that voxel grid in real-time by deconstructing/resampling geometry?
[/quote]
Not *exactly,* no. The specific process that survived its way into Crysis 2 was to generate a reflective shadow map that described the amount of reflected light, then inject every texel from said RSM into the light propagation volume. This is specifically how global illumination was achieved, and the reason why there aren't any indirect shadows; visibility was entirely implicit. You can run a separate grid that contains information on how much light (if any) is allowed to pass through a certain voxel and use that to help propagate light more correctly, but this wasn't done in Crysis. The results can still look quite acceptable when combined with SSAO or alternate techniques that [url="http://kayru.org/articles/dssdo/"]generate SH visibility directly[/url].
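As a rough illustration of that injection step (a simplified C++ sketch, not Crytek's actual code; the grid layout, the cell size, and the single-channel flux are placeholders, and the solid-angle weighting real implementations use is left out):

[code]
#include <cmath>
#include <vector>

struct SH4   { float c[4]; };
struct Texel { float px, py, pz;   // world-space position of the RSM texel
               float nx, ny, nz;   // world-space normal
               float flux; };      // reflected flux (albedo * light), one channel

const int   GRID = 32;             // cells per axis (placeholder)
const float CELL = 1.0f;           // cell size in world units (placeholder)

// Clamped cosine lobe around a direction, projected into two-band SH.
// 0.8862 and 1.0233 are the standard zonal coefficients of max(cos, 0).
SH4 CosineLobe(float nx, float ny, float nz)
{
    return { { 0.8862f, 1.0233f * ny, 1.0233f * nz, 1.0233f * nx } };
}

// Treat every RSM texel as a small secondary emitter and accumulate it into
// the cell it falls in. grid must hold GRID^3 zero-initialized cells,
// e.g. std::vector<SH4> grid(GRID * GRID * GRID, SH4{});
void Inject(const std::vector<Texel>& rsm, std::vector<SH4>& grid)
{
    for (const Texel& t : rsm)
    {
        int x = (int)std::floor(t.px / CELL);
        int y = (int)std::floor(t.py / CELL);
        int z = (int)std::floor(t.pz / CELL);
        if (x < 0 || y < 0 || z < 0 || x >= GRID || y >= GRID || z >= GRID)
            continue;

        SH4 lobe = CosineLobe(t.nx, t.ny, t.nz);
        SH4& cell = grid[(z * GRID + y) * GRID + x];
        for (int i = 0; i < 4; ++i)
            cell.c[i] += t.flux * lobe.c[i]; // accumulate; propagation happens later
    }
}
[/code]

There's no visibility anywhere in there, which is the point: whatever the RSM sees gets injected, and the propagation sweep afterwards just diffuses it.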

EDIT: Here's the actual [url="http://www.crytek.com/cryengine/cryengine3/presentations/cascaded-light-propagation-volumes-for-real-time-indirect-illumination"]LPV paper[/url]. Edited by InvalidPointer

with LPV everything is doable without precomputation in about 10ms per frame for 2 cascades, depending on the quality of your injections. geometry injection, though, is difficult to get right and is theoretically flawed because of the density issue you get from buffers with perspective. this is not mentioned in the papers, but it's a real flaw that makes a camera-based geometry volume (GV) unusable, and therefore the GV is almost useless. it only works more or less once you inject at least 1 or 2 million blocker points, which takes more than 50ms per frame per cascade, and that is not acceptable. the nvidia demo even uses depth peeling up to 4 layers, multiplying this cost by 4! of course, to remain reasonable they reduce injection quality, and the result is a solution that flickers with camera movement.

also, quite frankly, the propagation is poorly controllable, and in many demos i've seen it doesn't seem to respect correct energy attenuation. the paper says they respect perfect energy conservation, which, from experience, I don't believe they actually achieve. Andreas Kirsch mentioned some mistakes, and he himself made some mistakes in his annotations (cf. the radiance formula that misses the surface term). mixing SH and radiance/intensity/flux into one technique is the best recipe for ending up with incomprehensible units and magnitudes in the values you manipulate. cf. Sebastien Lagarde's "to Pi or not to Pi in your game lighting equation", and cf. "radiance from irradiance" [Ramamoorthi], a very complex paper that has brought confusion to many minds since its publication and makes people mix up the two terms as if they were equivalent much too often.

also, I have nowhere seen a clear explanation of the units in which the SH distributions representing light are stored (radiance, intensity, flux...?) or of which normalizations are necessary (pi, 2*pi, 4*pi, something else?).

the upshot of all these complications is that LPV can be good or bad depending on how much you rely on your math (probably wrong) or on empirical values (probably wrong as well, but at least artist-tunable).
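to make the pi bookkeeping concrete, here is the smallest example of what i mean (plain Lambert, nothing LPV-specific, arbitrary numbers):

[code]
#include <cstdio>

// For a Lambertian surface under constant incoming radiance L over the
// hemisphere:
//   irradiance        E  = pi * L             (integral of L * cos(theta) d_omega)
//   Lambert BRDF      f  = albedo / pi
//   outgoing radiance Lo = f * E = albedo * L (the two pis cancel)
int main()
{
    const float pi     = 3.14159265f;
    const float L      = 1.0f;  // incoming radiance, arbitrary units
    const float albedo = 0.5f;

    float E           = pi * L;
    float Lo          = (albedo / pi) * E; // correct: 0.5
    float LoMissingPi = albedo * E;        // BRDF not divided by pi: ~1.57, too bright

    std::printf("correct Lo = %f, missing-pi Lo = %f\n", Lo, LoMissingPi);
    return 0;
}
[/code]

now pile SH coefficients, intensity and flux on top of that and it is very easy to be off by pi, 2*pi or 4*pi somewhere without ever noticing.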
