Infinite shadow volumes without Carmack's reverse...


Changing the direction of the depth test is logically equivalent to changing the stencil operation from pass to fail. However, most GPUs will disable hierarchical z-buffering for the rest of the frame if you change the direction of the depth test, so doing that will probably cause a performance hit.
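In OpenGL terms, the two set-ups being compared would look roughly like this for the front faces of a volume (a minimal sketch only, single-sided stencil, assuming a GL 1.4+ header/loader for GL_DECR_WRAP):

```cpp
// Variant A: keep the usual depth comparison and count where the depth test FAILS.
void setupFrontFacesStencilFail()
{
    glDepthFunc(GL_LESS);                 // normal comparison
    glStencilOp(GL_KEEP,                  // stencil-test fail
                GL_DECR_WRAP,             // depth-test fail: count here
                GL_KEEP);                 // depth-test pass
}

// Variant B: reverse the depth comparison and count where the depth test PASSES.
// The same pixels get tagged (GL_GEQUAL passes exactly where GL_LESS fails), but
// the reversed comparison is what tends to defeat hierarchical-Z on many GPUs.
void setupFrontFacesReversedDepth()
{
    glDepthFunc(GL_GEQUAL);               // reversed comparison
    glStencilOp(GL_KEEP,                  // stencil-test fail
                GL_KEEP,                  // depth-test fail
                GL_DECR_WRAP);            // depth-test pass: count here
}
```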

Shoot me too if my approach is also already well known. For my shadow volumes (using the big gray full-screen overlay quad method), I've changed the stencil comparison from Compare.Less to Compare.GreaterEqual when rendering the overlay, for the case where the camera is inside the shadow volume.

This seems to invert the shadow, so it gives the correct effect, though I haven't implemented a 'camera in shadow' check yet. So maybe a reason for still using Z-fail is that it is more forgiving about the cam-shadow check, a check which is (as far as I understand) only needed to increase performance by switching to the Z-pass method when possible.
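Translated into plain OpenGL (the Compare.Less / Compare.GreaterEqual enums look like a Direct3D-style wrapper, so take the exact calls as my interpretation; drawFullScreenQuad is a placeholder for the big gray quad), the overlay pass would be roughly:

```cpp
// Assumes the stencil buffer already holds the Z-pass counts for this light.
void drawShadowOverlay(bool cameraInShadow)
{
    glEnable(GL_STENCIL_TEST);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);   // read-only stencil
    glDisable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);

    if (!cameraInShadow)
        glStencilFunc(GL_LESS, 0, 0xFF);      // darken where 0 < count  (Compare.Less)
    else
        glStencilFunc(GL_GEQUAL, 0, 0xFF);    // darken where 0 >= count (Compare.GreaterEqual)

    drawFullScreenQuad();                     // the big gray quad (placeholder)
}
```

With the camera outside all volumes the quad covers the pixels with a non-zero count; flipping the comparison covers the zero-count pixels instead, which matches the inverted result described above.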

My method would require an exact computation of whether the camera is in shadow or not, but on the other hand it doesn't require capped shadow volumes so the computational expense should be about equal to the Z-fail approach, maybe even faster with optimisations (i.e. only calculating cam-shadow intersection when shadow and/or cam pos changed).
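For a single directional light, one cheap way to do that check (a sketch only; intersectsScene stands for whatever ray query the engine already has) is to cast a ray from the eye towards the light and cache the result until the camera or the light moves:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Assumed to exist elsewhere in the engine: true if the ray hits any occluder.
bool intersectsScene(const Vec3& origin, const Vec3& dir);

struct CamShadowCache
{
    Vec3 lastEye{0, 0, 0};
    Vec3 lastToLight{0, 0, 0};
    bool inShadow = false;
    bool valid = false;

    static bool moved(const Vec3& a, const Vec3& b)
    {
        return std::fabs(a.x - b.x) + std::fabs(a.y - b.y) + std::fabs(a.z - b.z) > 1e-4f;
    }

    // For a directional light the eye is inside some shadow volume exactly when
    // a ray from the eye towards the light hits an occluder.
    bool cameraInShadow(const Vec3& eye, const Vec3& dirToLight)
    {
        if (!valid || moved(eye, lastEye) || moved(dirToLight, lastToLight))
        {
            inShadow    = intersectsScene(eye, dirToLight);
            lastEye     = eye;
            lastToLight = dirToLight;
            valid       = true;
        }
        return inShadow;
    }
};
```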

I'm using this technique for my demo, which renders an outdoor scene. For this it seems to be the way to go, since the dimensions of the landscape require a very 'far' far clipping plane, so this is not an issue for my shadow volumes. Maybe this approach is ignored because it just doesn't work well for indoor environment shadows?

Could anyone with some more theoretical background shed some light on this, if it really would work? I just realized I don't even know exactly why the Z-pass algorithm fails when the camera is inside a shadow volume, only that it does.

Quote:
Could anyone with some more theoretical background shed some light on this, if it really would work? I just realized I don't even know exactly why the Z-pass algorithm fails when the camera is inside a shadow volume, only that it does.

Shadow volumes make use of the property that, for a closed volume, a ray that intersects the volume will pass through a back facing polygon for every front facing polygon it travels through. An area receiving shadow is where the ray intersects another surface before leaving the volume. Thus, by counting the number of front and back facing polygons between the camera and the shadow receiver, you can calculate which parts, if any, are in shadow.

The z-pass method effectively compares the numbers of front and back facing planes between the camera and the receiver. The receiver is in shadow if they are not equal. Have a look at the diagram from this article and convince yourself that, for any point outside the volume, a ray to any shadowed area passes through only one plane, and a ray to any part outside passes through both a front and back facing plane.

This is no longer the case when the camera is inside the volume: the volume faces behind the eye (or clipped away by the near plane) are never rasterised, so the counts come out wrong and the values in the stencil buffer are typically reversed.

The z-fail method instead counts the number of volume faces between the shadow receiver and "infinity", which is why z-fail needs closed (capped) volumes. Thus it does not matter how many faces lie between the camera and the receiver, and the camera is free to enter the volume.
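For reference, the two counting schemes can be summarised with a minimal OpenGL sketch (single-sided stencil, two passes with face culling for clarity; drawShadowVolumes is a placeholder, the depth buffer is assumed to already contain the scene, and a GL 1.4+ header/loader is assumed for the wrap ops):

```cpp
// Placeholder for the engine's shadow-volume geometry.
void drawShadowVolumes();

// Common state for either scheme: no colour or depth writes, stencil always passes.
void beginVolumePass()
{
    glEnable(GL_STENCIL_TEST);
    glEnable(GL_CULL_FACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
}

// Z-pass: count front/back faces between the camera and the receiver.
void countVolumesZPass()
{
    beginVolumePass();

    glCullFace(GL_BACK);                          // draw front faces
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR_WRAP);  // +1 where the depth test passes
    drawShadowVolumes();

    glCullFace(GL_FRONT);                         // draw back faces
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR_WRAP);  // -1 where the depth test passes
    drawShadowVolumes();
}

// Z-fail: count faces between the receiver and infinity (needs capped volumes).
void countVolumesZFail()
{
    beginVolumePass();

    glCullFace(GL_FRONT);                         // draw back faces
    glStencilOp(GL_KEEP, GL_INCR_WRAP, GL_KEEP);  // +1 where the depth test fails
    drawShadowVolumes();

    glCullFace(GL_BACK);                          // draw front faces
    glStencilOp(GL_KEEP, GL_DECR_WRAP, GL_KEEP);  // -1 where the depth test fails
    drawShadowVolumes();
}
```

In both cases a non-zero count afterwards marks a shadowed pixel, which is what the overlay pass then tests.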

Quote:
My method would require an exact computation of whether the camera is in shadow or not

You could still have problems here. There are cases where a volume intersects the near clipping plane, so that part of the screen lies within the volume and part outside, and a single camera-in-shadow flag can't describe that. This may not be a problem depending on the type of game your engine is designed for, but for a typical FPS you will probably notice the effects.

Thanks for the info, MumbleFuzz.

I am only using a directional light (for the sun) and an overview camera for my terrain scene, so the volumes are typically extruded away from the camera with no risk of intersecting the near plane. In fact, I haven't had to use anything other than Z-pass so far, and chances are I never will, but I just wanted to check my approach.

However, I just re-read the start of the topic and noticed Name_Unknown's comment that everyone is using shadow maps nowadays. Is this true? I did some quick reading on shadow maps and the results do look good, but it seems quite an expensive technique and very hard to implement efficiently on pre-SM2 hardware... Would that assumption be correct? :)

Shadow maps are becoming popular, but I haven't yet implemented them myself. If done well the visual quality is probably a little higher than using volumes (soft shadows are easier to implement, for example). Depending on your engine requirements, implementing shadow maps well may be harder than implementing shadow volumes, but I think in your case you shouldn't have many problems.

I can't tell you much more than that. For more, I recommend you search the forums for Yann L's comments on shadow maps.

Quote:
Original post by Name_Unknown
ok, I see. Thanks Mr. Lengyel ;-)

I realized it was sort of the same idea as reversing it (logically) which is why I tried it... I had this idea that if I did that, well.. it wasn't using the patented algorithm anymore ;-)

I guess not.


Wait... you mean it's literally "patented"? Is that even legally possible? Can you actually patent a shadowing method?

Quote:
Original post by Leo_E_49
Wait... you mean it's literally "patented"? Is that even legally possible? Can you actually patent a shadowing method?


Yes, Creative has patented the z-fail shadow technique, but I wouldn't fear it. First of all, prior art exists for it, so a lawsuit would probably be quite dodgeable. Second, a lot of patents are issued to protect yourself from getting sued, not to sue other people.

Besides, why would Creative, who make GPUs that benefit from robust shadows, want to sue a developer for implementing it?

I just read an article saying that Creative was going to sue Carmack but they settled it out of court by having Carmack license Creative technology for Doom 3. I might well just avoid using shadow volumes in future...

Quote:
Original post by Leo_E_49
I just read an article saying that Creative was going to sue Carmack but they settled it out of court by having Carmack license Creative technology for Doom 3. I might well just avoid using shadow volumes in future...

Software patents are a very bad idea, and decidedly dodgy in my eyes. They lead to things like the stunt Creative (allegedly) pulled on id. However, they only apply in the US.

IANAL, but I invented this technique before Creative or Carmack, and presented it at GDC 99 during the advanced d3d tutorial day, in my talk "Using the Stencil Buffer", available on the nvidia website.

Some would say that this represents 'prior art' and would make the Creative patent invalid. Also, if you read the patent, they don't patent z-fail shadow volumes; they patent 'z-pass greater' shadow volumes which, although logically similar, are worse because they can screw up z-cull hardware.

Guest Anonymous Poster
SimmerD:

I have read the patent and agree with you. They DON'T talk about changing the stencil op with a single word. Instead they _explicitly_ talk about reversing the z-buffer comparison mode. So I think the usual implementation of Carmack's Reverse (or should we call it Sim's Reverse? :-) is not covered by the patent. But why did id Software give in?

Excerpt:
"A key step in the present process is the definition of the newZTest (X, P) function which inverts the z-test comparison used in the prior art. In the usual test, pixels having depth (z) values less than the depth (z) values stored in the z-buffer pass the z-test. In newZTest( ), pixels having depth (z) values greater than the corresponding depth (z) value stored z-buffer values pass the new z-test. "

I'm willing to bet the lawsuit would have cost more than licensing Creative technology, assuming Creative were willing to settle that way. Also, taking the issue to court might have marred the company's image. Creative makes some decent stuff anyway, so it's not a bad trade-off IMHO.

I can't speculate on why id chose that route.

I have mixed feelings on patents - having several issued and pending myself.

Remember that, at the time, board companies were trying very hard to have exclusive content so they could maintain margins against their competitors, since the real value was in the 3D chip, not in the board or the warranty. So I can understand why they pursued the patent at that time.

Guest Anonymous Poster
I'm not sure if I remember this right, but wasn't the problem at id that Creative threatened them with withdrawal of the license to the Soundblaster EAX stuff? In that case id could have forgotten all about labelling Doom III 'Soundblaster (EAX) compatible', which would have been pretty bad marketing IMHO.

Quote:
Original post by Anonymous Poster
I'm not sure if I remember this right, but wasn't the problem at id that Creative threatened them with withdrawal of the license to the Soundblaster EAX stuff? In that case id could have forgotten all about labelling Doom III 'Soundblaster (EAX) compatible', which would have been pretty bad marketing IMHO.


Actually, I heard that id hadn't been planning on using EAX, but the agreement they reached with Creative was something like "you can use the shadow technique if you implement EAX." AFAIK you don't need a license to write EAX stuff, so there's nothing Creative could have withdrawn. But of course, this is all rumour; only the folks at id know for sure.

Guest Anonymous Poster
Creative's actions sound like those of a company willing to stop at nothing to gain an edge in the cut-throat world of graphics accelerators, probably taking hints from nVidia's method of using awesome demos to promote its accelerators.

Nah, that's not fair to nVidia. Unlike most other companies, nVidia is a developer's best friend: the community support, the amount of material and tools on their developer site compared to, say, ATi's, and the depth of available registration-free tools (see NVPerfKit; I don't see ATi doing anything similar anytime soon) are the best. Not to mention that SimmerD and JavaCoolDude on these forums are both nVidia-ns(tm).

Quote:
Original post by Code-R
Nah, that's not fair to nVidia. Unlike most other companies, nVidia is a developer's best friend: the community support, the amount of material and tools on their developer site compared to, say, ATi's, and the depth of available registration-free tools (see NVPerfKit; I don't see ATi doing anything similar anytime soon) are the best. Not to mention that SimmerD and JavaCoolDude on these forums are both nVidia-ns(tm).


I'm not arguing against you, but I'd just like to point out ATI's RenderMonkey tool suite, a free tool for developing shaders. ATI do their share as well.

I'm not saying ATi isn't doing anything, because RenderMonkey is definitely one of the coolest shader tools out there. It's that ATi provides far fewer materials, tools and papers than nVidia, and they don't expose as much to users. Case in point: NVPerfHUD and NVPerfKit. Besides, ATi's flaky Cg support, even though the compiler frontend from nVidia is open source, is just stubbornness :).
In short, ATi is working, but nVidia makes it look as though ATi isn't working hard enough ;)

