Overlapping Effects

I'm having some issues figuring out how to do multiple effects. For instance, let's say you have a character on a plane. I am using an HLSL vertex/pixel shader to create a shadow map, then rendering the scene using that shadow map. The problem comes in when I now want the character to blur while moving. I ONLY want the character to blur, but I also want him to cast a shadow. The shadow map renders to a texture first, then the scene is rendered applying the shadow map. How do people go about handling multiple effects like that? Do you have a shader designed to handle both (shadow map & blur) based on flags, like:
m_Player.SetBlur(true);
ApplyEffect("GenShadowMap");
DrawScene();

ApplyEffect("CombineShadowMapANDBlur"); ??? // another words, both are in one shader
DrawScene();

OR, is there a way to "generisize" (it's a made-up word) it so I can have it more like this:
ApplyEffect("GenShadowMap");
DrawScene();

ApplyEffect("DrawWithShadowMap");
DrawScene(); // problem is, the player is drawn here too

ApplyEffect("Blur");
m_Player.Draw(); // player is drawn again with blur, but was first drawn with shadow??
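For the "both in one shader" option, something like this is what I'm picturing (a rough HLSL sketch only; all the names here are made up):

// Rough sketch of the flag-based ubershader option; every name here is
// invented. The shadow test always runs; blur is toggled by a flag the
// app sets per-object (e.g. after something like m_Player.SetBlur(true)).
sampler2D DiffuseSampler   : register(s0);
sampler2D ShadowMapSampler : register(s1);
bool      EnableBlur;              // the per-object flag
float     ShadowBias = 0.001f;

float4 PS_ShadowAndBlur(float2 uv            : TEXCOORD0,
                        float4 lightSpacePos : TEXCOORD1) : COLOR0
{
    // Standard shadow-map comparison.
    float2 shadowUV    = lightSpacePos.xy / lightSpacePos.w * float2(0.5f, -0.5f) + 0.5f;
    float  storedDepth = tex2D(ShadowMapSampler, shadowUV).r;
    float  pixelDepth  = lightSpacePos.z / lightSpacePos.w;
    float  lit         = (pixelDepth - ShadowBias <= storedDepth) ? 1.0f : 0.4f;

    float4 color = tex2D(DiffuseSampler, uv) * lit;

    // Flag-controlled effect; a single offset tap stands in for a real blur.
    if (EnableBlur)
        color = 0.5f * (color + tex2D(DiffuseSampler, uv + float2(0.004f, 0.0f)) * lit);

    return color;
}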


Thanks,
Jeff.

From what I gather, many 3D games have a system for compiling shaders from fragments of HLSL, and use this to create many combinations of shaders that can be used depending on what's being rendered and under which settings. But of course with this you still need to strike a balance... you usually can't compile every combination, since each new fragment can cause a large increase in the number of permutations. Plus, depending on what type of hardware and shader model you're targeting, you may frequently bump up against the instruction limit.
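As a rough illustration of what I mean (every name here is invented, and this is a sketch, not how any particular engine does it), fragments can be as simple as #ifdef'd blocks that the app compiles once per define combination:

// Sketch of #define-driven shader permutations. The app compiles this
// source once per define combination (e.g. via the pDefines/D3DXMACRO
// argument of D3DXCompileShader) and caches each variant. Two flags
// here means four possible permutations.
sampler2D DiffuseSampler   : register(s0);
#ifdef USE_SHADOWS
sampler2D ShadowMapSampler : register(s1);
#endif

float4 PS_Main(float2 uv            : TEXCOORD0,
               float4 lightSpacePos : TEXCOORD1) : COLOR0
{
    float4 color = tex2D(DiffuseSampler, uv);

#ifdef USE_SHADOWS
    // Project into the shadow map and darken if occluded.
    float2 shadowUV = lightSpacePos.xy / lightSpacePos.w * float2(0.5f, -0.5f) + 0.5f;
    if (lightSpacePos.z / lightSpacePos.w - 0.001f > tex2D(ShadowMapSampler, shadowUV).r)
        color.rgb *= 0.4f;
#endif

#ifdef USE_BLUR
    // Stand-in for a real blur: average with one offset tap.
    color = 0.5f * (color + tex2D(DiffuseSampler, uv + float2(0.004f, 0.0f)));
#endif

    return color;
}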

And while it's probably intuitive to think that combining effects into one pass will always be more efficient than splitting them across passes, that isn't always the case. For example, if for some reason it's necessary to render your character more than once, it would be cheaper to draw it without the blur multiple times than it would be if you were using a huge ubershader. Plus there's also the problem of GPUs rendering groups of pixels simultaneously (usually quads at the smallest level, but there are larger groups as well). A GPU gets its best efficiency when rendering to all four pixels that make up a quad at once, and when you're rendering complex geometry this frequently won't happen around the edges. This is a big reason why deferred shadowing is popular these days, where shadows are applied to the scene in a full-screen pass after models are rendered.

MJP,

So, are you basically saying having them in one shader with flags is the "norm"? I was leaning that way myself. I don't have tons of shaders, so a system of combining shaders from fragments would be WAY overkill for anything I'm doing. I do like the deferred shading method you suggested, but for this application I think I will stick with shadow maps.

Thanks again,
Jeff.

Quote:
Original post by webjeff
MJP,

So, are you basically saying having them in one shader with flags is the "norm"? I was leaning that way myself. I don't have tons of shaders, so a system of combining shaders from fragments would be WAY overkill for anything I'm doing. I do like the deferred shading method you suggested, but for this application I think I will stick with shadow maps.



I honestly couldn't tell you what the norm is; I'm just a hobbyist myself. However, from discussions here and also some papers and discussions of commercial engines (such as the one Crytek put out for CryEngine 2), I get the idea that the technique of combining shader fragments is, at the very least, fairly common. As such, you shouldn't take anything I say as gospel; I was just trying to share some of my experiences with you to give you an idea of what the common sticking points are when dealing with a shader/material system.

And as Ashkan said, I wasn't talking about deferred shading. Deferred shadows still use shadow maps, and you can still do it with a regular forward lighting calculation; the only difference is that the part where you compare the depths and come up with an occlusion factor isn't done at the same time that you render the geometry that the shadow actually affects. Also, I was mainly just using that as an example of how there really isn't a "this approach is best for all scenarios" solution to this problem, at least not one that I know of. [smile]

MJP,

Ah yes, I was confused! Thanks to both of you for clearing that up. I did some quick searches and couldn't find any obviously useful information about it. How do deferred shadows store their values for use later, if not at draw time?

Technically, I build a shadow map from the light's position and draw a depth image to test against later with the actual geometry. That's a two-pass approach.
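In other words, a first pass along these lines (a minimal sketch with invented names, details simplified):

// First pass: render depth from the light's point of view into a
// render-target texture (the shadow map).
float4x4 LightWorldViewProj;   // world * light view * light projection

void VS_ShadowDepth(float4 pos        : POSITION,
                    out float4 oPos   : POSITION,
                    out float2 oDepth : TEXCOORD0)
{
    oPos   = mul(pos, LightWorldViewProj);
    oDepth = oPos.zw;                      // carry z and w to the pixel shader
}

float4 PS_ShadowDepth(float2 depth : TEXCOORD0) : COLOR0
{
    float d = depth.x / depth.y;           // normalized depth in [0, 1]
    return float4(d, d, d, 1.0f);
}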

Are you saying you build a shadow map (store it), render your scene with other effects, and then as a post-process step you compare pixels against your entire scene??? (I searched Google and couldn't find any useful articles.) Maybe you know of one or two.

This is a little off-topic from my original question, but this could be a better solution overall if I look into doing other effects, since then I wouldn't have to build tons of shaders just to handle the shadow portion of each effect.

Thanks again,
Jeff.

Yeah, it's not a technique you're going to find tutorials on, unfortunately, because AFAIK it's still fairly new and hasn't been documented in a paper or anything like that. It first came to my attention when it was mentioned that Unreal Engine 3 uses this technique. nAo later discussed it over at Beyond3D and said it was a big win for Heavenly Sword. I remember it coming up in a few threads over in Graphics Programming and Theory, but I can only find this one at the moment.

You don't need to store the shadow occlusion factor. Instead, the idea is to render the scene, then access the z-buffer (or render your own depth buffer if that's not practical) and reconstruct the world-space position of each pixel using the depth value. You then use that world-space position to calculate the pixel's depth from the light's point of view, which you compare to the shadow map's depth value to come up with the occlusion factor you use to darken the pixel.
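To make that concrete, the full-screen pass might look something like this. It's only a rough sketch: the matrix and sampler names are all made up, and it ignores details like D3D9's half-pixel offset:

// Full-screen deferred shadow pass, sketched.
sampler2D DepthSampler     : register(s0);  // scene depth (z/w per pixel)
sampler2D ShadowMapSampler : register(s1);  // depth from the light's POV

float4x4 InvViewProj;    // inverse of the camera's view * projection
float4x4 LightViewProj;  // the light's view * projection
float    ShadowBias = 0.001f;

float4 PS_DeferredShadow(float2 uv : TEXCOORD0) : COLOR0
{
    // Reconstruct the pixel's world-space position from its depth.
    float  depth   = tex2D(DepthSampler, uv).r;
    float4 clipPos = float4(uv.x * 2.0f - 1.0f,
                            (1.0f - uv.y) * 2.0f - 1.0f,
                            depth, 1.0f);
    float4 worldPos = mul(clipPos, InvViewProj);
    worldPos /= worldPos.w;

    // Re-project into the light's view and compare depths.
    float4 lightPos = mul(worldPos, LightViewProj);
    float2 shadowUV = lightPos.xy / lightPos.w * float2(0.5f, -0.5f) + 0.5f;
    float  occluder = tex2D(ShadowMapSampler, shadowUV).r;
    float  receiver = lightPos.z / lightPos.w;

    // The occlusion factor, used to darken the already-lit scene.
    float shadow = (receiver - ShadowBias > occluder) ? 0.4f : 1.0f;
    return float4(shadow, shadow, shadow, 1.0f);
}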
