Material ID

Chris_F    3030
I'm considering which approach would be better for implementing materials in a deferred renderer: either use a shader that branches based on a material ID, or use the 3D light attenuation texture lookup method.

On modern hardware (i.e. Fermi and HD 5000+), which is likely to be the better solution? Is anyone currently using one of these, or better yet, has anyone tried both?

evanofsky    2913
What do you mean by "light attenuation texture lookup"?

In my deferred renderer, I have a few different buffers with material properties, like specular brightness and smoothness. That way you avoid expensive shader branching but still maintain some control. It's a fairly common approach.

Chris_F    3030
[quote name='et1337' timestamp='1326133589' post='4901005']
What do you mean by "light attenuation texture lookup"?
[/quote]

[url="http://www.catalinzima.com/tutorials/deferred-rendering-in-xna/multiple-materials/"]http://www.catalinzi...iple-materials/[/url]

[quote name='et1337' timestamp='1326133589' post='4901005']
In my deferred renderer, I have a few different buffers with material properties, like specular brightness and smoothness. That way you avoid expensive shader branching but still maintain some control. It's a fairly common approach.
[/quote]

Yeah, that's the typical solution if you aren't too concerned about a high level of material variation. You have just one BRDF. I'm just not ready for that kind of commitment. I'm a multiple BRDF kind of guy. ;)

Madhed    4095
I haven't implemented a deferred renderer myself, but just a thought: Couldn't you, instead of a material ID, store texture coordinates in the g-buffer which index into a separate "material texture" with additional properties? Considering that a draw call would most probably have the same or similar material coordinates this might even be pretty cache friendly.

froop    642
[quote name='Madhed' timestamp='1326135718' post='4901018']
I haven't implemented a deferred renderer myself, but just a thought: Couldn't you, instead of a material ID, store texture coordinates in the g-buffer which index into a separate "material texture" with additional properties? Considering that a draw call would most probably have the same or similar material coordinates this might even be pretty cache friendly.
[/quote]

The material ID can represent an index into a 1D texture.
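E.g., the conversion is just sampling at the texel center (the 256-entry table width is an assumption for this sketch):

```c
/* Convert an integer material ID into a normalized 1D texture coordinate
   that samples the texel center of a material property table.
   A table width of 256 entries is assumed for illustration. */
double material_id_to_u(int id, int table_width)
{
    return (id + 0.5) / (double)table_width;
}
```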

evanofsky    2913
[quote name='Chris_F' timestamp='1326134819' post='4901015']
[url="http://www.catalinzima.com/tutorials/deferred-rendering-in-xna/multiple-materials/"]http://www.catalinzi...iple-materials/[/url]
[/quote]

That's neat. I guess it depends on what your needs are. For me, I only need a few different kinds of materials.

What's nice about deferred rendering is that most brand-new engines are using it now, and developers LOVE to make PowerPoints about it. Incidentally, I love to read them! [url="http://www.guerrilla-games.com/publications/dr_kz2_rsx_dev07.pdf"]This one on Killzone 2[/url] was especially interesting and helpful in laying out the G-buffers, if you decide to forgo the material ID method.

Anyways, good luck whichever way you decide! Make sure to post about it when it's done. :)

speciesUnknown    527
Light prepass is another option. Within your second pass you can use any lighting model you want. This is the method I'm playing with at the moment.

In summary, you do one pass to generate a g-buffer of only normal and depth, and then from this you generate a single RGB buffer with the diffuse light response of each surface. You then do a second pass, using the diffuse light buffer. This second pass can be any type you wish, provided it renders the same geometry.
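As a rough sketch of the two passes (scalar C standing in for shader code; diffuse-only, single directional light, which is a simplification of what a real implementation accumulates):

```c
#include <math.h>

/* Pass 1: with only normal + depth in the G-buffer, accumulate the
   diffuse light response for one light (clamped Lambert N.L). */
double light_pass_diffuse(const double n[3], const double l[3])
{
    double d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    return d > 0.0 ? d : 0.0;
}

/* Pass 2: re-render the geometry and combine the accumulated light
   buffer with the surface's own albedo (fetched per-material here). */
double second_pass(double light_buffer_value, double albedo)
{
    return albedo * light_buffer_value;
}
```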

Chris_F    3030
[quote name='speciesUnknown' timestamp='1326186657' post='4901232']
Light prepass is another option. Within your second pass you can use any lighting model you want. This is the method I'm playing with at the moment.

In summary, you do one pass to generate a g-buffer of only normal and depth, and then from this you generate a single RGB buffer with the diffuse light response of each surface. You then do a second pass, using the diffuse light buffer. This second pass can be any type you wish, provided it renders the same geometry.
[/quote]

Yes, I know about light pre-pass, and I have chosen not to use it because I think people vastly overestimate the variety of lighting that it allows for, it requires two geometry passes, and you won't have access to normals for post-processing.

I think the merits of light pre-pass lie in systems where MRT isn't supported, or where memory bandwidth is extremely critical. Neither is the case for me.

MJP    19755
[quote name='speciesUnknown' timestamp='1326186657' post='4901232'] Light prepass is another option. Within your second pass you can use any lighting model you want. [/quote]

Umm...what?

You're ultimately restricted to the lighting model you used [i]when you calculate the lighting[/i], which in a light prepass renderer happens before your second geometry pass. If you rendered out Blinn-Phong specular to your lighting buffer, you're not going to magically transform it to anisotropic. I would say that you're in just as bad a position as traditional deferred rendering with regards to multiple BRDFs, but that would be inaccurate because you're actually in a [i]worse[/i] position. This is because your lighting pass needs to render out separate diffuse + specular lighting terms, since your G-Buffer doesn't contain enough information to combine them. It also means using two render targets if you don't want to use monochrome specular, which is just plain bad.

Anyway, to respond to the OP...modern GPUs are getting pretty good at branching. If your branch is choosing between two longish sections of code then you'll definitely want some coherency, but fortunately a material ID should be at least somewhat coherent in screen space, since it's going to be the same for groups of adjacent triangles. For a long lighting shader there may also be some performance loss from additional register pressure. You can also sidestep per-pixel branching with a tile classification approach, which can help minimize the performance hit.
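A minimal sketch of the tile classification idea (the 8x8 tile size and the limit of 32 material IDs are assumptions for this sketch, not requirements of the technique):

```c
/* Scan an 8x8 tile of the material-ID buffer and build a bitmask of the
   IDs present. Tiles whose mask has a single bit set can be shaded with
   a specialized, branch-free shader; mixed tiles fall back to the
   branching one. Assumes material IDs fit in 0..31. */
unsigned classify_tile(const int *ids, int stride, int x0, int y0)
{
    unsigned mask = 0;
    for (int y = y0; y < y0 + 8; ++y)
        for (int x = x0; x < x0 + 8; ++x)
            mask |= 1u << ids[y * stride + x];
    return mask;
}

/* A tile is "coherent" if exactly one material appears in it. */
int tile_is_coherent(unsigned mask)
{
    return mask != 0 && (mask & (mask - 1)) == 0;
}
```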

Either way, I think using real BRDFs is the way to go, as opposed to faking them with lookup textures. Using a good physically-based BRDF makes it a lot easier to make consistently realistic materials.
