Material ID

7 comments, last by MJP 12 years, 3 months ago
I'm considering which approach would be better for implementing materials in a deferred renderer: either use a shader that branches on a material ID, or use the 3D light-attenuation texture lookup method.

On modern hardware (i.e. Fermi and HD5000+), which is likely to be the better solution? Is anyone currently using one of these, or better yet, has anyone tried both?
What do you mean by "light attenuation texture lookup"?

In my deferred renderer, I have a few different buffers with material properties, like specular brightness and smoothness. That way you avoid expensive shader branching but still maintain some control. It's a fairly common approach.
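A sketch of how such property channels might be stored; the convention here (normalized 8-bit channels, as in the spare channels of an RGBA8 render target) is mine, not necessarily what the poster uses:

```c
/* Pack a material property in [0,1] (e.g. specular brightness or
   smoothness) into one normalized 8-bit G-buffer channel. */
unsigned char pack_unorm8(float v)
{
    if (v < 0.0f) v = 0.0f;
    if (v > 1.0f) v = 1.0f;
    return (unsigned char)(v * 255.0f + 0.5f);  /* round to nearest */
}

/* Recover the property in the lighting pass. */
float unpack_unorm8(unsigned char b)
{
    return b / 255.0f;
}
```

The round trip loses at most half a quantization step, which is usually fine for things like smoothness but is why precision-sensitive data (depth, normals) gets wider formats.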

What do you mean by "light attenuation texture lookup"?


http://www.catalinzi...iple-materials/


In my deferred renderer, I have a few different buffers with material properties, like specular brightness and smoothness. That way you avoid expensive shader branching but still maintain some control. It's a fairly common approach.


Yeah, that's the typical solution if you aren't too concerned about a high level of material variation. You have just one BRDF. I'm just not ready for that kind of commitment. I'm a multiple BRDF kind of guy. ;)
I haven't implemented a deferred renderer myself, but just a thought: Couldn't you, instead of a material ID, store texture coordinates in the g-buffer which index into a separate "material texture" with additional properties? Considering that a draw call would most probably have the same or similar material coordinates this might even be pretty cache friendly.

I haven't implemented a deferred renderer myself, but just a thought: Couldn't you, instead of a material ID, store texture coordinates in the g-buffer which index into a separate "material texture" with additional properties? Considering that a draw call would most probably have the same or similar material coordinates this might even be pretty cache friendly.


The material ID can represent an index into a 1D texture.
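A sketch of that idea: the stored ID just selects a texel, so the lookup coordinate has to land on the texel center when point-sampling the 1D texture. The table contents below are made up for illustration:

```c
typedef struct {
    float spec_intensity;
    float spec_power;
} MaterialParams;

/* Hypothetical material table -- this is what the 1D texture would hold,
   one texel per material. */
static const MaterialParams material_table[] = {
    { 0.0f,   1.0f },  /* 0: matte   */
    { 0.5f,  32.0f },  /* 1: plastic */
    { 1.0f, 128.0f },  /* 2: metal   */
};

/* Texture coordinate that hits the texel center for a given ID, assuming
   point sampling of a 1D texture with table_size texels. */
float material_uv(int id, int table_size)
{
    return (id + 0.5f) / (float)table_size;
}
```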

http://www.catalinzi...iple-materials/


That's neat. I guess it depends on what your needs are. For me I only need a few different kinds of material.

What's nice about deferred rendering is that most brand-new engines are using it now, and developers LOVE to make PowerPoints about it. Incidentally, I love to read them! This one on Killzone 2 was especially interesting and helpful in laying out the G-buffers, if you decide to forgo the material ID method.

Anyways, good luck whichever way you decide! Make sure to post about it when it's done. :)
Light prepass is another option. Within your second pass you can use any lighting model you want. This is the method I'm playing with at the moment.

In summary, you do one pass to generate a g-buffer of only normal and depth, and then from this you generate a single RGB buffer with the diffuse light response of each surface. You then do a second pass, using the diffuse light buffer. This second pass can be any type you wish, provided it results in the same geometry.
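The data flow described above can be sketched per-pixel like this (monochrome lighting simplified to RGB floats; the pass split and function names are my own shorthand for the technique, not any engine's API):

```c
typedef struct { float r, g, b; } RGB;

/* Lighting pass: accumulate the diffuse light response into the light
   buffer, using only what the first geometry pass wrote (normal/depth,
   reduced here to a precomputed n_dot_l). */
RGB accumulate_light(RGB light_buffer, RGB light_color, float n_dot_l)
{
    if (n_dot_l < 0.0f) n_dot_l = 0.0f;
    light_buffer.r += light_color.r * n_dot_l;
    light_buffer.g += light_color.g * n_dot_l;
    light_buffer.b += light_color.b * n_dot_l;
    return light_buffer;
}

/* Second geometry pass: each surface shader samples the light buffer
   and applies its own albedo. */
RGB apply_albedo(RGB light, RGB albedo)
{
    return (RGB){ light.r * albedo.r, light.g * albedo.g, light.b * albedo.b };
}
```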

Light prepass is another option. Within your second pass you can use any lighting model you want. This is the method I'm playing with at the moment.

In summary, you do one pass to generate a g-buffer of only normal and depth, and then from this you generate a single RGB buffer with the diffuse light response of each surface. You then do a second pass, using the diffuse light buffer. This second pass can be any type you wish, provided it results in the same geometry.


Yes, I know about light pre-pass, and I have chosen not to use it because I think people vastly overestimate the variety of lighting that it allows for, it requires two geometry passes, and you won't have access to normals for post-processing.

I think the merit of light pre-pass is for systems where MRT is not supported, or where memory bandwidth is extremely critical. Neither of which is the case for me.
Light prepass is another option. Within your second pass you can use any lighting model you want.


Umm...what?

You're ultimately restricted to the lighting model you used when you calculated the lighting, which in a light pre-pass renderer happens before your second geometry pass. If you rendered out Blinn-Phong specular to your lighting buffer, you're not going to magically transform it to anisotropic. I would say that you're in just as bad a position as traditional deferred rendering with regards to multiple BRDFs, but that would be inaccurate because you're actually in a worse position. This is because your lighting pass needs to render out separate diffuse + specular lighting terms, since your G-buffer doesn't contain enough information to combine them. It also means using two render targets if you don't want to use monochrome specular, which is just plain bad.

Anyway, to respond to the OP... modern GPUs are getting pretty good at branching. If your branch is choosing between two longish sections of code then you'll definitely want some coherency, but fortunately a material ID should be at least somewhat coherent in screen space, since it's going to be the same for groups of adjacent triangles. For a long lighting shader there may also be some performance loss from additional register pressure. You can also use other means of handling the branching if you use a tile classification approach, which can help minimize the performance hit.
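A minimal sketch of the tile-classification idea, assuming small screen tiles (e.g. 8x8 pixels): scan each tile's material IDs once, then dispatch a cheap specialized shader for uniform tiles and the general branching shader only for mixed ones. The -1 sentinel and the CPU-side formulation are my conventions for the sketch:

```c
/* Returns the single material ID used by the whole tile, or -1 if the
   tile mixes materials and needs the general (branching) shader. */
int classify_tile(const int *ids, int count)
{
    int first = ids[0];
    for (int i = 1; i < count; ++i)
        if (ids[i] != first)
            return -1;  /* mixed tile: run the general shader */
    return first;       /* uniform tile: run the specialized shader */
}
```

In practice this classification runs on the GPU (e.g. in a compute or downsampling pass), but the payoff is the same: most tiles are uniform, so most pixels never pay for the branch.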

Either way I think using real BRDFs is the way to go, as opposed to faking them with lookup textures. Using a good physically-based BRDF makes it a lot easier to create consistently realistic materials.

This topic is closed to new replies.
