Materials System & Per-Pixel Lighting

Hi, I'm in the process of writing the material system for my 3D engine. I've already decided that the system will be scripted rather than built from dynamically loaded .DLL libraries.

When it comes to lighting, I want to be able to do per-pixel lighting using forward rendering (I don't have a good enough card to experiment with deferred shading (DS) yet :(). The inputs to the lighting algorithm are the usual diffuse, specular and normal map textures.

My problem is that I would like to be able to use a complex shader as the diffuse component of the lighting algorithm. This is desirable because it would be far more flexible, and would allow things like my terrain splatting to be written in the same system rather than being a special case. I'm not sure how to support this, though. I can't directly use a complex shader as the diffuse component because I'm pushing the limit on texture units as it is. Can I do the multiply of the diffuse component with the lighting in the frame buffer in some way? Some form of MODULATE glTexEnv state?

Another approach I thought of was to render the whole scene without lighting and store the result in a texture. This texture could then act as the diffuse component when rendering the lighting, using texture coordinates that project it back onto the geometry. This would only require one extra rendering pass for the entire scene, which could be reused when rendering each light.

This has been a bit of a ramble, but I'm looking for other suggestions, or holes in my current ideas. Thanks in advance, Digitalblur
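PS: To make the frame-buffer multiply idea concrete, here's roughly what I'm picturing (just a sketch, assuming fixed-function blending; drawSceneDiffuseOnly is a hypothetical placeholder for drawing the scene with only the diffuse shading bound):

#include <GL/gl.h>

extern void drawSceneDiffuseOnly(void);  /* hypothetical scene-draw callback */

/* Draw the diffuse pass multiplied into the frame buffer, which is
 * assumed to already hold the accumulated lighting from earlier passes. */
void drawDiffuseModulatePass(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ZERO);  /* result = src * dst */
    glDepthFunc(GL_EQUAL);               /* reuse depth from the lighting pass */
    glDepthMask(GL_FALSE);               /* don't rewrite depth */
    drawSceneDiffuseOnly();
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);
}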
DigitalBlur
What about rendering the diffuse component to a texture and then using that as the input to your lighting algorithm?

Thanks for your reply, nts. I also initially thought this would be a good solution, but not all shaders can easily be rendered to a texture without losing some detail.

The texture splatting used for my terrain is a prime example. A shader combines four different textures into one, effectively producing one very large and detailed texture. If I were to render this to a texture before using it in the lighting equation, I would either use a huge amount of memory or lose detail.

That is why I thought I would render the diffuse component into a texture in screen space rather than model space. That way, if the texture is the same resolution as the screen, I don't have to worry about losing any detail. All I have to do is project the texture back onto the geometry in a second pass, roughly like the sketch below.
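(A sketch of the two passes, assuming fixed-function OpenGL, a window-sized texture diffuseTex already created, and that setupCacheProjection is called while the modelview matrix is identity; all names are illustrative:)

#include <GL/gl.h>

extern void drawSceneDiffuseOnly(void);  /* hypothetical scene-draw callback */

/* Pass 1: render diffuse-only shading, then cache it in a window-sized
 * texture (power-of-two sized on older cards). */
void cacheDiffusePass(GLuint diffuseTex, int winW, int winH)
{
    drawSceneDiffuseOnly();
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winW, winH);
}

/* Before each lighting pass: set up texgen so the cache is projected
 * straight back onto the geometry. With the modelview matrix at identity,
 * EYE_LINEAR texgen emits eye-space coordinates; the texture matrix then
 * re-applies the pass-1 projection and biases clip space into [0,1]. */
void setupCacheProjection(const GLfloat *projectionMatrix)
{
    static const GLfloat sP[] = {1, 0, 0, 0}, tP[] = {0, 1, 0, 0},
                         rP[] = {0, 0, 1, 0}, qP[] = {0, 0, 0, 1};
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGenfv(GL_S, GL_EYE_PLANE, sP);
    glTexGenfv(GL_T, GL_EYE_PLANE, tP);
    glTexGenfv(GL_R, GL_EYE_PLANE, rP);
    glTexGenfv(GL_Q, GL_EYE_PLANE, qP);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_GEN_R);
    glEnable(GL_TEXTURE_GEN_Q);

    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.5f, 0.5f, 0.5f);   /* clip space [-1,1] -> [0,1] */
    glScalef(0.5f, 0.5f, 0.5f);
    glMultMatrixf(projectionMatrix);  /* same projection as pass 1 */
    glMatrixMode(GL_MODELVIEW);
}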

I haven't implemented this yet, so I'm not totally sure it would work. I'm also not totally sure it's the best solution. Any thoughts?

Cheers,
Digitalblur
DigitalBlur
Quote:Original post by DigitalBlur
That is why I thought I would render the diffuse component into a texture in screen space rather than model space. That way, if the texture is the same resolution as the screen, I don't have to worry about losing any detail. All I have to do is project the texture back onto the geometry in a second pass.

That *is* deferred shading :) And it's a good idea.

PS: I'm mildly curious as to how you're pushing 16 textures to compute the diffuse component. Are you "choosing" between textures in the shader or actually using them all? That said, one solution is to make a texture atlas.
Cheers, AndyTX. When I said I was pushing the number of textures in my first post, I should have said I was pushing the number of textures for *my* gfx card, which is an old GeForce4 Ti 4400 that only has four texture units.

As for the deferred shading thing... I had not really thought of it like that. I guess it is, sort of :p. I wish I could play with a REAL DS algorithm, but my card doesn't support floating-point pbuffers, nor does it do much beyond register-combiner-style operations. That is why I'm stuck actually rendering the light using forward rendering rather than doing the whole lot in image space.

I'm also not really sure what you mean by 'texture atlas'. Do you mean packing more than one texture together? If I do that, how do I take care of repeating textures?

Thanks
Digitalblur
DigitalBlur
Quote:Original post by DigitalBlur
...which is an old GeForce4 Ti 4400 that only has four texture units.

Ahh, I see. Well, if it helps any, you can pick up a GF6600- or X1600-class card for pretty good prices nowadays. Either of them will easily be enough for a full deferred shading implementation (they have the same features as the current top-end cards).

Quote:Original post by DigitalBlur
I wish I could play with a REAL DS algorithm, but my card doesn't support floating-point pbuffers.

Floating-point buffers are usually used to store positions, normals, etc. They're not necessary for colour buffers... although unfortunately the current MRT rules often force us to store colour with unnecessary precision too.
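As an illustration of that split (the formats shown are just one possible choice, using ARB_texture_float; the matching-format restriction just mentioned may force everything up to the fatter formats on real hardware):

#include <GL/gl.h>
#include <GL/glext.h>  /* GL_RGBA16F_ARB / GL_RGBA32F_ARB */

/* Allocate one G-buffer target with the requested internal format. */
GLuint makeGBufferTexture(GLint internalFormat, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, w, h, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    return tex;
}

/* Usage sketch:
 *   albedo   = makeGBufferTexture(GL_RGBA8,       w, h);  8 bits is plenty
 *   normals  = makeGBufferTexture(GL_RGBA16F_ARB, w, h);  fp16 normals
 *   position = makeGBufferTexture(GL_RGBA32F_ARB, w, h);  fp32 positions */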

Quote:Original post by DigitalBlur
I'm also not really sure what you mean by 'texture atlas'. Do you mean packing more than one texture together? If I do that, how do I take care of repeating textures?

Yes, packing textures is a good way of reducing texture unit usage AND improving batching. NVIDIA has a texture atlas tool IIRC that will create texture atlases automatically.

Regarding repeating textures, you have to deal with them in shader code (maybe register combiners will work... I've not used them much), i.e. wrap/clamp/etc. your texture coordinates before converting them to texture atlas space.
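The coordinate math for that is roughly the following (a sketch; the tile origin/size parameters are illustrative):

#include <math.h>

/* Map a tiling texture coordinate into one tile's sub-rectangle of an
 * atlas. (tileU, tileV) is the tile's origin and (tileW, tileH) its size,
 * both in normalised atlas coordinates. */
void atlasCoord(float u, float v,
                float tileU, float tileV, float tileW, float tileH,
                float *outU, float *outV)
{
    float wu = u - floorf(u);    /* wrap into [0,1) first... */
    float wv = v - floorf(v);
    *outU = tileU + wu * tileW;  /* ...then remap into the tile's rect */
    *outV = tileV + wv * tileH;
}

(Bilinear filtering and mipmapping will still bleed across tile borders unless the tiles are padded.)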
Quote:Original post by DigitalBlur
...


First, try to write the system in the completely forward way: all rendering and shadowing in a single pass per layer (and of course optimize the layers with individual index buffers).
From there you have two options: a model-space cache or a screen-space cache. The latter must be regenerated every single frame, and requires MRT or multiple passes.
The former can be as slow to build as you like, since it will be rebuilt rarely.
But putting the splatting into the material system... I mean, fitting caches like these into a material system is not trivial. If it isn't very clear to you, don't do it (unless, of course, you want to dive into some research of your own :)).

Note that the model-space cache can't be used for some sorts of terrain texturing, like vertical texturing and all its variations, but a DS-style scheme will handle those just fine.

Just write custom shaders for these methods; you can share code through smart design of the functions and headers inside your shaders.
For example, make two functions that extract the diffuse and normal from the surface, written to work with 1-N layers, then make the number of layers (and their outputs) configurable. Also, put the lighting equation into a single function. That way, each custom shader is minimal code: essentially just an arrangement of the inputs and outputs of the shared shader code, as sketched below.
This matters because cached shading and forward shading will differ seriously in how they handle diffuse and normal data, and in how they break the work into passes.
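(A minimal sketch of that factoring, written in C for clarity; in practice these would be functions in a shared shader header, and all names are illustrative:)

typedef struct { float x, y, z; } Vec3;

/* Shared extraction function: blends the diffuse colour from 1..n layers.
 * Each material shader only decides which layers and weights feed it. */
Vec3 surfaceDiffuse(const Vec3 *layers, const float *weights, int n)
{
    Vec3 d = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < n; ++i) {
        d.x += layers[i].x * weights[i];
        d.y += layers[i].y * weights[i];
        d.z += layers[i].z * weights[i];
    }
    return d;
}

/* The lighting equation lives in exactly one place; the forward and the
 * cached paths both call it, they just obtain diffuse/normal differently. */
Vec3 lightingEquation(Vec3 diffuse, Vec3 normal, Vec3 lightDir, Vec3 lightColor)
{
    float ndotl = normal.x*lightDir.x + normal.y*lightDir.y + normal.z*lightDir.z;
    if (ndotl < 0.0f) ndotl = 0.0f;
    Vec3 out = { diffuse.x * lightColor.x * ndotl,
                 diffuse.y * lightColor.y * ndotl,
                 diffuse.z * lightColor.z * ndotl };
    return out;
}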

