So I'm about to start writing a scene manager and renderer for a game, using deferred shading. My scenes will be populated by lots of objects, so I'll be using instancing to cut down on draw calls. I also want to try to defer material and texture sampling by having my objects output a material ID into a single-channel FB. Some kind of mega-shader would then convert the material ID into a diffuse and normal sample (probably by having a single texture store all material attributes and using the material ID as a v coordinate, and by using texture arrays to get access to the different diffuse/normal textures).
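To make the lookup concrete, here's a rough sketch of what that mega-shader stage could look like in GLSL 3.30. All names here are hypothetical, and it assumes the surface UVs are also written to a buffer during the geometry pass, since the screen-space coordinate can't be used to sample the per-object textures:

```glsl
#version 330 core

uniform usampler2D MaterialIdBuffer;   // single-channel integer FB (e.g. R16UI)
uniform sampler2D  UvBuffer;           // interpolated surface UVs, stored per pixel
uniform sampler2D  MaterialAtlas;      // one row of material attributes per ID
uniform sampler2DArray DiffuseArray;   // one layer per material
uniform sampler2DArray NormalArray;    // one layer per material
uniform float MaterialCount;           // number of rows in the atlas

in  vec2 ScreenUv;                     // full-screen quad coordinate
out vec4 OutDiffuse;

void main()
{
    uint id = texture(MaterialIdBuffer, ScreenUv).r;
    vec2 uv = texture(UvBuffer, ScreenUv).rg;

    // Material ID as the v coordinate into the attribute atlas
    // (+0.5 to sample the texel centre).
    float v = (float(id) + 0.5) / MaterialCount;
    vec4 attribs = texture(MaterialAtlas, vec2(0.5, v));

    // The ID doubles as the array layer for the per-material textures.
    vec3 diffuse = texture(DiffuseArray, vec3(uv, float(id))).rgb;
    vec3 normal  = texture(NormalArray,  vec3(uv, float(id))).rgb;

    OutDiffuse = vec4(diffuse, attribs.r);
}
```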
This raises an issue with material blending. My scene will also feature a terrain at all times (except when the player is looking up), and I want the textures on my terrain to blend from one to another across a tile. Since the mapping between a pixel and a material ID is one-to-one, there's no way to represent blending between multiple materials with the setup described above. The way I see it, I have two options:
- Instead of outputting a single material ID per pixel, output multiple material IDs plus a second output containing the weights with which the mega-shader should blend those materials.
- Draw the terrain separately, straight into the G-buffer, after the mega-shader has been run on the rest of the scene, reusing the depth buffer from the material ID FB. Texture sampling would occur normally here, and lights would have to be rendered in a separate pass from the mega-shader.
Both options have pros and cons. With option 1, the advantage of drawing into a relatively small FB is lost (there are now two render targets, one for IDs and one for weights) and only up to four materials can be blended at once (though that might be enough for terrain blending?). Additionally, the mega-shader now has to make up to four samples into the material texture and the corresponding diffuse/normal textures and blend everything together, potentially taking four times as long to draw. I'm not sure whether it's possible to optimise this somehow so that only one sample is made when a single material has full weight.
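One possible answer to that last question is a dynamic branch on the first weight. A hedged sketch (hypothetical names again; derivatives are computed outside the branch because implicit-derivative texture fetches inside non-uniform control flow are undefined in GLSL):

```glsl
#version 330 core

uniform usampler2D MaterialIdBuffer;   // up to 4 IDs packed into RGBA
uniform sampler2D  WeightBuffer;       // matching blend weights
uniform sampler2D  UvBuffer;           // stored surface UVs
uniform sampler2DArray DiffuseArray;

in  vec2 ScreenUv;
out vec4 OutDiffuse;

void main()
{
    uvec4 ids = texture(MaterialIdBuffer, ScreenUv);
    vec4  w   = texture(WeightBuffer, ScreenUv);
    vec2  uv  = texture(UvBuffer, ScreenUv).rg;

    // Gradients taken in uniform control flow so textureGrad is legal
    // inside the branch below.
    vec2 dx = dFdx(uv), dy = dFdy(uv);

    vec3 diffuse = w.r * textureGrad(DiffuseArray, vec3(uv, float(ids.r)), dx, dy).rgb;

    // On pixels where the first material has full weight, the other three
    // samples are skipped. This only helps where the branch is coherent
    // across neighbouring pixels, which it usually is away from blend borders.
    if (w.r < 1.0)
    {
        diffuse += w.g * textureGrad(DiffuseArray, vec3(uv, float(ids.g)), dx, dy).rgb;
        diffuse += w.b * textureGrad(DiffuseArray, vec3(uv, float(ids.b)), dx, dy).rgb;
        diffuse += w.a * textureGrad(DiffuseArray, vec3(uv, float(ids.a)), dx, dy).rgb;
    }

    OutDiffuse = vec4(diffuse, 1.0);
}
```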
With option 2, the mega-shader stays relatively lightweight, but there's an extra pass involved and probably some caveat around drawing everything in a strange order. The benefits of deferred rendering are also lost for the terrain, which will have a very large surface area and is likely to be occluded by many polygons (trees, grass, etc.), so it's the part of the scene that would probably benefit from deferred rendering the most.
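For what it's worth, the terrain pass in option 2 is just a conventional splat-mapped fragment shader writing into the G-buffer, depth-tested against the depth buffer shared with the material ID FB. A minimal sketch, assuming four terrain layers and hypothetical names:

```glsl
#version 330 core

uniform sampler2D BlendMap;            // per-tile weights for 4 terrain layers
uniform sampler2DArray TerrainDiffuse; // one array layer per terrain texture

in  vec2 TileUv;                       // tiled detail UV
in  vec2 BlendUv;                      // 0..1 across the whole tile
out vec4 OutDiffuse;                   // straight into the G-buffer

void main()
{
    vec4 w = texture(BlendMap, BlendUv);
    vec3 diffuse =
        w.r * texture(TerrainDiffuse, vec3(TileUv, 0.0)).rgb +
        w.g * texture(TerrainDiffuse, vec3(TileUv, 1.0)).rgb +
        w.b * texture(TerrainDiffuse, vec3(TileUv, 2.0)).rgb +
        w.a * texture(TerrainDiffuse, vec3(TileUv, 3.0)).rgb;

    OutDiffuse = vec4(diffuse, 1.0);
}
```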
My question is: are there any other options I haven't considered? There is surprisingly little documentation and literature on this matter floating around. If there are no better options, which one do you think I should choose? Alternatively, is deferred rendering worth using at all here, or should I spend my time building a more intelligent batching system/render queue and try to squeeze more frames out of that?
If it makes any difference I'll be using OpenGL core profile 3.3. My host language is C# / OpenTK.