Mixing multiple shaders, occlusion culling

Since I gave up on deferred rendering because of performance and compatibility problems (plus transparency, AA, independent materials, etc.), I went back to forward rendering. The problem is that in deferred rendering I could apply as many post-processing effects as I wanted, because I kept normals, depth and color as textures. Now I can have different materials for different objects, but since there is no OOP logic in shaders, I can't make them inherit from, for example, a fog shader, or apply global effects to them. Each object would need duplicate code to achieve this. Or I don't know how to do it yet.

One more question, about occlusion culling: what is the difference between an occlusion query and stenciling with a z-test? When should I use which one?

Each object should use duplicate code to achieve this.

I use a preprocessor (mcpp) to handle this. In my case, I implement methods in certain libraries (simple text files containing common functions), which are included in my shader files. A small script generates the final OpenGL shaders using mcpp. There are many ways to do it.
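As a rough illustration (not the poster's actual setup), the same idea works with nothing more than string concatenation on the host side: keep shared functions such as fog in their own files and prepend them to each material's source before compiling. The file names and helper below are made up.

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Hypothetical helper: read a whole text file into a string.
static std::string readFile(const char* path)
{
    std::ifstream in(path);
    std::stringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

// Build the final fragment shader source from shared "library" files plus
// the per-material code, so fog/lighting code exists only once on disk.
std::string buildFragmentSource(const char* materialFile)
{
    std::string source = "#version 330 core\n";
    source += readFile("lib/fog.glsl");      // e.g. a shared applyFog() function
    source += readFile("lib/lighting.glsl"); // shared lighting helpers
    source += readFile(materialFile);        // the material's own main()
    return source;  // hand the result to glShaderSource()/glCompileShader()
}
```

mcpp essentially automates the same thing with real #include directives and macros.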


what is the difference between occlusion query and stenciling with z test?

With an occlusion query, you render an object and check afterwards whether OpenGL actually rendered at least a few pixels of it, or whether every single pixel was occluded by others.

Stencil operations work on a special buffer (the stencil buffer) on a per-pixel basis.

The difference is that occlusion queries give feedback to the application ("OK, check whether the object is visible"), whereas the stencil test is more or less a masking tool without direct feedback to the application. Stencil buffers are used when rendering multiple passes, e.g. in a first pass mark all pixels that represent a mirror surface (set the stencil value to 1), then in a second pass render the mirrored scene only to the marked pixels (stencil check).
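A minimal sketch of that two-pass mirror setup in OpenGL; drawMirrorSurface() and drawMirroredScene() are hypothetical stand-ins for your own draw code, and an active GL context with a stencil buffer is assumed:

```cpp
// Pass 1: mark the mirror's pixels in the stencil buffer (write 1),
// without touching the color buffer.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilMask(0xFF);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawMirrorSurface();

// Pass 2: draw the mirrored scene only where the stencil value equals 1.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilMask(0x00);            // keep the stencil buffer unchanged
drawMirroredScene();
glDisable(GL_STENCIL_TEST);
```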


About the shaders, is that what modern engines do these days?

And about occlusion queries: do they still make you process the invisible pixels in the fragment shader, or do they prevent them from being processed unnecessarily?

With an occlusion query you can render a simplified bounding mesh (say, a bounding box) offscreen (relatively quick, no materials, etc.) and get back whether it would be visible if rendered normally. If not, you can skip processing it completely in your code, so you don't even send it to the VS and PS.
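To make that concrete, here is a hedged sketch with plain OpenGL query objects; drawBoundingMesh() and drawFullMesh() are made-up stand-ins for your own draw code, and an active GL context is assumed:

```cpp
GLuint query;
glGenQueries(1, &query);

// Issue the query: draw only the cheap bounding mesh, with color and
// depth writes disabled so it leaves no trace in the framebuffer.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glBeginQuery(GL_ANY_SAMPLES_PASSED, query);
drawBoundingMesh();
glEndQuery(GL_ANY_SAMPLES_PASSED);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

// Later (typically the next frame): did any sample pass the depth test?
GLuint anyVisible = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &anyVisible);
if (anyVisible)
    drawFullMesh();   // only now pay for the real materials and shaders
```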

Ahhhh, I think I got the idea now: you just draw the bounding volume of an object with the color and depth masks off, the GPU checks whether it would be visible and tells you the result in the next frame, and if it is not visible, you just don't draw it. But I think it wouldn't be a pixel-perfect solution, right? What if I have a tree model? As you can guess, its bounding volume would also fill empty spaces, which would cause unnecessary occlusion?

Correct, accuracy depends on how closely the bounding volume matches the actual shape of the mesh, so always make sure that the bounding representation covers the whole actual mesh (and accept a bit of accuracy loss).

Ahhhh, I think I got the idea now: you just draw the bounding volume of an object with the color and depth masks off, the GPU checks whether it would be visible and tells you the result in the next frame, and if it is not visible, you just don't draw it.

Actually, it can take more than one frame to get the result back. You can, however, use conditional rendering to use the outcome of the occlusion test within the same frame, without having the result make the round trip from the GPU to the CPU and back to the GPU. The downside is that your CPU will still spend time making draw calls for the occluded objects.
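A minimal sketch of that, assuming the same query object that was filled by the bounding-mesh pass earlier:

```cpp
// The draw call below is discarded by the GPU if the earlier occlusion
// query found no visible samples; the CPU still pays for issuing it.
glBeginConditionalRender(query, GL_QUERY_WAIT);
drawFullMesh();   // hypothetical stand-in for your normal draw code
glEndConditionalRender();
```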

I got most of the points, but I am actually still stuck on applying multiple effects. It seems as if something is wrong: I either have to keep switching shader states or create a general shader that includes every possible effect (parallax mapping, skeletal animation, bump mapping, ambient occlusion, fog, reflection, directional and spot lights, etc.), which is what made me implement deferred rendering in the first place, because it allowed me to separate effects into layers. Every example I see on the internet implements those effects individually, but I have never seen any example that mixes them into something meaningful.
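For what it's worth, a common way to avoid both constant state switching and one monolithic do-everything shader is the "uber-shader" approach: every effect sits behind an #ifdef, and each material gets its own specialised compile. A rough host-side sketch (all names are made up):

```cpp
#include <string>

// Hypothetical sketch: specialise one uber-shader per material by
// prepending #defines, so unused effects are compiled out entirely.
std::string specialise(const std::string& uberSource,
                       bool normalMapping, bool fog, bool skinning)
{
    std::string prefix = "#version 330 core\n";
    if (normalMapping) prefix += "#define USE_NORMAL_MAP\n";
    if (fog)           prefix += "#define USE_FOG\n";
    if (skinning)      prefix += "#define USE_SKINNING\n";
    return prefix + uberSource;   // compile the result with glCompileShader()
}
```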

