Pipeline system description

I just realized that I never described the pipelining system of my engine, one of the designs I'm most proud of. So this time you won't get any eye candy :)

My graphics engine is designed around the concept of pipes. Each pipe is an object responsible for rendering a subset of the scene graph with the same type of "processing". You can think of a pipe as a pass, although it's a bit more subtle than that.

Pipes are linked together as a tree. Each pipe has one parent (except for the root pipe), and can have many children. Each pipe has two main functions: setup and render.

The setup function is responsible for two things: preparing the rendering by calculating on the CPU the set of objects that will have to be rendered (and with which shaders); and potentially, rendering something to a texture. In that last case, the setup function is allowed to call the render function of its child pipes. The setup function takes a camera as an argument.

The render function is responsible for rendering the objects calculated in the previous setup phase.

In addition to this, each pipe has a name. The names are propagated to the child pipes (appended together), and then passed to the shader system once rendering takes place. A shader can use conditionals to apply itself based on the type of pipes that are being used.

The objects in the scene graph can have many shaders applied to them. When rendering the objects of a pipe, I use the shader whose name matches the pipe's name.
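As a minimal sketch of the matching rule described above (all names here are hypothetical, not the engine's actual API), each object carries a dictionary of shaders keyed by name, and the pipe's name selects one:

```python
def shader_for(obj, pipe_name):
    """Pick the object's shader whose name matches the pipe's name.
    Objects can carry many shaders, keyed by name; a missing key
    means the object is skipped by that pipe."""
    return obj.get("shaders", {}).get(pipe_name)

# An object with both a "Standard" and a "Transparency" shader:
crate = {"shaders": {"Standard": "std_vs_ps", "Transparency": "trans_vs_ps"}}

assert shader_for(crate, "Standard") == "std_vs_ps"
assert shader_for(crate, "Transparency") == "trans_vs_ps"
assert shader_for(crate, "Glow") is None  # no matching shader: skipped
```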

Now, an example, and all will become clear:

[Diagram: a Scene Pipe with a Standard Pipe as its child]

The Scene Pipe is a pipe that, in its setup phase, performs culling with the given camera and stores all the visible objects in an internal array. It does not perform any rendering.

The Standard Pipe does not perform any setup operation, but when rendering, it uses the objects calculated by the Scene Pipe. It renders these objects with their "Standard" shader; if an object doesn't have a "Standard" shader, it is skipped.
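A sketch of these two pipes, with the camera reduced to a visibility predicate standing in for real frustum culling (the class and field names are my own, not the engine's):

```python
class ScenePipe:
    """Setup: culls the scene with the given camera and stores the
    visible objects in an internal array. Render: does nothing."""
    def __init__(self, scene):
        self.scene = scene
        self.visible = []

    def setup(self, camera):
        # 'camera' is reduced to a visibility predicate for this sketch
        self.visible = [o for o in self.scene if camera(o)]

    def render(self):
        pass


class StandardPipe:
    """No setup work; renders the Scene Pipe's visible set with each
    object's "Standard" shader, skipping objects that lack one."""
    def __init__(self, scene_pipe):
        self.scene_pipe = scene_pipe

    def render(self):
        drawn = []
        for obj in self.scene_pipe.visible:
            shader = obj["shaders"].get("Standard")
            if shader is None:
                continue  # no "Standard" shader: skip the object
            drawn.append((obj["id"], shader))
        return drawn


scene = [
    {"id": "crate",    "shaders": {"Standard": "std"}},
    {"id": "glass",    "shaders": {"Transparency": "trans"}},
    {"id": "far_rock", "shaders": {"Standard": "std"}},
]
sp = ScenePipe(scene)
sp.setup(lambda o: o["id"] != "far_rock")  # pretend far_rock is culled
assert [i for i, _ in StandardPipe(sp).render()] == ["crate"]
```

The glass object is skipped (no "Standard" shader) and the culled rock never reaches the render phase at all.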

Let's imagine for a second that the Standard shader is only applied to opaque objects, and that a shader called "Transparency" is applied to transparent objects, at scene initialization time.

The pipes system can easily be expanded to this:

[Diagram: the Scene Pipe with a Standard Pipe and a Transparency Pipe as children]

And as you see, the culling performed by the Scene Pipe can be reused by the Transparency Pipe, in a matter of seconds. The Transparency Pipe will only render objects with their "Transparency" shader. Note that, as I said in the beginning, an object can have many shaders, so theoretically you could have one object with both "Standard" and "Transparency" shaders (even if that would be a bit incoherent in this scenario).

But the best part is coming. Now, imagine that you want to render reflections in the water. To do that, you need a pipe that renders the reflected scene into a water texture. That's quite simple. In the setup phase:
- call the setup phase of the children, but with the reflected camera;
- enable render-to-texture on the reflection texture, and call the render phase of the children.
The render phase of the reflection pipe itself does nothing.
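The steps above can be sketched like this, with render-to-texture reduced to a log entry and all names hypothetical:

```python
def reflect(camera):
    """Mirror the camera about the water plane (stubbed for this sketch)."""
    return "reflected_" + camera


class RecordingPipe:
    """A stand-in child pipe that records what it is asked to do."""
    def __init__(self, log):
        self.log = log
        self.camera = None

    def setup(self, camera):
        self.camera = camera  # remember which camera we were set up with

    def render(self):
        self.log.append("render with " + self.camera)


class ReflectionPipe:
    """Setup: prepares the children with the reflected camera, then
    renders them into the reflection texture. Render: does nothing."""
    def __init__(self, children, log):
        self.children = children
        self.log = log

    def setup(self, camera):
        for child in self.children:
            child.setup(reflect(camera))        # step 1: reflected camera
        self.log.append("bind reflection texture")  # step 2: render-to-texture
        for child in self.children:
            child.render()

    def render(self):
        pass  # the reflected scene's own render phase is ignored


log = []
pipe = ReflectionPipe([RecordingPipe(log)], log)
pipe.setup("main_camera")
assert log == ["bind reflection texture", "render with reflected_main_camera"]
```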

[Diagram: the reflection pipe and its sub-pipeline, feeding a second Standard Pipe]

The reflected texture can then be assigned to objects of the scene graph (like the ocean mesh), and rendered normally by the second standard pipe.

Implementing HDRI? This is how I did it:

[Diagram: the HDRI pipe wrapping the rest of the pipeline]

The HDRI pipe enables render-to-texture (with a floating point format) in the setup phase. The rest of the pipeline is rendered into the HDRI texture, but with a tweak: the sub-pipeline shaders have the name "HDRI" appended to them. So, in the Standard pipe, the shader called "Standard_HDRI" will be applied, while in the Transparency pipe, the shader called "Transparency_HDRI" will be applied.
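The name-appending mechanism might be sketched like this (hypothetical class names; the real engine presumably propagates names during traversal rather than collecting them in a list):

```python
class HDRIPipe:
    """Appends "HDRI" to the names used by its sub-pipeline, so the
    child pipes pick the floating-point variants of their shaders."""
    SUFFIX = "_HDRI"

    def __init__(self, children):
        self.children = children

    def shader_names(self):
        # Propagate the suffix into every child pipe's shader name
        return [name + self.SUFFIX
                for child in self.children
                for name in child.shader_names()]


class LeafPipe:
    def __init__(self, name):
        self.name = name

    def shader_names(self):
        return [self.name]


hdri = HDRIPipe([LeafPipe("Standard"), LeafPipe("Transparency")])
assert hdri.shader_names() == ["Standard_HDRI", "Transparency_HDRI"]
```

So the same Standard and Transparency pipes are reused unchanged; only the shader names they resolve to differ inside the HDRI sub-pipeline.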

In the render phase of the HDRI pipe, a bloom filter and a tone-mapping operator can be used on the HDRI texture, and applied to a full-screen quad.

Another advantage of my pipeline system, in addition to its flexibility, is that it can be built at run-time (or even dynamically updated). This means that if your video card does not support HDRI, the previous pipeline can be used instead... and so on.

It's been a long and technical article, but I hope you enjoyed reading it :)

6 Comments


Recommended Comments

Quote:

... hope you enjoyed it.


Absolutely! I'm going to let this sink in for a while, and maybe come back with questions. Although your explanation is very simple, I suspect there's a lot more to it.

As I understand it, you don't apply pipes to a specific object do you? So what happens if an object wants a dynamic cubemap?

Quote:
you don't apply pipes to a specific object do you?


No, pipes process sets of objects. Although technically, nothing prevents you from creating one pipe per object, the relationships between pipes would then make everything very complex. A pipe is a very lightweight object: it has close to no CPU overhead and no memory overhead (when it's "naked", obviously; once it implements things like culling, it gets more complex).

Quote:
So what happens if an object wants a dynamic cubemap?


In that case, you'd need a pipe responsible for rendering to the cube map, which in its setup() phase loops over each face, creates a temporary camera matching that cube face's view, then calls the setup() and render() phases of each of the child pipes.

Then, you assign this cube map to the objects that will render with it.
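A sketch of that cube map pipe, with the per-face camera reduced to a (center, face) pair standing in for a real view matrix (names are mine, not the engine's):

```python
CUBE_FACES = ["+X", "-X", "+Y", "-Y", "+Z", "-Z"]


class CubeMapPipe:
    """Setup: for each of the 6 cube faces, build a temporary camera
    for that face's view, then run the children's setup() and render()
    into that face. Render: nothing to do."""
    def __init__(self, children, center):
        self.children = children
        self.center = center
        self.rendered_faces = []

    def setup(self, camera):
        for face in CUBE_FACES:
            face_camera = (self.center, face)  # stand-in for a view matrix
            for child in self.children:
                child.setup(face_camera)
            for child in self.children:
                child.render()
            self.rendered_faces.append(face)

    def render(self):
        pass


class CountingPipe:
    """A stand-in child that just counts how often it renders."""
    def __init__(self):
        self.renders = 0

    def setup(self, camera):
        pass

    def render(self):
        self.renders += 1


child = CountingPipe()
pipe = CubeMapPipe([child], center=(0, 0, 0))
pipe.setup(camera=None)  # the main camera is unused by this pipe
assert child.renders == 6
assert pipe.rendered_faces == CUBE_FACES
```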

I'm using this system for per-pixel lighting with omni cube shadow maps.

Ok, but how does the cube map pipe know where to position the camera (i.e. for which object to create the cube map)? And one of the things I've had problems with in my own renderer is that a dynamic cube map will want to render the entire scene, so do you reuse the set of objects you already have, or create a new set (better, but slower)?

Quote:
Ok, but how does the cube map pipe know where to position the camera (i.e. for which object to create the cube map)?


Well, that completely depends on how you implement it. You can add an "addObject" method to the pipe itself and let the user decide. Or scan the scene graph for all the objects with a dynamic cube map shader. Etc.

Quote:

a dynamic cubemap will want to render the entire scene, so do you reuse the set of objects you already have, or create a new set (better, but slower)?


There are many solutions to that too. You cannot reuse the results of the scene pipe for the global camera, because the cube map face cameras have different positions/orientations/fields of view. So, either you create a new scene pipe that handles the 6 new cameras (the best option, but it requires performing frustum culling 6 times), or you create a new scene pipe that collects all the scene graph objects within a given distance threshold and, for each cube face, brute-force renders all of them (most of the objects will be outside the view, so they will be rendered for nothing).
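The second, brute-force option could be sketched as a distance-threshold cull around the cube map's center (a stand-in for per-face frustum culling; names are hypothetical):

```python
import math


def near_cubemap(scene, center, threshold):
    """Keep every object within 'threshold' of the cube map's center.
    Each of the 6 faces then brute-force renders this whole set, so
    some objects are rendered even when a given face can't see them."""
    return [o for o in scene if math.dist(o["pos"], center) <= threshold]


scene = [
    {"id": "boat",  "pos": (1.0, 0.0, 0.0)},
    {"id": "cliff", "pos": (200.0, 0.0, 0.0)},
]
visible = near_cubemap(scene, center=(0.0, 0.0, 0.0), threshold=50.0)
assert [o["id"] for o in visible] == ["boat"]
```

The trade-off is exactly the one described: one cheap cull instead of six frustum culls, paid for with wasted draws on the faces that can't see a given object.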

Guest Anonymous Poster


Hmm, one must think about this. Thanks for sharing it!

