Good render loop organization resources

Started by
2 comments, last by _paf 9 years, 8 months ago

Hello everyone,

I am currently working on a prototype version of an OpenGL based Android game (using Java, not the NDK).

I already got the basics down: loading VBOs with my vertex data (positions, normals, texture coordinates, colors), binding uniforms, etc.

Basically, I'm able to render multiple models with transformations and lighting applied.

However, I am a bit concerned about the efficiency of my render loop, since it's all of my own design.

Most of the resources out there teach you the basics: how to load your models, use shaders, and so on, but they never go into the topic of actually organizing your rendering process. In particular: in which order should I bind my objects, and how do I chain transformation matrices properly?

I've been reading "Game Coding Complete" in which they describe a way of rendering your scene by using a render tree, chaining transformation matrices as you move down the tree. All of it is written for DirectX, but I think I can apply most of it to my OpenGL render loop as well.

I was just wondering if anyone knows of some other good resources that could teach me more about implementing a decent medium/high-performance render loop for OpenGL. I'm especially interested in any resources that would teach me how to organize my models efficiently, which types of render passes I should use, how to sort my models from front to back efficiently, how to chain my transformation matrices, etc.

Any help would be greatly appreciated. Thanks in advance!


The question you're asking has little to do with OpenGL itself. It is further a question with many possible answers. The following is one of them...

First off, there is not really a "render loop". Instead, (graphical) rendering is the very last step in the so-called game loop. Before rendering there are updates for input processing, AI, animation, physics, collision detection and correction, and perhaps others. This coarse layout of the game loop is usually understood as a sequence of so-called sub-systems. In this sense, rendering is a sub-system.
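As a minimal sketch of that ordering (the sub-system names here are purely illustrative, not from any particular engine), one iteration of the game loop might look like this, with rendering strictly last:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: one iteration of a game loop. Each sub-system runs in order,
// and rendering comes last so it only reads the already-updated world state.
public class GameLoop {
    static List<String> tick() {
        List<String> ran = new ArrayList<>();
        ran.add("input");      // read controller/touch state
        ran.add("ai");         // decide what entities want to do
        ran.add("animation");  // advance skeletal/keyframe animation
        ran.add("physics");    // integrate motion
        ran.add("collision");  // detect and correct interpenetration
        ran.add("render");     // last: draw the world as it now stands
        return ran;
    }
}
```

The point is only the ordering: by the time `render` runs, every object's world transform is final for this frame.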

When looking at the various sub-systems, the question is how they all work. Is it possible that all of them have the same requirements? Unlikely! Instead, the different tasks in the game loop call for different data structures, each suited to its particular task. This means that a single data structure like the classic scene graph is probably not suitable for all of them.

I write this because the scene graph approach is often taught in books, and that seems to be the case here, too. A scene graph is a single structure and looks promising at first glance, but it tends to become a nightmare the more tasks you try to solve with it. You asked for "high-performance", and such a catch-all scene graph does not belong in that toolbox. This does not mean that scene graphs are bad per se; if a scene graph is used for a single purpose, it is as good as any other structure.

Now, with respect to rendering, the above has several implications. As can be seen from the sequence of sub-system processing, all the game objects must already be placed properly in the world before rendering starts, or else collision detection and correction could not have been done meaningfully. That means that "chaining of transformation matrices" is not a rendering concern at all. Instead, the process of rendering can be seen as follows:

1.) Iterate all objects in the scene and determine which are visible.

2.) For all objects that pass the visibility test above, put a rendering job into one of perhaps several lists. Several lists may be used to distinguish a priori between opaque and non-opaque objects, for example. Such a rendering job should hold enough information to later let the low-level rendering do what it has to do.

2a.) Skin meshes may be computed just now, i.e. after it has been determined that they are visible.

3.) The lists will then be sorted by some criteria, e.g. considering the costs of resource switching (texture, mesh, shader, whatever) and, in the case of non-opaque objects, especially their order w.r.t. the camera view.

4.) The low-level rendering then iterates the sorted lists in the given order, uses the data from each rendering job to set up the rendering state (blending mode, binding textures, VBOs, shaders, ..., as much as needed but as little as possible) and invokes the appropriate OpenGL drawing routine.
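The four steps above can be sketched like this (all names are hypothetical; the frustum test is reduced to a precomputed boolean so the flow stays short, and the actual GL draw calls are left as comments):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of steps 1-4: cull, bucket into opaque/transparent lists,
// sort each list by a pass-appropriate criterion, then submit.
public class RenderQueue {
    static class Job {
        final int materialId;   // stands in for shader + texture + VBO bindings
        final float viewDepth;  // distance from the camera
        final boolean opaque;
        final boolean visible;  // result of the (elided) visibility test
        Job(int m, float d, boolean o, boolean v) {
            materialId = m; viewDepth = d; opaque = o; visible = v;
        }
    }

    static final List<Job> opaquePass = new ArrayList<>();
    static final List<Job> transparentPass = new ArrayList<>();

    static void build(List<Job> scene) {
        opaquePass.clear();
        transparentPass.clear();
        for (Job j : scene) {
            if (!j.visible) continue;                          // step 1: cull
            (j.opaque ? opaquePass : transparentPass).add(j);  // step 2: bucket
        }
        // Step 3: opaque jobs sorted to minimise state changes,
        // transparent jobs sorted back-to-front for correct blending.
        opaquePass.sort(Comparator.comparingInt((Job j) -> j.materialId));
        transparentPass.sort((a, b) -> Float.compare(b.viewDepth, a.viewDepth));
        // Step 4 would iterate both lists here, binding state per job
        // and issuing glDrawElements/glDrawArrays calls.
    }
}
```

Each `Job` is deliberately a dumb data carrier: it holds just enough for the low-level renderer and nothing else, which keeps the culling and sorting logic independent of OpenGL.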

You can see from the above that OpenGL itself is still not in the foreground, even though we are now directly discussing rendering.

The question of rendering passes is really the question of which kind of rendering you want to implement: forward shading, deferred shading, ..., which shadowing algorithm you want to use, and whether you want to support non-opaque objects. Beyond that, each rendering pass is more or less the same as described above, just with different set-up and rendering states.

Organization of game objects can be done in various ways. However, from the above it should be clear that different aspects of game objects should be handled differently. A generally good approach is to prefer composition of game objects (instead of god classes or a sprawling inheritance tree).
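A minimal composition sketch (component names are purely illustrative) could look like the following: a game object is just a bag of components, so the rendering sub-system only cares about objects that happen to carry a render component, while physics only looks for a physics body:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of composition over inheritance: capabilities are attached as
// components rather than baked into a class hierarchy.
public class GameObject {
    private final Map<Class<?>, Object> components = new HashMap<>();

    <T> GameObject with(Class<T> type, T component) {
        components.put(type, component);
        return this; // chainable for convenient construction
    }

    <T> T get(Class<T> type) {
        return type.cast(components.get(type));
    }

    boolean has(Class<?> type) {
        return components.containsKey(type);
    }

    // Example component types -- purely illustrative.
    static class Transform { float x, y, z; }
    static class RenderComponent { int meshId, materialId; }
    static class PhysicsBody { float mass; }
}
```

With this shape, the renderer can iterate all objects and simply skip those without a `RenderComponent`, and adding a new capability never forces a change to an inheritance tree.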

Well, all this is perhaps not what you wanted to read, and I know that it is mostly vague. However, you must understand that a full-fledged solution has many, many aspects, and discussing them all in a single post (or even thread) is not possible. This is, by the way, a reason why books tend to suggest the use of scene graphs. It can also be understood as a hint that beginners may want to stick with the scene graph approach for now. In the end it's up to you to decide which way you want to go. However, decoupling things makes refactoring easier; decoupling is at least something you should consider.

Looking out for your answer ... ;)

Thanks a million for your answer! It finally gave me the (high level) insight I was looking for.

Rendering a single model was pretty straightforward, but I was looking for some good overview of how the rendering process should be divided up.

Reading about them in Game Coding Complete made me think they were the industry's answer to efficiently organizing the objects that need to be rendered.

After I posted my question, I came across various other resources discussing scene graphs:

Scene graphs - Just say no

http://gamedev.stackexchange.com/a/8281

Like you said: They are not the answer to everything. I'm going to try and implement my rendering process the way you described:

1. Perform visibility checks on objects to be rendered

2. Divide visible items into specific render batches based on their main characteristics (e.g. opaque vs. transparent)

3. Perform sorting of those lists to minimize render time, e.g. by minimizing state changes between draws

4. Perform the drawing
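For step 3, one widely used trick (not from this thread; the field widths here are arbitrary choices for the example) is to pack the expensive state identifiers into a single integer sort key, so that one plain integer sort groups together all draws that share a shader, then a texture, then a mesh:

```java
// Sketch: pack shader, texture and mesh ids into one sort key so a single
// integer sort minimises state changes. Field widths are arbitrary here:
// 8 bits shader (most expensive switch), 12 bits texture, 12 bits mesh.
public class SortKey {
    static long pack(int shaderId, int textureId, int meshId) {
        return ((long) (shaderId  & 0xFF)  << 24)
             | ((long) (textureId & 0xFFF) << 12)
             |  (long) (meshId    & 0xFFF);
    }
}
```

Sorting render jobs by this key puts the most expensive state switch in the most significant bits, so the renderer changes shaders least often, then textures, then meshes.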

Again, thanks a lot. I'll have a lot of coding to do this weekend!

Another great article about what Scene Graphs are, and how they should be used, can be found at: http://lspiroengine.com/?p=566

You should also check the other articles in that site, as they are quite useful.

Here are some more notes on the rendering part.

You may want to experiment with a Z-prepass / early Z-test.

It basically involves sorting your renderable objects by their position, front-to-back, and writing to the Z-Buffer only, and not writing to the Color Buffer.

This way, you avoid lighting/rendering pixels multiple times if an object is occluded by another object.

Then after this, you'd sort the renderable objects by material, and render them while switching rendering state as little as possible.

Keep in mind that the above only works for opaque objects.

For translucent/transparent objects, you have to sort them by position, back-to-front.

Here, you must accept the cost of material change, because if translucent/transparent objects aren't sorted back-to-front, visual anomalies will ensue.
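The two opposite sort orders described here can be sketched as a pair of comparators over view-space depth (the `Renderable` type is hypothetical):

```java
import java.util.Comparator;

// Sketch: opposite depth orderings for the two kinds of objects.
// Opaque: front-to-back, so the early Z-test rejects occluded fragments.
// Transparent: back-to-front, so alpha blending composites correctly.
public class DepthSort {
    static class Renderable {
        final float viewDepth; // distance from the camera along its view axis
        Renderable(float d) { viewDepth = d; }
    }

    static final Comparator<Renderable> FRONT_TO_BACK =
            Comparator.comparingDouble((Renderable r) -> r.viewDepth);
    static final Comparator<Renderable> BACK_TO_FRONT =
            FRONT_TO_BACK.reversed();
}
```

The opaque pass sorts with `FRONT_TO_BACK` (and may additionally sort by material, as described above), while the transparent pass must use `BACK_TO_FRONT` even when that causes extra state changes.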

Also, if your scene contains a lot of lights, you may want to consider using deferred rendering, as forward rendering's performance degrades quickly when many lights are added (each pixel will be lit multiple times, as opposed to deferred rendering, where, regardless of the number of lights, each pixel is only lit once).

Of course there are other considerations to take into account, like the need to use multiple framebuffers in deferred rendering, etc.

As a last note, if you plan on having a 2D GUI/HUD on your project/game, then the 2D stuff would be drawn after your 3D scene is rendered.

You may already know this stuff, and if so, ignore it; otherwise, I hope it helps.

