render state system

Started by Jinx3d1
8 comments, last by Jason Z 18 years, 7 months ago
Hey guys. I've been working on a render state system for my project for a few days now, but I'm just not happy with the design. So for the last few hours I've been rethinking things and came up with a few alternatives, but I'm curious to see how you guys approach this.

Basically, right now what I have is a simple RenderState class which holds a list of which states are enabled and their properties, for example mBlendingEnable, mBlendSrcFunc, mBlendDstFunc, and so on. When you want to change how something is rendered, you simply create a new RenderState instance, set the values accordingly, and set it as the current rendering state to use. I originally had a few render state management functions in my render system class, such as push/pop state, get current, etc. However, I wasn't considering the cost of state changes, so I'm thinking I'm going to need a much more complex system to incorporate state sorting and such. What would be cool about this method, though, is that I could have a "RenderState modifier" class which would simply define its own RenderState instance and then replace the linked object's current render state with the modified one.

Alright, I'm rambling on, so I'll get to the point: how do you guys approach state swapping and sorting? I figure I could always overload the == operator to check which states are the same and then group all of the entities together like that, but... I don't know.
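For reference, a minimal sketch of the kind of RenderState class being described, with the == overload used for grouping; the member names here are invented, not taken from the actual project:

#include <GL/gl.h>

struct RenderState
{
    bool   blendEnable;
    GLenum blendSrcFunc;
    GLenum blendDstFunc;
    bool   depthTestEnable;
    // ... more flags and parameters (culling, fog, alpha test, ...)

    RenderState()
        : blendEnable(false), blendSrcFunc(GL_ONE), blendDstFunc(GL_ZERO),
          depthTestEnable(true) {}

    // Lets entities that share identical state be grouped together.
    bool operator==(const RenderState& rhs) const
    {
        return blendEnable     == rhs.blendEnable &&
               blendSrcFunc    == rhs.blendSrcFunc &&
               blendDstFunc    == rhs.blendDstFunc &&
               depthTestEnable == rhs.depthTestEnable;
    }
};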
I go for a simple solution. In my scene graph, 'geometry' nodes contain two things: a list of 'shapes' (a shape is a set of vertices, normals, tangents, per-vertex colors, etc.) and a single 'appearance' (which is pretty much identical to your state class, in that it is just a set of enabled/disabled state variables like diffuse/specular/ambient front and back colors, blending settings, textures, shaders, etc.).
After traversing the scene graph, each visible geometry node is submitted to the renderer. For every geometry node, the renderer first inspects the associated appearance and compares it to its own internal state. It then calls the appropriate render functions for each attribute that differs and updates the internal state to match. If the geometries have been properly sorted by their most expensive state changes, this gives good performance.
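A rough sketch of that compare-and-apply step, assuming OpenGL and an invented Appearance struct (only blending and a single texture are shown):

#include <GL/gl.h>

struct Appearance
{
    bool   blendEnable;
    GLenum blendSrc, blendDst;
    GLuint texture;
};

static Appearance gCurrent = { false, GL_ONE, GL_ZERO, 0 };

void ApplyAppearance(const Appearance& app)
{
    if (app.blendEnable != gCurrent.blendEnable)
    {
        if (app.blendEnable) glEnable(GL_BLEND);
        else                 glDisable(GL_BLEND);
        gCurrent.blendEnable = app.blendEnable;
    }
    if (app.blendEnable &&
        (app.blendSrc != gCurrent.blendSrc || app.blendDst != gCurrent.blendDst))
    {
        glBlendFunc(app.blendSrc, app.blendDst);
        gCurrent.blendSrc = app.blendSrc;
        gCurrent.blendDst = app.blendDst;
    }
    if (app.texture != gCurrent.texture)
    {
        glBindTexture(GL_TEXTURE_2D, app.texture);
        gCurrent.texture = app.texture;
    }
    // ... repeat the pattern for lighting, materials, shaders, etc.
}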

Tom
Hey, thanks Tom. I like your idea and I may use a similar approach. I'm curious though, how does your class hierarchy look? I.e., what is derived from the geometry nodes: your mesh classes, static world meshes, etc.?
Quote: Original post by Jinx3d1
I'm curious though, how does your class hierarchy look? I.e., what is derived from the geometry nodes: your mesh classes, static world meshes, etc.?


I only derive classes from the geometry class for complex objects that have dynamic content (for example, a class that maintains a mesh of viewplane-aligned slices through a cube to do volume rendering of 3D textures). For simple static shapes (spheres, cubes, etc.) I don't make classes at all. I just have functions (CreateSphere(..), CreateCube(..), etc.) that return a base geometry node containing the corresponding static shape, or mesh if you prefer that term. The same approach is used for loading models (3DS, PK3, OBJ, LWO): a LoadModel(..) call returns a geometry node, or a complete sub-graph if the file contains multiple models with different shapes, like for example a Quake 3 level.
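A short sketch of that factory-function approach; GeometryNode, Shape and the CreateSphere signature are stand-ins invented here, not taken from the actual code:

#include <vector>

struct Shape      { std::vector<float> positions, normals, texCoords; };
struct Appearance { /* enabled/disabled states, colors, textures, ... */ };

struct GeometryNode
{
    std::vector<Shape> shapes;
    Appearance         appearance;
};

GeometryNode* CreateSphere(float radius, int slices, int stacks)
{
    GeometryNode* node = new GeometryNode;
    Shape sphere;
    // ... tessellate using radius, slices and stacks, filling sphere.positions,
    //     sphere.normals and sphere.texCoords ...
    node->shapes.push_back(sphere);
    return node;
}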

Tom
I had to reinvent the wheel with this one. My renderer does not depend on a scene graph or anything else that could serve in the same manner.

Every entity has its own 32-bit-wide number (it could be 64):

0-11 bits = Material ID
12-22 bits = Renderstate ID
23-31 bits are reserved

Later, you can sort entities by their assigned numbers (goes really fast)...
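A possible sketch of that packing and sorting, using the bit layout above (the helper names are invented):

#include <algorithm>
#include <cstdint>
#include <vector>

struct Entity
{
    std::uint32_t sortKey;
    // ... mesh, transform, and whatever else the entity carries

    bool operator<(const Entity& rhs) const { return sortKey < rhs.sortKey; }
};

// bits 0-11 = material ID, bits 12-22 = render state ID, bits 23-31 reserved
inline std::uint32_t MakeSortKey(std::uint32_t materialId, std::uint32_t renderStateId)
{
    return (materialId & 0xFFFu) | ((renderStateId & 0x7FFu) << 12);
}

inline void SortForRendering(std::vector<Entity>& entities)
{
    std::sort(entities.begin(), entities.end());  // groups by render state, then material
}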
An evening with nothing to do, so I'll give this a shot [smile]

When walking through the scene graph for rendering, renderables are inserted one by one into a render queue. Each renderable has a material, which in turn has a shader, which in this case is a bucket of code that sets up the rendering API for a surface. The renderer sorts the queue to group similar shaders, and then the shaders themselves do some low-level sorting of renderables, because they know best what happens during rendering. The renderer is happily unaware of what the shaders do, except that it controls stages and splits shader work for multipassing, although that's all based on reports from the shaders. This approach proved extremely useful when it came to extending the renderer; I had properly sorted transparent objects working within minutes, without a hint of transparency anywhere in the code before that.
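A rough sketch of that queue flush, grouping by shader and handing each group back to its shader; the Shader and Renderable interfaces here are invented placeholders:

#include <algorithm>
#include <cstddef>
#include <vector>

struct Renderable;

struct Shader
{
    virtual ~Shader() {}
    // Each shader sets up the API for its surfaces and orders its own renderables.
    virtual void SortAndRender(Renderable** items, std::size_t count) = 0;
};

struct Renderable
{
    Shader* shader;
    // ... geometry, transform, material parameters, ...
};

struct ByShader
{
    bool operator()(const Renderable* a, const Renderable* b) const
    {
        return a->shader < b->shader;   // any consistent grouping criterion will do
    }
};

void FlushQueue(std::vector<Renderable*>& queue)
{
    std::sort(queue.begin(), queue.end(), ByShader());

    std::size_t i = 0;
    while (i < queue.size())
    {
        std::size_t begin = i;
        Shader* shader = queue[i]->shader;
        while (i < queue.size() && queue[i]->shader == shader)
            ++i;
        shader->SortAndRender(&queue[begin], i - begin);  // hand the group to its shader
    }
}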

There is no actual state sorting at the rendering-API level, but since similar shaders are grouped, states will be shared among all objects using the same shader. Implementing full state sorting across all shaders would be pretty hefty and would force some general interfaces, and I'd guess the few state-switch wins wouldn't outweigh the extra cost of sorting.

The assignment of shaders to materials is based on visual descriptions with weights: the shader database looks up registered shaders and calculates a weight for each shader compared to the description wanted by the current material. After that, it's pretty easy to pick out the best-fitting shader for a material. The database can be updated very quickly, so one should be able to insert new graphical effects at runtime (good thing I put all shaders in plugins...).
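A very rough sketch of that weight-based lookup; VisualDescription, ScoreFor and the rest are invented names, not the actual plugin interface:

#include <cstddef>
#include <vector>

// What a material "wants" to look like, in very coarse terms.
struct VisualDescription
{
    bool wantsNormalMap;
    bool wantsSpecular;
    bool wantsTransparency;
};

struct ShaderEntry
{
    virtual ~ShaderEntry() {}
    // How well does this shader match the description? Higher is better.
    virtual float ScoreFor(const VisualDescription& desc) const = 0;
};

ShaderEntry* PickBestShader(const std::vector<ShaderEntry*>& registered,
                            const VisualDescription& desc)
{
    ShaderEntry* best = 0;
    float bestScore = -1.0f;
    for (std::size_t i = 0; i < registered.size(); ++i)
    {
        float score = registered[i]->ScoreFor(desc);
        if (score > bestScore)
        {
            bestScore = score;
            best = registered[i];
        }
    }
    return best;
}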

I prefer never to go down into implementation details, as they depend heavily on the rest of the code surrounding a renderer.
I actually use effect files for my render state setting (effect files for DX; CgFX if using OGL). I make a wrapper class for the effect file, and each instance of an effect is given an ID when created. All textures are also given an ID when loaded.

The scene is traversed similarly to what coelurus describes. After the render queue is created, it is sorted by geometry type first (static, dynamic, alpha, post), then by effect, then by textures. This seems to work pretty well, even though I am not sorting on all render state changes (since they are all in the effect files themselves).
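A small sketch of that sort order (geometry type, then effect ID, then texture ID); QueueItem and the enum values are placeholders, not the actual engine code:

#include <algorithm>
#include <vector>

enum GeometryType { GEOM_STATIC, GEOM_DYNAMIC, GEOM_ALPHA, GEOM_POST };

struct QueueItem
{
    GeometryType type;
    int          effectId;   // ID handed out when the effect wrapper was created
    int          textureId;  // ID handed out when the texture was loaded

    bool operator<(const QueueItem& rhs) const
    {
        if (type     != rhs.type)     return type     < rhs.type;
        if (effectId != rhs.effectId) return effectId < rhs.effectId;
        return textureId < rhs.textureId;
    }
};

void SortRenderQueue(std::vector<QueueItem>& queue)
{
    std::sort(queue.begin(), queue.end());
}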

Let us know what you decide on!
Hey guys! Sorry I didn't reply earlier; I've been in the process of moving (blech). Anyway, Jason's approach seems really cool. However, it does lead me to another question: when using shaders, is it possible to do basic effects such as simple blending without ever using the API's common functions?

My dilemma right now is that the only system I have to code on does not support shaders, and I cannot replace the graphics card (because it's a laptop). So as far as doing anything with shaders goes, I have to play a guessing game as to whether they're going to work or not (of course I can always test on my friends' systems), but it makes development a total pain (this problem has kept me from learning anything about shader programming).

So with that said, I'm just going to do my sorting based on materials and textures. Oh well, the game that my friends and I are going to be working on won't really require a very complex engine anyway (think Spyro-level graphics :)

(sorry for necroing the thread if I did).
Quote: Original post by Jinx3d1
My dilemma right now is that the only system I have to code on does not support shaders, and I cannot replace the graphics card (because it's a laptop). So as far as doing anything with shaders goes, I have to play a guessing game as to whether they're going to work or not (of course I can always test on my friends' systems), but it makes development a total pain (this problem has kept me from learning anything about shader programming).

If you use DirectX, you can use the Direct3D reference device instead of hardware. It's a software implementation of Direct3D, and it's useful for testing advanced features that aren't supported by your hardware. The reference device is enabled by passing D3DDEVTYPE_REF instead of D3DDEVTYPE_HAL to IDirect3D9::CreateDevice().
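For example, a minimal device-creation sketch along those lines (windowed mode, software vertex processing, error handling omitted); pd3d and hWnd are assumed to exist already:

#include <windows.h>
#include <d3d9.h>

IDirect3DDevice9* CreateRefDevice(IDirect3D9* pd3d, HWND hWnd)
{
    D3DPRESENT_PARAMETERS pp;
    ZeroMemory(&pp, sizeof(pp));
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;

    IDirect3DDevice9* device = NULL;
    pd3d->CreateDevice(D3DADAPTER_DEFAULT,
                       D3DDEVTYPE_REF,                       // software reference rasterizer
                       hWnd,
                       D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                       &pp,
                       &device);
    return device;
}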

Or, if you use OpenGL, you can use Mesa3D software drivers.

centipede is right, you can develop with the REF device if you have to work with features that aren't supported in hardware. However, it does run extremely slowly, which also makes it a pain for development.

But you don't need to use shaders to build a framework around effect files. The effect framework allows you to set all fixed-function render states as well, so it effectively encapsulates both fixed-function and programmable hardware. In fact, you can set up a fixed-function fallback for users who don't have programmable hardware, so it has a simple shading level-of-detail system built in.
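A small sketch of how such a fallback might be picked at runtime through the D3DX effect interface, assuming the .fx file contains both a shader technique and a fixed-function one; the technique name used here is hypothetical:

#include <d3dx9.h>

void ChooseTechnique(ID3DXEffect* effect)
{
    // Try the "best" technique first (a shader-based one, for example)...
    D3DXHANDLE tech = effect->GetTechniqueByName("PerPixelLighting");

    // ...and if the device can't validate it, fall back to the first technique
    // that does validate (e.g. a fixed-function version listed in the same .fx).
    if (tech == NULL || FAILED(effect->ValidateTechnique(tech)))
        effect->FindNextValidTechnique(NULL, &tech);

    effect->SetTechnique(tech);
}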

I like it because it makes a one-man development team (me!) much more productive.

