Geometry...effects...help!

10 comments, last by spek 16 years, 2 months ago
Hi everyone at GD, I'm making some progress with my demo. I can now render meshes made with 3D Studio, with all the geometry and textures... ouch! Then I bumped into effects! :p The problem is that I don't know how to handle different effects and how to link them logically. Defining an effect as the combination of texture, material, passes and techniques (like the DirectX effect framework, I think), which operations are the heaviest (and so worth avoiding) during the render loop? I think one of the best things to do is to sort the geometry on a per-effect basis... is that correct? Thanks in advance!
---------------------------------------http://badfoolprototype.blogspot.com/
I'm not sure I understand exactly what the question is, but I'll answer what I *THINK* your question is with an example of the framework that I use.

I have a bunch of different render objects. They are:

  • Effect - Pretty much a collection of shaders and render states (essentially equivalent to a D3DXEffect)
  • Material - Effect with Textures/other values (basically, material-specific parameters that are passed to the effect)
  • Mesh - Vertex buffer/index buffer/attribute set (i.e. which ranges of indices render with different materials). Basically a D3DXMesh
  • Model - Mesh + Material(s)
  • RenderObject - Model + Positional information + object-specific textures/effect variables (such as a realtime reflection cubemap or a color value)
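In C++-ish terms, that hierarchy might look something like this. This is a minimal sketch; all the type and field names here are my own invention for illustration, not from any real engine:

```cpp
#include <string>
#include <vector>

// Illustrative sketch of the Effect -> Material -> Mesh -> Model -> Object
// hierarchy described above. Every name here is an assumption.
struct Effect {                  // shaders + render states
    std::string shaderName;
    bool alphaBlended;
};
struct Material {                // effect + material-specific parameters
    const Effect* effect;
    std::vector<std::string> textures;
};
struct Mesh {                    // geometry + subset ranges
    struct Subset { int firstIndex, indexCount; };
    std::vector<Subset> subsets;
};
struct Model {                   // mesh + one material per subset
    const Mesh* mesh;
    std::vector<const Material*> materials;
};
struct RenderObject {            // model + placement + per-object data
    const Model* model;
    float position[3];
};
```

The key property is that everything above RenderObject is shared: many objects point at one Model, many materials point at one Effect.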

Example:
  • Effect - Bumped With Specular
  • Material - Shiny Oak (uses "Bumped With Specular" with oak diffuse/normal maps)
  • Mesh - Wall
  • Model - OakWall (uses "Wall" + "Shiny Oak" as its only material)
  • Object - OakWall1023 ("OakWall" in a specific location/orientation).


Each instance of one of these render objects is only ever loaded once (that is, there is only ever one copy of "OakWall" or "Bumped With Specular" in memory at once). Thus, when building the list of objects to render, the list is built up in a specific order.

I sort by Effect, then Material, then Mesh. This sorting can be accomplished in a number of ways - I've used a cascading series of linked lists (which is very efficient but very ugly in code) and a red/black (self-balancing) binary tree (less efficient, but very easy to maintain/modify). I try to only do sorting when an object becomes newly visible (i.e. an insertion sort) instead of doing a full sort pass on the list.
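One simple alternative sketch of that Effect-then-Material-then-Mesh ordering is to pack the three IDs into a single sort key and use an ordinary sort (all names here are hypothetical, not the poster's actual implementation):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch: pack effect/material/mesh IDs into one 64-bit key so a
// plain std::sort groups draw calls by effect first, then material, then mesh.
struct DrawItem {
    uint16_t effectId, materialId, meshId;
    uint64_t key() const {
        return (uint64_t(effectId) << 32) |
               (uint64_t(materialId) << 16) |
                uint64_t(meshId);
    }
};

void sortDrawList(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.key() < b.key();
              });
}
```

Walking the sorted list in order, you only set the effect when `effectId` changes, only set material parameters when `materialId` changes, and so on - which is the state-change minimization being described.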

Also note that there are exceptions to this sorting - objects that use effects that are alpha blended or otherwise transparent (refraction, for instance) are sorted back-to-front instead of by render state efficiency metrics - this ensures correct drawing order for such objects (at the probable expense of additional render state changes when there are many such objects). There are probably other cases I'm forgetting - I'm about 2000 miles away from my source right now.
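The back-to-front ordering for transparent objects can be sketched like this, assuming a per-item camera distance is already available (names are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Transparent objects are sorted by distance to the camera, farthest first
// (back-to-front), instead of by state-change cost, so blending composites
// in the correct order.
struct TransparentItem {
    float distanceToCamera;
    int objectId;
};

void sortBackToFront(std::vector<TransparentItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const TransparentItem& a, const TransparentItem& b) {
                  return a.distanceToCamera > b.distanceToCamera;
              });
}
```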

I hope that helps at least give you an example of one system that exists. I am in no way stating that my system is maximally efficient, but it certainly seems to have a good set of performance characteristics, especially related to minimizing render state setting. Good luck, and I hope that at least touches on your question.

Thank you...this helps me a lot!
One other question if I may...do you have multi-effect geometries?

I'll explain... for me a geometry is a pair of vertex and index buffers. The vertices contain information about position, normals, color and texture mapping...

When you make a geometry with 3D Studio (for example), you can assign different materials (3D Studio materials) to different faces... so when you render it, you have to split the geometry, choosing the vertices influenced by a certain material.
So you end up with submeshes using different textures/vertex shaders/pixel shaders...

How do you handle that?

Thank you again!
---------------------------------------http://badfoolprototype.blogspot.com/
Yes, each mesh can have a number of "subsets" which can be given different materials in a model.

The sorting actually sorts each subset individually (so an object is sorted into N places and drawn N times, where N is the number of subsets that object's mesh has).
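Expanding one object into per-subset draw entries might look like this (a hedged sketch; the types are made up for illustration):

```cpp
#include <vector>

// Hypothetical: each mesh subset becomes its own entry in the draw list,
// so an object whose mesh has N subsets is inserted (and drawn) N times.
struct Subset {
    int materialId;
    int firstIndex, indexCount;   // index range rendered with this material
};
struct DrawEntry {
    int objectId;
    int materialId;
    int firstIndex, indexCount;
};

std::vector<DrawEntry> expandSubsets(int objectId,
                                     const std::vector<Subset>& subsets) {
    std::vector<DrawEntry> out;
    for (const Subset& s : subsets)
        out.push_back({objectId, s.materialId, s.firstIndex, s.indexCount});
    return out;
}
```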
Great... I'm beginning to understand it all clearly! I think I'll adopt a similar approach to support multi-material geometries...
So you draw on a per-object basis, with every object drawn N times... what about performance?
And what about vertex buffers? Do you have one per object, and one big buffer for the static geometry?

Thank you again... you're kind and helpful!

Gabriel
---------------------------------------http://badfoolprototype.blogspot.com/
Don't know if this helps, but I usually group my models under certain names. For example, a human body could have the 'material groups' "uniform", "head" and "sunglasses". I'm almost sure you can group a model in 3D Studio like this as well. Later on, when I load the model, I use these group names as ID names to link up with a certain material from my database. In my case a material means a set of shaders, parameters, textures, etc.

The advantage is that you don't need to load/use all kinds of difficult material/texture settings from your 3D program; the only thing that really matters is the material name. The downside is that it's easy to type a wrong name, and that your 3D program doesn't share the shaders/parameters from your material database.
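A sketch of that name-based lookup, with a fallback to guard against the mistyped-name downside (the `"default"` material and all names here are my own assumption, not the poster's code):

```cpp
#include <map>
#include <string>

// Hypothetical material database keyed by group name. A mistyped group name
// simply fails the lookup (the downside mentioned above), so a fallback
// "default" material catches it instead of crashing.
struct Material {
    std::string shader;
};

const Material* findMaterial(const std::map<std::string, Material>& db,
                             const std::string& groupName) {
    auto it = db.find(groupName);
    if (it == db.end())
        it = db.find("default");   // fallback for misspelled names
    return it == db.end() ? nullptr : &it->second;
}
```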

Just my 50 cents :)
Rick
Not really 50 cents; to me it's worth 1000 dollars!
But do you have one vertex/pixel shader pair per material?

What about performance of such a system?

Say you have n objects, each of them with m materials... you get n * m calls to draw the object, set the pixel and vertex shaders, and set the textures per shader...

And more: how do you handle lighting?
Have you written a shader per kind of light (point, spotlight), and if an object is lit, do you simply add the "light effect" to it?

Thanks!

---------------------------------------http://badfoolprototype.blogspot.com/
Well, before we get confused: a material in my case is a set of parameters for a vertex/fragment shader. Or it could also be a simple texture without shaders. For example, material "Brick1" could be linked to a vertex/fragment shader that does bump mapping. Those shaders require some parameters such as
- diffuse texture
- normalMap texture
- specular color
- shininess
- ...
And in my case each shader can have a couple of pre-compile options too. For example "parallax on/off", "reflections on/off", and so on. Otherwise I'd need to write a different shader for each possible combination.
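One common way to handle such pre-compile options is to prepend preprocessor defines to a single shader source before compiling it, so one source file covers every combination. A minimal sketch (the define names are invented):

```cpp
#include <string>

// Hypothetical: turn per-material options into preprocessor defines that get
// prepended to the shader source before compilation, so one shader source
// covers every combination (parallax on/off, reflections on/off, ...).
std::string buildDefines(bool parallax, bool reflections) {
    std::string defines;
    if (parallax)
        defines += "#define USE_PARALLAX\n";
    if (reflections)
        defines += "#define USE_REFLECTIONS\n";
    return defines;
}
```

Compiled variants would typically be cached by their define string, so each combination is only compiled once.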

So basically I have 3 libraries:
- textures (bitmaps, tga's, mipmap settings, transparency settings)
- shaders (each shader with pre-compiler options)
- materials (linked to a vertex/fragment shader, possibly with textures as parameters, and/or other values such as factors, colors, etc.)

Each material gets its own unique "ID" name. Usually I use the same name as the diffuse texture (where I make sure each image has a unique name as well). When loading the image in 3D Studio (or Lightwave, in my case), you might automatically get the right name. So, if you divide your model into 3 groups/images, you get 3 different names as well. Later on, when loading the model in your engine, use those names to find the right material again.


As for the rendering, I usually render per group. To prevent a lot of switching between shaders/textures, I also group them. First render all geometry chunks with material 'A', then everything with material 'B', and so on. So if my model uses 3 different materials, I switch 3 times. The performance is fine, as long as you don't switch too much (hundreds and hundreds of times). Therefore, try to keep things sorted/grouped.
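The benefit of that grouping can be made concrete by counting material switches while walking a draw list in order (an illustrative sketch, not anyone's actual engine code):

```cpp
#include <vector>

// Count how often the material changes while walking a draw list in order.
// A list sorted by material keeps this at (number of distinct materials),
// which is exactly the point of grouping.
int countMaterialSwitches(const std::vector<int>& materialIds) {
    int switches = 0;
    int current = -1;   // assumes -1 is never a valid material ID
    for (int id : materialIds) {
        if (id != current) {
            ++switches;
            current = id;
        }
    }
    return switches;
}
```

A grouped list like {A, A, B, B, C} costs 3 switches; the same draws interleaved as {A, B, A, B, C} cost 5.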

I'm not sure if this is the most flexible or fast system, but it works fine for me.


As for the lighting... I'm still searching. A year ago I did everything with static lightMaps, like Half-Life 2 did. Only the dynamic objects (characters, chairs, barrels, etc.) were using 2 point lights (the most nearby ones). Now I'm making a deferred shading engine with shadowMaps. Works cool, but it costs more performance than a lightMap, of course. On the other hand, when doing deferred shading you don't have to care about which object gets lit by which light. But even more challenging is the ambient lighting part. You get that for free with a lightMap, but when doing realtime lighting with shadowMaps or stencil shadows (Doom 3), there is no ambient light. As you might have seen, I'm asking about that in another post here :)

Greetings,
Rick
Good... this also really helps me.
I'm trying to implement something similar - materials, textures and shaders.
Basically I get shaders in assembly (written in Cg and then compiled), a set of parameters and textures, and all is done!
But this only works on a per-object basis... do you merge the different objects into chunks and THEN sort them per material?
---------------------------------------http://badfoolprototype.blogspot.com/
Yep, why not? The world itself is grouped per material, and so are my objects. Although in practice, most (small) objects can do with 1 material. But for example, I could separate the clothes and the skin parts of a human model into 2 groups (or lower body + head). First I apply the transformations/rotations/scaling for the object, then I render all geometry that belongs to group1, then group2, and so on. In my case most objects use the same vertex shader as well, so I only have to switch the fragment program and its parameters.

One thing could be a problem though. Let's say you have an object that uses 2 different materials, and you have 100 of them placed in your world. If you render per object, you get 2 x 100 material switches. In that case it might be more efficient to first render group1 of all 100 objects, and then group2. Then you would only need 2 switches. But you also need to sort/manage everything, which can be difficult as well.

It's not a bad idea to use the same shader/texture/material for objects that are used a lot. Stuff like boxes or foliage, for example. Draw all the different textures into 1 big image, and just shift the texture coordinates (an 'atlas texture'). Then you can draw a lot of different objects with the same material/parameters/shaders.
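That texture-coordinate shift can be sketched like this, assuming a square atlas of equally sized tiles (all names invented for illustration):

```cpp
// Hypothetical atlas lookup: remap a tile's local 0..1 UVs into its cell of
// an N x N atlas texture, so many different objects can share one texture
// (and therefore one material/shader) without re-authoring their UVs.
struct UV {
    float u, v;
};

UV atlasUV(UV local, int tileX, int tileY, int tilesPerSide) {
    float scale = 1.0f / tilesPerSide;
    return { (tileX + local.u) * scale,
             (tileY + local.v) * scale };
}
```

For example, in a 2x2 atlas, the center of tile (1, 0) maps to (0.75, 0.25) in atlas space.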

Greetings,
Rick

