# Designing a render queue (OpenGL)


## Recommended Posts

I'm trying to design a more or less generic and extensible rendering system. So far, the design looks like this:

1. All geometry is queued up as primitives (points, lines, triangles, quads, ...) in the form of vertex/normal/texture coordinate lists with corresponding index lists and shader state. There are two queues, one for solid objects and one for blended objects.
2. The blended geometry is then (roughly) sorted back to front.
3. The queued geometry is actually drawn. (This is the only part that should differ between an OpenGL, D3D, or software renderer.)

I'm pretty sure I'll run into problems with the combination of steps 1 and 3, since it needs a fairly extensive data format with facilities for a wide range of features that might not be needed for every object. For example, supporting multitexturing would require more data (texture coordinates, blending mode, ...) for every vertex in a multitextured primitive. For clarification, this is a very basic structure for a single vertex (the texture itself is specified in a separate "Primitive" class which maintains a list of these elements):
```cpp
struct PrimitiveElement {
    Vector3*  V; // vertex position
    Vector3*  N; // normal
    TexCoord* T; // texture coordinate, has u and v attributes

    PrimitiveElement( void ) : V(0), N(0), T(0) {}
    PrimitiveElement( Vector3* _v, Vector3* _n, TexCoord* _t ) :
        V(_v), N(_n), T(_t)
    {}
};
```

Now, to have multitexturing, I'd need multiple texture coordinates for each element, plus the already mentioned bunch of information in the Primitive class. One approach that would circumvent the problem would be to skip the "interchange" format entirely and draw the geometry directly after performing the depth sort on the geometry objects themselves, but that would scatter the rendering code across many more methods instead of just one per primitive type. Am I overcomplicating this? Is there some easy way out that would preserve portability while not requiring a large data overhead or a multitude of different data structures? (Edit: commented the V, N, T attributes)
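To make the data-overhead problem concrete: a fixed vertex struct grows with every optional feature, while a layout with optional attributes only pays for what each mesh uses. A minimal sketch of the latter idea (the names here are hypothetical, not from the code above):

```cpp
#include <cstddef>
#include <vector>

struct Vec3     { float x, y, z; };
struct TexCoord { float u, v; };

// Hypothetical flexible vertex: texture coordinate sets are stored only
// when present, so a single-textured mesh carries one TexCoord set while
// a multitextured mesh carries several.
struct FlexVertex {
    Vec3 position{};
    Vec3 normal{};
    std::vector<TexCoord> texCoords; // one entry per texture unit

    // Approximate per-vertex payload size in bytes for this layout.
    std::size_t byteSize() const {
        return sizeof(position) + sizeof(normal)
             + texCoords.size() * sizeof(TexCoord);
    }
};
```

With one texture coordinate set the payload is 32 bytes; each additional set adds only 8, instead of every vertex in the scene paying for the maximum feature set.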

##### Share on other sites
I'm no expert here, but OGRE has a RenderQueue, which holds a queue of RenderGroups, which in turn hold Renderables sorted by the material used.

The idea is that you could store all 2D stuff in one RenderGroup, terrain in another, and the rest in a third. Within each RenderGroup, Renderables are sorted by material in order to minimise state transitions.
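The material-sorting idea can be sketched like this (the types are illustrative stand-ins, not OGRE's actual API):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical renderable: materialId stands in for the full material state
// (textures, shader, blend mode) that would be bound on the device.
struct Renderable {
    int materialId;
    int meshId;
};

// Sort a render group so all users of a material are adjacent; the renderer
// can then bind each material once per run instead of once per object.
inline void sortByMaterial(std::vector<Renderable>& group) {
    std::sort(group.begin(), group.end(),
              [](const Renderable& a, const Renderable& b) {
                  return a.materialId < b.materialId;
              });
}
```

After sorting, iterating the group touches each material exactly once, which is the state-transition saving the post describes.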

##### Share on other sites

I don't think your vertex structure should be described that way. In the past, while trying to build an API-independent renderer, I came up with the idea of a class that describes the vertex using code. Without the details, it was something like this:

```cpp
class VertexDesc {
public:
    void beginStream();
    void endStream();
    void position3f();
    void normal3f();
    void texture1f();
    void texture2f();
    void texture3f();
};
```

And so on.

Using this vertex descriptor (which maps nicely to the DX8/9 vertex declaration style) was very simple:

```cpp
// This code describes a two-stream vertex, with position/normal in
// the first stream and two 2D texture coordinate sets in the second stream.
VertexDesc desc;
desc.beginStream();
desc.position3f();
desc.normal3f();
desc.endStream();
desc.beginStream();
desc.texture2f();
desc.texture2f();
desc.endStream();
```

The VertexBuffer object was created from this vertex description; the same was true of the VertexIterator, which was used to iterate through the vertex buffer ^^

This was very nice for creating vertices on the fly, depending on the data I read from the model file. When I wanted a fixed vertex descriptor, I usually derived a class from VertexDesc and put the relevant calls in that class's inline constructor.
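One way such a descriptor can work internally is to accumulate the per-stream stride as the attribute methods are called. This is only a sketch of that idea under my own assumptions, not the original implementation:

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of a code-driven vertex description: each attribute call
// adds its byte size to the current stream, so the stride of every stream
// falls out of the recorded calls.
class VertexDesc {
public:
    void beginStream() { streams.emplace_back(); }
    void endStream()   {}
    void position3f()  { streams.back() += 3 * sizeof(float); }
    void normal3f()    { streams.back() += 3 * sizeof(float); }
    void texture1f()   { streams.back() += 1 * sizeof(float); }
    void texture2f()   { streams.back() += 2 * sizeof(float); }
    void texture3f()   { streams.back() += 3 * sizeof(float); }

    // Per-vertex stride (in bytes) of one stream.
    std::size_t strideOf(std::size_t stream) const { return streams[stream]; }
    std::size_t streamCount() const { return streams.size(); }

private:
    std::vector<std::size_t> streams; // accumulated byte size per stream
};
```

Running the two-stream example from the post through this sketch yields a 24-byte stride for the position/normal stream and a 16-byte stride for the two 2D texture coordinate sets.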

Hope this helps,

##### Share on other sites
Darkor,

I'm sure this will come in handy at some point, but the problem right now is that I need a compact data format to move some attributes and data around in a way that's manageable in at least OpenGL and Direct3D. The idea of that hierarchy is to minimize the number of state changes, but right now I'm looking for a way to control state changes.

Emmanuel,

sorry if I seem stubborn, but your approach doesn't seem all that different to me; it just looks like a syntactic difference. In your code, the rendering (pushing vertices to the hardware) would be written into the VertexDesc class, while my approach leaves it to the renderer to do something with the values. The problem of a ton of specific per-vertex attributes wouldn't change much.

Another thought that just came up concerns hierarchical transformations with objects not stored as a whole. Would it be a good idea to store a quaternion for each "component" of a renderable object and have the primitives refer back to it? Some kind of linked transformation list that calls, for example, glMultMatrix( ComponentTransform ) for each parent up the tree? Say I have a space scene in which every ship has a quaternion holding its position and rotation, and I want to draw a turret on one of those ships: the turret's primitive would point to the ship's transformation, which would be applied before the primitive's own. Since even that sounds confusing to me, here's a short form:

```cpp
draw( object ) {
    Stack transformations;
    transformations.push( object.transform );
    // Put all parents' transformations on the stack until we are
    // at the root object.
    parent = object;
    while( (parent = parent.parent) != NULL )
        transformations.push( parent.transform );
    // Multiply all transformations from the stack onto the view matrix.
    while( transform = transformations.pop() )
        MultiplyModelview( transform );
}
```

##### Share on other sites

Regarding my last post, I think you misunderstood me. Not too surprising, since I didn't explain anything :)

Quote:
 Is there some easy way out that would preserve portability while not requiring a large data overhead or a multitude of different data structures?

I was answering that question. If you look at my structure, it doesn't say who does the rendering. Actually, the rendering is done in the renderer->renderOperation() method (the Renderer class is abstract and can be implemented for DX, OGL, or whatever you want). This method fetches the VertexBuffer and, using the VertexDesc, knows how to set up the API state to draw the data correctly. So it's still the same approach as yours (the drawing part is centralized in step 3 and is API-specific).

The point is that with this non-static data structure, you have a way to describe any kind of vertex data you want without needing to know in advance how it is organized.

Consider your case. You define the PrimitiveElement class to contain position, normal, and texture coordinates. If you want to support shadow maps, you might add a new texture coordinate set to your vertex description, but most of your meshes don't need it. So what would you do? Have tons of rendering methods, one for each kind of vertex, adding code complexity (solution 1)? Or put everything into one vertex class and use only the data you need, adding a big data overhead whenever you don't use all the vertex attributes (solution 2)?

It seems to me that this was your problem. So I described a potential solution: use a vertex descriptor. With it, the renderer knows how it must interpret the vertex streams (you'd need that in solution 2 anyway) without requiring all possible data to be present in the vertex stream (which avoids the solution 2 data overhead I spoke about).

Of course, the major concern may be ease of writing. Using this data representation together with a VERTEX structure that defines the vertex attributes in the classical way is a very bad idea, because you'll have problems if you don't keep your VERTEX structure in sync with the vertex description. That's why I added iterators to the whole thing. An iterator points to data in the vertex stream and is created from the vertex description. Basically, it looks like this:

```cpp
template <typename TYPE>
class VertexIterator {
private:
    byte *_data;
    int   _vsize;

public:
    VertexIterator( byte *data, int vsize ) :
        _data(data), _vsize(vsize)
    {
    }

    inline TYPE  operator*() const { return *(TYPE *)_data; }
    inline TYPE& operator*()       { return *(TYPE *)_data; }
    inline void  operator++()      { _data += _vsize; }
    inline void  operator++(int)   { _data += _vsize; }
    // and so on... the classical iterator operators
};
```

The vertex iterator itself is created upon demand by the VertexBuffer :

```cpp
VertexIterator<Vector3> nit;
nit = vertexBuffer.getIterator<Vector3>( VA_NORMAL,
                                         0 /* stream */,
                                         0 /* tex coord set, unused here */ );
```

If there is no normal attribute in the vertex, you can either throw an exception in the getIterator<>() function or add an isValid() method to the VertexIterator<> class.

I know this may sound a bit cryptic, but it is rather simple, and very powerful.

And I hope I'm not completely off topic :)

##### Share on other sites
What I've been doing, and it seems to work well, is:

The RenderQueue has a list of render groups. A render group is a high-level, coarse grouping of primitives; think terrain, models, alpha, and UI. Each render group has a shader/fragment list: an object that holds a reference to the shader and a list of fragments to be rendered with that shader. On insertion into the render queue, a primitive automatically goes into the correct group and then the correct shader list, so no sorting is required.
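The insert-into-the-right-bucket idea can be sketched with nested maps (the type names here are my own, not the poster's actual code):

```cpp
#include <map>
#include <vector>

// Hypothetical fragment: whatever unit of work the queue holds per draw call.
struct Fragment { int meshId; };

enum RenderGroupId { GROUP_TERRAIN, GROUP_MODELS, GROUP_ALPHA, GROUP_UI };

// Each group maps shader id -> fragments using that shader. Insertion lands
// in the right bucket immediately, so no post-insertion sort is needed.
struct RenderQueue {
    std::map<RenderGroupId, std::map<int, std::vector<Fragment>>> groups;

    void insert(RenderGroupId group, int shaderId, Fragment f) {
        groups[group][shaderId].push_back(f);
    }
};
```

Rendering then walks the groups in a fixed order (terrain, models, alpha, UI) and, within each group, binds each shader once and draws its fragment list.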

##### Share on other sites
Quote:
 Original post by mitchw
 What I've been doing and it seems to work good is: RenderQueue has list of render groups. A render group is a high level gross grouping of primitives. Think terrain, models, alpha and UI. Each render group has a shader/fragment list. This is an object that has a reference to the shader and a list of fragments to be rendered for this shader. On insertion into the render queue, it automatically gets put into the correct group, then the correct shaderlist, so no sorting required.

If it automatically gets put into the correct group and shader list, does that mean you create all the groups and shader lists when the program starts, and that you have enough lists for all the possible materials/shaders?

---------------

BTW, how do you guys sort your render queues? What sorting algorithm (radix?), and on what parameters (textures, shaders)? Thanks.
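One common answer to this question (my addition, not from any post in this thread) is to pack the sort criteria into a single integer key, with the most expensive state changes in the highest bits, then sort once on that key; a radix sort works well on such fixed-width keys. A sketch with arbitrary field widths:

```cpp
#include <cstdint>

// Pack shader, texture, and quantized depth into one 64-bit sort key.
// Higher bits dominate the ordering, so batching by shader outranks
// batching by texture, which outranks front-to-back depth ordering.
inline std::uint64_t makeSortKey(std::uint16_t shader,
                                 std::uint16_t texture,
                                 std::uint32_t depth) {
    return (std::uint64_t(shader)  << 48)
         | (std::uint64_t(texture) << 32)
         |  std::uint64_t(depth);
}
```

Sorting the queue by this key groups draws by shader first and texture second; for the blended queue, the depth field can instead be inverted to get back-to-front order.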

##### Share on other sites
Emmanuel, thanks a lot for your effort.

I think I'm starting to get what you're telling me [smile]. Basically, you created your own 3D API for vertex creation, which effectively puts vertices into your own idea of a VertexBuffer. How do you store the data streamed into the VertexDesc? Do you keep a dynamic list of streams, each of which can contain a list of attributes like position, normal, or texture coordinates? Are those attributes all objects derived from a common base class, like VertexAttrib?

##### Share on other sites
I sort of used a simplified version of OGRE's VertexData class (yeah, I guess I'd better give some credit to the OGRE team if so much is based on their infrastructure).

Basically, VertexData stores VertexArrays, which can hold the actual vertex array, index data, texture coordinates, normal arrays, etc. Each VertexArray identifies as one of these types, and memory is allocated, destroyed, or sent to the renderer based on which type it is.

Look it up in OGRE docs.

##### Share on other sites
Quote:
 Original post by Shadowdancer
 Emmanuel, thanks a lot for your effort. I think I'm starting to get what you want to tell [smile]. Basically, you created your own 3D API for vertex creation, which effectively puts vertices into your own idea of a VertexBuffer. How do you store the data streamed into the VertexDesc? Do you keep a dynamic list of streams, each of which can contain a list of attributes, like position, normal or texture coordinates? Are those attributes all objects derived from a common base class, like VertexAttrib?

Thanks for your thanks :) Let me clarify the whole thing.

The data is not streamed into the vertex desc; the vertex desc is just that: a vertex description. The vertex data is stored in a VertexBuffer object, which contains:

* the VertexDesc object
* a list of VertexStream
* some other stuff not really important in our case

The VertexStream actually points to the vertex data.
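The described layout can be sketched like this (member names are illustrative; the real classes hold more, per the "other stuff" above):

```cpp
#include <cstdint>
#include <vector>

// Sketch of the described VertexBuffer layout: a description plus one raw
// byte stream per declared stream. The description tells the renderer how
// to interpret the bytes, so no compile-time vertex struct is needed.
struct VertexStream {
    std::vector<std::uint8_t> bytes; // raw vertex data for this stream
};

struct VertexDescInfo {
    std::vector<std::size_t> strides; // per-vertex stride of each stream
};

struct VertexBuffer {
    VertexDescInfo desc;
    std::vector<VertexStream> streams;

    // Allocate storage for 'count' vertices in every declared stream.
    void allocate(std::size_t count) {
        streams.resize(desc.strides.size());
        for (std::size_t i = 0; i < streams.size(); ++i)
            streams[i].bytes.resize(count * desc.strides[i]);
    }
};
```

Because the streams are raw bytes, only the description (and the iterators built from it) ever needs to know the vertex layout, which is the decoupling the post is after.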

When you say I created my own 3D vertex creation API, that's a bit exaggerated :) The only thing I really did was push the D3D vertex declaration one step further. If you look at my description of the VertexDesc object, it is nothing more than easy runtime creation of a D3DVERTEXELEMENT9 array.

The Device->createVertexBuffer() method takes a VertexDesc parameter, interprets it (and optionally creates an intermediate representation as in D3D, since I must use a D3DVERTEXELEMENT9 structure to use the encapsulated vertex streams), and uses the interpreted information to create the vertex streams, which are then stored in the VertexBuffer object.

This is done so that I don't need to know the vertex layout at compile time (allowing me to support any kind of vertices) while not wasting memory space and bandwidth. I allowed myself to do it this way because vertex buffer creation should not be a time-critical operation.

To summarize, here's the D3D pseudo-code:

```
1) create a D3DVERTEXELEMENT9 array from the VertexDesc and the
   associated IDirect3DVertexDeclaration9
2) associate this with the new VertexBuffer object
for each stream declaration in the VertexDesc object
    3) compute the length (in bytes) of the stream
    4) create the IDirect3DVertexBuffer9 for that stream
    5) associate this VB with a new VertexStream object
    6) add this VertexStream object to the VertexBuffer's list
end for
```

Of course, if there is only one stream, I can try to compute the stream's FVF instead of creating an IDirect3DVertexDeclaration9.

The rendering code fetches the IDirect3DVertexDeclaration9 (or the FVF), sets up the device with the fetched data, initializes the streams via SetStreamSource() (plus a bunch of other things, of course), and calls DrawPrimitive() or DrawIndexedPrimitive().

Voilà. That's all :)

a) I do not store the vertex data in the VertexDesc
b) I store a dynamic list of streams
c) I do not use any VertexAttribute class (although this is giving me ideas... I should think about it :)

Anyway, I hope I'm clearer and I hope I helped you :)
