
OpenGL: Designing a render queue

Recommended Posts

Shadowdancer    319
I'm trying to design a more or less generic and extensible rendering system. So far, the design looks like this:

1. All geometry is queued up as primitives (points, lines, triangles, quads, ...) in the form of vertex/normal/texture coordinate lists with corresponding index lists and shader state. There are two queues, one for solid objects and one for blended objects.
2. The blended geometry is then (roughly) sorted back to front.
3. The geometry is actually drawn somewhere. This is the only part that should differ between an OpenGL, D3D or software renderer.

I'm pretty sure I'll run into problems with the combination of steps 1 and 3, since it needs a pretty extensive data format with facilities for a wide range of features that might not be needed on every object. For example, supporting multitexturing would need more data for texture coordinates, blend modes, etc. for every vertex in a multitextured primitive. For clarification, this is a very basic structure for a single vertex (the texture itself is specified in a separate "Primitive" class which maintains a list of these elements):
  struct PrimitiveElement {
    Vector3*  V; // vertex position
    Vector3*  N; // normal
    TexCoord* T; // texture coordinate, has u and v attributes

    PrimitiveElement( void ) : V(0), N(0), T(0) {}
    PrimitiveElement( Vector3* _v, Vector3* _n, TexCoord* _t ) :
      V(_v), N(_n), T(_t)
    {}
  };
Now, to have multitexturing, I'd need multiple texture coordinates per element plus the already mentioned Bunch Of Information in the Primitive class. An approach that would sidestep the problem would be to skip the "interchange" format and draw directly after performing the depth sort on the geometry objects themselves, but that would spread the rendering code across many more methods instead of just one per primitive type. Am I overcomplicating this? Is there some easy way out that would preserve portability while not requiring a large data overhead or a multitude of different data structures? (Edit: commented the V, N, T attributes)
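For reference, here's a rough sketch of the queueing part I have in mind (Primitive here just stands in for the real class above, with a cached view depth instead of real distance handling):

// sketch only: "Primitive" stands in for the real class described above
#include <vector>
#include <algorithm>

struct Primitive {
    float viewDepth;  // distance to the camera, filled in while queueing
    bool  blended;    // does this primitive need blending?
    // ... vertex/normal/texcoord lists, indices, shader state, ...
};

// back-to-front: larger view depth gets drawn first
static bool furtherAway( const Primitive* a, const Primitive* b ) {
    return a->viewDepth > b->viewDepth;
}

class RenderQueue {
public:
    void queue( Primitive* p ) {                 // step 1
        ( p->blended ? m_blended : m_solid ).push_back( p );
    }
    void sortBlended() {                         // step 2
        std::sort( m_blended.begin(), m_blended.end(), furtherAway );
    }
    // step 3 hands m_solid and m_blended to the API-specific renderer
private:
    std::vector<Primitive*> m_solid;
    std::vector<Primitive*> m_blended;
};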

Darkor    134
I'm no expert here, but in OGRE, they had a RenderQueue, which has a queue of RenderGroups, which in turn have Renderables, sorted by the material used.

The idea is that you could store all 2D stuff in a RenderGroup and terrain in a different one and the rest in another. In each RenderGroup, Renderables are sorted by the material used in order to minimise state transitions.
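Roughly like this, if it helps (just a sketch of the idea from memory, not OGRE's actual classes; Material and Renderable are placeholders):

// just the idea, not OGRE's real interface: group renderables first by
// render group, then by material, to cut down on state changes
#include <map>
#include <vector>

struct Material;    // textures, blend state, shader, ...
struct Renderable;  // anything that can be drawn

typedef std::vector<Renderable*> RenderableList;

struct RenderGroup {
    // keyed by material, so everything sharing a material is drawn together
    std::map<Material*, RenderableList> byMaterial;

    void add( Material* m, Renderable* r ) { byMaterial[m].push_back( r ); }
};

struct RenderQueue {
    enum GroupId { GROUP_2D, GROUP_TERRAIN, GROUP_WORLD, GROUP_COUNT };
    RenderGroup groups[GROUP_COUNT];

    void add( GroupId g, Material* m, Renderable* r ) { groups[g].add( m, r ); }
};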

Hello Shadowdancer,

I don't think your vertex structure should be described in that way. In the past, while trying to build an API-independent renderer, I came up with the idea of having a class which actually describes the vertex using code. Without going into the details, it was something like this:



class VertexDesc
{
public:
    void beginStream();
    void endStream();

    void position3f();
    void normal3f();

    void texture1f();
    void texture2f();
    void texture3f();
};




And so on.

Using this vertex descriptor (which nicely maps to the DX8/9 vertex description style) was very simple:



// this code describes a 2-stream vertex, with position/normal in
// the first stream and two 2D texture coords in the second stream

VertexDesc desc;

desc.beginStream();
desc.position3f();
desc.normal3f();
desc.endStream();
desc.beginStream();
desc.texture2f();
desc.texture2f();
desc.endStream();




The VertexBuffer object was created using this vertex description - this was also the case for the VertexIterator which was used to iterate through the vertex buffer ^^

This was very nice for creating vertex formats on the fly, depending on the data I read from the model file. When I wanted a fixed vertex descriptor, I usually derived a class from VertexDesc and put the code I wanted in the inline constructor of that class.
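The descriptor itself doesn't need to be anything fancy. Something like this captures the idea (a toy sketch, not my actual code; the attribute names and the streams() accessor are made up for illustration):

// toy sketch of what the descriptor could record internally: per stream,
// which attributes were declared and how many floats each one takes
#include <vector>

class VertexDesc
{
public:
    enum AttribType { VA_POSITION, VA_NORMAL, VA_TEXCOORD };

    struct Attrib { AttribType type; int floatCount; };
    typedef std::vector<Attrib> Stream;

    void beginStream()  { m_streams.push_back( Stream() ); }
    void endStream()    { /* nothing to do in this sketch */ }

    void position3f()   { addAttrib( VA_POSITION, 3 ); }
    void normal3f()     { addAttrib( VA_NORMAL,   3 ); }
    void texture1f()    { addAttrib( VA_TEXCOORD, 1 ); }
    void texture2f()    { addAttrib( VA_TEXCOORD, 2 ); }
    void texture3f()    { addAttrib( VA_TEXCOORD, 3 ); }

    // the VertexBuffer and the renderer read the layout back from here
    const std::vector<Stream>& streams() const { return m_streams; }

private:
    void addAttrib( AttribType t, int n )
    {
        Attrib a; a.type = t; a.floatCount = n;
        m_streams.back().push_back( a );
    }

    std::vector<Stream> m_streams;
};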

Hope this helps,

Shadowdancer    319
Darkor,

I'm sure this will come in handy at some point, but the problem right now is that I need a compact data format to move some attributes and data around in a way that's manageable in at least OpenGL and Direct3D. The idea of that hierarchy is to minimize the number of state changes, but right now I'm looking for a way to control state changes.

Emmanuel,

sorry if I seem stubborn, but your approach doesn't seem all that different to me; it just looks like a syntactic difference. In your code, the rendering (pushing vertices to the hardware) would be written into the VertexDesc class, while my approach would leave it to the renderer to do something with the values. The problem of a ton of specific per-vertex attributes wouldn't change much.




Another thought that just came up is hierarchical transformations for objects that aren't stored as a whole. Would it be a good idea to store a transformation (say, a quaternion plus a position) for each "component" of a renderable object and have the primitives refer back to it? Some kind of linked transformation list that just calls, for example, glMultMatrix( ComponentTransform ) for each parent up the tree? So if I have, say, a space scene in which every ship has a transformation holding its position and rotation and I want to draw a turret on one of those ships, the turret's primitive would point to the ship's transformation, which would be applied before the turret's own. And since this sounds confusing even to me, here's a short form:


draw( object ) {
    Stack transformations;

    transformations.push( object.transform );

    // put all parents' transformations on the stack until we are
    // at the root object
    parent = object;
    while( (parent = parent.parent) != NULL )
        transformations.push( parent.transform );

    // multiply all transformations from the stack onto the view
    // matrix, root first
    while( transform = transformations.pop() )
        MultiplyModelview( transform );
}

Shadowdancer,

Regarding my last post, I think you misunderstood me. Not too difficult since I did not explain anything :)

Quote:

Is there some easy way out that would preserve portability while not requiring a large data overhead or a multitude of different data structures?


I was answering that question. If you look at my structure, it doesn't say who does the rendering. Actually, the rendering is done in the renderer->renderOperation() method (the Renderer class is abstract and may exist for DX, OGL or whatever you want). This method fetches the VertexBuffer and, using the VertexDesc, knows how to set up the API state in order to correctly draw the data. So it's still the same approach as yours (the drawing part is centralized in step 3 and is API specific).

The point is that using this non-static data structure, you have a way to describe any kind of vertex data you want without the need to know how it is organized.

Consider your case. You define the PrimitiveElement class to contain position, normal and texture coord. If you want to support shadow maps, you may add a new texture coordinate set to your vertex description, but it isn't needed for most of your meshes. So what would you do? Have tons of rendering methods (one for each kind of vertex, adding code complexity - this is solution 1), or put everything in one vertex class and use only the data you need (adding a big data overhead when you don't use all the vertex attributes - this is solution 2)?

It seems to me that this was your problem. So I described a potential solution, which is to use a vertex descriptor - using it, the renderer knows how it must interpret the vertex streams (you'll need to do that in solution 2 anyway) without needing all the data to be present in every vertex stream (which avoids the solution 2 data overhead I spoke about).
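On the OpenGL side, for example, the renderer could just walk the descriptor and bind the matching client-state pointers (a sketch only, reusing the toy VertexDesc layout from my earlier post and assuming interleaved float data per stream; your renderer will differ):

// sketch: an OpenGL renderer interpreting the toy descriptor above,
// assuming interleaved float data for each stream
#include <cstddef>
#include <GL/gl.h>

void setupStreamPointers( const VertexDesc& desc, int streamIndex, const float* data )
{
    const VertexDesc::Stream& s = desc.streams()[streamIndex];

    // stride = size in bytes of one interleaved vertex in this stream
    int strideFloats = 0;
    for( std::size_t i = 0; i < s.size(); ++i )
        strideFloats += s[i].floatCount;
    const GLsizei stride = GLsizei( strideFloats * sizeof(float) );

    // walk the attributes and bind the matching pointer for each one
    int offset = 0;
    for( std::size_t i = 0; i < s.size(); ++i )
    {
        const float* ptr = data + offset;
        switch( s[i].type )
        {
        case VertexDesc::VA_POSITION:
            glEnableClientState( GL_VERTEX_ARRAY );
            glVertexPointer( s[i].floatCount, GL_FLOAT, stride, ptr );
            break;
        case VertexDesc::VA_NORMAL:
            glEnableClientState( GL_NORMAL_ARRAY );
            glNormalPointer( GL_FLOAT, stride, ptr );
            break;
        case VertexDesc::VA_TEXCOORD:
            // a second texture coordinate set would also need
            // glClientActiveTexture() to select the unit first
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glTexCoordPointer( s[i].floatCount, GL_FLOAT, stride, ptr );
            break;
        }
        offset += s[i].floatCount;
    }
}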

Of course, the major concern may be ease of writing. Using this data representation in conjunction with a VERTEX structure which defines the vertex attributes in the classical way is a very bad idea, because you'll have problems if you don't keep your VERTEX structure in sync with the vertex description. That's why I added iterators to the whole thing. The iterator points to data in the vertex stream. It is created using the vertex desc. Basically, it looks like this:


template <typename TYPE> class VertexIterator
{
private:
    byte *_data;
    int   _vsize;

public:
    VertexIterator(byte *data, int vsize) :
        _data(data), _vsize(vsize)
    {
    }

    inline TYPE  operator*() const { return *(TYPE *)_data; }
    inline TYPE& operator*()       { return *(TYPE *)_data; }
    inline void  operator++()      { _data += _vsize; }
    inline void  operator++(int)   { _data += _vsize; }
    // and so on... the classical iterator operators
};



The vertex iterator itself is created on demand by the VertexBuffer:


VertexIterator<Vector3> nit;
nit = vertexBuffer.getIterator<Vector3>(VA_NORMAL, 0 /*stream*/, 0 /*tex coord set, unused in this case*/);



If there is no normal attribute in the vertex, you can either throw an exception in the getIterator<> function or have an isValid() method in the VertexIterator<> class.
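Using it then looks roughly like this (vertexCount is assumed to come from the VertexBuffer; this is just to show the intent):

// walking every normal through the iterator created above
VertexIterator<Vector3> nit = vertexBuffer.getIterator<Vector3>( VA_NORMAL, 0, 0 );
for( int i = 0; i < vertexCount; ++i, ++nit )
{
    Vector3& n = *nit;  // direct reference into the vertex stream
    // ... read or modify the normal here ...
}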

I know this may sound a bit cryptic, but it is rather simple, and very powerful.

And I hope I'm not completely off topic :)

mitchw    162
What I've been doing, and it seems to work well, is:

RenderQueue has list of render groups. A render group is a high level gross grouping of primitives. Think terrain, models, alpha and UI. Each render group has a shader/fragment list. This is an object that has a reference to the shader and a list of fragments to be rendered for this shader. On insertion into the render queue, it automatically gets put into the correct group, then the correct shaderlist, so no sorting required.
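In rough code the idea is something like this (a sketch only; Shader and Fragment stand in for whatever your engine uses):

// sketch of the grouping: inserting a fragment finds its group and its
// shader list directly, so no separate sort pass is needed
#include <map>
#include <vector>

struct Shader;
struct Fragment;  // a renderable chunk of geometry

struct ShaderList {
    Shader* shader;
    std::vector<Fragment*> fragments;
};

struct RenderGroup {  // terrain, models, alpha, UI, ...
    std::map<Shader*, ShaderList> byShader;

    void insert( Shader* s, Fragment* f ) {
        ShaderList& list = byShader[s];  // created on first use
        list.shader = s;
        list.fragments.push_back( f );
    }
};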

Hybrid    138
Quote:
Original post by mitchw
What I've been doing and it seems to work good is:

RenderQueue has list of render groups. A render group is a high level gross grouping of primitives. Think terrain, models, alpha and UI. Each render group has a shader/fragment list. This is an object that has a reference to the shader and a list of fragments to be rendered for this shader. On insertion into the render queue, it automatically gets put into the correct group, then the correct shaderlist, so no sorting required.

If it automatically gets put in the correct group and shaderlist, then that means you create all the groups and shaderlists when the program starts, and that you have enough lists for all the possible materials/shaders???

---------------

BTW, how do you guys sort your renderqueues? what sorting algorithm (radix?) and on what parameters (textures, shaders??). Thanks

Shadowdancer    319
Emmanuel, thanks a lot for your effort.

I think I'm starting to get what you're trying to tell me [smile]. Basically, you created your own 3D API for vertex creation, which effectively puts vertices into your own idea of a VertexBuffer. How do you store the data streamed into the VertexDesc? Do you keep a dynamic list of streams, each of which can contain a list of attributes, like position, normal or texture coordinates? Are those attributes all objects derived from a common base class, like VertexAttrib?

Darkor    134
I sorta used a simplified version of OGRE's VertexData class (yeah I guess I had better give some credit to the OGRE team if so much is based on their infrastructure).

Basically VertexData stores VertexArrays, which could store the actual vertex array, index data, texture coordinates, normal arrays etc. Each VertexArray identifies as one of these types and memory is allocated and destroyed or sent to the renderer based on which type it is.
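From memory, it boils down to something like this (a very simplified sketch, not OGRE's actual code; index data would be stored similarly but with integer storage):

// very simplified sketch of the idea, not OGRE's actual classes
#include <cstddef>
#include <vector>

struct VertexArray {
    enum Type { POSITIONS, NORMALS, TEXCOORDS };

    Type               type;
    int                components;  // floats per element: 3 for positions, 2 for uvs, ...
    std::vector<float> data;        // the raw attribute data for this array
};

struct VertexData {
    std::vector<VertexArray> arrays;

    // the renderer checks each array's type to decide how to upload it
    const VertexArray* find( VertexArray::Type t ) const {
        for( std::size_t i = 0; i < arrays.size(); ++i )
            if( arrays[i].type == t ) return &arrays[i];
        return 0;
    }
};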

Look it up in OGRE docs.

Quote:
Original post by Shadowdancer
Emmanuel, thanks a lot for your effort.

I think I'm starting to get what you're trying to tell me [smile]. Basically, you created your own 3D API for vertex creation, which effectively puts vertices into your own idea of a VertexBuffer. How do you store the data streamed into the VertexDesc? Do you keep a dynamic list of streams, each of which can contain a list of attributes, like position, normal or texture coordinates? Are those attributes all objects derived from a common base class, like VertexAttrib?


Thanks for your thanks :) Let me clarify the whole thing.

The data is not streamed into the vertex desc - the vertex desc is just that: a vertex description. The vertex data is stored in a VertexBuffer object which contains:

* the VertexDesc object
* a list of VertexStream
* some other stuff not really important in our case

The VertexStream actually points to the vertex data.

When you say that I created my own 3D vertex creation API, that's a bit exaggerated :) The only thing I really did is push the D3D vertex declaration one step further. If you look at my description of the VertexDesc object, it is nothing more than an easy runtime creation of D3DVERTEXELEMENT9.

The Device->createVertexBuffer() method takes a VertexDesc parameter, interprets it (and optionally creates an intermediate representation, as in D3D - since I must use a D3DVERTEXELEMENT9 structure to use the encapsulated vertex streams) and, using the interpreted information, creates the vertex streams - which are then stored in the VertexBuffer object.

This is done so that I do not need to know the vertex layout at compile time (allowing me to support any kind of vertex) while not wasting memory space and bandwidth. I allowed myself to do it this way because vertex buffer creation should not be a time-critical operation.

To summarize, here is the D3D pseudocode:


1) create a D3DVERTEXELEMENT9 array from the VertexDesc and the
   associated IDirect3DVertexDeclaration9
2) associate this with the new VertexBuffer object
   for each stream declaration in the VertexDesc object
     3) compute the length (in bytes) of the stream
     4) create the IDirect3DVertexBuffer9 for that stream
     5) associate this VB with a new VertexStream object
     6) add this VertexStream object to the VertexBuffer's list
   end for
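For the 2-stream example from earlier in the thread, step 1 would build something like this (a sketch only; device is the IDirect3DDevice9 and error handling is omitted):

// D3D9 sketch of step 1 for the earlier example: position/normal in
// stream 0, two 2D texture coordinate sets in stream 1
#include <d3d9.h>

const D3DVERTEXELEMENT9 elements[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 1,  0, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    { 1,  8, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* decl = 0;
device->CreateVertexDeclaration( elements, &decl );  // assumes a valid IDirect3DDevice9* device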



Of course, if there is only one stream, I can try to compute the stream FVF instead of creating an IDirect3DVertexDeclaration9.

The rendering code fetches the IDirect3DVertexDeclaration9 (or the FVF), sets up the device with the fetched data, initializes the streams via SetStreamSource() (+ a bunch of other things, of course) and calls DrawPrimitive() or DrawIndexedPrimitive().

Voilà. That's all :)

So, to finally answer your questions:

a) I do not store the vertex data in the VertexDesc
b) I store a dynamic list of streams
c) I do not use any VertexAttribute class (although this is giving me ideas... I should think about it :)

Anyway, I hope I'm clearer and I hope I helped you :)


