OpenGL: Designing a render queue

Posted by Shadowdancer:
I'm trying to design a more or less generic and extendable rendering system. So far, the design looks like this:

1. All geometry is queued up as primitives (points, lines, triangles, quads, ...) in the form of vertex/normal/texture coordinate lists with corresponding index lists and shader state. There are two queues, one for solid objects and one for blended objects.
2. The blended geometry is then (roughly) sorted back to front.
3. The stuff is actually drawn somewhere. This is the only part that should differ between an OpenGL, D3D or software renderer.

So far, I'm pretty sure I'll run into problems with the combination of steps 1 and 3, since that needs a pretty extensive data format with facilities for a wide range of features that might not be needed on every object. For example, the possibility of multitexturing would require more data for texture coordinates, blending mode, ... for every vertex in a multitextured primitive. For clarification, this is a very basic structure for a single vertex (the texture itself is specified in a separate "Primitive" class which maintains a list of these elements):
  struct PrimitiveElement {
    Vector3*  V; // vertex position
    Vector3*  N; // normal
    TexCoord* T; // texture coordinate, has u and v attributes

    PrimitiveElement( void ) : V(0), N(0), T(0) {}
    PrimitiveElement( Vector3* _v, Vector3* _n, TexCoord* _t ) :
      V(_v), N(_n), T(_t)
    {}
  };
Now, to have multitexturing, I'd need multiple texture coordinates for each element plus the already-mentioned Bunch Of Information in the Primitive class. An approach that would circumvent that problem would be to not queue the geometry in an "interchange" format first, but to draw it directly after performing the depth sort on the geometry objects themselves; however, that would spread the rendering code across many more methods instead of just one per primitive type. Am I overcomplicating this? Is there some easy way out that would preserve portability while not requiring a large data overhead or a multitude of different data structures? (Edit: commented the V, N, T attributes)
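To make steps 1 and 2 concrete, here's a minimal sketch of the queue layout I have in mind (the depth member is just a placeholder for however the camera distance gets filled in):

#include <algorithm>
#include <vector>

struct Primitive {
    float depth; // placeholder: distance from the camera, set at queue time
    // ... vertex/normal/texcoord lists, indices, shader state
};

static bool furtherAway( const Primitive* a, const Primitive* b ) {
    return a->depth > b->depth;
}

struct RenderQueue {
    std::vector<Primitive*> solid;   // step 1: opaque geometry, any order
    std::vector<Primitive*> blended; // step 1: blended geometry

    // step 2: rough back-to-front sort; step 3 then walks both lists
    void sortBlended() {
        std::sort( blended.begin(), blended.end(), furtherAway );
    }
};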

I'm no expert here, but in OGRE they have a RenderQueue, which holds a queue of RenderGroups, which in turn hold Renderables, sorted by the material used.

The idea is that you could store all 2D stuff in one RenderGroup, terrain in a different one, and the rest in another. In each RenderGroup, Renderables are sorted by the material used in order to minimise state transitions.
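Roughly like this, as a sketch (made-up names, not OGRE's actual interfaces):

#include <map>
#include <vector>

struct Material;   // textures, blend state, etc.
struct Renderable { Material* material; /* geometry, ... */ };

// Keying on the material keeps Renderables with the same material
// adjacent, so render state only has to change once per material.
struct RenderGroup {
    std::map<Material*, std::vector<Renderable*> > byMaterial;
    void add( Renderable* r ) { byMaterial[r->material].push_back( r ); }
};

struct RenderQueue {
    std::vector<RenderGroup> groups; // e.g. one for 2D, one for terrain, one default
};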

Hello Shadowdancer,

I don't think your vertex structure should be described that way. In the past, while trying to build an API-independent renderer, I came up with the idea of a class which describes the vertex using code. Without the details, it was something like this:



class VertexDesc
{
public:
    void beginStream(); // start describing a new vertex stream
    void endStream();   // finish the current stream
    void position3f();  // 3-float position
    void normal3f();    // 3-float normal
    void texture1f();   // 1D texture coordinate set
    void texture2f();   // 2D texture coordinate set
    void texture3f();   // 3D texture coordinate set
};




And so on.

Using this vertex descriptor (which maps nicely to the DX8/9 vertex description style) was very simple:



// This code describes a two-stream vertex, with position/normal in
// the first stream and two 2D texture coordinate sets in the second

VertexDesc desc;

desc.beginStream();
desc.position3f();
desc.normal3f();
desc.endStream();
desc.beginStream();
desc.texture2f();
desc.texture2f();
desc.endStream();




The VertexBuffer object was created using this vertex description; the same was true of the VertexIterator, which was used to iterate through the vertex buffer ^^

This made it very easy to create vertices on the fly, depending on the data I read from the model file. When I wanted a fixed vertex descriptor, I usually derived a class from VertexDesc and put the description code in the inline constructor of that class.
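For example, a fixed descriptor for lit, textured vertices would be something like:

// A fixed vertex format as a derived descriptor: the description
// code runs once, in the inline constructor.
class LitTexturedVertexDesc : public VertexDesc
{
public:
    LitTexturedVertexDesc()
    {
        beginStream();
        position3f();
        normal3f();
        texture2f();
        endStream();
    }
};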

Hope this helps,

Darkor,

I'm sure this will come in handy at some point, but the problem right now is that I need a compact data format to move some attributes and data around in a way that's manageable in at least OpenGL and Direct3D. The idea of that hierarchy is to minimize the number of state changes, but right now I'm looking for a way to control state changes.

Emmanuel,

sorry if I seem stubborn, but your approach doesn't seem all that different to me; it just looks like a syntactic difference. In your code, the rendering (pushing vertices to the hardware) would be written into the VertexDesc class, while my approach leaves it to the renderer to do something with the values. The problem of a ton of specific per-vertex attributes wouldn't change much.




Another thought that just came up is hierarchical transformations, with objects not stored as a whole. Would it be a good idea to store a quaternion for each "component" of a renderable object and then have the primitives refer back to that? Some kind of linked transformation list that just calls, for example, glMultMatrix( ComponentTransform ) for each parent up the tree? So if I have, say, a space scene in which all ships have a quaternion containing their position and rotation, and I wanted to draw a turret on one of those ships, the turret's primitive would point to the ship's transformation, which would be applied before the primitive's own. And since this sounds confusing even to me, here's a short form:


draw( object ) {
  Stack transformations;

  transformations.push( object.transform );

  // put all parents' transformations on the stack until we are
  // at the root object
  parent = object;
  while( (parent = parent.parent) != NULL )
    transformations.push( parent.transform );

  // pop the transformations off the stack (root first) and
  // multiply them onto the modelview matrix
  while( transform = transformations.pop() )
    MultiplyModelview( transform );
}

Shadowdancer,

Regarding my last post, I think you misunderstood me. Not surprising, since I did not explain anything :)

Quote:

Is there some easy way out that would preserve portability while not requiring a large data overhead or a multitude of different data structures?


I was answering that question. If you look at my structure, it doesn't say who does the rendering. Actually, the rendering is done in the renderer->renderOperation() method (the Renderer class is abstract and may exist for DX, OGL or whatever you want). This method fetches the VertexBuffer and, using the VertexDesc, knows how to set up the API state to correctly draw the data. So it's still the same approach as yours (the drawing part is centralized in step 3 and is API-specific).

The point is that, using this non-static data structure, you have a way to describe any kind of vertex data you want without needing to know at compile time how it is organized.

Consider your case. You defined the PrimitiveElement class to contain position, normal and texture coordinates. If you want to support shadow maps, you may add a new texture coordinate set to your vertex description, but it is not needed for most of your meshes. So what would you do? Have tons of rendering methods, one for each kind of vertex, adding code complexity (solution 1)? Or put everything in one vertex class and use only the data you want, adding a big data overhead whenever you don't use all the vertex attributes (solution 2)?

It seems to me that this was your problem. So I described a potential solution: use a vertex descriptor. With it, the renderer knows how to interpret the vertex streams (you'd have to do that in solution 2 anyway) without needing all the data to be present in every vertex stream (which avoids the data overhead of solution 2 that I spoke about).
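For example, on the OpenGL side the interpretation could look roughly like this (a sketch - the attribCount()/attrib() helpers and the Attrib struct are not shown above, just assume the VertexDesc records something like them):

// Sketch: bind one stream's attributes by walking the descriptor.
void setupStream( const VertexDesc &desc, int stream,
                  const unsigned char *data )
{
    int stride = desc.streamStride( stream ); // size of one vertex in this stream
    for( int i = 0; i < desc.attribCount( stream ); ++i )
    {
        Attrib a = desc.attrib( stream, i ); // { type, byte offset }
        const unsigned char *ptr = data + a.offset;
        switch( a.type )
        {
        case VA_POSITION3F:
            glEnableClientState( GL_VERTEX_ARRAY );
            glVertexPointer( 3, GL_FLOAT, stride, ptr );
            break;
        case VA_NORMAL3F:
            glEnableClientState( GL_NORMAL_ARRAY );
            glNormalPointer( GL_FLOAT, stride, ptr );
            break;
        case VA_TEXCOORD2F:
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glTexCoordPointer( 2, GL_FLOAT, stride, ptr );
            break;
        }
    }
}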

Of course, the major concern may be ease of writing. Using this data representation in conjunction with a VERTEX structure that defines the vertex attributes in the classical way is a very bad idea, because you'll have problems if you don't keep your VERTEX structure in sync with the vertex description. That's why I added iterators to the whole thing. The iterator points to data in the vertex stream and is created from the vertex desc. Basically, it looks like this:


template <typename TYPE> class VertexIterator
{
private:
    byte *_data;
    int _vsize;

public:
    VertexIterator(byte *data, int vsize) :
        _data(data), _vsize(vsize)
    {
    }
    inline TYPE operator*() const { return *(TYPE *)_data; }
    inline TYPE& operator*() { return *(TYPE *)_data; }
    inline void operator++() { _data += _vsize; }
    inline void operator++(int) { _data += _vsize; }
    // and so on... the classical iterator operators
};



The vertex iterator itself is created upon demand by the VertexBuffer :


VertexIterator<Vector3> nit;
nit = vertexBuffer.getIterator<Vector3>(VA_NORMAL, 0 /*stream*/, 0 /*tex coord set, unused in this case*/);



If there is no normal attribute in the vertex, you can either throw an exception in the getIterator<> function or have an isValid() method in the VertexIterator<> class.
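With the isValid() variant, usage would be something like:

VertexIterator<Vector3> nit =
    vertexBuffer.getIterator<Vector3>( VA_NORMAL, 0, 0 );
if( nit.isValid() )
{
    for( int i = 0; i < vertexCount; ++i, ++nit )
        *nit = Vector3( 0.0f, 1.0f, 0.0f ); // e.g. reset all normals
}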

I know this may sound a bit cryptic, but it is rather simple, and very powerful.

And I hope I'm not completely off topic :)

What I've been doing, and it seems to work well, is:

The RenderQueue has a list of render groups. A render group is a high-level, gross grouping of primitives: think terrain, models, alpha and UI. Each render group has a shader/fragment list; this is an object that has a reference to a shader and a list of fragments to be rendered with that shader. On insertion into the render queue, a fragment automatically gets put into the correct group, then the correct shader list, so no sorting is required.
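In sketch form (placeholder names, simplified):

#include <map>
#include <vector>

struct Shader;
struct Fragment { Shader* shader; /* geometry chunk to render */ };

// one fragment list per shader within a group
struct ShaderList {
    Shader* shader;
    std::vector<Fragment*> fragments;
};

struct RenderGroup {
    std::map<Shader*, ShaderList> byShader;
    void insert( Fragment* f ) {
        ShaderList& sl = byShader[f->shader]; // created on first use
        sl.shader = f->shader;
        sl.fragments.push_back( f );
    }
};

// RenderQueue::insert() first picks the group (terrain/model/alpha/UI),
// then the group buckets by shader - no separate sort pass needed.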

Quote:
Original post by mitchw
What I've been doing, and it seems to work well, is:

The RenderQueue has a list of render groups. A render group is a high-level, gross grouping of primitives: think terrain, models, alpha and UI. Each render group has a shader/fragment list; this is an object that has a reference to a shader and a list of fragments to be rendered with that shader. On insertion into the render queue, a fragment automatically gets put into the correct group, then the correct shader list, so no sorting is required.

If it automatically gets put into the correct group and shader list, doesn't that mean you create all the groups and shader lists when the program starts, and that you have enough lists for all the possible materials/shaders?

---------------

BTW, how do you guys sort your render queues? What sorting algorithm (radix?) and on what parameters (textures, shaders?)? Thanks.

Emmanuel, thanks a lot for your effort.

I think I'm starting to get what you're telling me [smile]. Basically, you created your own 3D API for vertex creation, which effectively puts vertices into your own idea of a VertexBuffer. How do you store the data streamed into the VertexDesc? Do you keep a dynamic list of streams, each of which can contain a list of attributes like position, normal or texture coordinates? Are those attributes all objects derived from a common base class, like VertexAttrib?

I sort of used a simplified version of OGRE's VertexData class (yeah, I guess I'd better give some credit to the OGRE team if so much is based on their infrastructure).

Basically, VertexData stores VertexArrays, which can hold the actual vertex positions, index data, texture coordinates, normal arrays etc. Each VertexArray is tagged as one of these types, and memory is allocated, destroyed or sent to the renderer based on that type.
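Very roughly, the idea is something like this (a sketch from memory, not OGRE's actual code):

#include <cstddef>
#include <vector>

// Each array is tagged with what it holds, so the owner knows how to
// allocate/destroy it and which binding to send it to when rendering.
enum VertexArrayType { VAT_POSITION, VAT_NORMAL, VAT_TEXCOORD, VAT_INDEX };

struct VertexArray {
    VertexArrayType type;
    void*           data;  // managed according to the type tag
    std::size_t     count; // number of elements
};

struct VertexData {
    std::vector<VertexArray> arrays;
};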

Look it up in OGRE docs.

Quote:
Original post by Shadowdancer
Emmanuel, thanks a lot for your effort.

I think I'm starting to get what you're telling me [smile]. Basically, you created your own 3D API for vertex creation, which effectively puts vertices into your own idea of a VertexBuffer. How do you store the data streamed into the VertexDesc? Do you keep a dynamic list of streams, each of which can contain a list of attributes like position, normal or texture coordinates? Are those attributes all objects derived from a common base class, like VertexAttrib?


Thanks for your thanks :) Let me clarify the whole thing.

The data is not streamed into the vertex desc - the vertex desc is just that: a vertex description. The vertex data is stored in a VertexBuffer object, which contains:

* the VertexDesc object
* a list of VertexStream objects
* some other stuff not really important in our case

The VertexStream actually points to the vertex data.
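In outline, it is something like this (simplified):

typedef unsigned char byte;

#include <vector>

struct VertexStream {
    byte* data;   // raw vertex data for this stream
    int   stride; // size in bytes of one vertex in this stream
};

struct VertexBuffer {
    VertexDesc desc;                   // the layout description
    std::vector<VertexStream> streams; // one entry per declared stream
    // ... plus the "other stuff" mentioned above
};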

When you say that I created my own 3D vertex creation API, that is a bit exaggerated :) The only thing I really did was push the D3D vertex declaration one step further. If you look at my description of the VertexDesc object, it is nothing more than easy runtime creation of D3DVERTEXELEMENT9 entries.

The Device->createVertexBuffer() method takes a VertexDesc parameter, interprets it (optionally creating an intermediate representation, as in D3D - since I must use a D3DVERTEXELEMENT9 structure to use the encapsulated vertex streams) and, using the interpreted information, creates the vertex streams - which are then stored in the VertexBuffer object.

This is done so that I do not need to know the vertex layout at compile time (allowing me to support any kind of vertex) while not wasting memory space and bandwidth. I allowed myself to do it this way because vertex buffer creation should not be a time-critical operation.

To summarize, here is the D3D pseudo code:


1) create a D3DVERTEXELEMENT9 array from the VertexDesc and the
   associated IDirect3DVertexDeclaration9
2) associate this with the new VertexBuffer object
for each stream declaration in the VertexDesc object:
  3) compute the length (in bytes) of the stream
  4) create the IDirect3DVertexBuffer9 for that stream
  5) associate this VB with a new VertexStream object
  6) add this VertexStream object to the VertexBuffer's list
end for



Of course, if there is only one stream, I can try to compute the stream's FVF instead of creating an IDirect3DVertexDeclaration9.

The rendering code fetches the IDirect3DVertexDeclaration9 (or the FVF), sets up the device with the fetched data, initializes the streams via SetStreamSource() (plus a bunch of other things, of course) and calls DrawPrimitive() or DrawIndexedPrimitive().
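As a sketch (the VertexBuffer accessors here are simplified placeholders; the device calls are the standard D3D9 ones):

void Renderer::renderOperation( RenderOperation& op )
{
    VertexBuffer& vb = *op.vertexBuffer;

    // declaration built from the VertexDesc at creation time
    device->SetVertexDeclaration( vb.declaration() );

    // bind each encapsulated stream
    for( UINT i = 0; i < vb.streamCount(); ++i )
        device->SetStreamSource( i, vb.stream( i ).d3dBuffer(), 0,
                                 vb.stream( i ).stride() );

    device->SetIndices( op.indexBuffer );
    device->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0,
                                  op.vertexCount, 0, op.primitiveCount );
}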

Voilà. That's all :)

So, to finally answer your questions:

a) I do not store the vertex data in the VertexDesc
b) I store a dynamic list of streams
c) I do not use any VertexAttribute class (although this is giving me ideas... I should think about it :)

Anyway, I hope I'm clearer and I hope I helped you :)
