
OpenGL: Designing a render queue


I'm trying to design a more or less generic and extensible rendering system. So far, the design looks like this:

1. All geometry is queued up as primitives (points, lines, triangles, quads, ...) in the form of vertex/normal/texture coordinate lists with corresponding index lists and shader state. There are two queues, one for solid objects and one for blended objects.
2. The blended geometry is then (roughly) sorted back to front.
3. The geometry is actually drawn. (This is the only part that should differ between an OpenGL, D3D or software renderer.)

I'm pretty sure I'll run into problems with the combination of steps 1 and 3, since it needs a fairly extensive data format with facilities for a wide range of features that might not be needed on every object. For example, supporting multitexturing would require additional texture coordinates, a blending mode, and so on for every vertex of a multitextured primitive. For clarification, this is a very basic structure for a single vertex (the texture itself is specified in a separate "Primitive" class which maintains a list of these elements):
  struct PrimitiveElement {
    Vector3*  V; // vertex position
    Vector3*  N; // normal
    TexCoord* T; // texture coordinate, has u and v attributes

    PrimitiveElement( void ) : V(0), N(0), T(0) {}
    PrimitiveElement( Vector3* _v, Vector3* _n, TexCoord* _t ) :
      V(_v), N(_n), T(_t)
    {}
  };
Now, to have multitexturing, I'd need multiple texture coordinates for each element plus the already mentioned bunch of information in the Primitive class. One way around that problem would be to not queue the geometry in an "interchange" format at all, but to draw it directly after performing the depth sort on the geometry objects themselves; however, that would spread the rendering code across many different methods instead of keeping it in one place per primitive type. Am I making this way too difficult? Is there some easy way out that would preserve portability while not requiring a large data overhead or a multitude of different data structures? (Edit: commented the V, N, T attributes)
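To make the intended flow a bit more concrete, here's a rough sketch of what steps 1 and 2 could boil down to. Every name in it (RenderQueue, QueuedPrimitive, the depth field) is just a placeholder, nothing is settled:


#include <algorithm>
#include <vector>

class Primitive;   // vertex/normal/texcoord lists, index lists, shader state

struct QueuedPrimitive {
    const Primitive *prim;
    float            depth;   // distance from the camera, used for blended geometry
};

// Farther primitives compare "first" so the sorted blended queue is back-to-front.
static bool FartherFromCamera( const QueuedPrimitive &a, const QueuedPrimitive &b )
{
    return a.depth > b.depth;
}

struct RenderQueue {
    std::vector<QueuedPrimitive> solid;     // step 1: opaque geometry
    std::vector<QueuedPrimitive> blended;   // step 1: translucent geometry

    // step 2: rough back-to-front sort of the blended queue
    void sortBlended() {
        std::sort( blended.begin(), blended.end(), FartherFromCamera );
    }
};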

I'm no expert here, but OGRE has a RenderQueue, which holds a queue of RenderGroups, which in turn hold Renderables sorted by the material used.

The idea is that you could store all the 2D stuff in one RenderGroup, terrain in another and the rest in a third. Within each RenderGroup, Renderables are sorted by material in order to minimise state transitions.
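Very roughly, and from memory rather than OGRE's actual headers, the structure could be sketched like this:


#include <map>
#include <vector>

class Material;     // textures, blend state, shader settings, ...
class Renderable;   // anything that can be drawn

// One coarse bucket (2D overlay, terrain, "everything else", ...). Inside a
// bucket, renderables are grouped by material so state only changes when the
// material actually changes.
struct RenderGroup {
    std::map< Material*, std::vector<Renderable*> > byMaterial;

    void add( Material *mat, Renderable *r ) { byMaterial[mat].push_back( r ); }
};

struct RenderQueue {
    std::vector<RenderGroup> groups;   // e.g. 0 = background, 1 = world, 2 = overlay
};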

Hello Shadowdancer,

I don't think your vertex structure should be described that way. In the past, while trying to build an API-independent renderer, I came up with the idea of a class that describes the vertex format using code. Without going into detail, it looked something like this:



class VertexDesc
{
public:
    void beginStream();   // start describing a new vertex stream
    void endStream();     // finish the current stream
    void position3f();    // stream holds a 3-float position
    void normal3f();      // stream holds a 3-float normal
    void texture1f();     // stream holds a 1-, 2- or 3-float
    void texture2f();     //   texture coordinate set
    void texture3f();
};




And so on.

Using this vertex descriptor (which maps nicely to the DX8/9 vertex declaration style) was very simple:



// this code describes a 2 stream vertex, with position/normal in
// the first stream and 2 2D texture coord in the second stream

VertexDesc desc;

desc.beginStream();
desc.position3f();
desc.normal3f();
desc.endStream();
desc.beginStream();
desc.texture2f();
desc.texture2f();
desc.endStream();




The VertexBuffer object was created using this vertex description - this was also the case for the VertexIterator, which was used to iterate through the vertex buffer ^^

This made it very easy to create vertices on the fly, depending on the data I read from the model file. When I wanted a fixed vertex descriptor, I usually derived a class from VertexDesc and put the describing code in that class's inline constructor.
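I left out how VertexDesc stores what it is told; one simple possibility (just an illustration, not my actual code) is to record a list of streams, each holding a list of attribute declarations:


#include <vector>

enum VertexAttribute { VA_POSITION, VA_NORMAL, VA_TEXCOORD };

struct AttributeDecl {
    VertexAttribute usage;
    int             floats;   // number of float components (1, 2 or 3)
};

class VertexDesc
{
public:
    void beginStream() { _streams.push_back( std::vector<AttributeDecl>() ); }
    void endStream()   { /* nothing needed in this sketch */ }
    void position3f()  { add( VA_POSITION, 3 ); }
    void normal3f()    { add( VA_NORMAL,   3 ); }
    void texture1f()   { add( VA_TEXCOORD, 1 ); }
    void texture2f()   { add( VA_TEXCOORD, 2 ); }
    void texture3f()   { add( VA_TEXCOORD, 3 ); }

private:
    void add( VertexAttribute usage, int floats ) {
        AttributeDecl d = { usage, floats };
        _streams.back().push_back( d );
    }

    std::vector< std::vector<AttributeDecl> > _streams;
};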

Hope this helps,

Darkor,

I'm sure this will come in handy at some point, but the problem right now is that I need a compact data format to move some attributes and data around in a way that's manageable in at least OpenGL and Direct3D. The idea of that hierarchy is to minimize the number of state changes, but right now I'm looking for a way to control state changes.

Emmanuel,

sorry if I seem stubborn, but your approach doesn't seem all that different to me; it looks like a syntactic difference. In your code, the rendering (pushing vertices to the hardware) would be written into the VertexDesc class, while my approach would leave it to the renderer to do something with the values. The problem of a ton of specific attributes per vertex wouldn't change much.




Another thought that just came up is hierarchical transformations for objects that aren't stored as a whole. Would it be a good idea to store a transformation (say, a quaternion for rotation plus a position) for each "component" of a renderable object and have the primitives refer back to it? Some kind of linked transformation list that just calls, for example, glMultMatrix( ComponentTransform ) for each parent up the tree? So if I have a space scene in which every ship stores its position and rotation as such a transformation and I want to draw a turret on one of those ships, the turret's primitive would point to the ship's transformation, which would be applied before the primitive's own. And since this sounds confusing to me, here's a short form:


draw( object ) {
    Stack transformations;

    transformations.push( object.transform );

    // put all parents' transformations on the stack until we
    // reach the root object
    parent = object.parent;
    while( parent != NULL ) {
        transformations.push( parent.transform );
        parent = parent.parent;
    }

    // multiply the transformations from the stack (root first)
    // onto the modelview matrix
    while( transform = transformations.pop() )
        MultiplyModelview( transform );
}

Shadowdancer,

Regarding my last post, I think you misunderstood me - not surprising, since I didn't explain much :)

Quote:

Is there some easy way out that would preserve portability while not requiring a large data overhead or a multitude of different data structures?


I was answering that question. If you look at my structure, it doesn't say who does the rendering. Actually, the rendering is done in the renderer->renderOperation() method (the Renderer class is abstract and may exist for DX, OGL or whatever you want). This method fetches the VertexBuffer and, using the VertexDesc, knows how to set up the API state in order to correctly draw the data. So it's still the same approach as yours (the drawing part is centralized in step 3 and is API specific).

The point is that with this non-static data structure, you have a way to describe any kind of vertex data you want without having to know in advance how it is organized.

Consider your case. You define the PrimitiveElement class to contain a position, a normal and a texture coordinate. If you want to support shadow maps, you may need to add a new texture coordinate set to your vertex description, but most of your meshes don't need it. So what do you do? Have tons of rendering methods, one for each kind of vertex, adding code complexity (solution 1)? Or put everything into one vertex class and use only the data you actually need, adding a big data overhead whenever you don't use all the vertex attributes (solution 2)?

It seems to me that this was your problem. So I described a potential solution, which is to use a vertex descriptor: with it, the renderer knows how to interpret the vertex streams (you'd need that in solution 2 anyway) without requiring all possible data to be present in the stream (which avoids the data overhead of solution 2).

Of course, the major concern may be ease of writing. Using this data representation together with a VERTEX structure that defines the vertex attributes in the classical way is a very bad idea, because you'll run into problems if you don't keep the VERTEX structure in sync with the vertex description. That's why I added iterators to the whole thing. An iterator points to data in the vertex stream and is created from the vertex description. Basically, it looks like this:


template <typename TYPE> class VertexIterator
{
private:
    byte *_data;   // current position inside the vertex stream
    int   _vsize;  // size of one whole vertex (the stride)

public:
    VertexIterator(byte *data, int vsize) :
        _data(data), _vsize(vsize)
    {
    }
    inline const TYPE& operator*() const { return *(const TYPE *)_data; }
    inline TYPE& operator*()             { return *(TYPE *)_data; }
    inline VertexIterator& operator++()  { _data += _vsize; return *this; }
    inline void operator++(int)          { _data += _vsize; }
    // and so on... the classical iterator operators
};



The vertex iterator itself is created on demand by the VertexBuffer:


VertexIterator<Vector3> nit;
nit = vertexBuffer.getIterator<Vector3>(VA_NORMAL, 0 /*stream*/, 0 /*tex coord set, unused in this case*/);



If there is no normal attribute in the vertex, you can either throw an exception in the getIterator<> function or provide an isValid() method on the VertexIterator<> class.
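To give an idea, getIterator<>() could be implemented along these lines. Note that attributeOffset(), streamStride(), _desc and _streams[...].data are made-up names for this sketch, not part of the real code:


#include <stdexcept>

// Look up where the attribute lives inside one vertex of the requested stream,
// then hand the iterator that address plus the stream's stride.
template <typename TYPE>
VertexIterator<TYPE> VertexBuffer::getIterator( VertexAttribute usage,
                                                int stream, int texCoordSet )
{
    // byte offset of the attribute within one vertex, or -1 if it is absent
    int offset = _desc.attributeOffset( usage, stream, texCoordSet );
    if( offset < 0 )
        throw std::runtime_error( "vertex attribute not present in this buffer" );

    byte *base   = _streams[stream].data;          // raw bytes of that VertexStream
    int   stride = _desc.streamStride( stream );   // size of one whole vertex
    return VertexIterator<TYPE>( base + offset, stride );
}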

I know this may sound a bit cryptic, but it is rather simple, and very powerful.

And I hope I'm not completely off topic :)

What I've been doing, and it seems to work well, is this:

The RenderQueue has a list of render groups. A render group is a high-level, coarse grouping of primitives - think terrain, models, alpha-blended geometry and UI. Each render group has a shader/fragment list: an object that holds a reference to the shader and a list of fragments to be rendered with that shader. On insertion into the render queue, a fragment automatically gets put into the correct group and then the correct shader list, so no sorting is required.
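In code it boils down to something like this (simplified; all the names are placeholders, not my real classes):


#include <map>
#include <vector>

class Shader;
class Fragment;   // one chunk of geometry to be drawn with a given shader

enum GroupId { GROUP_TERRAIN, GROUP_MODELS, GROUP_ALPHA, GROUP_UI, GROUP_COUNT };

// All fragments that share one shader.
struct ShaderList {
    std::vector<Fragment*> fragments;
};

struct RenderGroup {
    std::map<Shader*, ShaderList> shaderLists;   // one list per shader in this group
};

struct RenderQueue {
    RenderGroup groups[GROUP_COUNT];

    // Insertion drops the fragment straight into the right group and shader
    // list, so no per-frame sorting pass is needed afterwards.
    void insert( GroupId group, Shader *shader, Fragment *frag ) {
        groups[group].shaderLists[shader].fragments.push_back( frag );
    }
};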

Quote:
Original post by mitchw
What I've been doing, and it seems to work well, is this:

The RenderQueue has a list of render groups. A render group is a high-level, coarse grouping of primitives - think terrain, models, alpha-blended geometry and UI. Each render group has a shader/fragment list: an object that holds a reference to the shader and a list of fragments to be rendered with that shader. On insertion into the render queue, a fragment automatically gets put into the correct group and then the correct shader list, so no sorting is required.

If it automatically gets put into the correct group and shader list, does that mean you create all the groups and shader lists when the program starts, and that you have enough lists for all the possible materials/shaders?

---------------

BTW, how do you guys sort your render queues? What sorting algorithm (radix?) and on what parameters (textures, shaders?)? Thanks.
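(For context, the kind of scheme I've seen described elsewhere - not necessarily what anyone in this thread uses - packs the costly states into a single key and sorts on that, something like the sketch below with made-up field widths. I'd still like to hear what you actually do.)


#include <algorithm>
#include <vector>

// Shader in the top bits (most expensive to change), then texture, then
// quantized depth as a tie-breaker. Field widths are arbitrary examples.
typedef unsigned long long SortKey;

inline SortKey makeSortKey( unsigned shaderId, unsigned textureId, unsigned depthBits )
{
    return ( (SortKey)shaderId              << 40 ) |
           ( (SortKey)(textureId & 0xFFFFF) << 20 ) |
             (SortKey)(depthBits & 0xFFFFF);
}

struct DrawCall {
    SortKey key;
    // ... whatever else is needed to actually issue the call
};

inline bool byKey( const DrawCall &a, const DrawCall &b ) { return a.key < b.key; }

inline void sortQueue( std::vector<DrawCall> &queue )
{
    // std::sort is plenty for a few thousand calls; a radix sort on the key
    // also works and stays linear if the queue ever gets big.
    std::sort( queue.begin(), queue.end(), byKey );
}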

Emmanuel, thanks a lot for your effort.

I think I'm starting to get what you want to tell [smile]. Basically, you created your own 3D API for vertex creation, which effectively puts vertices into your own idea of a VertexBuffer. How do you store the data streamed into the VertexDesc? Do you keep a dynamic list of streams, each of which can contain a list of attributes, like position, normal or texture coordinates? Are those attributes all objects derived from a common base class, like VertexAttrib?

I sorta used a simplified version of OGRE's VertexData class (yeah I guess I had better give some credit to the OGRE team if so much is based on their infrastructure).

Basically, VertexData stores VertexArrays, which can hold the actual vertex positions, index data, texture coordinates, normal arrays etc. Each VertexArray identifies itself as one of these types, and memory is allocated, destroyed or sent to the renderer based on that type.

Look it up in OGRE docs.
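A stripped-down sketch of the idea (not OGRE's actual classes, just an illustration):


#include <vector>

// Which kind of data a VertexArray holds; the renderer allocates, destroys or
// uploads the buffer based on this tag.
enum VertexArrayType {
    VA_TYPE_POSITION,
    VA_TYPE_NORMAL,
    VA_TYPE_TEXCOORD,
    VA_TYPE_INDEX
};

struct VertexArray {
    VertexArrayType             type;
    std::vector<float>          floats;    // positions / normals / texture coordinates
    std::vector<unsigned short> indices;   // only used when type == VA_TYPE_INDEX
};

struct VertexData {
    std::vector<VertexArray> arrays;   // everything the renderer needs for one mesh
};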

Quote:
Original post by Shadowdancer
Emmanuel, thanks a lot for your effort.

I think I'm starting to get what you want to tell [smile]. Basically, you created your own 3D API for vertex creation, which effectively puts vertices into your own idea of a VertexBuffer. How do you store the data streamed into the VertexDesc? Do you keep a dynamic list of streams, each of which can contain a list of attributes, like position, normal or texture coordinates? Are those attributes all objects derived from a common base class, like VertexAttrib?


Thanks for your thanks :) Let me clarify the whole thing.

The data is not streamed into the VertexDesc - the vertex desc is just that: a vertex description. The vertex data is stored in a VertexBuffer object, which contains:

* the VertexDesc object
* a list of VertexStream
* some other stuff not really important in our case

The VertexStream actually points to the vertex data.

When you say that I created my own 3D vertex creation API, that's a bit exaggerated :) The only thing I really did was push the D3D vertex declaration one step further. If you look at my description of the VertexDesc object, it is nothing more than an easy runtime creation of D3DVERTEXELEMENT9.

The Device->createVertexBuffer() method takes a VertexDesc parameter, interprets it (optionally creating an intermediate representation as in D3D - since I must use a D3DVERTEXELEMENT9 structure to set up the encapsulated vertex streams) and, using that interpreted information, creates the vertex streams - which are then stored in the VertexBuffer object.

This is done so that I do not need to know the vertex layout at compile time (allowing me to support any kind of vertex) while not wasting memory space and bandwidth. I allowed myself to do it this way because vertex buffer creation should not be a time-critical operation.

To summarize, let's look at the D3D pseudo code:


1) create a D3DVERTEXELEMENT9 array from the VertexDesc and the associated
   IDirect3DVertexDeclaration9
2) associate this with the new VertexBuffer object
for each stream declaration in the VertexDesc object:
   3) compute the length (in bytes) of the stream
   4) create the IDirect3DVertexBuffer9 for that stream
   5) associate this VB with a new VertexStream object
   6) add this VertexStream object to the VertexBuffer's list
end for
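To make steps 1), 3) and 4) a bit more concrete, here is a heavily simplified illustration of the D3D9 calls involved; the walk over the VertexDesc is left out and a single position-only element on stream 0 is hard-coded:


#include <d3d9.h>

// Step 1: build the element array and the vertex declaration.
IDirect3DVertexDeclaration9 *createDeclaration( IDirect3DDevice9 *device )
{
    D3DVERTEXELEMENT9 elements[] = {
        { 0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
        D3DDECL_END()
    };
    IDirect3DVertexDeclaration9 *decl = 0;
    device->CreateVertexDeclaration( elements, &decl );
    return decl;
}

// Steps 3 and 4: length of the stream in bytes, then the actual buffer.
IDirect3DVertexBuffer9 *createStream( IDirect3DDevice9 *device,
                                      unsigned vertexCount, unsigned strideInBytes )
{
    IDirect3DVertexBuffer9 *vb = 0;
    device->CreateVertexBuffer( vertexCount * strideInBytes,
                                D3DUSAGE_WRITEONLY,
                                0,                   // no FVF, the declaration is used
                                D3DPOOL_DEFAULT,
                                &vb, 0 );
    return vb;
}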



Of course, if there is only one stream I can try to compute the stream FVF instead of creating an IDirect3DVertexDeclaration9.

The rendering code fetches the IDirect3DVertexDeclaration9 (or the FVF), sets up the device with the fetched data, initializes the streams via SetStreamSource() (plus a bunch of other things, of course) and calls DrawPrimitive() or DrawIndexedPrimitive().

Voilà. That's all :)

So, to finally answer your questions:

a) I do not store the vertex data in the VertexDesc
b) I store a dynamic list of streams
c) I do not use any VertexAttribute class (although this is giving me ideas... I should think about it :)

Anyway, I hope I'm clearer and I hope I helped you :)
