
OpenGL: Starting with GL Instancing, advice?


spek    1240

Hey, I'm thinking about implementing Instancing, but before doing so, I need some general advice.

 

So far, most of the rendering is done by looping through (sorted) lists and issuing a VBO draw call for each object. In some other cases, like drawing a simple sprite, I still use the good old "glVertex3f(...) <4 times>" method. And then for particles, I use a VBO containing many particles that can be updated with OpenGL Transform Feedback.

 

It makes sense that using Instancing helps when rendering lots of the same object (but at a different position, or with slight differences). But in practice, most of my objects only come in small numbers: 3 boxes, 5 decals, 1 barrel, et cetera. Does it still make sense to implement instancing, or does the overhead actually slow things down? Or to put it differently: when rendering a single quad, is instancing equal to or faster than doing it the old 4x glVertex3f way?

 

 

Second, some of my objects are animated, and use Transform Feedback to let a vertex shader calculate the skinned vertex positions. Each animated model would have its own secondary VBO to write the transformed vertices back into. This takes extra memory of course, but then again the number of animated objects is very small compared to the rest. I'm guessing this technique does not cooperate with instancing in any way, right?
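For reference, the capture step looks roughly like this (GL 3.0-style transform feedback; skinProgram, poseVBO, skinnedVBO and the "outPosition" varying are simplified placeholder names, not my actual code):

// tell GL which vertex shader output to capture, before linking
const char* varyings[] = { "outPosition" };
glTransformFeedbackVaryings(skinProgram, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(skinProgram);

// per animated model: run the skinning shader, capture into the secondary VBO
glUseProgram(skinProgram);
glEnable(GL_RASTERIZER_DISCARD);                               // we only want the vertex output
glBindBuffer(GL_ARRAY_BUFFER, poseVBO);                        // bind pose + bone weights as input
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, skinnedVBO); // the per-model secondary VBO
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, vertexCount);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);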

 

 

Third, my meshes use a LOD system: when getting further away from the camera, they switch to a simplified mesh. Is it possible to put all LOD models (usually I have 3 to 5 variants) in a single buffer and let the instancing somehow pick the right variant depending on the distance? Or would LODs be less needed anyway?

 

 

Any other things I should keep in mind when working with Instancing?

 

Merci beaucoup

Rick

Ashaman73    13715

I'm still developing on OGL 2.0 + extensions, but here are my experiences:

1. I draw GUI elements (including letters for text!) the old way (glVertex, glColor, etc., see the small example after this list), not using any buffers or lists, and the performance is still remarkable (300-500 elements plus a few thousand letters will slow the game down, but most often only halve the FPS on my NVIDIA 8800).

2. Particles and decals are in a (dynamic) VBO, with no CPU update once they have been spawned (they move along a spline which is calculated entirely on the GPU).

3. Static ad-hoc batching for grass, stones, small plants.

4. Every other object is rendered as a single VBO call.
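To be concrete, by "the old way" in point 1 I just mean plain immediate mode, e.g. one textured GUI quad (guiTexture, x, y, w and h are placeholder names):

glBindTexture(GL_TEXTURE_2D, guiTexture);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
glEnd();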

 

In my pipeline the most significant impact comes from the post-processing effects (a screen-size bottleneck). The way I see it, we (hobby/indie) devs have so much GPU/CPU power at hand that up-front over-optimization is seldom necessary; your game will most likely benefit more from maintainable code.

 

Best to check first whether you are pixel-shader bound (a screen-size test), then check whether some state changes (texture switches etc.) are limiting your performance, and only then check whether draw calls really are a limiting factor. For what it's worth, most of my models have at least 1000 vertices, and I stopped using LOD after seeing no performance improvement from it.

 

But it depends on your hardware target and what you render (200 zombies or 10 enemies? A forest?).

 

If you want to optimize, I would take a look at the particle system. The geometry shader could be used to improve particle throughput (one vertex expanded to 4), though particles are most likely also pixel-bound.
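Such a geometry shader could look roughly like this (just a sketch; the projection/particleSize uniforms and the texCoord output are made-up names, and it assumes the vertex shader passes through view-space positions):

#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4  projection;     // view space -> clip space
uniform float particleSize;

out vec2 texCoord;

void main()
{
    vec4 center = gl_in[0].gl_Position;   // assumed to be in view space
    vec2 corner[4] = vec2[](vec2(-1.0, -1.0), vec2( 1.0, -1.0),
                            vec2(-1.0,  1.0), vec2( 1.0,  1.0));
    for (int i = 0; i < 4; ++i)
    {
        texCoord    = corner[i] * 0.5 + 0.5;
        gl_Position = projection * (center + vec4(corner[i] * particleSize, 0.0, 0.0));
        EmitVertex();
    }
    EndPrimitive();
}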

spek    1240

Your approaches look pretty much like the techniques I also use: manually drawn stuff for quad-like HUD elements, VBOs with transform feedback for decals, and one VBO call per object.

 

The number of objects is most likely not a performance killer in my case indeed. Aside from particles, entities are rarely duplicated more than 10 times. Yet the goal now is not to make things faster, but just to be prepared in case we actually do need a high number of the same objects or decals. Which is not unrealistic when thinking about scenes with a lot of rubble, or foliage.

 

So, if Instancing comes for "free" even if you don't really make good use of it, it wouldn't hurt to implement it, just in case. I'm not really worried about the code getting more rigid, as most of the objects to draw are grouped and sorted for batching already anyway.

 

Greets

Vincent_M    969

I haven't used VBOs much, but if you're building a sprite system, I would recommend a "sprite batch". You'd have two classes: Sprite and SpriteBatch. SpriteBatch holds an STL vector of vertices and an STL list of Sprite objects (you could go with pointers and allocate them instead, which would cut down on copy operations when manipulating the Sprite list). Each group of vertices in that vector then corresponds to a Sprite object. That Sprite object would contain your sprite's properties like its transform matrix, position, dimensions, blit parameters, etc. The Sprite class would have a pointer to the "parent" SpriteBatch it belongs to, so that it has access to the vertices it's manipulating. You could also add animation and physics properties to Sprite.

 

The sprite batch would also contain the texture you're using, and handle all rendering. The pro is that you draw all of the objects using that texture in one call. The drawback is that you'd software-transform each sprite using matrix math on the CPU. Still, you only transform the vertices when a change occurs instead of every frame. This could be costly for particle systems, but then again, you'd want point sprites for that!
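A rough sketch of what I mean (all names here are made up for the example, not an existing API):

#include <cstddef>
#include <list>
#include <vector>

struct Vertex { float x, y, u, v; };

class SpriteBatch;

class Sprite
{
public:
    Sprite(SpriteBatch* parent, std::size_t first) : batch(parent), firstVertex(first) {}
    void setPosition(float x, float y);   // rewrites this sprite's 4 vertices in the parent batch

private:
    SpriteBatch* batch;        // "parent" batch that owns the vertex storage
    std::size_t  firstVertex;  // index of this sprite's first vertex in the batch
};

class SpriteBatch
{
public:
    Sprite* createSprite()
    {
        std::size_t first = vertices.size();
        vertices.resize(first + 4);                  // 4 corners per sprite
        sprites.push_back(new Sprite(this, first));  // pointers, as suggested, to avoid copies
        return sprites.back();
    }

    void draw()
    {
        // bind 'texture', point OpenGL at 'vertices', issue ONE draw call for every sprite
    }

    std::vector<Vertex> vertices;     // all sprites' vertices, contiguous
    std::list<Sprite*>  sprites;      // the sprites that own slices of 'vertices'
    unsigned int        texture = 0;  // the single texture shared by the whole batch
};

void Sprite::setPosition(float x, float y)
{
    Vertex* v = &batch->vertices[firstVertex];
    v[0].x = x;     v[0].y = y;       // only this sprite's quad is rewritten, so the
    v[1].x = x + 1; v[1].y = y;       // CPU-side transform only happens when the
    v[2].x = x + 1; v[2].y = y + 1;   // sprite actually changes (unit quad for brevity)
    v[3].x = x;     v[3].y = y + 1;
}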

 

 

EDIT: I forgot to mention models.

I would say it's a good idea to use VBOs with static models. Again, you'd have two classes similar to my sprite example above: Model and ModelObject. The Model is in charge of loading the 3D model data from your file and storing a copy of the data. Then, each ModelObject would be allocated to store a "parent" pointer to the loaded Model object it wants to take the form of, and hold world-space properties like a transform matrix, attachment matrix, physics properties, bounding volumes for frustum checks, etc. Model will contain an STL list of ModelObjects, all of which can be positioned, rotated, attached, scaled, made to target other objects, etc. Then, run Model through your Update and Render loops. Model's Update() and Render() methods will set the shader that model uses, and call each ModelObject's corresponding Update() and Render() methods respectively. The ModelObject's Update() and Render() methods will render the model's data using its transform data, physics, etc. You can even add an "isRendering" property to ModelObject and check whether it's TRUE each frame before rendering.
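Again a rough sketch of the idea (invented names, not a real library):

#include <list>
#include <vector>

struct Matrix4 { float m[16]; };

class Model;

class ModelObject
{
public:
    explicit ModelObject(Model* source) : model(source) {}

    void Update() { /* advance animation/physics, frustum check -> isRendering */ }
    void Render() { /* upload 'transform' as a uniform and draw the shared Model's VBO */ }

    Matrix4 transform = {};      // world-space matrix of this instance
    bool    isRendering = true;  // cleared when the frustum check fails

private:
    Model* model;                // "parent" pointer to the shared mesh data
};

class Model
{
public:
    bool Load(const char* /*filename*/) { /* read the file once, fill 'vertices' / a VBO */ return true; }

    ModelObject* createInstance()
    {
        instances.push_back(new ModelObject(this));
        return instances.back();
    }

    void Update() { for (ModelObject* o : instances) o->Update(); }

    void Render()
    {
        // set this model's shader and VBO once, then draw each visible instance
        for (ModelObject* o : instances)
            if (o->isRendering)
                o->Render();
    }

private:
    std::vector<float>      vertices;   // the single copy of the loaded mesh data
    std::list<ModelObject*> instances;  // every placed copy of this model
};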

 

The same can go for dynamic models, but you'll be running vertex-skinning code on the vertices for each ModelObject's Render() call, so the vertices are temporarily transformed for that instance. Additional animation data would be required, such as an Animation class that holds all animation data once (you can store a list of Animation objects in Model), while each ModelObject instance only holds the animation(s) currently applied to it and the current frame, so the vertices can be skinned either on the GPU via a vertex shader, or on the CPU each frame per Render() call.
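The animation-data split could look something like this (reusing Matrix4 from the sketch above; names are again made up):

#include <string>
#include <vector>

struct Keyframe { float time; std::vector<Matrix4> bonePoses; };

struct Animation                 // stored once, in Model's list of animations
{
    std::string           name;
    std::vector<Keyframe> frames;
};

struct AnimationState            // stored per ModelObject instance
{
    const Animation* clip = nullptr;  // which shared animation this instance is playing
    float currentFrame = 0.0f;        // playback position, unique to this instance
};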

spek    1240

Well, particles, objects and skinned characters are already optimized in terms of using VBOs and letting the GPU do all the work (converting vertices to sprite quads, moving the particles, and storing skinned vertices back into a second VBO). The real question is: would it hurt (performance-wise) to apply instancing (like shown here: http://sol.gfxile.net/instancing.html) even if the majority of entities don't come in huge numbers?

 

The question arose when I was spawning bullet cases, which tend to come in big numbers when shooting machine guns. So far, these are rendered like any other object in my engine, which comes down to this:

loop through sorted list // sorted on material
{
     sortedList[x].referenceObject.material.apply; // applies shaders, textures and parameters
     for each object in sortedList[x].objects
     {
          object.applyMatrix;
          sortedList[x].referenceObject.vbo.draw;
     }
}

But since these particular bullet cases come in larger numbers, it could be done with instancing, which changes the code "slightly" into something like this:

 

loop through sorted list // sorted on material
{
     sortedList[x].referenceObject.material.apply; // applies shaders, textures and parameters
     pushMatrices( sortedList[x].objects.allMatrices );
     sortedList[x].referenceObject.vbo.drawMultipleTimes( sortedList[x].objects.count );
}
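
For what it's worth, I assume the pushMatrices / drawMultipleTimes part would map to GL 3.3 instanced arrays roughly like this (sketch only; matrixVBO, matrices, count and indexCount are made-up names):

// upload one matrix per object into a separate VBO
glBindBuffer(GL_ARRAY_BUFFER, matrixVBO);
glBufferData(GL_ARRAY_BUFFER, count * sizeof(float) * 16, matrices, GL_STREAM_DRAW);

// a mat4 vertex attribute occupies 4 consecutive attribute locations (here 4..7)
for (int i = 0; i < 4; ++i)
{
    glEnableVertexAttribArray(4 + i);
    glVertexAttribPointer(4 + i, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 16,
                          (void*)(sizeof(float) * 4 * i));
    glVertexAttribDivisor(4 + i, 1);   // advance once per instance instead of per vertex
}

// one call draws 'count' bullet cases; the vertex shader reads its matrix from
// the instanced attribute instead of a per-object uniform
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0, count);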

It might make the bullet cases draw a bit faster, though most other types of objects may not benefit. Still, if it doesn't hurt either, I'd prefer to write the drawing approach in a single way, rather than having to split the instanced objects from the non-instanced objects and do things in two different ways...

