
OpenGL Starting with GL Instancing, advice?


Hey, I'm thinking about implementing Instancing, but before doing so, I need some general advice.


So far, most of the rendering is done by looping through (sorted) lists and issuing a VBO draw call for each object. In some other cases, like drawing a simple sprite, I still use the good old "glVertex3f(...)" (four times) method. And then for particles, I use a VBO containing many particles that can be updated with OpenGL Transform Feedback.


It makes sense that instancing helps when rendering lots of the same object (at different positions, or with slight differences). But in practice, most of my objects only come in small numbers: 3 boxes, 5 decals, 1 barrel, et cetera. Does it still make sense to implement instancing, or does the overhead actually slow things down? To put it differently: when rendering a single quad, is instancing as fast as (or faster than) the old 4x glVertex3f way?



Second, some of my objects are animated and use Transform Feedback to let a vertex shader calculate the skinned vertex positions. Each animated model would have its own secondary VBO to write back the transformed vertices. This takes extra memory of course, but then again the number of animated objects is very small compared to the rest. I'm guessing this technique does not cooperate with instancing in any way, right?



Third, my meshes use a LOD system: when getting further away from the camera, they switch to a simplified mesh. Is it possible to put all LOD models (I usually have 3 to 5 variants) in a single buffer and let instancing somehow pick the right variant depending on the distance? Or would LODs be less necessary anyway?



Any other things I should keep in mind when working with Instancing?


Merci beaucoup


Edited by spek


I'm still developing on OpenGL 2.0 + extensions, but here are my experiences:

1. I draw GUI elements (including the letters for text!) the old way (glVertex.., glColor, etc.), not using any buffers or lists, and the performance is still remarkable (300-500 elements plus a few thousand letters; this will slow down the game, but most often only to half the FPS on my NVIDIA 8800).

2. Particles and decals are in a VBO (dynamic), no CPU update once they have been spawned (they move along a spline which is calculated on the GPU alone).

3. Static ad-hoc batching for grass, stones, small plants.

4. Every other object is rendered as a single VBO call.


In my pipeline, the post-processing effects have the most significant impact (screen-size bottleneck). The way I see it, we (hobby/indie) devs have so much GPU/CPU power at hand that up-front over-optimization is seldom necessary; your game will most likely benefit more from maintainable code.


Best to check first whether you are pixel-shader bound (screen-size test), then check whether state changes (texture switches etc.) are limiting your performance, and only then check whether draw calls are really a limiting factor. For example, most of my models have no fewer than 1000 vertices, and I never use LOD after seeing no performance improvement from it.


But it depends on your hardware target and what you render (200 zombies or 10 enemies? A forest?).


If you want to optimize, I would take a look at the particle system. The geometry shader could be used to improve particle throughput (one vertex expanded to 4), though particles are most likely pixel-bound anyway.
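For illustration, the one-point-to-quad expansion mentioned above could be done by a geometry shader roughly like this, shown here as a C++ string constant. This is only a sketch; the uniform names (uProj, uSize) and the view-space input convention are assumptions, not taken from any actual code in this thread.

```cpp
#include <cassert>
#include <cstring>

// Hypothetical geometry shader: expands each particle point into a
// camera-facing quad (triangle strip of 4 vertices).
static const char* kParticleGeomShader = R"(
#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 uProj;   // projection matrix (assumed uniform name)
uniform float uSize;  // half-size of the particle quad (assumed)

out vec2 vTexCoord;

void main()
{
    vec4 center = gl_in[0].gl_Position; // particle position in view space
    // Emit the four corners of a screen-aligned quad.
    vec2 corners[4] = vec2[](vec2(-1,-1), vec2(1,-1), vec2(-1,1), vec2(1,1));
    for (int i = 0; i < 4; ++i) {
        gl_Position = uProj * (center + vec4(corners[i] * uSize, 0.0, 0.0));
        vTexCoord = corners[i] * 0.5 + 0.5;
        EmitVertex();
    }
    EndPrimitive();
}
)";
```

The vertex shader would only pass the particle center through; the fragment shader samples the particle texture with vTexCoord, so all quad expansion stays on the GPU.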

Edited by Ashaman73

Share this post

Link to post
Share on other sites

Your approaches look pretty much like the techniques I use as well: manually drawn stuff for quad-like HUD elements, VBOs with transform feedback for decals, and one VBO call per object.


The number of objects is most likely not a performance killer in my case indeed. Aside from particles, entities are rarely duplicated more than 10 times. Yet the goal now is not to make things faster, but just to be prepared in case we do actually need a high number of the same objects or decals, which is not unrealistic when thinking about scenes with a lot of rubble or foliage.


So, if instancing comes for "free" even if you don't really make good use of it, it wouldn't hurt to implement it, just in case. I'm not really worried about the code getting stiffer, as most of the objects to draw are grouped and sorted for batching already anyway.




I haven't used VBOs much, but if you're building a sprite system, I would recommend a "sprite batch". You'd have two classes: Sprite and SpriteBatch. SpriteBatch holds an STL vector of vertices and an STL list of Sprite objects (you could also go with pointers and allocate them, which would cut down on copy operations when manipulating the Sprite list). Each group of vertices in that vector corresponds to a Sprite object, which contains your sprite's properties like its transform matrix, position, dimensions, blit parameters, etc. The Sprite class would have a pointer to the "parent" SpriteBatch it belongs to, so that it has access to the vertices it's manipulating. You could also add animation and physics properties to Sprite.


SpriteBatch would also contain the texture you're using, and handle all rendering. The pro is that you draw all of the objects using that texture in one call. The drawback is that you'd software-transform each sprite using matrix math on the CPU. Still, you only transform the vertices when a change occurs instead of every frame. This could be costly for particle systems, but then again, you'd want point sprites for those!
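A minimal sketch of that Sprite/SpriteBatch layout might look like the following. All names and the simple 2D position + UV vertex format are illustrative, not from any real library; each Sprite stores an index into the batch's shared vertex vector and rewrites its four vertices on the CPU only when it changes.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vertex { float x, y; float u, v; }; // assumed 2D + UV layout

class SpriteBatch; // forward declaration

class Sprite {
public:
    Sprite(SpriteBatch* parent, std::size_t firstVertex)
        : parent_(parent), first_(firstVertex) {}
    void setPosition(float x, float y, float w, float h); // rewrites its 4 vertices
private:
    SpriteBatch* parent_;  // "parent" batch that owns the vertex storage
    std::size_t  first_;   // index of this sprite's first vertex in the batch
};

class SpriteBatch {
public:
    Sprite addSprite() {
        std::size_t first = vertices_.size();
        vertices_.resize(first + 4);   // reserve a quad for this sprite
        return Sprite(this, first);
    }
    std::vector<Vertex>& vertices() { return vertices_; }
    // render() would bind the shared texture once and submit vertices_
    // in a single draw call -- omitted in this sketch.
private:
    std::vector<Vertex> vertices_; // all quads, drawn in one call
};

void Sprite::setPosition(float x, float y, float w, float h) {
    Vertex* q = &parent_->vertices()[first_];
    q[0] = {x,     y,     0.f, 0.f};
    q[1] = {x + w, y,     1.f, 0.f};
    q[2] = {x,     y + h, 0.f, 1.f};
    q[3] = {x + w, y + h, 1.f, 1.f};
}
```

Storing an index rather than a raw pointer to the vertices means the batch's vector can grow without invalidating existing sprites.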



EDIT: I forgot to mention models.

I would say it's a good idea to use VBOs with static models. Again, you'd have two classes similar to my sprite example above: Model and ModelObject. Model is in charge of loading the 3D model data from your file and storing a copy of it. Each ModelObject would then be allocated to store a "parent" pointer to the loaded Model it wants to take the form of, and hold world-space properties like a transform matrix, attachment matrix, physics properties, bounding volumes for frustum checks, etc. Model will contain an STL list of ModelObjects, all of which can be positioned, rotated, attached, scaled, made to target other objects, etc. Then, run Model through your Update and Render loops: Model's Update() and Render() methods set the shader that model uses and call each ModelObject's corresponding Update() and Render() methods, which render the model's data using that object's transform data, physics, etc. You can even add an "isRendering" property to ModelObject and check whether it's TRUE each frame before rendering.
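A compact sketch of that Model/ModelObject split, with all class and member names made up for illustration, could look like this. Model owns the shared mesh data; each ModelObject is one placed instance with its own transform and an isRendering flag.

```cpp
#include <cassert>
#include <list>

struct Matrix { float m[16]; }; // placeholder for a real 4x4 matrix type

class Model; // forward declaration

struct ModelObject {
    const Model* parent = nullptr; // shared mesh data it "takes the form of"
    Matrix transform{};            // world-space transform
    bool isRendering = true;       // skip this instance when false
};

class Model {
public:
    ModelObject& spawn() {
        objects_.emplace_back();
        objects_.back().parent = this;
        return objects_.back();
    }
    // Render() would set this model's shader once, then draw each
    // visible instance; here it just counts what would be drawn.
    int render() {
        int drawn = 0;
        for (const ModelObject& obj : objects_)
            if (obj.isRendering) ++drawn; // real code issues the draw here
        return drawn;
    }
private:
    std::list<ModelObject> objects_; // list keeps references stable
};
```

Using a list (rather than a vector) for the instances keeps the references returned by spawn() valid as more objects are added.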


The same goes for dynamic models, but you'll be running vertex-skinning code on the vertices for each ModelObject's Render() call, so the vertices are temporarily transformed for that instance. Additional animation data would be required, such as an Animation class that holds all animation data once (you can store a list of Animation objects in Model); each ModelObject instance would only hold the animation(s) currently applied to it and the current frame, so the vertices can be skinned either on the GPU via a vertex shader, or on the CPU each frame per Render() call.
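The CPU-side skinning step mentioned above boils down to blending each vertex by a weighted sum of its bones' transforms. A minimal sketch, assuming 2D translation-only "bones" to keep it short (a real skeleton would use 4x4 matrices per bone and frame):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

struct SkinWeight { int bone; float weight; }; // weights per vertex sum to 1

// Blend a vertex by its bone weights. boneOffsets holds per-bone
// translations for the current animation frame (an assumption here;
// real code would use the bone matrices produced by the Animation data).
Vec2 skinVertex(const Vec2& v, const std::vector<SkinWeight>& weights,
                const std::vector<Vec2>& boneOffsets) {
    Vec2 out{0.f, 0.f};
    for (const SkinWeight& w : weights) {
        const Vec2& b = boneOffsets[w.bone];
        out.x += w.weight * (v.x + b.x); // bone transform applied, then weighted
        out.y += w.weight * (v.y + b.y);
    }
    return out;
}
```

The same math moved into a vertex shader (with bone matrices in uniforms or a texture) is what the GPU path, or spek's transform-feedback setup, computes instead.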

Edited by Vincent_M


Well, particles, objects and skinned characters are already optimized in terms of using VBOs and letting the GPU do all the work (converting vertices to sprite quads, moving the particles, and storing skinned vertices back into a second VBO). The real question is: would it hurt (performance-wise) to apply instancing (like shown here) even if the majority of entities doesn't come in huge numbers?


The question arose when I was spawning bullet cases, which tend to come in big numbers when firing machine guns. So far, these are rendered like any other object in my engine, which comes down to this:

loop through sorted list // sorted on material
     sortedList[x].referenceObject.material.apply; // applies shaders, textures and parameters
     for each object in sortedList[x].objects
          object.vbo.draw; // one draw call per object

But since these particular bullet cases come in larger numbers, it could be done with instancing, which changes the code "slightly" into something like this


loop through sorted list // sorted on material
     sortedList[x].referenceObject.material.apply; // applies shaders, textures and parameters
     pushMatrices( sortedList[x].objects.allMatrices );
     sortedList[x].vbo.drawMultipleTimes( sortedList[x].objects.count );

It might make the bullet cases draw a bit faster, though most other types of objects may not benefit. Yet, if it doesn't harm either, I prefer to write the drawing approach in a single way, rather than having to split the instanced objects from the non-instanced objects and do things in two different ways...
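The pushMatrices/drawMultipleTimes step above could be sketched like this in C++: pack one world matrix per bullet case into a contiguous array, upload it to an instance VBO, and issue one instanced draw. Names and layout here are assumptions; the actual GL calls (which need a live context and GL 3.3's instanced arrays) are shown only as comments.

```cpp
#include <cassert>
#include <vector>

struct Mat4 { float m[16]; }; // placeholder world matrix, column-major assumed

// Flatten one 4x4 matrix per instance into a contiguous float array,
// ready to be uploaded to an instance buffer.
std::vector<float> packInstanceMatrices(const std::vector<Mat4>& objects) {
    std::vector<float> packed;
    packed.reserve(objects.size() * 16);
    for (const Mat4& mat : objects)
        packed.insert(packed.end(), mat.m, mat.m + 16);
    return packed;
}

// Usage sketch (GL 3.3+, requires a context):
//   glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
//   glBufferData(GL_ARRAY_BUFFER, packed.size() * sizeof(float),
//                packed.data(), GL_STREAM_DRAW);
//   // a mat4 attribute occupies 4 vec4 slots, each advancing per instance:
//   // glVertexAttribDivisor(loc + i, 1) for i = 0..3
//   glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
//                           nullptr, (GLsizei)objects.size());
```

For small instance counts this path costs one buffer upload plus one draw call, which is why it tends to be no slower than the per-object loop even when only a handful of instances exist.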

