OpenGL Improve performance: Render 100000+ objects


Hello,

We prepared a little test project (VS 2010, C++, OpenGL, freeglut, GLEW) to share our solution to the following main requirements:


- Rendering 100,000+ objects at the same time, each one independently accessible/selectable so that its properties can be changed.

- Each object is of a certain type that defines its general shape and properties, and this type can also change during execution.

- Use of VBOs, shaders, and OpenGL 3.3.

- Should work on low-end graphics cards.



This is a very simplified version of our actual project. We are aware of some general methods to improve overall performance, such as rendering only the visible objects and not those which are temporarily outside of the view frustum.

For this example we took those techniques and other features out to make it smaller and easier to understand.

But please don't hold back with anything that comes to mind and has worked for you; we might well have missed something obvious.

Any constructive criticism, feedback, information, or tips to improve the performance are welcome.

On our computers (Intel i7-2600 CPU @ 3.40 GHz, GeForce GT 220) we render 100,000 objects at about 6-8 frames/sec, and 1,000,000 objects at about 3 frames/sec.

It would be nice to improve the performance to get close to 20-25 frames/sec, although it might simply not be possible.


General Info:

Use your left mouse button to rotate the camera and A, D, W, S to move it to the left, right, up or down.

[color=#0000ff]EDIT: What we are looking for is a way to improve the performance. Is our approach a proper way to render 100,000 objects? We tested a lot of different ways, and the reason why we chose this solution is that it gave us the best overall performance. But as I said, maybe we missed the obvious and did not use the "standard" OpenGL way of solving this problem. We couldn't really find a lot of information about a project where the requirement was to render this amount of objects.[/color]

* The [font=courier new,courier,monospace]Cube[/font] and [font=courier new,courier,monospace]Pyramid[/font] classes seem to be exact duplicates of each other? These classes seem like they shouldn't exist, and you should just use [font=courier new,courier,monospace]Shape[/font] instead for both? I mean, if you've got artists making all sorts of shapes for your game, you don't want a programmer to have to make classes for [font=courier new,courier,monospace]Fish[/font], [font=courier new,courier,monospace]Rock[/font], [font=courier new,courier,monospace]BiggerRock[/font], [font=courier new,courier,monospace]FencePost[/font], [font=courier new,courier,monospace]GreenFencePost[/font], [font=courier new,courier,monospace]Tree01[/font], [font=courier new,courier,monospace]Tree02[/font], etc... You should be able to add new "shapes" to a game without writing new code.

* The use of [font=courier new,courier,monospace]virtual[/font] for drawing pyramids/cubes is an unnecessary idea - especially when every odd/even shape alternates between a pyramid and a cube, as this causes your render loop to alternate between calling two different rendering functions ([i]doubling your icache requirements[/i]).

* There's a lot of room to reduce the number of [font=courier new,courier,monospace]gl[/font] calls during rendering -- e.g. if several cubes were rendered after one another, then only the first would have to call [font=courier new,courier,monospace]glBindVertexArray[/font], every following cube could skip that call.

* There's no sorting of the data going on. By sorting your ([i]visible[/i]) objects each frame before submitting their [font=courier new,courier,monospace]gl[/font] calls, you can greatly reduce the number of [font=courier new,courier,monospace]gl[/font] calls that need to be made (as above).
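
A minimal sketch of what combining those last two points could look like, assuming each shape type owns one VAO with an element buffer attached; [font=courier new,courier,monospace]DrawItem[/font], [font=courier new,courier,monospace]modelLoc[/font] and the other identifiers are made up for illustration:[code]// Hypothetical sketch: sort the visible shapes by VAO so identical shapes are
// drawn back-to-back, then only rebind the VAO when it actually changes.
#include <GL/glew.h>
#include <algorithm>
#include <vector>

struct DrawItem
{
    GLuint vao;            // vertex array object of this shape's type
    GLsizei indexCount;    // number of indices to draw
    const GLfloat* model;  // column-major 4x4 model matrix
};

void DrawSorted(std::vector<DrawItem>& visible, GLint modelLoc)
{
    std::sort(visible.begin(), visible.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.vao < b.vao; });

    GLuint boundVao = 0;
    for (std::size_t i = 0; i < visible.size(); ++i)
    {
        if (visible[i].vao != boundVao)   // skip redundant binds
        {
            glBindVertexArray(visible[i].vao);
            boundVao = visible[i].vao;
        }
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, visible[i].model);
        glDrawElements(GL_TRIANGLES, visible[i].indexCount, GL_UNSIGNED_INT, 0);
    }
    glBindVertexArray(0);
}[/code]The same idea extends to shaders and textures: build the sort key from whichever state is most expensive to change.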

Regarding C++ style - your classes don't follow the [url="http://en.wikipedia.org/wiki/Rule_of_three_(C%2B%2B_programming)"]rule of three[/url] nor implement the [url="http://dev-faqs.blogspot.com.au/2010/07/c-idioms-non-copyable.html"]non-copyable idiom[/url], which makes them dangerous:[code]{
ShapeType x(1), y(2);
y = x;
}//Heap corruption: both x and y delete x's resources in their destructor. Also: y's resources are leaked.[/code]
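
For reference, a sketch of the non-copyable idiom in pre-C++11 style (the project targets VS 2010); the buffer layout here is an assumption, loosely mirroring the [font=courier new,courier,monospace]stBufferIds[/font] array from the posted code:[code]// Sketch: a resource-owning class made non-copyable, so the double delete
// in the snippet above can no longer compile. Buffer layout is assumed:
// stBufferIds[0] = VAO, stBufferIds[1..2] = vertex/index buffers.
#include <GL/glew.h>

class ShapeType
{
public:
    explicit ShapeType(int /*type*/)
    {
        glGenVertexArrays(1, &stBufferIds[0]);
        glGenBuffers(2, &stBufferIds[1]);
    }
    ~ShapeType()
    {
        glDeleteBuffers(2, &stBufferIds[1]);
        glDeleteVertexArrays(1, &stBufferIds[0]);
    }

    GLuint stBufferIds[3];

private:
    // Declared but never defined: any attempt to copy or assign fails at
    // compile/link time instead of corrupting the heap at run time.
    ShapeType(const ShapeType&);
    ShapeType& operator=(const ShapeType&);
};[/code]Alternatively, implement the copy constructor and assignment operator to perform a deep copy (the rule of three), or hold the GL handles in a reference-counted wrapper.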

* The [font=courier new,courier,monospace]Cube[/font] and [font=courier new,courier,monospace]Pyramid[/font] classes seem to be exact duplicates of each other? These classes seem like they shouldn't exist, and you should just use [font=courier new,courier,monospace]Shape[/font] instead for both? I mean, if you've got artists making all sorts of shapes for your game, you don't want a programmer to have to make classes for [font=courier new,courier,monospace]Fish[/font], [font=courier new,courier,monospace]Rock[/font], [font=courier new,courier,monospace]BiggerRock[/font], [font=courier new,courier,monospace]FencePost[/font], [font=courier new,courier,monospace]GreenFencePost[/font], [font=courier new,courier,monospace]Tree01[/font], [font=courier new,courier,monospace]Tree02[/font], etc... You should be able to add new "shapes" to a game without writing new code.

[color=#0000ff]In this example you are right, but for us it is essential that different objects have different render methods. [/color]
[color=#0000ff]Maybe we should have explained it better in the post description.[/color]

* The use of [font=courier new,courier,monospace]virtual[/font] for drawing pyramids/cubes is an unnecessary idea - especially when every odd/even shape alternates between a pyramid and a cube, as this causes your render loop to alternate between calling two different rendering functions ([i]doubling your icache requirements[/i]).

[color=#0000ff]The use of virtual is also necessary to be able to have one list of shapes, and we need polymorphism for more complex object structures.[/color]
[color=#0000ff]The instantiation of shapes in the odd/even way was only done for the purpose of this test project. We will have a list of unsorted objects.[/color]

* There's a lot of room to reduce the number of [font=courier new,courier,monospace]gl[/font] calls during rendering -- e.g. if several cubes were rendered after one another, then only the first would have to call [font=courier new,courier,monospace]glBindVertexArray[/font], every following cube could skip that call.

[color=#0000ff]This is a good idea, but in the tests we have made we didn't see any relevant improvements. For instance, if we create only Cube objects, delete the glBindVertexArray calls in the render method of the cube and put the glBindVertexArray call in the DrawScene method, like this:[/color]

[code]void DrawScene(void)
{
    glUseProgram(ShaderIds[0]);
    glBindVertexArray(CubeShapeType->stBufferIds[0]);   // bound once for all cubes

    for (int id = 0; id < arrShapesSize; id++)
    {
        arrShapes[id]->Render(globalShapeRenderingCounter, ModelMatrixUniformLocation,
                              ColorUniformLocation, ChangeColorUniformLocation);
    }

    glUseProgram(0);

    globalShapeRenderingCounter++;
    if (globalShapeRenderingCounter > arrShapesSize) { globalShapeRenderingCounter = 0; }

    glBindVertexArray(0);
}[/code]

[color=#0000ff]we couldn't see any essential improvements.[/color]

* There's no sorting of the data going on. By sorting your ([i]visible[/i]) objects each frame before submitting their [font=courier new,courier,monospace]gl[/font] calls, you can greatly reduce the number of [font=courier new,courier,monospace]gl[/font] calls that need to be made (as above).

[color=#0000ff]We do check whether objects are visible or not (frustum culling).[/color]

Regarding C++ style - your classes don't follow the [url="http://en.wikipedia.org/wiki/Rule_of_three_%28C%2B%2B_programming%29"]rule of three[/url] nor implement the [url="http://dev-faqs.blogspot.com.au/2010/07/c-idioms-non-copyable.html"]non-copyable idiom[/url], which makes them dangerous:[code]{
    ShapeType x(1), y(2);
    y = x;
}//Heap corruption: both x and y delete x's resources in their destructor. Also: y's resources are leaked.[/code]

[color=#0000ff]Thanks for this, we will implement it.[/color]

I agree with Hogman about your class structure, but for slightly different reasons.

I've changed my thinking on class structures over the years, and on a relatively large project I now find the base-class/derived-class-with-render-method approach to be unwieldy. It doesn't scale well. The best method is to define a class called Shape, and for that class to encapsulate the data needed to draw any shape, i.e. a set of vertices and so on. Then you can separate your data from the rendering of that data, perhaps using a visitor pattern to fetch and render all shapes from a collection of shapes. No virtual functions are required, and you get more flexibility in the design. With base class/derived class you tend to find that there are often things that two derived classes need that a third doesn't, which get stuffed into the base class anyway. It just grows into a bit of a mess, with the lines of responsibility unclear, once you start adding lots of derived classes.
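
A rough sketch of that separation, with all names purely illustrative:[code]// Illustrative sketch: Shape is plain data; drawing lives outside the class,
// so no virtual Render() per shape kind is needed.
#include <GL/glew.h>
#include <vector>

struct Shape                       // data only
{
    GLuint vao;                    // geometry buffers describing this shape
    GLsizei indexCount;
    GLfloat model[16];             // column-major model matrix
};

class ShapeRenderer                // "visits" a collection of shapes and draws them
{
public:
    void Draw(const std::vector<Shape>& shapes, GLint modelLoc) const
    {
        for (std::size_t i = 0; i < shapes.size(); ++i)
        {
            glBindVertexArray(shapes[i].vao);
            glUniformMatrix4fv(modelLoc, 1, GL_FALSE, shapes[i].model);
            glDrawElements(GL_TRIANGLES, shapes[i].indexCount, GL_UNSIGNED_INT, 0);
        }
    }
};[/code]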

With respect to drawing 100,000 objects, you really need to be [url="http://en.wikipedia.org/wiki/Geometry_instancing"]instancing[/url].
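
A minimal sketch of what hardware instancing could look like under OpenGL 3.3, passing each instance's model matrix as a per-instance vertex attribute; every identifier here is illustrative:[code]// Sketch: draw `instanceCount` copies of one mesh with a single call.
// A mat4 attribute occupies four consecutive attribute slots (3..6 here),
// and glVertexAttribDivisor makes them advance once per instance.
#include <GL/glew.h>

void DrawCubesInstanced(GLuint cubeVao, GLuint instanceVbo,
                        const GLfloat* modelMatrices,   // instanceCount * 16 floats
                        GLsizei cubeIndexCount, GLsizei instanceCount)
{
    glBindVertexArray(cubeVao);

    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER, instanceCount * 16 * sizeof(GLfloat),
                 modelMatrices, GL_DYNAMIC_DRAW);

    for (int i = 0; i < 4; ++i)
    {
        glEnableVertexAttribArray(3 + i);
        glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE, 16 * sizeof(GLfloat),
                              (const void*)(sizeof(GLfloat) * 4 * i));
        glVertexAttribDivisor(3 + i, 1);   // advance per instance, not per vertex
    }

    // One call replaces instanceCount separate glDrawElements calls.
    glDrawElementsInstanced(GL_TRIANGLES, cubeIndexCount, GL_UNSIGNED_INT,
                            0, instanceCount);
    glBindVertexArray(0);
}[/code]The vertex shader would then declare something like [font=courier new,courier,monospace]layout(location = 3) in mat4 instanceModel;[/font] and use it in place of the per-object model matrix uniform.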

This might provide some insight: [url="http://origin-developer.nvidia.com/docs/IO/8230/BatchBatchBatch.pdf?q=docs/IO/8230/BatchBatchBatch.pdf"]http://origin-developer.nvidia.com/docs/IO/8230/BatchBatchBatch.pdf?q=docs/IO/8230/BatchBatchBatch.pdf[/url]

As mentioned above, it's not feasible to be submitting 100,000 different draw-calls per frame. You need to merge draw-calls together in order to reduce the CPU load.
If instancing is not an option, there are many "pseudo instancing" techniques - such as storing the same vertex data several times back-to-back in the same VBO, which lets you draw the same object multiple times with one draw-call.
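As an illustration of the simplest variant of that idea (baking each copy's transform on the CPU; [font=courier new,courier,monospace]cubeVertices[/font], [font=courier new,courier,monospace]offsets[/font] and the other names are made up):[code]// Sketch of "pseudo instancing": several copies of the same mesh are baked
// back-to-back into one VBO so a whole batch goes out in one draw call.
#include <GL/glew.h>
#include <vector>

struct Offset { float x, y, z; };

void BuildBatch(GLuint batchVbo,
                const float* cubeVertices, int cubeVertexCount,  // xyz per vertex
                const std::vector<Offset>& offsets)              // one per copy
{
    std::vector<float> batched;
    batched.reserve(offsets.size() * cubeVertexCount * 3);

    for (std::size_t c = 0; c < offsets.size(); ++c)
        for (int v = 0; v < cubeVertexCount; ++v)
        {
            // Bake this copy's translation into the vertex positions.
            batched.push_back(cubeVertices[v * 3 + 0] + offsets[c].x);
            batched.push_back(cubeVertices[v * 3 + 1] + offsets[c].y);
            batched.push_back(cubeVertices[v * 3 + 2] + offsets[c].z);
        }

    glBindBuffer(GL_ARRAY_BUFFER, batchVbo);
    glBufferData(GL_ARRAY_BUFFER, batched.size() * sizeof(float),
                 &batched[0], GL_STATIC_DRAW);
}

// Later, the whole batch is one call instead of offsets.size() separate calls:
// glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(offsets.size() * cubeVertexCount));[/code]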

[quote name='TeamPerformance' timestamp='1331565840' post='4921361']In this example you are right, but for us it is essential that different objects have different render methods.
The use of virtual is also necessary to be able to have 1 list of shapes and we need polymorphism for more complex object structures.[/quote]Your low-level rendering classes, which make [font=courier new,courier,monospace]gl[/font] calls, do not need to have polymorphic render functions. At this level, you should be working with simple objects which can be composed into complex objects.
At a higher level, you can have polymorphic classes which submit different combinations of these simple compositions.
[quote name='TeamPerformance' timestamp='1331565840' post='4921361']This is a good idea, but in the tests we have made we didnt see any relevant improvements.[/quote]Have you confirmed whether you are GPU-bound or CPU-bound? e.g. if your CPU loop takes 30ms, but is submitting 60ms worth of work to the GPU, then no amount of CPU-side optimisation is going to improve performance.
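One rough way to check, as a sketch (GL_TIME_ELAPSED queries are core in OpenGL 3.3; the query object would be created once at startup, and DrawScene stands for the existing submission loop):[code]// Sketch: compare CPU submission time against GPU execution time for a frame.
#include <GL/glew.h>
#include <cstdio>
#include <ctime>

void DrawScene(void);                         // the existing draw-call submission

void MeasureFrame(GLuint timerQuery)          // from glGenQueries(1, &timerQuery)
{
    std::clock_t cpuStart = std::clock();
    glBeginQuery(GL_TIME_ELAPSED, timerQuery);

    DrawScene();

    glEndQuery(GL_TIME_ELAPSED);
    std::clock_t cpuEnd = std::clock();

    GLuint64 gpuNs = 0;                       // blocks until the GPU has finished
    glGetQueryObjectui64v(timerQuery, GL_QUERY_RESULT, &gpuNs);

    std::printf("CPU submit: %.2f ms   GPU execute: %.2f ms\n",
                1000.0 * (cpuEnd - cpuStart) / CLOCKS_PER_SEC,
                gpuNs / 1.0e6);
}[/code]If the GPU number dominates, CPU-side batching alone won't raise the frame rate.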
[quote name='TeamPerformance' timestamp='1331565840' post='4921361']
[i]* There's no sorting of the data going on. By sorting your (visible) objects each frame before submitting their [font=courier new,courier,monospace]gl[/font] calls, you can greatly reduce the number of [font=courier new,courier,monospace]gl[/font] calls that need to be made (as above).[/i]
we do check if objects are visible or not (Frustum method).[/quote]Visibility testing is not sorting. After you've culled your scene and determined the visible list of objects, you can re-order the list of objects to be drawn to achieve the most optimal rendering order.
e.g. if you've got expensive pixel shaders, then drawing objects from the closest to the furthest will improve your GPU performance (due to hi-z / early-z pixel rejection).
Or, if you've got many objects that share the same state (e.g. same textures, same shaders, etc) then drawing them at the same time will reduce the number of [font=courier new,courier,monospace]gl[/font] calls, which will improve CPU performance.

Also, have you run your program through any kind of profiler to see where the hot spots are?
