TheDep

OpenGL glDraw*Instanced unsupported? - Intel


Recommended Posts

Hello.

 

I'm testing different OpenGL functionalities that I will eventually use for my game engine.

I read about glDraw*Instanced and thought: "I need that!"

 

On my NVidia machine it works like a charm. I can draw as many objects as my array can hold in a single draw call with close to no performance impact.

However, on Intel HD 2000/3000 with OpenGL 3.3, rendering just 768 triangles takes 50 ms.

 

Why? Am I doing something wrong that Intel doesn't like?

It surely can't be that Intel doesn't support instancing...? I can play other games on this laptop just fine, and they don't take 50 ms per frame.

 

If it really is unsupported, what are my options?

 

Draw function summary:

glBindBuffer(GL_ARRAY_BUFFER, pos);
glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW); // buffer orphaning: discard old storage
glBufferSubData(GL_ARRAY_BUFFER, 0, size, posData);        // upload this frame's data

glUseProgram(program);

// Create matrices
// Upload uniform matrices

// These three lines repeated for vertices, UVs, normals, and position data
glEnableVertexAttribArray(vertID);
glBindBuffer(GL_ARRAY_BUFFER, vertBuf);
glVertexAttribPointer(vertID, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indBuf); // bind indices (element array target, not GL_ARRAY_BUFFER)

// This x4, once per per-instance attribute
glVertexAttribDivisorARB(attribID, 1);

glDrawElementsInstancedARB(
		GL_TRIANGLES,      // mode
		size,              // index count per instance
		GL_UNSIGNED_INT,   // index type
		(void*)0,          // element array buffer offset
		instances          // total instances
		);

// Unbind buffers and disable attribute arrays

Edit: Declaring the streamed array posData as a global seems to have positively affected the scenario above: it went from drawing 10 triangles per frame to 100,000+.

Edit 2: I seem to have jumped the gun. What actually made the difference was setting the display mode to fullscreen.

Edited by TheDep


Using a global variable for the array seems to work. If anyone could explain why, it would be greatly appreciated.

You could try to run this in a debug context. The debug messages from a debug context also include performance warnings. Maybe they will give you a hint.
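A minimal sketch of that suggestion, assuming GLFW and GLEW (the hint and callback APIs are real; the surrounding setup and names like `window` are illustrative, and the callback route requires KHR_debug, which older Intel drivers may not expose):

```cpp
// Sketch: request a debug context and print every driver message,
// including GL_DEBUG_TYPE_PERFORMANCE warnings. Assumes GLFW + GLEW.
#include <cstdio>
#include <GL/glew.h>
#include <GLFW/glfw3.h>

static void APIENTRY onDebugMessage(GLenum source, GLenum type, GLuint id,
                                    GLenum severity, GLsizei length,
                                    const GLchar* message, const void* user)
{
    std::fprintf(stderr, "[GL %s] %s\n",
                 type == GL_DEBUG_TYPE_PERFORMANCE ? "perf" : "debug",
                 message);
}

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GLFW_TRUE); // ask for a debug context
    GLFWwindow* window = glfwCreateWindow(640, 480, "debug", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glewInit();

    if (GLEW_KHR_debug) { // KHR_debug (core in GL 4.3) provides the callback
        glEnable(GL_DEBUG_OUTPUT);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report at the offending call
        glDebugMessageCallback(onDebugMessage, nullptr);
    }
    // ... run the normal draw loop; slow paths show up as performance warnings
    glfwTerminate();
}
```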


The HD 2000/3000 chips are rather old now.  It's important to realise that OpenGL specifies functionality, not performance.  So glDraw*Instanced is specified to behave in a certain way, and give a certain result, but is not specified to actually give a performance increase.  It's perfectly valid for any given OpenGL implementation to emulate any given piece of functionality in software (many of us remember the early GL 2.0 drivers that supported non-power-of-two textures but dropped you to 1fps if you actually tried to use one) and the OpenGL design philosophy is that this is a perfectly acceptable thing to do.

 

Bleagh.

 

Anyway, with the older Intels the generally accepted wisdom is that if you want predictable and consistent behaviour you really should be using D3D instead.  With newer Intels (say, from the HD 4000 onwards) the situation is much better and you can quite safely use higher OpenGL functionality.  Sorry things couldn't be different, but that's what you're stuck with.
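If instancing really does fall off the fast path on a given driver, one fallback is "manual instancing": loop over the instances and pass the per-instance data as a uniform. A hedged sketch of that idea, reusing the variable names from the draw summary earlier in the thread (the uniform name `u_instancePos` is an assumption):

```cpp
// Fallback sketch: replace one instanced draw with a loop of plain
// glDrawElements calls, supplying per-instance position via a uniform.
// Higher CPU/driver overhead per call, but avoids any software-emulated
// attribute-divisor path.
GLint posLoc = glGetUniformLocation(program, "u_instancePos"); // assumed uniform
for (GLsizei i = 0; i < instances; ++i) {
    glUniform3fv(posLoc, 1, &posData[i * 3]); // 3 floats per instance assumed
    glDrawElements(GL_TRIANGLES, size, GL_UNSIGNED_INT, (void*)0);
}
```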


What “array”? posData? pos?

How were you storing it before?

 

More information.

 

 

L. Spiro

 

Sorry for the hasty post.

 

In the example, the array posData is different for each frame and is streamed to the GPU.

The problem occurred when posData was declared locally, before the draw loop, as GLfloat* posData = new GLfloat[x]; and then updated each frame before the code above ran.

 

The performance issue was "solved" by declaring it as GLfloat posData[x]; globally.


Anyway, with the older Intels the generally accepted wisdom is that if you want predictable and consistent behaviour you really should be using D3D instead.  With newer Intels (say, from the HD 4000 onwards) the situation is much better and you can quite safely use higher OpenGL functionality.  Sorry things couldn't be different, but that's what you're stuck with.

 

Sometimes I just wish portable code would actually work... The more time I spend with portable libraries, the more it starts to feel like one big lie.

Seeing how many people have recently developed an interest in *nix-based systems (SteamOS, Android...), I'm gambling on OpenGL and portability being "next gen".


Sorry for the hasty post.
 
In the example, the array posData is different for each frame and is streamed to the GPU.
The problem existed when posData was declared as: GLfloat* posData = new GLfloat[x]; locally, before the draw loop. And then updated each time before the code above.
 
The performance issue was "solved" by declaring it as GLfloat posData[x]; globally.

Globals are never the correct solution.

Don’t delete (or overwrite) the array until the GPU has finished drawing from it, which means waiting a number of frames equal to the number of back buffers you have.

 

 

L. Spiro
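One way to follow that advice is to keep one CPU-side staging array per in-flight frame and rotate through them, so an array is only reused after the frame that streamed it has been consumed. A minimal sketch; the count of 3 and all names are assumptions, not from the original post:

```c
/* Sketch: a ring of staging arrays, one per in-flight frame, so the CPU
 * never rewrites data the GPU may still be reading. */
#include <stdlib.h>

#define FRAMES_IN_FLIGHT 3 /* assumed: matches the swap chain's back buffers */

typedef struct {
    float *arrays[FRAMES_IN_FLIGHT]; /* one staging array per in-flight frame */
    int    current;                  /* index of the array to fill this frame */
} StagingRing;

/* Allocate the ring; each array holds floats_per_frame floats.
 * Returns 1 on success, 0 on allocation failure. */
int staging_ring_init(StagingRing *ring, size_t floats_per_frame) {
    ring->current = 0;
    for (int i = 0; i < FRAMES_IN_FLIGHT; ++i) {
        ring->arrays[i] = malloc(floats_per_frame * sizeof(float));
        if (!ring->arrays[i]) return 0;
    }
    return 1;
}

/* Call once per frame: returns the array that is safe to overwrite now. */
float *staging_ring_next(StagingRing *ring) {
    float *out = ring->arrays[ring->current];
    ring->current = (ring->current + 1) % FRAMES_IN_FLIGHT;
    return out;
}

/* Per frame (GL calls shown as comments):
 *   float *posData = staging_ring_next(&ring);
 *   ...fill posData...
 *   glBufferSubData(GL_ARRAY_BUFFER, 0, size, posData);
 * posData is not touched again until FRAMES_IN_FLIGHT frames later. */
```

Each array sits idle for FRAMES_IN_FLIGHT - 1 frames after upload, which is exactly the "wait as many frames as you have back buffers" rule above.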
