Tiresias

OpenGL: DirectX vs OpenGL on integrated chipsets


Recommended Posts

Hello

I noticed that my 2D game is quite slow on all laptops equipped with an Intel integrated chipset. This is somewhat expected, as those graphics chips are low-cost and not very powerful.

Currently the game is all OpenGL. Do you think we would gain speed by using DirectX instead of OpenGL? As they are Intel cards, I suspect OpenGL is poorly implemented on them, but I am not sure at all.

Before developing a DirectX version, I would like to get feedback from you guys.

Alternatively, I could try to optimize my use of OpenGL, but on some cards it is already really fast, even on some basic (but not integrated) cards, so I suspect the problem is specific to these integrated chipsets.

Anyway, any feedback is welcome.

Yes, it's not only faster, but it also has more features available (you may even be able to do HLSL) and is more stable.

Performance-wise I got maybe 3 times the speed by porting to D3D, but don't take my word for it. Download the DirectX SDK (you might want to try an older version from 2005 or so; i.e. before D3D10) and run some of the examples.

Quote:
Original post by Tiresias
Do you think we would gain speed by using DirectX instead of OpenGL? As they are Intel cards, I suspect OpenGL is poorly implemented on them, but I am not sure at all.

Maybe, maybe not. Integrated chipsets aren't exactly known for their mega awesome support for modern features.

The one difference that I can see, though, is that with DirectX you will know exactly what is supported by the hardware and what isn't. OpenGL gives you no way to determine that, and many things that you might think are hardware-accelerated could actually be performed by the driver through software emulation.
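For example, querying what the hardware actually supports in Direct3D 9 might look roughly like this (a minimal sketch, assuming the D3D9 headers and d3d9.lib; error handling kept short for brevity):

#include <d3d9.h>
#include <cstdio>

int main()
{
    // Create the D3D9 object and ask for the HAL device's capabilities.
    IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps;
    if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
    {
        // Everything reported here is what the hardware really supports;
        // there is no silent fallback to software emulation.
        printf("Max texture size: %lu x %lu\n", caps.MaxTextureWidth, caps.MaxTextureHeight);
        printf("Pixel shader version: 0x%lx\n", caps.PixelShaderVersion);
    }

    d3d->Release();
    return 0;
}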

If you feel that it's too much work to switch, you could simply ensure that you:

1) Aren't using immediate mode (this is extremely slow, and yet tons and tons of tutorials teach it; a quick sketch of the difference follows below)
2) Only use power-of-two textures
3) Avoid reading back data from the GPU (glReadPixels etc. are bad, very bad)

Getting any of these wrong can completely trash your performance on some integrated cards, and 2) and 3) might trash your performance with DX as well (D3D doesn't support immediate mode, so it's a non-issue there).
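To illustrate point 1), here is a rough sketch of the same textured quad drawn both ways (hypothetical x/y/w/h variables; legacy fixed-function GL):

// Immediate mode: one API call per vertex attribute, which is very slow on many drivers.
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
glEnd();

// The same quad with client-side vertex arrays: a single draw call for the whole batch.
float verts[]  = { x, y,  x + w, y,  x + w, y + h,  x, y + h };
float coords[] = { 0, 0,  1, 0,  1, 1,  0, 1 };

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, coords);
glDrawArrays(GL_QUADS, 0, 4);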

If you post some rendering code we could probably solve your performance issues for you fairly easily.


If an OpenGL application/game is written correctly, there should be little difference in performance between GL and Direct3D. Of course drivers can be a problem, but even so the difference between the two shouldn't be too noticeable (in most cases at least). Have you used a profiler to see where the bottlenecks are in your game? To extend Simon's recommendations further:

- You should sort all items to be rendered by their texture ID, and only switch textures when needed, to reduce the number of calls to "glBindTexture". This GL command can eat a lot of time (see the sketch below).
- Implement buffered rendering (if you have not done so already): ideally Vertex Buffer Objects (VBOs), or, if they are not supported on your hardware, vertex arrays; do not use immediate mode, as Simon has said.
- Minimize the number of states being changed between the rendering of objects (i.e. do not call glEnable/glDisable after rendering each item).
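A rough sketch of the first point, assuming a hypothetical Sprite struct and drawQuad() helper, and that the GL headers are already included:

#include <algorithm>
#include <vector>

struct Sprite
{
    GLuint texture;
    float  x, y, w, h;
};

void renderSprites(std::vector<Sprite> &sprites)
{
    // Sort by texture ID so sprites using the same texture end up next to each other.
    std::sort(sprites.begin(), sprites.end(),
              [](const Sprite &a, const Sprite &b) { return a.texture < b.texture; });

    GLuint bound = 0;
    for (const Sprite &s : sprites)
    {
        if (s.texture != bound)   // bind only when the texture actually changes
        {
            glBindTexture(GL_TEXTURE_2D, s.texture);
            bound = s.texture;
        }
        drawQuad(s.x, s.y, s.w, s.h);   // hypothetical helper, e.g. a vertex-array draw as above
    }
}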

Quote:
Original post by shwasasin
Of course drivers can be a problem

Bottom line though is that the quality of Intel's OpenGL drivers is a problem, and quite a serious one. In an ideal world you could just pick your API of choice, write code, and the end result will be more or less the same irrespective. With Intel drivers you can't do that. The performance and stability improvements that you get from using D3D instead are very real and very measurable.

Hello

I can tell you that I:
1) Am NOT using immediate mode
2) DO only use power-of-two textures
3) DO avoid reading back data from the GPU (no glReadPixels etc.)

For 1) I am not sure exactly what it is, but for example in my rendering loop I only do this:

// Bind the texture for this chunk of the target image
glBindTexture(GL_TEXTURE_2D, target->textureList[(i * target->xChunks) + j]);

// One quad: 4 vertices (2 floats each) plus matching texture coordinates
float vertex2[]      = { sx, sy,  sx + sw, sy,  sx + sw, sy + sh,  sx, sy + sh };
float textureCoor2[] = { u1, v1,  u2, v1,  u2, v2,  u1, v2 };

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

// Pass the arrays directly (no & needed, the array decays to a pointer)
glVertexPointer(2, GL_FLOAT, 0, vertex2);
glTexCoordPointer(2, GL_FLOAT, 0, textureCoor2);

// Draw the quad
glDrawArrays(GL_QUADS, 0, 4);


I have very few textures (100-300 max), and I think the problem is not even related to the number of textures. I should mention that I am also using the CEGUI lib, which is a big library, but so far I am only using a few windows.

In fact I notice the slowness immediately, even before painting the 2D scene; just moving the mouse over a black 2D screen I can see that it's slow.

I remember having used an OpenGL debugger to see where the bottlenecks were, but could not find anything conclusive.

In fact, re-reading the code, there is also a call to SwapBuffers(hDC), where hDC is set up when creating the window.
This call is supposed to swap the front and back buffers; maybe there is some error when configuring the hDC... I will investigate.
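For reference, a typical double-buffered setup for an hDC looks roughly like this (a generic Win32/WGL sketch with hypothetical variable names):

// Request a double-buffered RGBA pixel format before creating the GL context.
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;

int format = ChoosePixelFormat(hDC, &pfd);
SetPixelFormat(hDC, format, &pfd);

HGLRC hRC = wglCreateContext(hDC);
wglMakeCurrent(hDC, hRC);

// ... render the frame ...

SwapBuffers(hDC);   // presents the back buffer; with PFD_DOUBLEBUFFER this is needed once per frame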

By the way, do you know a good OpenGL performance tool?

[Edited by - Tiresias on September 6, 2010 4:29:26 AM]

GLIntercept is great for measuring the time spent inside OpenGL functions, which can be a surprising bottleneck. By this I mean the CPU time spent inside the function specifically; I am ignoring GPU time completely. Normally this isn't of much relevance, but for a driver like Intel's it can turn up some surprising and useful info and help pinpoint areas where you might unknowingly be going through software emulation or triggering a readback from the GPU (for example, calls to glTexSubImage2D will take excessively long unless you use GL_BGRA/GL_UNSIGNED_INT_8_8_8_8_REV).
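For instance, a texture upload that stays on that fast path would look something like this (a sketch; tex, width, height and pixels are hypothetical, with pixels assumed to be 32-bit BGRA data):

glBindTexture(GL_TEXTURE_2D, tex);
// BGRA + UNSIGNED_INT_8_8_8_8_REV matches the driver's internal layout,
// so no per-pixel format conversion is needed during the upload.
glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, 0, width, height,
                GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                pixels);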

Other things you can do include checking the pixel format you're actually using; you might be getting a stencil buffer back (even if you didn't request one), and if so you will need to clear it at the same time as your depth buffer (even if you're not using it), otherwise performance will choke. Prefer 16-bit indices for glDrawElements, as 32-bit indices will almost certainly send you through software emulation. Cross-check what you're doing in OpenGL against the D3D SDK docs and your D3D device caps (use the caps viewer that comes with the DirectX SDK for this); if something you're doing in OpenGL is not supported by D3D, then you're also getting software emulation.
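In code, those two points amount to something like this (a sketch; indexCount and indices are hypothetical, with indices being an unsigned short array):

// Clear depth and stencil together so the driver can use its combined fast clear,
// even if you never use the stencil buffer yourself.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

// Prefer 16-bit indices; 32-bit indices can push Intel hardware onto a software path.
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);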

Overall, while the Intel cards are not great, they are capable of getting not-too-shabby performance with games around the complexity of Quake III, and some of the more recent models will even do a rock steady 60 FPS there. So if you're running slow there is definitely something wrong.

I ran glDebugger and saw that my frame rate is around 25 on my integrated card.
After playing a bit with glDebugger, it looks like the main bottleneck is the graphics card (OK, we knew it!), especially the draw primitives.

So this is definitely the piece of code I described before.

My question is: I'm calling this glDrawArrays about 300 times in a loop;
is there a way of batching those draws into one single flush?
(Note that all my textures are different, so if I batch I would also need a way of batching the textures.)

(Or can I call draw but configure the calls to not be executed right away, only when I call something like a "flush" function?)

In fact I have a doubt: is this what we call immediate mode? I mean, am I actually in immediate mode since I draw each quad sequentially?
If the answer is yes, should I use the VBO (vertex buffer object) technique to batch it all in one?
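For what it's worth, batching quads that share a texture might look roughly like this (a sketch with hypothetical addQuad/flush helpers and a fixed-size client-side array; quads with different textures would still need separate flushes, or a texture atlas):

#define MAX_QUADS 1024

static float batchVerts[MAX_QUADS * 8];   // 4 vertices * 2 floats per quad
static float batchCoords[MAX_QUADS * 8];
static int   quadCount = 0;

void addQuad(float sx, float sy, float sw, float sh,
             float u1, float v1, float u2, float v2)
{
    if (quadCount == MAX_QUADS) return;    // real code would flush here instead

    float *v = &batchVerts[quadCount * 8];
    float *t = &batchCoords[quadCount * 8];
    v[0] = sx;      v[1] = sy;      v[2] = sx + sw; v[3] = sy;
    v[4] = sx + sw; v[5] = sy + sh; v[6] = sx;      v[7] = sy + sh;
    t[0] = u1; t[1] = v1;  t[2] = u2; t[3] = v1;
    t[4] = u2; t[5] = v2;  t[6] = u1; t[7] = v2;
    quadCount++;
}

void flush(GLuint texture)   // call when the texture changes or at the end of the frame
{
    if (quadCount == 0) return;

    glBindTexture(GL_TEXTURE_2D, texture);
    glVertexPointer(2, GL_FLOAT, 0, batchVerts);
    glTexCoordPointer(2, GL_FLOAT, 0, batchCoords);
    glDrawArrays(GL_QUADS, 0, quadCount * 4);   // one draw call for the whole batch

    quadCount = 0;
}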

[Edited by - Tiresias on September 7, 2010 3:59:18 PM]
