Dizzy_exe

OpenGL jerky movement when using Direct3D



Hello GameDev,

I render the same scene using a Direct3D and an OpenGL renderer. With OpenGL everything is smooth even at low frame rates, but with the Direct3D renderer the camera movement becomes jerky at low frame rates. With OpenGL I use GLFW for windowing and input; with Direct3D I have implemented my own windowing and input handlers, which closely emulate the GLFW Win32 implementation.

I have run the code under a profiler and noticed that the time difference between frames varies a lot when I move the camera (instead of something like 41, 42, 41 ms I see 21, 72, 44 ms), especially when moving the mouse (the average call to SetCursorPos takes about 7 ms, which seems a lot to me). The same thing does not happen when using OpenGL. My video board is an NVIDIA GeForce 6600.

Does anyone have any idea what the problem is?

You are definitely using time-based modelling (scaling movement by the frame delta)?

How about V-SYNC? Is it forced on or off (by the app or the control panel)?

Looks to me like your CPU is racing ahead of the GPU and periodically (every 3-4 frames would be about right) the driver and API force a block (you'll probably find it's the Present() call) until the command queue has space to continue.

Trying to better balance the CPU/GPU is probably your best bet, but that may be easier said than done. You can use queries to throttle the CPU in a more deterministic manner, but that has always struck me as more of a hack than a solution.

hth
Jack

Quote:

Original post by jollyjeffers
Looks to me like your CPU is racing ahead of the GPU and periodically (every 3-4 frames would be about right) the driver and API force a block (you'll probably find it's the Present() call) until the command queue has space to continue.


What could cause something like this? Something like sending 100,000 triangles in one draw call? Or am I thinking about it the wrong way?

Also, do you know of an article that explains this? I never thought about something like this but now that you mention it, it sounds very important.

Thanks in advance (and I hope this isn't too off-topic).

More observations:

When using DirectX, CPU usage is about 20-30%; when using OpenGL, CPU usage is 100%. The frame rate is about the same. My application sets VSYNC off, and in the control panel the setting is "application controlled". If I put a Sleep(30) before the Present() call, the motion becomes smoother.

I think jollyjeffers might be right. Is there a way to disable command queuing?

Quote:
What could cause something like this? Something like sending 100,000 triangles in one draw call?
Yes, something like that is possible.

Think of it as two parallel processors with a producer/consumer pattern in between - the buffer being the command queue that's been mentioned.

As in the general CompSci problem, if the producer and consumer get sufficiently out of sync, one will be blocked while the other sits idle. It can be quite a difficult balancing act.
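To make the analogy concrete, here is a toy, single-threaded timeline model (all names and numbers are illustrative, not measured from any real driver): the CPU spends 10 ms building each frame, the GPU spends 40 ms drawing it, and the driver lets the CPU queue at most three frames ahead. Once the queue fills, "Present" blocks and the measured frame time jumps:

```cpp
#include <algorithm>
#include <deque>
#include <vector>

// Toy timeline model (times in ms): the CPU spends cpuMs building each
// frame, the GPU spends gpuMs drawing it, and the driver allows at most
// maxInFlight frames to be buffered. When the buffer is full, "Present"
// blocks until the oldest frame completes, inflating that frame's time.
std::vector<int> simulateFrameTimes(int frames, int cpuMs, int gpuMs,
                                    int maxInFlight) {
    std::vector<int> frameTimes;
    std::deque<long long> gpuDone;  // completion times of in-flight frames
    long long now = 0;              // CPU clock
    long long gpuFree = 0;          // when the GPU can start the next frame
    for (int i = 0; i < frames; ++i) {
        const long long start = now;
        now += cpuMs;  // build this frame's commands
        // Retire frames the GPU has already finished.
        while (!gpuDone.empty() && gpuDone.front() <= now)
            gpuDone.pop_front();
        // "Present": block if the command queue is full.
        if ((int)gpuDone.size() >= maxInFlight) {
            now = gpuDone.front();
            gpuDone.pop_front();
        }
        // The GPU starts this frame once it is free.
        gpuFree = std::max(gpuFree, now) + gpuMs;
        gpuDone.push_back(gpuFree);
        frameTimes.push_back((int)(now - start));
    }
    return frameTimes;
}
```

With cpuMs=10, gpuMs=40, maxInFlight=3, the first three frames cost the CPU only 10 ms each, then Present starts blocking and the frame times jump to 20 and then 40 ms - the same kind of uneven pattern as the 21/72/44 ms measurements above.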

Quote:
When using DirectX, CPU usage is about 20-30%, when using OpenGL CPU usage is 100%
To be honest, this doesn't really tell us anything useful. The two APIs have sufficiently different architectures, and system-wide CPU load is influenced by so many things, that it's an unreliable metric.

Quote:
Is there a way to disable command queuing?
No, and you definitely wouldn't want to do it even if you could. That would be functionally equivalent to having a command queue of length 1, forcing the CPU and GPU into lock-step. This is even worse for performance [smile]

If you can't genuinely balance your application, then the technique shown in this article, forcing a pipeline stall, might be an appropriate hack. Force a stall every N frames and you can stop the application getting too far ahead of the GPU, and thus smooth out these blocks you're seeing.
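For reference, a D3D9 event-query version of that hack might look like the following. This is an untested sketch, not code from the article; the device is assumed valid, and kFlushInterval and all other names are placeholders:

```cpp
#include <d3d9.h>

// Untested sketch: stall the CPU every kFlushInterval frames so it
// cannot run arbitrarily far ahead of the GPU.
static const unsigned kFlushInterval = 4;

void endOfFrame(IDirect3DDevice9* device, IDirect3DQuery9*& query,
                unsigned& frameCount)
{
    device->Present(NULL, NULL, NULL, NULL);

    if (query == NULL)
        device->CreateQuery(D3DQUERYTYPE_EVENT, &query);

    if (query != NULL && ++frameCount % kFlushInterval == 0)
    {
        query->Issue(D3DISSUE_END);
        // Spin until the GPU has consumed everything queued so far,
        // preventing the CPU from running further ahead.
        while (query->GetData(NULL, 0, D3DGETDATA_FLUSH) == S_FALSE)
            ;  // could Sleep(0) here to yield the timeslice
    }
}
```

Note the trade-off: the busy-wait burns CPU time, which is exactly why it's a hack rather than a fix.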

However, this hack could really hurt on hardware configurations where the CPU and GPU are better matched (e.g. a fast GPU installed alongside a slower CPU), so it's not a good general solution, as I alluded to earlier.

hth
Jack

I'm not really sure if it's good practice or not, but I like to manually smooth out my time deltas using some simple low-pass filtering. It prevents jerky motion like what you're describing, but it can introduce a few problems of its own. The key is not to filter too much, otherwise your filtered delta will "lag" the actual delta by a noticeable amount.
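A minimal sketch of that kind of filter is an exponential moving average; the class name and the alpha value here are illustrative, not from the post:

```cpp
// Exponential moving average of the frame delta -- one simple form of
// low-pass filter. Smaller alpha smooths more but makes the filtered
// delta lag the true delta further.
class DeltaSmoother {
public:
    explicit DeltaSmoother(float alpha)
        : alpha_(alpha), smoothed_(0.0f), primed_(false) {}

    // Feed the raw per-frame delta; returns the filtered delta.
    float filter(float rawDelta) {
        if (!primed_) {  // first sample: nothing to blend with yet
            smoothed_ = rawDelta;
            primed_ = true;
        } else {
            smoothed_ += alpha_ * (rawDelta - smoothed_);
        }
        return smoothed_;
    }

private:
    float alpha_;
    float smoothed_;
    bool primed_;
};
```

With alpha around 0.5 a single 72 ms spike between 40 ms frames is halved; pushing alpha much lower starts to introduce the "lag" mentioned above.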
