iTunes improves VSync quality?!

I've noticed something really weird with my little DirectX 9 (C++) game. If there isn't enough going on, it VSyncs badly and hits about 53 FPS, which looks very choppy on my 75 Hz monitor. If I open iTunes - not actually playing anything, just having it sit there - the framerate jumps back up to 75 FPS and it looks fine. What's going on?! Can't I make it run at top speed normally, without having to open another application? Does VSync not work properly if the GPU isn't busy enough?
Construct (Free open-source game creator)
Ha :)

I have also noticed this odd phenomenon with a different app (Ad-Aware).
I recently posted about it. This may or may not pertain to your situation.

http://www.gamedev.net/community/forums/topic.asp?topic_id=433911
signature? I ain't signin nuthin!
Thanks. Tweaked D3DPRESENT_INTERVAL_DEFAULT to D3DPRESENT_INTERVAL_ONE and it seems to work better now!
Construct (Free open-source game creator)
Some applications change the frequency of ticks returned from various timing methods to get higher accuracy. That might be the case here. If it is, framerate most likely isn't dropping, but rather the timing methods are returning invalid results, causing the FPS calcs to be off.
That would be worth looking into as well.
Sirob Yes.» - status: Work-O-Rama.
Quote:Original post by sirob
Some applications change the frequency of ticks returned from various timing methods to get higher accuracy. That might be the case here. If it is, framerate most likely isn't dropping, but rather the timing methods are returning invalid results, causing the FPS calcs to be off.
That would be worth looking into as well.
That is pretty much the exact answer I got from MS when I reported this exact same characteristic - but with Windows Media Player instead of iTunes...

For reference:
Quote:MSDN Documentation:
Windows NT/2000: The default precision of the timeGetTime function can be five milliseconds or more, depending on the machine. You can use the timeBeginPeriod and timeEndPeriod functions to increase the precision of timeGetTime. If you do so, the minimum difference between successive values returned by timeGetTime can be as large as the minimum period value set using timeBeginPeriod and timeEndPeriod. Use the QueryPerformanceCounter and QueryPerformanceFrequency functions to measure short time intervals at a high resolution.
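
As a rough sketch of both approaches (the loop skeleton and all the names below are just illustrative, not anyone's actual code): raising the system timer resolution mimics what iTunes/WMP do globally, and QueryPerformanceCounter sidesteps the issue for frame timing entirely.

#include <windows.h>
#include <mmsystem.h>                  // timeBeginPeriod / timeEndPeriod
#pragma comment(lib, "winmm.lib")

void RunGameLoop()
{
    // Ask for 1 ms timer resolution -- effectively what iTunes/WMP do
    // system-wide, which is why opening them changes the measured framerate.
    timeBeginPeriod(1);

    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);  // counts per second
    QueryPerformanceCounter(&prev);

    bool running = true;
    while (running)
    {
        QueryPerformanceCounter(&now);
        double dt = double(now.QuadPart - prev.QuadPart) / double(freq.QuadPart);
        prev = now;

        // Update(dt); Render(); Present();  -- game code goes here
        (void)dt;
        running = false;               // placeholder so the sketch terminates
    }

    timeEndPeriod(1);                  // always match the timeBeginPeriod call
}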


hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

I've noticed this too. Does anyone have a solution? I don't want to open WMP every time I want to run my applications. Where should D3DPRESENT_INTERVAL_ONE go? I can't find any reference to it in my code.
It goes in the present parameters that are passed to CreateDevice and Reset.
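
Something like this, as a rough sketch (the IDirect3D9 pointer, window handle, and the other present-parameter settings are placeholders for whatever your app already uses):

#include <windows.h>
#include <d3d9.h>

IDirect3DDevice9* CreateVSyncedDevice(IDirect3D9* d3d9, HWND hWnd)
{
    D3DPRESENT_PARAMETERS d3dpp = {};
    d3dpp.Windowed             = TRUE;
    d3dpp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
    d3dpp.BackBufferFormat     = D3DFMT_UNKNOWN;
    d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_ONE; // wait for vblank (vsync)

    IDirect3DDevice9* device = NULL;
    if (FAILED(d3d9->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                                  D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                  &d3dpp, &device)))
        return NULL;

    // The same structure (with the same PresentationInterval) is what you pass
    // to device->Reset(&d3dpp) when the device is lost or the mode changes.
    return device;
}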

Another odd framerate phenomenon is when you're rendering too fast and filling the command buffer on the GPU. nVidia offers a fix.
I'm still having this problem. D3DPRESENT_INTERVAL_ONE didn't work for me; I'm not sure I'm using it right. Can someone explain the following paragraph to me:

A second solution is to use DirectX 9's Asynchronous Query functionality (analogous to using fences in OpenGL). At the end of your frame, insert a D3DQUERYTYPE_EVENT query into your rendering stream. You can then poll whether the GPU has reached this event yet by using GetData. As in 1) you can thus ensure (i.e., busy wait w/ the CPU) that the CPU never gets more than 2 frames ahead of the GPU, while the GPU is never idled. Similarly it is conceivable to insert multiple queries per frame to get finer control over lag.

I'm not sure what it's asking me to do. Also, does anybody know what is causing the CPU to out-perform the GPU? Is it accepting keyboard input?

Thanks
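
For anyone who finds this thread later, here's a rough sketch of what the quoted nVidia paragraph describes (the device pointer, the two-deep query ring, and all the names are illustrative assumptions, not code from the nVidia document):

#include <windows.h>
#include <d3d9.h>

const int MAX_FRAMES_AHEAD = 2;
IDirect3DQuery9* g_frameFence[MAX_FRAMES_AHEAD] = { NULL, NULL };
int g_frame = 0;

// Call at the end of each frame, after your draw calls.
void EndFrame(IDirect3DDevice9* device)
{
    int slot = g_frame % MAX_FRAMES_AHEAD;

    if (g_frameFence[slot] != NULL)
    {
        // The query in this slot was issued MAX_FRAMES_AHEAD frames ago.
        // If the GPU hasn't reached it yet, busy-wait until it has, so the
        // CPU never gets more than MAX_FRAMES_AHEAD frames ahead.
        while (g_frameFence[slot]->GetData(NULL, 0, D3DGETDATA_FLUSH) == S_FALSE)
        {
            // spin; a Sleep(0) here would be kinder to the rest of the system
        }
    }
    else
    {
        device->CreateQuery(D3DQUERYTYPE_EVENT, &g_frameFence[slot]);
    }

    // Insert a new fence at the end of this frame's command stream.
    g_frameFence[slot]->Issue(D3DISSUE_END);

    device->Present(NULL, NULL, NULL, NULL);
    ++g_frame;
}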

This topic is closed to new replies.
