Why is Win32 Sleep still such garbage?

Started by
34 comments, last by cache_hit 14 years, 1 month ago
It's actually gotten /worse/ over the years - you can't even sleep for anything close to 1 ms anymore (even with the calls to timeBeginPeriod(1)/timeEndPeriod(1)). You get 2 ms. And if you Sleep(2)... you get 3 ms. Besides being a rant, I am curious if there is any genuine reason this API call sucks so hard? It seems with today's multicore supercomputers, reliably getting service every 1 ms ought to be pretty easy to accomplish ><.
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
A comment here suggests to me that the Sleep time can be affected by the time taken to process messages in your message loop (which I never realised previously).

Quote:Be careful when using Sleep and code that directly or indirectly creates windows. If a thread creates any windows, it must process messages. Message broadcasts are sent to all windows in the system. If you have a thread that uses Sleep with infinite delay, the system will deadlock. Two examples of code that indirectly creates windows are DDE and COM CoInitialize. Therefore, if you have a thread that creates windows, use MsgWaitForMultipleObjects or MsgWaitForMultipleObjectsEx, rather than Sleep.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
It's not a matter of the hardware's power; it's a matter of the hardware design (in part) and the OS design (mostly). Windows isn't a real-time OS. Your app has to share the processor with god-knows what other apps written by people you don't even know, let alone control. So rules are in place to prevent monopolization of the processor. The same rules directly imply that you can't always get the processor right now.
Yes, it's pretty braindead; same with date/time queries etc. (granted, that's a bit OS-specific).
But these sorts of things should have been done correctly right from the start; after all, they're not exactly complicated.
Sleep has never been, and was never designed to be, an accurate method of pausing a thread or process. You have told the system that you wish to nap for AT LEAST n milliseconds. If you need accurate timing, then use an accurate timing method (such as thread timers) and not a documented inaccurate one.

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

Quote:Original post by Shannon Barber
Besides being a rant, I am curious if there is any genuine reason this API call sucks so hard?
It seems with today's multicore supercomputers, reliably getting service every 1 ms ought to be pretty easy to accomplish ><.
The sleep function is old. It is older than the Windows OS. It is older than the Windows name. It dates back to the pre-1985 days, along with "yield" and a few other timing functions that most people have forgotten. (Yes, I started writing software as a kid back in 1981.)


The function was designed for when overclocking was flipping the "turbo" switch that bumped the PC up to an amazing 8 MHz, at the cost of possible hardware glitches.

That function was designed for when memory speeds were measured in microseconds and capacities in kilobytes.

The purpose of the call has not changed over the years. It tells the system that you want to be idle for at least a certain number of milliseconds.

That's all. It doesn't promise "no more than". It doesn't promise "exactly". It basically says 'just ignore me for a while, and get back to me eventually'.

If you want something different, then you simply need to use a different function. There are many of them to choose from.
Quote:Original post by zedz
Yes, it's pretty braindead; same with date/time queries etc. (granted, that's a bit OS-specific).
But these sorts of things should have been done correctly right from the start; after all, they're not exactly complicated.


Correct me if I'm wrong, but aren't real-time systems the only kind of OS that does this "correctly"?

An RTOS carries a fairly significant performance disadvantage: for throughput, you want processes to run uninterrupted for as long as possible before being swapped out.

The only thing timeBeginPeriod(1) does is tell the OS to have the CPU stop whatever it's doing once every millisecond to run a specific piece of code instead; that piece of code is, of course, what hands control of the CPU back to the OS, lets its scheduler run, etc.

This basically means that when you call

timeBeginPeriod(1)
Sleep(1)
timeEndPeriod(1)

what really happens is that you tell the OS to start forcibly grabbing control at 1 ms intervals, then you tell the OS that you won't need the CPU for AT LEAST one millisecond.

The 1 ms grabbing cycle doesn't start when you call Sleep; it started before that, which means the first interrupt that happens after you've called Sleep will occur before the 1 ms sleep timer has expired.

Your process will never be assigned the first full timeslice after the Sleep call, so when you call Sleep(1) you give up:
1) The remainder of your current timeslice.
2) The timeslice following that one. (You can theoretically get assigned the later part of that timeslice if another process gives it up after your sleep timer has expired.)

There is no guarantee you even get the timeslice after the ones you explicitly gave up, either; the scheduler still has to make sure that other applications on the system stay reasonably responsive (properly written low-priority background services should be able to handle themselves during the part you explicitly gave up, but other normal applications may take a full slice).
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
As a side note: if you call Sleep(0), you give up the current thread's remaining spot in the OS timeshare but do not specify a wait time. Used together with timeBeginPeriod()/timeEndPeriod(), this should get you the closest possible approximation. You'd have to do your own timing in that case, though, to guarantee minimum granularity.
I found that the best and easiest clocking method is QueryPerformanceCounter().
If you are in a GUI thread and need to handle messages, just insert a
while (PeekMessage(...)) somewhere.
Yes, it's processor-intensive. But if you don't poll, don't expect accuracy.
There is a reason why games use QueryPerformanceCounter().
As far as Sleep() granularity goes, I'd say the performance counter is mildly irrelevant, since it doesn't really provide any additional advantage over using timeGetTime() in 99% of real-world applications. While it's good for precise timing, yes, one should take a step back and evaluate whether that kind of precision is really needed in the first place. Assuming a granularity of 1 ms with an error of up to 10 ms (which you might expect from Sleep()), that's 1/1000-1/100 of a second, well within a "100 FPS" precision range. Any fluctuation is bound to be unnoticeable in practice. The performance counter is most useful for profiling, for which in many cases it makes more sense and is less cumbersome to use rdtsc directly.

So, really, ask yourself, are you absolutely sure you need that kind of precision? I know Sleep() sucks, but then again - we're only human (so at the end of the day that's what you should be focusing your attention on :) ).

This topic is closed to new replies.
