
Archived

This topic is now archived and is closed to further replies.

remi

microsecond


Recommended Posts

Hi all! I would like to know if there's a function like Sleep() that can make the program stop for just a few microseconds. The error should be no more than 2 microseconds. I believe that with the faster CPUs we have nowadays, it should not be very hard to achieve, but I was unable to find such a function. Any help is welcome (it's urgent, please).

Not really. You can do it with some real-time OSes, but not with Windows (AFAIK), and not even with Linux without some "make Linux an RTOS" patch.

It's not really the CPU, it's the timer tick rate. Most OSes keep the tick rate down around 100 Hz (Linux 2.6 uses 1000, I believe), because the higher you go, the more overhead is involved.

The problem is, at rates this low you can't easily get resolutions of 2 microseconds.

No, it is not possible to achieve that with any normal desktop OS ... period.

The reason is simple: these OSes are pre-emptive multi-tasking, with a scheduler whose job is to maintain fair resource allocation for competing tasks, without undue resource waste. The absolute BEST multithreaded OS for the desktop was BeOS, which had a scheduler with 1 ms (millisecond, not microsecond) accuracy; this means that in BeOS, if your thread has sufficiently high priority, the sleep function is usually accurate to ±0.5 ms. Traditionally, Win32 (Windows 95 and 98 specifically, I know) had a scheduler that was only accurate to the 18.2 Hz timer, i.e. 55 ms. This was so atrocious as to be completely useless for modern games ... so people did not use it.

An OS could, and some real-time OSes do, give timer and wakeup accuracy in the 2-4 µs (microsecond) range, but that is the best I have seen to date, and it only really works when there are not many important system threads competing to draw the screen, manage resources, etc. These OSes usually do not even run a graphical desktop, because they are used for network switching fabric, audio/video feed routing, and the like. Normal OSes do not drop to this level, because it is completely inconsistent with trying to achieve efficient use of a processor on a system which takes more than 1 µs per normal operation ...

Think for a second: a 1 µs period means 1 MHz. If the processor is running at, say, 1 GHz (which is about 3 times lower than the top end, and about 3 times higher than the current bottom end), that only gives you 1000 clock cycles to use for all other threads before returning to the current one. Remember that task switching uses a fair number of clock cycles, as do the operations actually performed by the other threads. Also, the memory itself is more likely running at around 266 MHz, 64 bits wide, not counting access latency, which gives you only the ability to read and write about 500 integers per time slice, maximum. The real-world figure is MUCH, MUCH less (my guess is more like 50-100).

Basically, an OS which tried to achieve such precision would only be suitable under 2 conditions: 1 - the application and the facilities it uses (including all needed OS functions) fit almost completely inside cache memory (at least during any given segment of operation); 2 - there are no more than 1-3 threads of any primary importance running in the system, and no more than 5-9 of any significance whatsoever ... else the thrashing would destroy performance.

This is largely speculation on my part, as I am an application programmer, not a system or driver programmer, but I did work for 2+ years on an embedded game platform, and then 2 years at an audio processing company (which used BeOS, many threads, and had significant latency requirements). Please feel free to correct anything I have wrong; I would like to learn more about the current state of things in this area.

Because the timers are updated by the clock chip, which is set to a several-millisecond accuracy.

Unless there are "high resolution timers" that do some nifty timing tricks with CPU cycles ... which would vary a lot, since such things were an inexact science last time I checked.

And still, your process can be task-switched in the middle of a busy wait. Suddenly it's 100 milliseconds later...

After reading this post, I tried QueryPerformanceFrequency() and QueryPerformanceCounter(); they seem really accurate (at least better than GetTickCount()), but still not accurate enough as far as the microsecond is concerned.

I still believe that with the faster CPUs we have now, it's possible to achieve this task. Why? I'm not so sure, but take the example of serial communication (the COM port): it can work at a speed of 128,000 baud = 128,000 bit/s, so 1 bit needs around 7.8 microseconds to be sent!



[edited by - remi on July 26, 2003 10:15:18 PM]

>> Why not do a busy wait with a high resolution timer?
> because the timers are updated by the clock chip, which is set to a several millisecond accuracy.

The CTC chip is a 1.193 MHz 16-bit counter. "Accuracy" is a divisor that determines how often an interrupt is triggered, at which point Windows updates its timer. The point? Windows timing functions (excepting QPC) are not high-resolution timers.

> Unless there are "high resolution timers" that do some nifty timing operations with CPU cycles.... which would vary a lot, since such things are inexact science last time i checked. <

rdtsc. The cycle counter does indeed jitter a bit (the CPU clock crystal ain't all too good), but that doesn't matter, thanks to the awesome resolution.

> and still, your process can still be task switched in the middle of a busy wait. suddenly its 100 milliseconds later... <

Harr, not if you're running at priority 31 >:]

Check out the rdtsc instruction; it lets you measure clock cycles. It's tricky to get the clock frequency, though, and unreliable because modern CPUs change frequency on laptops and such.

Can you be more specific about why you need to wait for 2 microseconds? Perhaps there is a much better solution to your particular problem.

I think I have been specific enough!

The project I'm working on has to do with hardware (the LPT port, ....), and letting the chip connected to the PC sleep for whole milliseconds would be a pretty big waste of speed!

