[.net] Timing with Threads for less than a millisecond

4 comments, last by SimonForsman 13 years, 6 months ago
Heh, I wanted to see if I could make a timer that doesn't use all of the processing power (or at least not one whole core). I was not very successful, but in theory it should work. I'll explain before posting the code. Say you want a timer that ticks every 250 microseconds: you create 4 threads, started at 250 µs intervals, and make each one sleep for 1 millisecond. You need more threads for faster timings, or fewer for slower ones. That was my idea, but the problem is that threads seem to wake up whenever they want, especially when there is other load on the CPU. So here is the code. I don't know how to make it work better; maybe you have an idea.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Timers;
using System.Runtime.InteropServices;

namespace Imba_Timer
{
    class Program
    {
        public Thread[] tCountHF;
        public Thread tCountLF;
        public static int timeHF = 0;
        public static int timeLF = 0;

        [DllImport("Kernel32.dll")]
        private static extern bool QueryPerformanceCounter(out long lpPerformanceCount);

        [DllImport("Kernel32.dll")]
        private static extern bool QueryPerformanceFrequency(out long lpFrequency);

        Program()
        {
            tCountLF = new Thread(ShowTime);
        }

        Program(int numThreads)
        {
            tCountHF = new Thread[numThreads];
            for (int i = 0; i < numThreads; i++)
            {
                // fixed: original assigned to the array itself (tCountHF = ...),
                // not to the element, which didn't even compile
                tCountHF[i] = new Thread(SleepOne);
            }
        }

        static void Main(string[] args)
        {
            Program myPrg = new Program();
            myPrg.ThreadsStartup(250);
            Console.ReadLine();
        }

        // Busy-waits for roughly microSeconds using QueryPerformanceCounter.
        public void StartUpCounter(int microSeconds)
        {
            // fixed: original used integer division (microSeconds / 1000000),
            // which is always 0 for any value below one second
            float stepTimeSec = microSeconds / 1000000f;
            int x = 100;
            float secondsPerOpp = 0;
            float secondsPerOpp2 = 0;
            float incrTime = 0;
            long cntsPerSec = 0;
            long prevTimeStamp = 0;
            long currTimeStamp = 0;
            long prevTimeStamp2 = 0;
            long currTimeStamp2 = 0;

            QueryPerformanceFrequency(out cntsPerSec);
            do
            {
                QueryPerformanceCounter(out prevTimeStamp2);
                QueryPerformanceCounter(out prevTimeStamp);
                while (x > 0)
                {
                    x--;
                }
                x = 100;
                QueryPerformanceCounter(out currTimeStamp);
                secondsPerOpp = (currTimeStamp - prevTimeStamp) * (1.0f / (float)cntsPerSec);
                QueryPerformanceCounter(out currTimeStamp2);
                secondsPerOpp2 = (currTimeStamp2 - prevTimeStamp2) * (1.0f / (float)cntsPerSec);
                incrTime += (secondsPerOpp + secondsPerOpp2);
            } while (incrTime < stepTimeSec);
        }

        public void ThreadsStartup(int microSec)
        {
            int numThreads = 1000 / microSec;
            Program start = new Program(numThreads);
            Program lowFrequency = new Program();
            lowFrequency.tCountLF.Start();
            for (int i = 0; i < numThreads; i++)
            {
                // fixed: original indexed the array without [i]
                start.tCountHF[i].Priority = ThreadPriority.Highest;
                start.tCountHF[i].Start(this);
                start.StartUpCounter(microSec);  // stagger the next thread by microSec
            }
        }

        public void SleepOne(object obj)
        {
            //while (!stop)
            for (int g = 0; g < 120000; g++)
            {
                Thread.Sleep(1);
                timeHF++;
            }
        }

        public void ShowTime()
        {
            //while (!stop);
            for (int g = 0; g < 120; g++)
            {
                Thread.Sleep(1000);
                timeLF += 1000;
                Console.WriteLine("High Frequency Time is: " + timeHF + " ------ Low Frequency Time is: " + timeLF);
            }
        }
    }
}


For now this code is not very useful, since most of these high-frequency timings are needed in something other than games, where you could just use the Sleep() function. Plus it is not very precise, and I don't know why. Oh well, it was worth a shot.
I guess the first question is: why do you need a timer to tick every .25ms? What are you actually trying to do?
This smells like a horrible way to go about this...

Secondly, as to why it isn't working the way you want: the issue is that sleep causes the OS to schedule your thread for later. It is guaranteed to return NO EARLIER THAN the given interval, but it could return tomorrow, or next week. In practice, it responds better than that, but in general it still has jitter.
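To make that concrete (my own minimal sketch, not from the original post), you can time a few Sleep(1) calls with Stopwatch and watch the jitter directly:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class SleepJitterDemo
{
    static void Main()
    {
        var sw = new Stopwatch();
        for (int i = 0; i < 5; i++)
        {
            sw.Restart();
            Thread.Sleep(1);   // ask the OS for 1 ms
            sw.Stop();
            // Sleep returns no earlier than requested, but often much later:
            // without raising the timer resolution, expect values up to ~15 ms.
            Console.WriteLine($"asked 1 ms, got {sw.Elapsed.TotalMilliseconds:F3} ms");
        }
    }
}
```

On a default Windows install the printed values typically cluster around the scheduler tick rather than 1 ms, which is exactly the jitter being described.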
timeBeginPeriod(1); and timeEndPeriod(1); will affect overall system performance while your app is running, but they increase the resolution of your Sleep calls. Even then, using up 4 threads for this seems like a bad idea. Again, what exactly are you trying to do?
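For reference, here is a minimal P/Invoke sketch of that suggestion. This is an assumption on my part about how you'd wire it up, and it is Windows-only (winmm.dll), so it won't run anywhere else:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

class TimerResolutionDemo
{
    // winmm.dll multimedia timer API -- Windows-specific.
    [DllImport("winmm.dll")] static extern uint timeBeginPeriod(uint uMilliseconds);
    [DllImport("winmm.dll")] static extern uint timeEndPeriod(uint uMilliseconds);

    static void Main()
    {
        timeBeginPeriod(1);        // request 1 ms scheduler granularity (affects the whole system)
        try
        {
            Thread.Sleep(1);       // should now return much closer to a real 1 ms
        }
        finally
        {
            timeEndPeriod(1);      // every timeBeginPeriod must be paired with timeEndPeriod
        }
    }
}
```

Note that the request is system-wide while it is active, which is why it costs overall performance, and the docs require matching every timeBeginPeriod with a timeEndPeriod.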


I am not trying to achieve anything really, I just wanted to see if it's possible to make a timer like that. I guess the only way to do it is without threads, since they wake up whenever they want... As for running 4 threads at a time: they are very small, and when I ran it, it only used about 2% of the 4 cores I have. When I used the QueryPerformanceFrequency/QueryPerformanceCounter busy-wait, it used 25% of all cores for the whole run of the program.
It's possible to do such a thing under Linux, but since you use C#, chances are you're targeting Windows.

In this case, 1 millisecond is as good as you can get (but, without any guarantees, as KulSeran pointed out) if you set the system timer granularity. If you don't set the granularity, you'll typically get 15 ms as the minimum sleep time. Adding more threads will only cause more scheduling overhead, but will not give you anything more fine-grained.
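If you really do need a sub-millisecond wait without extra threads, the usual workaround (my own sketch, not something proposed in this thread) is to busy-wait against Stopwatch, which wraps QueryPerformanceCounter. It is precise, but it burns a core for the duration, exactly the trade-off discussed above:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class MicroWait
{
    // Busy-waits until roughly `microseconds` have elapsed.
    // Precise to well under a millisecond, but occupies one core while spinning.
    public static void SpinFor(long microseconds)
    {
        long targetTicks = microseconds * Stopwatch.Frequency / 1000000;
        var sw = Stopwatch.StartNew();
        while (sw.ElapsedTicks < targetTicks)
        {
            Thread.SpinWait(10);   // hint to the CPU that this is a spin loop
        }
    }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        SpinFor(250);              // ~250 microseconds
        Console.WriteLine($"waited {sw.Elapsed.TotalMilliseconds * 1000:F1} us");
    }
}
```

A common hybrid is to Sleep for the coarse part of a longer interval and spin only for the final fraction of a millisecond, trading a little CPU for accuracy.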
Yeah lol I thought it would be possible under Linux. Thank you for all of your replies guys.
How (and why) is it possible under Linux?
Quote:Original post by Kambiz
How (and why) is it possible under Linux?


Probably by swapping to a RT kernel, using nanosleep and a high process priority.

Default Linux kernels act pretty much like Windows, though, and won't meet the OP's requirements. (The RT Linux kernel sacrifices quite a lot of throughput in order to reduce latency and jitter, making it less than optimal for normal work.) (You still won't get nanosecond accuracy; jitter might be as high as 20 µs even on a heavily optimized system, but that's still far better than what you get with a desktop OS.)

This topic is closed to new replies.
