micromanaging thread CPU time

Posted by irreversible


Generally, Windows provides a simple thread management scheme within the API: you can set a thread's priority and thus define how much CPU time each thread gets. The priority-based solution works, but it is a rather blunt method of controlling cycle distribution. For multiple CPUs, Windows also lets an application set a thread's affinity mask to define which CPU the thread favors. This is where things get interesting if you need to micromanage thread time distribution.

Issue I

Firstly, I'm running the entire application at BELOW_NORMAL priority most of the time (which means the application essentially runs in the background). This is easy to achieve. However, the application can spawn up to, say, 10 threads that run concurrently with the main thread, each of which should receive an equal cut of a specified slice of the free CPU cycles. A somewhat complex sentence, so let's dissect it. This is essentially like traffic shaping in a router. For instance: the computer runs idle for a few minutes, so the application gets 99% of the CPU cycles for its main thread during that time. However, I want to micromanage it so that the user can specify anywhere between 5% and 100% of those 99% to be distributed between an additional set of up to ten threads that the application can (but doesn't have to) spawn as a result of remote events. For a default case, let's say the application gets 99% of the CPU cycles and 20% of those 99 percent are allocated to the non-primary threads.

Does Windows provide a solution for doing that? I'd be willing to implement such "shaping" myself if there's no built-in support for it, provided it can be built on the existing threading system. As a less favorable solution, I'd probably also be willing to reimplement much of the threading system Windows has to offer (because, as things are, I need this kind of fine-tuned control... :( ).

Issue II

Secondly, having already touched on the topic of multiple processors - that's where I'm stuck, not knowing how to even approach the problem. Implementing a threading system based on process information and high-performance-counter-controlled while loops is one thing, but combining and distributing that workload between multiple processors seems like quite a different deal.

NOTE: the good news is that each of these threads uses a separate dataset, which means there are no traditional multithreading (locking) issues.

To sum up:
- Does Windows provide any CPU "shaping" tools/methods?
- Which is likely the most flexible (and hopefully also the simplest) approach: controlling the existing threading system or writing a new threading system?
- I've Googled for the above issues (and found almost nothing - probably because I don't know how to research them), but I haven't looked into the multiple-processor issue yet. What I'm trying to do is highly suitable for multiple processors and I'd like to take advantage of that, so suggestions are definitely welcome :).

Cheers
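For reference, the priority and affinity knobs mentioned above map to a handful of Win32 calls; a minimal sketch, with error handling omitted and a hypothetical, already-created worker handle:

#include <windows.h>

// Run the whole process "in the background" and pin one worker thread
// to a single CPU. 'worker' is a hypothetical, already-created thread handle.
void ConfigureSchedulingHints(HANDLE worker)
{
    // Lower the priority class of the entire process.
    SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS);

    // Lower the relative priority of this particular thread as well.
    SetThreadPriority(worker, THREAD_PRIORITY_BELOW_NORMAL);

    // Affinity mask: bit n set means the thread may run on CPU n.
    // Here the worker is restricted to CPU 0.
    SetThreadAffinityMask(worker, 1);
}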

For optimal performance you should spawn 1 worker thread for each core/CPU in the system. You should not spawn 10 active threads; this causes thrashing (it destroys cache coherency and wrecks performance). The general recommendation is to create a few extra worker threads in case one of the jobs is really long; you don't want to stop servicing simpler, quicker jobs.

You should either have each task maintain its own state and have it voluntarily yield after a fixed amount of time, or you can use the fiber API and use the stack to maintain your state. You can have multiple threads use (and even share!) fibers. Synchronizing and scheduling are your problems then.
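A minimal sketch of the fiber route, assuming the Win32 fiber API (the actual work is elided; as noted, synchronizing and scheduling remain your problem):

#include <windows.h>

static LPVOID g_schedulerFiber = nullptr;

// One cooperative task: does a bounded chunk of work, then yields back
// to the scheduler fiber instead of relying on the OS to preempt it.
VOID CALLBACK TaskFiber(PVOID /*param*/)
{
    for (;;)
    {
        // ... do a bounded chunk of work here ...
        SwitchToFiber(g_schedulerFiber);   // voluntary yield
    }
}

int main()
{
    // The scheduling thread must become a fiber before it can switch to others.
    g_schedulerFiber = ConvertThreadToFiber(nullptr);

    LPVOID task = CreateFiber(0, TaskFiber, nullptr);  // 0 = default stack size

    // Give the task a few slices; it returns here each time it yields.
    for (int slice = 0; slice < 3; ++slice)
        SwitchToFiber(task);

    DeleteFiber(task);
    return 0;
}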

Also, and this seems contrary to many people's intuition for whatever reason: the longer a task takes to complete, the lower its priority should be.

One more thing, also for reasons beyond my mortal comprehension: Windows likes to round-robin threads across CPUs. This also hurts cache coherency, so you may consider setting an affinity mask to restrict one worker thread to each core and leave the extra ones floating.
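A sketch of that suggestion, pinning one worker per core via the affinity mask (hypothetical workers array of existing thread handles; assumes the core count fits in one mask bit per core):

#include <windows.h>

// Pin the first 'numCores' workers one-per-core; any additional workers
// keep the default affinity and stay floating.
void PinWorkersToCores(HANDLE* workers, int workerCount)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    const int numCores = static_cast<int>(si.dwNumberOfProcessors);

    for (int i = 0; i < workerCount && i < numCores; ++i)
        SetThreadAffinityMask(workers[i], static_cast<DWORD_PTR>(1) << i);
}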

irreversible - it sounds like you're experienced enough to have thought this through pretty thoroughly, but I'll make some obvious suggestions for clarity's sake.

Although you talk about managing CPU utilisation, you haven't mentioned the worst-case frequency of remote events, the worst-case execution time of handling those events, the deadlines for handling those events, or the consequences of missing deadlines. Without this type of information, the help you'll get here is going to be fairly speculative.

As an aside - if you decide on pursuing this design, have you considered some of the commercial real-time extensions for Windows? There are some packages on the market which are supposed to be OK - I haven't used them myself and am a bit doubtful that they would be capable of what you're asking. In the same vein, have you considered other operating systems? RTLinux, VxWorks and QNX are all operating systems with much more predictable scheduling behaviour than Windows.

Quote:
Original post by Shannon Barber
For optimal performance you should spawn 1 worker thread for each core/CPU in the system. You should not spawn 10 active threads; this causes thrashing (it destroys cache coherency and wrecks performance). The general recommendation is to create a few extra worker threads in case one of the jobs is really long; you don't want to stop servicing simpler, quicker jobs.


The jobs are meant to be batch-based. Some information is received over the network and a new job is created. To maintain consistency, threads are allocated per-query (per IP). If the connection drops, the application waits for a timeout, then frees the thread for another connection. My idea is that initially there are zero worker threads, and up to N (hard-limited to, say, 10) are created depending on local settings (chosen by the user) and the number of connections received. If no new connection arrives immediately (or within a given timeframe) when a connection drops or a job finishes (meaning all the batches from that IP are done), the thread is suspended and possibly shut down.
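As a rough sketch of that allocation scheme (hypothetical names throughout; the actual batch handoff, timeout and shutdown logic are elided, and a single network thread is assumed to call this):

#include <windows.h>
#include <map>
#include <string>

const size_t MAX_WORKERS = 10;              // hard limit on worker threads

// Placeholder worker: would process batches for one remote IP until the
// connection drops and the timeout expires.
DWORD WINAPI JobThread(LPVOID /*jobData*/)
{
    return 0;
}

std::map<std::string, HANDLE> g_workers;    // remote IP -> worker thread

// Called when a job arrives from 'ip'. Creates a worker on demand, up to
// MAX_WORKERS; reuses the existing worker for an IP that is already served.
bool AssignJob(const std::string& ip, void* jobData)
{
    if (g_workers.find(ip) == g_workers.end())
    {
        if (g_workers.size() >= MAX_WORKERS)
            return false;                   // limit reached: reject or queue
        g_workers[ip] = CreateThread(nullptr, 0, JobThread, jobData, 0, nullptr);
    }
    // ... otherwise hand jobData to the existing worker's queue ...
    return true;
}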


Quote:

You should either have each task maintain its own state and have it voluntarily yield after a fixed amount of time, or you can use the fiber API and use the stack to maintain your state. You can have multiple threads use (and even share!) fibers. Synchronizing and scheduling are your problems then.


This is where I could use some help. The only method of thread balancing I know is the Sleep() call, whose granularity is milliseconds (way too coarse). What's worse is that the thread executes code from a plugin that doesn't guarantee it will return any time soon (for instance, the thread could lose all control to the plugin for several seconds, or, if the situation is really dire, for several minutes). What I really need is OS-level control over cycle distribution, and I don't know how to do that. Any other solution would be a surrogate one.

Quote:

Also, and this seems contrary to many people's intuition for whatever reason: the longer a task takes to complete, the lower its priority should be.


There are no guarantees as to the length of a job. I can force jobs to take a minimum amount of time (by making them big enough), but I can't know in advance how long they will take in reality. In a realistic case, I suppose batches could finish in anything from a few seconds to a few minutes, but entire IP-specific jobs could carry on for days.

Quote:

One more thing, also for reasons beyond my mortal comprehension: Windows likes to round-robin threads across CPUs. This also hurts cache coherency, so you may consider setting an affinity mask to restrict one worker thread to each core and leave the extra ones floating.


That's a good idea.






Quote:
Original post by XXX_Andrew_XXX
Although you talk about managing CPU utilisation, you haven't mentioned the worst-case frequency of remote events, the worst-case execution time of handling those events, the deadlines for handling those events, or the consequences of missing deadlines. Without this type of information, the help you'll get here is going to be fairly speculative.


My bad.

Worst-case frequency is a client connecting and immediately disconnecting, plus the timeout (while waiting for the timeout, the allocated thread keeps working). Worst-case execution time does not matter - the thread can be terminated at any time (for instance when the local user quits or shuts the computer off) - and it won't affect dataflow in any way (the only thing that happens is that the requester at the other end won't get their modified goods back). In that sense there is no worst-case scenario.

There are no deadlines as such to speak of - the only "deadline" is that when the information isn't back with the requester when they absolutely need it, they will either do the work themselves or ask someone else to do it, notifying the local user that the batch can be terminated. It's something like Prime95, only different - you go on calculating your prime and there are no deadlines. When the server decides that a client's time is up (well, hypothetically, because the Prime95 server only sends out jobs to verify any given prime candidate once), then it's up. There is no penalty for missing the deadline - you'll just lose the job.

I'll put this in bold, because it embodies the main idea behind the effort: the goal is to force any user to allocate at least a minimum amount of idle CPU power to external jobs passed to them (most likely anonymously) from a remote computer. The precise allocation quotas and privileges do not matter at this point - I first need to get the framework up and working.


Quote:

As an aside - if you decide on pursuing this design, have you considered some of the commercial real-time extensions for Windows? There are some packages on the market which are supposed to be OK - I haven't used them myself and am a bit doubtful that they would be capable of what you're asking. In the same vein, have you considered other operating systems? RTLinux, VxWorks and QNX are all operating systems with much more predictable scheduling behaviour than Windows.


Nope. And I don't intend to. The main reasons are that 1) I don't have the money, 2) I am not (at this time) writing a commercial application, and 3) I don't want to. Capisce? :P

I've already gone to great lengths to reinvent certain kinds of wheels just to avoid making the application dependent on third-party libraries. That's how I prefer things.



PS - as far as this particular problem goes, I have no background experience. I'm quite comfortable with multi-threading within an application, but that's about it :).

Windows has 32 levels of thread priority, but for the most part you can't directly affect it. Setting your thread level to BELOW_NORMAL gives you a thread range from 1-15, and the actual priority can drift within that range. So, if your main thread is doing nothing most of the time, it will drift down to a rating of 1, and if your worker threads are doing most of the processing, they will drift up to 15. Lastly, all threads of priority X share the CPU time equally.

So for your case, Windows is probably going to do just fine without your input, since it looks like you only want macro-scale tuning of your program (all that matters is that each thread gets equal time). If you want to go finer-grained (different threads doing the same amount of work getting different percentages of the CPU), then you should look into threading APIs that support micro-threads / fibers.

-- edit: notes due to your post while I was typing...
Sleep(0) will release your thread's remaining time if you finish your work in that thread early. Anything more than 0 in there will give you the stall you're worried about, but a Sleep(0) can be a good idea just as you finish a batch and wait for the next one.

If your thread calls into a plugin and loses control to it, your thread is still running and still being timed accordingly. The plugin doesn't magically steal your thread's time slice, add to it, or affect it in any way.

Quote:
Original post by KulSeran
Windows has 32 levels of thread priority, but for the most part you can't directly affect it. Setting your thread level to BELOW_NORMAL gives you a thread range from 1-15, and the actual priority can drift within that range. So, if your main thread is doing nothing most of the time, it will drift down to a rating of 1, and if your worker threads are doing most of the processing, they will drift up to 15. Lastly, all threads of priority X share the CPU time equally.

So for your case, Windows is probably going to do just fine without your input, since it looks like you only want macro-scale tuning of your program (all that matters is that each thread gets equal time). If you want to go finer-grained (different threads getting different percentages of the CPU), then you should look into some threading APIs that support micro-threads / fibers.


As I said, I want the latter. I want to give the worker threads a guaranteed CPU percentage even when the main thread is running at full throttle (which it will most likely be doing, even at BELOW_NORMAL priority). Pondering this a little will tell you that what I want can't be achieved using priorities alone - I need CPU resource shaping. Can you suggest any of these APIs? Because I know of none...

Ha! Finally found what I was looking for:
Journal of Ysaneya, at gamedev: on scheduling

Now, this might not be very helpful. I remember seeing a library that does what he describes, but it's eluding me - something on stack-based fibers, nanothreads/microthreads/pseudothreads... something. But since I can't find it, maybe that journal will give you some ideas on how to run your scheduling system.

Unfortunately, there is the "sleep" problem that you fear, in which case you may need to raise the priority of the scheduling thread to be sure it gets CPU time when it wakes up.

Also, you could schedule your threads with the hi-res timer (remember to read the hi-res timer off only one CPU to avoid the dreaded jitter between CPU clocks) and make sure your scheduler sleeps for less time than any thread's timeslice. You end up with wasted cycles as your scheduler spin-locks waiting for the thread's time to be up, but you start and end each thread at set times.
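A rough sketch of that kind of scheduler, assuming workers created with CREATE_SUSPENDED and a slice measured with QueryPerformanceCounter (the busy-wait is the wasted-cycles trade-off mentioned above):

#include <windows.h>

// Round-robin shaper driven by the high-resolution counter: resume one
// worker, spin until its slice has elapsed, suspend it, move on. Assumes
// this thread's affinity keeps the counter reads on a single CPU.
void RunShaper(HANDLE* workers, int count, double sliceMs)
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    const LONGLONG sliceTicks =
        static_cast<LONGLONG>(sliceMs * freq.QuadPart / 1000.0);

    for (int i = 0; ; i = (i + 1) % count)
    {
        LARGE_INTEGER start, now;
        QueryPerformanceCounter(&start);

        ResumeThread(workers[i]);
        do {
            QueryPerformanceCounter(&now);          // busy-wait: wasted cycles
        } while (now.QuadPart - start.QuadPart < sliceTicks);
        SuspendThread(workers[i]);
    }
}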

I thought I'd implement the method proposed by Ysaneya and then post the results - the sheer simplicity of it is quite overwhelming :).

Anyway, I set up some shaping with automatic priority queuing and CPU resource management, but Sleep() is definitely not working for me the way it should. The code is simple enough - it's pretty much along the lines of the code from Ysaneya's journal.

I made up the term "granule" to describe the window of time in which all threads get to execute their share; the minimum amount of time a thread can execute within a granule is 1 ms.
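In outline, one such shaping pass looks something like the following - a reconstruction along the lines of Ysaneya's approach rather than the exact code, with hypothetical worker handles that stay suspended outside their slice:

#include <windows.h>
#include <mmsystem.h>   // timeGetTime(); link against winmm.lib
#include <cstdio>

// One granule: give each worker its slice by resuming it, sleeping for the
// slice length, then suspending it again. The before/after timeGetTime()
// values are what the listings below print.
void RunGranule(HANDLE* workers, const DWORD* sliceMs, int count)
{
    for (int i = 0; i < count; ++i)
    {
        const DWORD before = timeGetTime();
        ResumeThread(workers[i]);
        Sleep(sliceMs[i]);                 // intended slice for this worker
        SuspendThread(workers[i]);
        const DWORD after = timeGetTime();
        printf("sleep: %lu: %lu -> %lu = %lu\n",
               sliceMs[i], before, after, after - before);
    }
}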

The output for five threads looks like this (as run in a loop):

First column: time to sleep, then timeGetTime() values before and after, and finally the difference.


sleep: 2: 30541332 -> 30541398 = 66
sleep: 2: 30541398 -> 30541429 = 31
sleep: 2: 30541429 -> 30541493 = 64
sleep: 2: 30541493 -> 30541555 = 62
sleep: 1: 30541555 -> 30541619 = 64
sleep: 2: 30541628 -> 30541710 = 82
sleep: 2: 30541710 -> 30541793 = 83
sleep: 2: 30541793 -> 30541835 = 42
sleep: 2: 30541835 -> 30541913 = 78
sleep: 1: 30541913 -> 30541993 = 80
sleep: 2: 30541993 -> 30542054 = 61
sleep: 2: 30542054 -> 30542117 = 63
sleep: 2: 30542117 -> 30542179 = 62
sleep: 2: 30542179 -> 30542210 = 31
sleep: 1: 30542210 -> 30542273 = 63
sleep: 2: 30542273 -> 30542335 = 62
sleep: 2: 30542335 -> 30542398 = 63
sleep: 2: 30542398 -> 30542476 = 78
sleep: 2: 30542476 -> 30542538 = 62
sleep: 1: 30542538 -> 30542617 = 79
sleep: 2: 30542617 -> 30542695 = 78
sleep: 2: 30542695 -> 30542757 = 62
sleep: 2: 30542757 -> 30542820 = 63
sleep: 2: 30542820 -> 30542851 = 31
sleep: 1: 30542851 -> 30542969 = 118
sleep: 2: 30542969 -> 30543038 = 69
sleep: 2: 30543038 -> 30543132 = 94
sleep: 2: 30543132 -> 30543257 = 125
sleep: 2: 30543257 -> 30543398 = 141
sleep: 1: 30543570 -> 30543572 = 2
sleep: 2: 30543572 -> 30543617 = 45
sleep: 2: 30543617 -> 30543648 = 31
sleep: 2: 30543648 -> 30543695 = 47
sleep: 2: 30543695 -> 30543804 = 109
sleep: 1: 30543804 -> 30543913 = 109
sleep: 2: 30544008 -> 30544011 = 3
sleep: 2: 30544011 -> 30544117 = 106
sleep: 2: 30544117 -> 30544210 = 93
sleep: 2: 30544210 -> 30544320 = 110
sleep: 1: 30544320 -> 30544429 = 109
sleep: 2: 30544429 -> 30544432 = 3
sleep: 2: 30544432 -> 30544435 = 3
sleep: 2: 30544435 -> 30544438 = 3
sleep: 2: 30544438 -> 30544523 = 85
sleep: 1: 30544523 -> 30544525 = 2
sleep: 2: 30544525 -> 30544632 = 107
sleep: 2: 30544632 -> 30544726 = 94
sleep: 2: 30544726 -> 30544729 = 3
sleep: 2: 30544729 -> 30544732 = 3
sleep: 1: 30544732 -> 30544734 = 2
sleep: 2: 30544734 -> 30544737 = 3
sleep: 2: 30544737 -> 30544740 = 3
sleep: 2: 30544740 -> 30544836 = 96
sleep: 2: 30545038 -> 30545042 = 4
sleep: 1: 30545042 -> 30545148 = 106
sleep: 2: 30545148 -> 30545257 = 109
sleep: 2: 30545257 -> 30545351 = 94
sleep: 2: 30545351 -> 30545460 = 109
sleep: 2: 30545460 -> 30545463 = 3
sleep: 1: 30545463 -> 30545465 = 2
sleep: 2: 30545465 -> 30545468 = 3
sleep: 2: 30545468 -> 30545471 = 3
sleep: 2: 30545471 -> 30545474 = 3
sleep: 2: 30545474 -> 30545554 = 80
sleep: 1: 30545554 -> 30545663 = 109
sleep: 2: 30545663 -> 30545783 = 120
sleep: 2: 30545784 -> 30545898 = 114
sleep: 2: 30545898 -> 30545910 = 12
sleep: 2: 30545910 -> 30545914 = 4
sleep: 1: 30545914 -> 30545926 = 12



Increasing the granule size 10-fold gives this:


sleep: 20: 30719849 -> 30719870 = 21
sleep: 20: 30719870 -> 30719932 = 62
sleep: 20: 30719932 -> 30719979 = 47
sleep: 20: 30719979 -> 30720011 = 32
sleep: 10: 30720011 -> 30720088 = 77
sleep: 20: 30720088 -> 30720151 = 63
sleep: 20: 30720151 -> 30720213 = 62
sleep: 20: 30720213 -> 30720276 = 63
sleep: 20: 30720276 -> 30720338 = 62
sleep: 10: 30720338 -> 30720416 = 78
sleep: 20: 30720416 -> 30720479 = 63
sleep: 20: 30720479 -> 30720557 = 78
sleep: 20: 30720557 -> 30720620 = 63
sleep: 20: 30720620 -> 30720651 = 31
sleep: 10: 30720651 -> 30720713 = 62
sleep: 20: 30720713 -> 30720776 = 63
sleep: 20: 30720777 -> 30720838 = 61
sleep: 20: 30720838 -> 30720927 = 89
sleep: 20: 30720927 -> 30720979 = 52
sleep: 10: 30720979 -> 30721010 = 31
sleep: 20: 30721010 -> 30721073 = 63
sleep: 20: 30721073 -> 30721120 = 47
sleep: 20: 30721120 -> 30721151 = 31
sleep: 20: 30721151 -> 30721229 = 78
sleep: 10: 30721229 -> 30721291 = 62
sleep: 20: 30721291 -> 30721416 = 125
sleep: 20: 30721416 -> 30721495 = 79
sleep: 20: 30721495 -> 30721573 = 78
sleep: 20: 30721573 -> 30721651 = 78
sleep: 10: 30721651 -> 30721713 = 62
sleep: 20: 30721713 -> 30721810 = 97
sleep: 20: 30721810 -> 30721932 = 122
sleep: 20: 30721932 -> 30722057 = 125
sleep: 20: 30722057 -> 30722135 = 78
sleep: 10: 30722135 -> 30722198 = 63
sleep: 20: 30722198 -> 30722218 = 20
sleep: 20: 30722218 -> 30722276 = 58
sleep: 20: 30722276 -> 30722338 = 62
sleep: 20: 30722339 -> 30722370 = 31
sleep: 10: 30722370 -> 30722432 = 62
sleep: 20: 30722432 -> 30722479 = 47
sleep: 20: 30722791 -> 30722832 = 41
sleep: 20: 30722832 -> 30722995 = 163
sleep: 20: 30722995 -> 30723155 = 160
sleep: 10: 30723155 -> 30723338 = 183
sleep: 20: 30723338 -> 30723512 = 174
sleep: 20: 30723512 -> 30723534 = 22
sleep: 20: 30723534 -> 30723557 = 23
sleep: 20: 30723557 -> 30723590 = 33
sleep: 10: 30723590 -> 30723618 = 28
sleep: 20: 30723618 -> 30723651 = 33
sleep: 20: 30723651 -> 30723679 = 28
sleep: 20: 30723679 -> 30723710 = 31
sleep: 20: 30723710 -> 30723738 = 28
sleep: 10: 30723738 -> 30723750 = 12
sleep: 20: 30723750 -> 30723775 = 25
sleep: 20: 30723775 -> 30723795 = 20
sleep: 20: 30723795 -> 30723818 = 23



Quite obviously that's not working, so I looked elsewhere.

Setting the shaper thread's priority to time-critical (which should give it the highest possible priority within the application) produced the following results.


sleep: 2: 31173150 -> 31173152 = 2
sleep: 2: 31173152 -> 31173162 = 10
sleep: 1: 31173162 -> 31173164 = 2
sleep: 2: 31173164 -> 31173167 = 3
sleep: 2: 31173167 -> 31173170 = 3
sleep: 2: 31173170 -> 31173173 = 3
sleep: 2: 31173173 -> 31173182 = 9
sleep: 1: 31173183 -> 31173185 = 2
sleep: 2: 31173185 -> 31173188 = 3
sleep: 2: 31173188 -> 31173191 = 3
sleep: 2: 31173191 -> 31173193 = 2
sleep: 2: 31173193 -> 31173202 = 9
sleep: 1: 31173202 -> 31173204 = 2
sleep: 2: 31173204 -> 31173207 = 3
sleep: 2: 31173207 -> 31173210 = 3
sleep: 2: 31173210 -> 31173213 = 3
sleep: 2: 31173213 -> 31173222 = 9
sleep: 1: 31173222 -> 31173224 = 2
sleep: 2: 31173224 -> 31173227 = 3
sleep: 2: 31173227 -> 31173230 = 3
sleep: 2: 31173230 -> 31173233 = 3
sleep: 2: 31173233 -> 31173242 = 9
sleep: 1: 31173242 -> 31173244 = 2
sleep: 2: 31173244 -> 31173247 = 3
sleep: 2: 31173247 -> 31173250 = 3
sleep: 2: 31173250 -> 31173253 = 3
sleep: 2: 31173253 -> 31173262 = 9
sleep: 1: 31173262 -> 31173264 = 2
sleep: 2: 31173264 -> 31173267 = 3
sleep: 2: 31173267 -> 31173270 = 3
sleep: 2: 31173270 -> 31173273 = 3
sleep: 2: 31173273 -> 31173282 = 9
sleep: 1: 31173282 -> 31173284 = 2
sleep: 2: 31173284 -> 31173287 = 3
sleep: 2: 31173287 -> 31173290 = 3
sleep: 2: 31173290 -> 31173293 = 3
sleep: 2: 31173293 -> 31173302 = 9
sleep: 1: 31173302 -> 31173304 = 2
sleep: 2: 31173304 -> 31173307 = 3
sleep: 2: 31173307 -> 31173310 = 3
sleep: 2: 31173310 -> 31173313 = 3
sleep: 2: 31173313 -> 31173322 = 9
sleep: 1: 31173322 -> 31173324 = 2
sleep: 2: 31173324 -> 31173327 = 3
sleep: 2: 31173327 -> 31173330 = 3
sleep: 2: 31173330 -> 31173333 = 3
sleep: 2: 31173333 -> 31173343 = 10
sleep: 1: 31173343 -> 31173345 = 2
sleep: 2: 31173345 -> 31173348 = 3
sleep: 2: 31173348 -> 31173351 = 3
sleep: 2: 31173351 -> 31173354 = 3
sleep: 2: 31173354 -> 31173362 = 8
sleep: 1: 31173362 -> 31173364 = 2
sleep: 2: 31173364 -> 31173367 = 3
sleep: 2: 31173367 -> 31173370 = 3
sleep: 2: 31173370 -> 31173373 = 3
sleep: 2: 31173373 -> 31173382 = 9
sleep: 1: 31173382 -> 31173384 = 2
sleep: 2: 31173384 -> 31173387 = 3
sleep: 2: 31173387 -> 31173390 = 3
sleep: 2: 31173390 -> 31173393 = 3
sleep: 2: 31173393 -> 31173402 = 9
sleep: 1: 31173402 -> 31173404 = 2
sleep: 2: 31173404 -> 31173407 = 3
sleep: 2: 31173407 -> 31173410 = 3
sleep: 2: 31173410 -> 31173413 = 3
sleep: 2: 31173413 -> 31173423 = 10
sleep: 1: 31173423 -> 31173425 = 2
sleep: 2: 31173425 -> 31173428 = 3
sleep: 2: 31173428 -> 31173431 = 3
sleep: 2: 31173431 -> 31173434 = 3
sleep: 2: 31173434 -> 31173442 = 8
sleep: 1: 31173442 -> 31173444 = 2
sleep: 2: 31173444 -> 31173447 = 3
sleep: 2: 31173447 -> 31173450 = 3
sleep: 2: 31173450 -> 31173453 = 3
sleep: 2: 31173453 -> 31173462 = 9
sleep: 1: 31173463 -> 31173465 = 2
sleep: 2: 31173465 -> 31173468 = 3
sleep: 2: 31173468 -> 31173471 = 3
sleep: 2: 31173471 -> 31173474 = 3
sleep: 2: 31173474 -> 31173483 = 9
sleep: 1: 31173483 -> 31173484 = 1
sleep: 2: 31173484 -> 31173487 = 3
sleep: 2: 31173487 -> 31173490 = 3
sleep: 2: 31173490 -> 31173493 = 3
sleep: 2: 31173493 -> 31173502 = 9
sleep: 1: 31173502 -> 31173504 = 2
sleep: 2: 31173504 -> 31173507 = 3
sleep: 2: 31173507 -> 31173510 = 3
sleep: 2: 31173510 -> 31173513 = 3
sleep: 2: 31173513 -> 31173523 = 10
sleep: 1: 31173523 -> 31173525 = 2
sleep: 2: 31173525 -> 31173527 = 2
sleep: 2: 31173527 -> 31173530 = 3
sleep: 2: 31173530 -> 31173533 = 3
sleep: 2: 31173533 -> 31173542 = 9
sleep: 1: 31173542 -> 31173544 = 2
sleep: 2: 31173544 -> 31173547 = 3
sleep: 2: 31173547 -> 31173550 = 3
sleep: 2: 31173550 -> 31173553 = 3
sleep: 2: 31173553 -> 31173563 = 10
sleep: 1: 31173563 -> 31173565 = 2
sleep: 2: 31173565 -> 31173567 = 2
sleep: 2: 31173567 -> 31173570 = 3
sleep: 2: 31173570 -> 31173573 = 3
sleep: 2: 31173573 -> 31173582 = 9
sleep: 1: 31173582 -> 31173584 = 2
sleep: 2: 31173584 -> 31173587 = 3
sleep: 2: 31173587 -> 31173590 = 3
sleep: 2: 31173590 -> 31173593 = 3
sleep: 2: 31173593 -> 31173602 = 9
sleep: 1: 31173602 -> 31173604 = 2
sleep: 2: 31173604 -> 31173607 = 3
sleep: 2: 31173607 -> 31173609 = 2
sleep: 2: 31173609 -> 31173612 = 3
sleep: 2: 31173612 -> 31173622 = 10
sleep: 1: 31173622 -> 31173624 = 2
sleep: 2: 31173624 -> 31173627 = 3
sleep: 2: 31173627 -> 31173630 = 3
sleep: 2: 31173630 -> 31173633 = 3
sleep: 2: 31173633 -> 31173643 = 10
sleep: 1: 31173643 -> 31173645 = 2
sleep: 2: 31173645 -> 31173648 = 3
sleep: 2: 31173648 -> 31173650 = 2
sleep: 2: 31173650 -> 31173653 = 3
sleep: 2: 31173653 -> 31173663 = 10
sleep: 1: 31173663 -> 31173665 = 2
sleep: 2: 31173665 -> 31173668 = 3
sleep: 2: 31173668 -> 31173671 = 3
sleep: 2: 31173671 -> 31173674 = 3
sleep: 2: 31173674 -> 31173683 = 9
sleep: 1: 31173683 -> 31173685 = 2
sleep: 2: 31173685 -> 31173688 = 3
sleep: 2: 31173688 -> 31173691 = 3
sleep: 2: 31173691 -> 31173693 = 2
sleep: 2: 31173693 -> 31173702 = 9
sleep: 1: 31173702 -> 31173704 = 2
sleep: 2: 31173704 -> 31173707 = 3
sleep: 2: 31173707 -> 31173710 = 3
sleep: 2: 31173710 -> 31173713 = 3
sleep: 2: 31173713 -> 31173722 = 9
sleep: 1: 31173722 -> 31173724 = 2
sleep: 2: 31173724 -> 31173727 = 3
sleep: 2: 31173727 -> 31173730 = 3
sleep: 2: 31173730 -> 31173733 = 3
sleep: 2: 31173733 -> 31173742 = 9
sleep: 1: 31173742 -> 31173744 = 2
sleep: 2: 31173744 -> 31173747 = 3
sleep: 2: 31173747 -> 31173750 = 3
sleep: 2: 31173750 -> 31173753 = 3
sleep: 2: 31173753 -> 31173762 = 9
sleep: 1: 31173762 -> 31173764 = 2
sleep: 2: 31173764 -> 31173767 = 3
sleep: 2: 31173767 -> 31173770 = 3
sleep: 2: 31173770 -> 31173773 = 3
sleep: 2: 31173773 -> 31173782 = 9
sleep: 1: 31173782 -> 31173784 = 2
sleep: 2: 31173784 -> 31173787 = 3
sleep: 2: 31173787 -> 31173790 = 3
sleep: 2: 31173790 -> 31173793 = 3
sleep: 2: 31173793 -> 31173803 = 10
sleep: 1: 31173803 -> 31173805 = 2
sleep: 2: 31173805 -> 31173808 = 3
sleep: 2: 31173808 -> 31173811 = 3
sleep: 2: 31173811 -> 31173814 = 3
sleep: 2: 31173814 -> 31173822 = 8
sleep: 1: 31173822 -> 31173824 = 2
sleep: 2: 31173824 -> 31173827 = 3
sleep: 2: 31173827 -> 31173830 = 3
sleep: 2: 31173830 -> 31173833 = 3
sleep: 2: 31173833 -> 31173843 = 10
sleep: 1: 31173843 -> 31173845 = 2
sleep: 2: 31173845 -> 31173848 = 3
sleep: 2: 31173848 -> 31173851 = 3
sleep: 2: 31173851 -> 31173854 = 3
sleep: 2: 31173854 -> 31173862 = 8
sleep: 1: 31173862 -> 31173864 = 2
sleep: 2: 31173864 -> 31173867 = 3
sleep: 2: 31173867 -> 31173870 = 3
sleep: 2: 31173870 -> 31173873 = 3
sleep: 2: 31173873 -> 31173883 = 10
sleep: 1: 31173883 -> 31173885 = 2
sleep: 2: 31173885 -> 31173888 = 3
sleep: 2: 31173888 -> 31173891 = 3
sleep: 2: 31173891 -> 31173894 = 3
sleep: 2: 31173894 -> 31173902 = 8
sleep: 1: 31173902 -> 31173904 = 2
sleep: 2: 31173904 -> 31173907 = 3
sleep: 2: 31173907 -> 31173910 = 3
sleep: 2: 31173910 -> 31173913 = 3
sleep: 2: 31173913 -> 31173923 = 10
sleep: 1: 31173923 -> 31173925 = 2
sleep: 2: 31173925 -> 31173928 = 3
sleep: 2: 31173928 -> 31173931 = 3
sleep: 2: 31173931 -> 31173934 = 3
sleep: 2: 31173934 -> 31173942 = 8
sleep: 1: 31173942 -> 31173944 = 2
sleep: 2: 31173944 -> 31173947 = 3
sleep: 2: 31173947 -> 31173950 = 3
sleep: 2: 31173950 -> 31173953 = 3
sleep: 2: 31173953 -> 31173963 = 10
sleep: 1: 31173963 -> 31173965 = 2
sleep: 2: 31173965 -> 31173968 = 3
sleep: 2: 31173968 -> 31173971 = 3
sleep: 2: 31173971 -> 31173974 = 3
sleep: 2: 31173974 -> 31173983 = 9
sleep: 1: 31173983 -> 31173984 = 1
sleep: 2: 31173984 -> 31173987 = 3
sleep: 2: 31173987 -> 31173990 = 3
sleep: 2: 31173990 -> 31173993 = 3
sleep: 2: 31173993 -> 31174002 = 9
sleep: 1: 31174002 -> 31174004 = 2
sleep: 2: 31174004 -> 31174007 = 3
sleep: 2: 31174007 -> 31174010 = 3
sleep: 2: 31174010 -> 31174013 = 3
sleep: 2: 31174013 -> 31174023 = 10
sleep: 1: 31174023 -> 31174025 = 2
sleep: 2: 31174025 -> 31174027 = 2
sleep: 2: 31174027 -> 31174030 = 3
sleep: 2: 31174030 -> 31174033 = 3
sleep: 2: 31174033 -> 31174042 = 9
sleep: 1: 31174042 -> 31174044 = 2
sleep: 2: 31174044 -> 31174047 = 3
sleep: 2: 31174047 -> 31174050 = 3
sleep: 2: 31174050 -> 31174053 = 3
sleep: 2: 31174053 -> 31174063 = 10
sleep: 1: 31174063 -> 31174065 = 2
sleep: 2: 31174065 -> 31174067 = 2
sleep: 2: 31174067 -> 31174070 = 3
sleep: 2: 31174070 -> 31174073 = 3
sleep: 2: 31174073 -> 31174082 = 9
sleep: 1: 31174082 -> 31174084 = 2
sleep: 2: 31174084 -> 31174087 = 3
sleep: 2: 31174087 -> 31174090 = 3
sleep: 2: 31174090 -> 31174093 = 3
sleep: 2: 31174093 -> 31174103 = 10
sleep: 1: 31174103 -> 31174105 = 2
sleep: 2: 31174105 -> 31174108 = 3
sleep: 2: 31174108 -> 31174110 = 2
sleep: 2: 31174110 -> 31174113 = 3
sleep: 2: 31174113 -> 31174122 = 9
sleep: 1: 31174122 -> 31174124 = 2
sleep: 2: 31174124 -> 31174127 = 3
sleep: 2: 31174127 -> 31174130 = 3
sleep: 2: 31174130 -> 31174133 = 3
sleep: 2: 31174133 -> 31174143 = 10
sleep: 1: 31174143 -> 31174145 = 2
sleep: 2: 31174145 -> 31174148 = 3
sleep: 2: 31174148 -> 31174150 = 2
sleep: 2: 31174150 -> 31174153 = 3
sleep: 2: 31174153 -> 31174162 = 9
sleep: 1: 31174162 -> 31174164 = 2
sleep: 2: 31174164 -> 31174167 = 3
sleep: 2: 31174167 -> 31174170 = 3
sleep: 2: 31174170 -> 31174173 = 3
sleep: 2: 31174173 -> 31174183 = 10
sleep: 1: 31174183 -> 31174185 = 2
sleep: 2: 31174185 -> 31174188 = 3
sleep: 2: 31174188 -> 31174191 = 3
sleep: 2: 31174191 -> 31174193 = 2
sleep: 2: 31174193 -> 31174202 = 9
sleep: 1: 31174202 -> 31174204 = 2
sleep: 2: 31174204 -> 31174207 = 3
sleep: 2: 31174207 -> 31174210 = 3
sleep: 2: 31174210 -> 31174213 = 3
sleep: 2: 31174213 -> 31174222 = 9
sleep: 1: 31174222 -> 31174224 = 2
sleep: 2: 31174224 -> 31174227 = 3
sleep: 2: 31174227 -> 31174230 = 3
sleep: 2: 31174230 -> 31174233 = 3
sleep: 2: 31174233 -> 31174242 = 9
sleep: 1: 31174242 -> 31174244 = 2
sleep: 2: 31174244 -> 31174247 = 3
sleep: 2: 31174247 -> 31174250 = 3
sleep: 2: 31174250 -> 31174253 = 3
sleep: 2: 31174253 -> 31174263 = 10
sleep: 1: 31174263 -> 31174265 = 2
sleep: 2: 31174265 -> 31174268 = 3
sleep: 2: 31174268 -> 31174271 = 3
sleep: 2: 31174271 -> 31174274 = 3
sleep: 2: 31174274 -> 31174282 = 8
sleep: 1: 31174282 -> 31174284 = 2
sleep: 2: 31174284 -> 31174287 = 3
sleep: 2: 31174287 -> 31174290 = 3
sleep: 2: 31174290 -> 31174293 = 3
sleep: 2: 31174293 -> 31174303 = 10
sleep: 1: 31174303 -> 31174305 = 2
sleep: 2: 31174305 -> 31174308 = 3
sleep: 2: 31174308 -> 31174311 = 3
sleep: 2: 31174311 -> 31174314 = 3
sleep: 2: 31174314 -> 31174322 = 8
sleep: 1: 31174322 -> 31174324 = 2
sleep: 2: 31174324 -> 31174327 = 3
sleep: 2: 31174327 -> 31174330 = 3
sleep: 2: 31174330 -> 31174333 = 3
sleep: 2: 31174333 -> 31174342 = 9
sleep: 1: 31174343 -> 31174345 = 2
sleep: 2: 31174345 -> 31174348 = 3
sleep: 2: 31174348 -> 31174351 = 3
sleep: 2: 31174351 -> 31174354 = 3
sleep: 2: 31174354 -> 31174362 = 8
sleep: 1: 31174362 -> 31174364 = 2
sleep: 2: 31174364 -> 31174367 = 3
sleep: 2: 31174367 -> 31174370 = 3
sleep: 2: 31174370 -> 31174373 = 3
sleep: 2: 31174373 -> 31174382 = 9
sleep: 1: 31174382 -> 31174384 = 2
sleep: 2: 31174384 -> 31174387 = 3
sleep: 2: 31174387 -> 31174390 = 3
sleep: 2: 31174390 -> 31174393 = 3
sleep: 2: 31174393 -> 31174402 = 9
sleep: 1: 31174402 -> 31174404 = 2
sleep: 2: 31174404 -> 31174407 = 3
sleep: 2: 31174407 -> 31174410 = 3
sleep: 2: 31174410 -> 31174413 = 3
sleep: 2: 31174413 -> 31174422 = 9
sleep: 1: 31174423 -> 31174425 = 2
sleep: 2: 31174425 -> 31174428 = 3
sleep: 2: 31174428 -> 31174431 = 3
sleep: 2: 31174431 -> 31174434 = 3
sleep: 2: 31174434 -> 31174442 = 8
sleep: 1: 31174442 -> 31174444 = 2
sleep: 2: 31174444 -> 31174447 = 3
sleep: 2: 31174447 -> 31174450 = 3
sleep: 2: 31174450 -> 31174453 = 3
sleep: 2: 31174453 -> 31174462 = 9
sleep: 1: 31174462 -> 31174465 = 3
sleep: 2: 31174465 -> 31174468 = 3
sleep: 2: 31174468 -> 31174471 = 3
sleep: 2: 31174471 -> 31174474 = 3
sleep: 2: 31174474 -> 31174483 = 9
sleep: 1: 31174483 -> 31174484 = 1
sleep: 2: 31174484 -> 31174487 = 3
sleep: 2: 31174487 -> 31174490 = 3
sleep: 2: 31174490 -> 31174493 = 3
sleep: 2: 31174493 -> 31174503 = 10
sleep: 1: 31174503 -> 31174505 = 2
sleep: 2: 31174505 -> 31174508 = 3
sleep: 2: 31174508 -> 31174511 = 3
sleep: 2: 31174511 -> 31174514 = 3
sleep: 2: 31174514 -> 31174523 = 9
sleep: 1: 31174523 -> 31174525 = 2
sleep: 2: 31174525 -> 31174527 = 2
sleep: 2: 31174527 -> 31174530 = 3
sleep: 2: 31174530 -> 31174533 = 3
sleep: 2: 31174533 -> 31174542 = 9
sleep: 1: 31174542 -> 31174544 = 2
sleep: 2: 31174544 -> 31174547 = 3
sleep: 2: 31174547 -> 31174550 = 3
sleep: 2: 31174550 -> 31174553 = 3
sleep: 2: 31174553 -> 31174563 = 10
sleep: 1: 31174563 -> 31174565 = 2
sleep: 2: 31174565 -> 31174567 = 2
sleep: 2: 31174567 -> 31174570 = 3
sleep: 2: 31174570 -> 31174573 = 3
sleep: 2: 31174573 -> 31174582 = 9
sleep: 1: 31174582 -> 31174584 = 2
sleep: 2: 31174584 -> 31174587 = 3
sleep: 2: 31174587 -> 31174590 = 3
sleep: 2: 31174590 -> 31174593 = 3
sleep: 2: 31174593 -> 31174602 = 9
sleep: 1: 31174603 -> 31174605 = 2
sleep: 2: 31174605 -> 31174608 = 3
sleep: 2: 31174608 -> 31174610 = 2
sleep: 2: 31174610 -> 31174613 = 3
sleep: 2: 31174613 -> 31174622 = 9
sleep: 1: 31174622 -> 31174624 = 2
sleep: 2: 31174624 -> 31174627 = 3
sleep: 2: 31174627 -> 31174630 = 3
sleep: 2: 31174630 -> 31174633 = 3
sleep: 2: 31174633 -> 31174643 = 10
sleep: 1: 31174643 -> 31174645 = 2
sleep: 2: 31174645 -> 31174648 = 3
sleep: 2: 31174648 -> 31174650 = 2
sleep: 2: 31174650 -> 31174653 = 3
sleep: 2: 31174653 -> 31174662 = 9
sleep: 1: 31174662 -> 31174664 = 2
sleep: 2: 31174664 -> 31174667 = 3
sleep: 2: 31174667 -> 31174670 = 3
sleep: 2: 31174670 -> 31174673 = 3
sleep: 2: 31174673 -> 31174683 = 10
sleep: 1: 31174683 -> 31174685 = 2
sleep: 2: 31174685 -> 31174688 = 3
sleep: 2: 31174688 -> 31174691 = 3
sleep: 2: 31174691 -> 31174693 = 2
sleep: 2: 31174693 -> 31174702 = 9
sleep: 1: 31174702 -> 31174704 = 2
sleep: 2: 31174704 -> 31174707 = 3
sleep: 2: 31174707 -> 31174710 = 3
sleep: 2: 31174710 -> 31174713 = 3
sleep: 2: 31174713 -> 31174722 = 9
sleep: 1: 31174723 -> 31174725 = 2
sleep: 2: 31174725 -> 31174728 = 3
sleep: 2: 31174728 -> 31174731 = 3
sleep: 2: 31174731 -> 31174734 = 3
sleep: 2: 31174734 -> 31174742 = 8
sleep: 1: 31174742 -> 31174744 = 2
sleep: 2: 31174744 -> 31174747 = 3
sleep: 2: 31174747 -> 31174750 = 3
sleep: 2: 31174750 -> 31174753 = 3
sleep: 2: 31174753 -> 31174763 = 10
sleep: 1: 31174763 -> 31174765 = 2
sleep: 2: 31174765 -> 31174768 = 3
sleep: 2: 31174768 -> 31174771 = 3
sleep: 2: 31174771 -> 31174774 = 3
sleep: 2: 31174774 -> 31174782 = 8
sleep: 1: 31174782 -> 31174784 = 2
sleep: 2: 31174784 -> 31174787 = 3
sleep: 2: 31174787 -> 31174790 = 3
sleep: 2: 31174790 -> 31174793 = 3
sleep: 2: 31174793 -> 31174803 = 10
sleep: 1: 31174803 -> 31174805 = 2
sleep: 2: 31174805 -> 31174808 = 3
sleep: 2: 31174808 -> 31174811 = 3
sleep: 2: 31174811 -> 31174814 = 3
sleep: 2: 31174814 -> 31174822 = 8
sleep: 1: 31174822 -> 31174824 = 2
sleep: 2: 31174824 -> 31174827 = 3
sleep: 2: 31174827 -> 31174830 = 3
sleep: 2: 31174830 -> 31174833 = 3
sleep: 2: 31174833 -> 31174843 = 10
sleep: 1: 31174843 -> 31174845 = 2
sleep: 2: 31174845 -> 31174848 = 3
sleep: 2: 31174848 -> 31174851 = 3
sleep: 2: 31174851 -> 31174854 = 3
sleep: 2: 31174854 -> 31174862 = 8
sleep: 1: 31174862 -> 31174864 = 2
sleep: 2: 31174864 -> 31174867 = 3
sleep: 2: 31174867 -> 31174870 = 3
sleep: 2: 31174870 -> 31174873 = 3
sleep: 2: 31174873 -> 31174882 = 9
sleep: 1: 31174882 -> 31174884 = 2
sleep: 2: 31174884 -> 31174887 = 3
sleep: 2: 31174887 -> 31174890 = 3
sleep: 2: 31174890 -> 31174893 = 3
sleep: 2: 31174893 -> 31174902 = 9
sleep: 1: 31174902 -> 31174904 = 2
sleep: 2: 31174904 -> 31174907 = 3
sleep: 2: 31174907 -> 31174910 = 3
sleep: 2: 31174910 -> 31174913 = 3
sleep: 2: 31174913 -> 31174923 = 10
sleep: 1: 31174923 -> 31174925 = 2
sleep: 2: 31174925 -> 31174928 = 3
sleep: 2: 31174928 -> 31174931 = 3
sleep: 2: 31174931 -> 31174934 = 3
sleep: 2: 31174934 -> 31174942 = 8
sleep: 1: 31174942 -> 31174944 = 2
sleep: 2: 31174944 -> 31174947 = 3
sleep: 2: 31174947 -> 31174950 = 3
sleep: 2: 31174950 -> 31174953 = 3
sleep: 2: 31174953 -> 31174962 = 9
sleep: 1: 31174962 -> 31174965 = 3
sleep: 2: 31174965 -> 31174968 = 3
sleep: 2: 31174968 -> 31174971 = 3
sleep: 2: 31174971 -> 31174974 = 3
sleep: 2: 31174974 -> 31174983 = 9
sleep: 1: 31174983 -> 31174984 = 1
sleep: 2: 31174984 -> 31174987 = 3
sleep: 2: 31174987 -> 31174990 = 3
sleep: 2: 31174990 -> 31174993 = 3
sleep: 2: 31174993 -> 31175002 = 9
sleep: 1: 31175002 -> 31175004 = 2
sleep: 2: 31175004 -> 31175007 = 3
sleep: 2: 31175007 -> 31175010 = 3
sleep: 2: 31175010 -> 31175013 = 3
sleep: 2: 31175013 -> 31175023 = 10
sleep: 1: 31175023 -> 31175025 = 2
sleep: 2: 31175025 -> 31175027 = 2
sleep: 2: 31175027 -> 31175030 = 3
sleep: 2: 31175030 -> 31175033 = 3
sleep: 2: 31175033 -> 31175042 = 9
sleep: 1: 31175042 -> 31175044 = 2
sleep: 2: 31175044 -> 31175047 = 3
sleep: 2: 31175047 -> 31175050 = 3
sleep: 2: 31175050 -> 31175053 = 3
sleep: 2: 31175053 -> 31175063 = 10
sleep: 1: 31175063 -> 31175065 = 2
sleep: 2: 31175065 -> 31175067 = 2
sleep: 2: 31175067 -> 31175070 = 3
sleep: 2: 31175070 -> 31175073 = 3
sleep: 2: 31175073 -> 31175082 = 9
sleep: 1: 31175082 -> 31175084 = 2
sleep: 2: 31175084 -> 31175087 = 3
sleep: 2: 31175087 -> 31175090 = 3
sleep: 2: 31175090 -> 31175093 = 3
sleep: 2: 31175093 -> 31175103 = 10
sleep: 1: 31175103 -> 31175105 = 2
sleep: 2: 31175105 -> 31175108 = 3
sleep: 2: 31175108 -> 31175110 = 2
sleep: 2: 31175110 -> 31175113 = 3
sleep: 2: 31175113 -> 31175123 = 10
sleep: 1: 31175123 -> 31175125 = 2
sleep: 2: 31175125 -> 31175128 = 3
sleep: 2: 31175128 -> 31175131 = 3
sleep: 2: 31175131 -> 31175134 = 3
sleep: 2: 31175134 -> 31175143 = 9
sleep: 1: 31175143 -> 31175145 = 2
sleep: 2: 31175145 -> 31175148 = 3
sleep: 2: 31175148 -> 31175150 = 2
sleep: 2: 31175150 -> 31175153 = 3
sleep: 2: 31175153 -> 31175162 = 9
sleep: 1: 31175162 -> 31175164 = 2
sleep: 2: 31175164 -> 31175167 = 3
sleep: 2: 31175167 -> 31175170 = 3
sleep: 2: 31175170 -> 31175173 = 3
sleep: 2: 31175173 -> 31175183 = 10
sleep: 1: 31175183 -> 31175185 = 2
sleep: 2: 31175185 -> 31175188 = 3
sleep: 2: 31175188 -> 31175191 = 3
sleep: 2: 31175191 -> 31175193 = 2
sleep: 2: 31175193 -> 31175202 = 9
sleep: 1: 31175202 -> 31175204 = 2
sleep: 2: 31175204 -> 31175207 = 3
sleep: 2: 31175207 -> 31175210 = 3
sleep: 2: 31175210 -> 31175213 = 3
sleep: 2: 31175213 -> 31175223 = 10
sleep: 1: 31175223 -> 31175225 = 2
sleep: 2: 31175225 -> 31175228 = 3
sleep: 2: 31175228 -> 31175231 = 3
sleep: 2: 31175231 -> 31175234 = 3
sleep: 2: 31175234 -> 31175242 = 8
sleep: 1: 31175242 -> 31175244 = 2
sleep: 2: 31175244 -> 31175247 = 3
sleep: 2: 31175247 -> 31175250 = 3
sleep: 2: 31175250 -> 31175253 = 3
sleep: 2: 31175253 -> 31175262 = 9
sleep: 1: 31175262 -> 31175264 = 2
sleep: 2: 31175264 -> 31175267 = 3
sleep: 2: 31175267 -> 31175270 = 3
sleep: 2: 31175270 -> 31175273 = 3
sleep: 2: 31175273 -> 31175282 = 9
sleep: 1: 31175282 -> 31175284 = 2
sleep: 2: 31175284 -> 31175287 = 3
sleep: 2: 31175287 -> 31175290 = 3
sleep: 2: 31175290 -> 31175293 = 3
sleep: 2: 31175293 -> 31175302 = 9
sleep: 1: 31175302 -> 31175304 = 2
sleep: 2: 31175304 -> 31175307 = 3
sleep: 2: 31175307 -> 31175310 = 3
sleep: 2: 31175310 -> 31175313 = 3
sleep: 2: 31175313 -> 31175322 = 9
sleep: 1: 31175322 -> 31175324 = 2
sleep: 2: 31175324 -> 31175327 = 3
sleep: 2: 31175327 -> 31175330 = 3
sleep: 2: 31175330 -> 31175333 = 3
sleep: 2: 31175333 -> 31175343 = 10
sleep: 1: 31175343 -> 31175345 = 2
sleep: 2: 31175345 -> 31175348 = 3
sleep: 2: 31175348 -> 31175351 = 3
sleep: 2: 31175351 -> 31175354 = 3
sleep: 2: 31175354 -> 31175362 = 8
sleep: 1: 31175362 -> 31175364 = 2
sleep: 2: 31175364 -> 31175367 = 3
sleep: 2: 31175367 -> 31175370 = 3
sleep: 2: 31175370 -> 31175373 = 3
sleep: 2: 31175373 -> 31175382 = 9
sleep: 1: 31175382 -> 31175384 = 2
sleep: 2: 31175384 -> 31175387 = 3



Again, better, but I'm still disappointed - the error is occasionally as large as 1000%. Note that even though the error seems to be spread out "predictably", in the sense that it's always the second-to-last (fourth) thread executed that goes off the scale, the time passed to Sleep() is always the same as in the first column, and the error doesn't stick to the same thread when the program is run several times; nor does it always limit itself to any single thread... Quick testing shows that all threads get their proper share to within 1 ms if the time allocated to a thread is at least 20 ms. That's not too long, but I'd still like to keep the minimum allotment within 3-5 ms, which would be considerably more suitable in my case, plus it would dramatically improve thread responsiveness.

Any ideas on how to improve on this (or am I stuck with increasing the granule size quite considerably to minimize the relative error)?
