Time limited computation

Started by
29 comments, last by Storyyeller 12 years, 6 months ago
Smells like bug ;-)

Again, I seriously doubt you have something on your machine that's swiping 30ms of time from your process, barring something really silly like setting a low thread priority on your app.


Manual profiling is pretty easy; use something like QueryPerformanceCounter to grab a high-resolution time at various points in your code, and compute how long each section is taking. A little RAII wrapper can get you automatic timings every time a function enters or exits, and some cleverness can generate you a nice formatted call graph showing exactly how long your instrumented functions are taking to run.

Of course, if you're still doing this on your local machine, just use an external sampling profiler and see where the extra 20-30ms is going...

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

I went ahead and decided to just check the time at the beginning of every iteration, and return early when there's only 20ms left. Most of the time it works pretty well. However, it will sometimes randomly timeout anyway for no discernible reason.

20ms is the usual time slice of an OS scheduler. So if your thread's last slice gets pre-empted even once near the end, you'll miss the deadline.

Various tests indicate that on EC2 and other virtualized hosts the hypervisor may regularly be as much as one second late (as measured by the wall clock).

The only thing I can think of is that the OS is making my process go to sleep for more than 20ms. Is this likely? What can I do to prevent it?

Unless you install a hard real-time OS, nothing. Really, it cannot be done. Between the hypervisor and the OS scheduler, getting 250ms accuracy can be considered very good.

If the recursive call
I seem to have written something about recursion already, so I won't go into it again, except to say that it really shouldn't be used.
I started out using an explicit stack, but I found that the code was simpler when using recursion. Do you think it's worthwhile to rewrite it to avoid recursion?
I trust exceptions about as far as I can throw them.

I started out using an explicit stack


A rose by any other name...

Do you think it's worthwhile to rewrite it to avoid recursion?

It's more about avoiding recursive algorithms, regardless of how they're implemented. For example, Fibonacci coded recursively or with a manual stack is the same exponential algorithm. Using the closed-form golden-ratio formula or memoization is a better algorithm.

Quicksort is another example. A battle-tested algorithm, proven to be fastest by almost all metrics. Yet almost all examples given in textbooks contain a nasty n^2 stack growth. Trivially fixed by adding a single line, yet subtle enough to be missed by most.


And if you're relying on timing, make sure the cut-off margin is several hundred milliseconds, or some 30-50% of the given time. During testing everything will work fine, but during the finals, with many competitors, the machines will be under considerably more load, increasing the probability of inaccurate timing.

just use an external sampling profiler

Problems like this cannot be profiled, especially if they're related to the scheduler quantum. If it's caused by the hypervisor, the profiler simply won't see anything; only the wall clock will jump forward. The same goes for I/O problems (another VM performing bulk disk I/O of several megabytes can have a terrible impact on all other instances on the same host).
Well in this case I'm trying to solve a constraint optimization problem. The current algorithm I'm using is backtracking, which is inherently recursive. Do you have any ideas for a better algorithm?
I trust exceptions about as far as I can throw them.
When in doubt, look to the experts. This is last year's winner: https://github.com/a...pp/MyTronBot.cc

Of note are the functions:

void _alrm_handler(int sig) { _timed_out = true; }

int main(int argc, char **argv) {
    ...
    signal(SIGALRM, _alrm_handler);
    ...
}


He looks to be using some form of minimax, namely:

_alphabeta(...) {
    ...
    for (int _m = -1; _m < 4 && !_timed_out; _m++) {
        ...
        _alphabeta(...);
        ...
    }
    ...
}

Quicksort is another example. A battle-tested algorithm, proven to be fastest by almost all metrics. Yet almost all examples given in textbooks contain a nasty n^2 stack growth. Trivially fixed by adding a single line, yet subtle enough to be missed by most.


Perhaps I am one of "most", but I don't remember having seen any description of quicksort that contains n^2 stack growth. You only need constant stack space per function call, and the number of calls is O(n*log(n)), with the stack depth at any given time being just O(log(n)).
Worst case for quicksort is O(n*n). If you pick the wrong half of the array to recurse into, you get the excessive stack growth. Pick the correct (smaller) half and you need at most log(n) stack space.
Oh, I see. And you do the larger half using tail recursion.


Perhaps I am one of "most", but I don't remember having seen any description of quicksort that contains n^2 stack growth.


n, not n^2; a typo. Still, for the stack, n is bad enough.

Worst case partitions into (n-1, 1), leaving you with a stack n deep. Solved by always recursing into the smaller of the two partitions, which bounds the depth at log(n).

This topic is closed to new replies.
