Time-limited computation

In AI competitions, execution time is usually limited, so it is necessary to ensure that your function always returns within the cutoff to avoid being disqualified.

What is the most efficient way to ensure this? The only method I can think of is to sprinkle timer checks throughout the code and return early if any of them trips (with a small margin for safety). But is there a better way? If not, how do you decide how often to check the time? Profiling on a remote server is not usually feasible.
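For concreteness, the pattern I have in mind looks roughly like this (a minimal sketch; start_turn, the margin, and the loop body are all made up for illustration):

#include <time.h>

static clock_t deadline;            /* absolute cutoff for this turn */

/* call once per turn; margin_ms is the safety margin mentioned above */
void start_turn(double limit_ms, double margin_ms) {
    deadline = clock()
             + (clock_t)((limit_ms - margin_ms) * CLOCKS_PER_SEC / 1000.0);
}

int out_of_time(void) {
    return clock() >= deadline;
}

void search(void) {
    while (!out_of_time()) {
        /* ... one cheap unit of work between checks ... */
    }
    /* return with the best answer found so far */
}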
I trust exceptions about as far as I can throw them.


What is the most efficient way to ensure this?


Use the optimal algorithm the problem requires.

All competitions are about figuring out the right representation of the problem and implementing it. If you choose the correct one, the solution will complete within the required time, regardless of language or other details.

If not, then you simply didn't find the correct solution.

In almost all cases it looks like this: the correct solution takes 1.4 ms in C, 1.8 ms in Python, and 3.4 ms in Ruby. The incorrect algorithm takes 48 days in C, 2.8 years in Python, and 55.8 years in Ruby. Timers don't matter.
What's the value in returning if you don't have the answer? Is not having an answer worse than having a wrong one?
Sorry, I guess I wasn't clear enough. I was referring to AI competitions in particular. Unless you can exactly solve the entire game, being able to spend more time is always beneficial, but you have to cut it off at some point. Furthermore, you generally have an answer that is iteratively improved.

For example, in a chess game with iterative deepening, you search to a certain depth, then to a deeper depth, and so on until you run out of time. But you always have an answer, just not an optimal one.
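A rough sketch of that loop, in C for concreteness (search_to_depth and out_of_time are assumed helpers, not any particular engine's API):

typedef int Move;

Move search_to_depth(int depth);  /* assumed: complete fixed-depth search */
int  out_of_time(void);           /* assumed: deadline check */

Move choose_move(void) {
    Move best = search_to_depth(1);        /* depth 1 always completes */
    for (int depth = 2; !out_of_time(); ++depth) {
        Move candidate = search_to_depth(depth);
        if (out_of_time())
            break;                 /* this pass may have been cut short */
        best = candidate;          /* only keep fully completed passes */
    }
    return best;                   /* never return empty-handed */
}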



What's the value in returning if you don't have the answer? Is not having an answer worse than having a wrong one?

Being disqualified and instantly losing is generally worse than not doing anything.
I trust exceptions about as far as I can throw them.

Sorry, I guess I wasn't clear enough. I was referring to AI competitions in particular.


AI competitions are about score. Time is rarely a factor, since contests take weeks or months and what matters is the best score, not how long it took to obtain it, as long as the best results are sent in by a certain date.


Or maybe there is one particular contest you have in mind with a certain set of rules, in which case it might be helpful to know which one, so that its individual rules can be examined.
Here's the one I'm currently attempting:
http://aichallenge.org/index.php
I trust exceptions about as far as I can throw them.
The running environment for this problem is hard real time, so a "proper" solution must obey the same rules: all operations must be deterministic and all operations must be constant time. Java would need to avoid GC at all costs due to its non-determinism, for example.


While ad-hoc timing via aborts might work, it does not guarantee any kind of improvement over a deterministic, fixed-size search space. Either you know that each particular step will conclude within a given time, or you risk timing out locally and ending without a solution, or at least without a better one.


So for each step you need to know the worst-case time for one step of simulation/improvement. Since there are constraints on the input, these times can be accurately determined. Then you know precisely how long each step takes (pathfinding, visibility test, etc.). Perform enough of them to stay within the time limit; this will be the best result attainable. If there is still time left, run the improvement step. If a normal step takes 200 ms in the worst case, run 4 of them. If the first few improvements take 20, 20, 30, 20, 20, 40, 20 ms (170 ms total), then you can reliably run 4 more, then repeat the estimation. It becomes a discrete problem.
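A minimal sketch of that budgeting loop, assuming a deterministic improve_once() step with a known worst case:

#include <time.h>

void improve_once(void);   /* assumed: one deterministic refinement step */

static double elapsed_ms(clock_t since) {
    return (clock() - since) * 1000.0 / CLOCKS_PER_SEC;
}

void improve(double budget_ms, double worst_case_ms) {
    clock_t start = clock();
    for (;;) {
        double remaining = budget_ms - elapsed_ms(start);
        int fits = (int)(remaining / worst_case_ms);  /* steps that surely fit */
        if (fits < 1)
            break;
        for (int i = 0; i < fits; ++i)
            improve_once();
        /* then loop and re-estimate with the time actually left */
    }
}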

Ideally, algorithms should be bounded by discrete parameters, not wall time. So when searching, let each ant examine up to 10 steps (or similar), as sketched below. Time and space complexity can then be accurately determined.
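For example, roughly (Ant, expand_frontier, and the constant are all illustrative):

typedef struct Ant Ant;          /* opaque game type, illustrative only */
void expand_frontier(Ant *ant);  /* assumed: examine one more tile */

enum { MAX_STEPS = 10 };

void explore(Ant *ant) {
    /* exactly MAX_STEPS expansions per ant: total work is
       O(ants * MAX_STEPS), independent of the wall clock */
    for (int step = 0; step < MAX_STEPS; ++step)
        expand_frontier(ant);
}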

While at first it may seem that an unbounded search with a local timeout would produce a potentially better result, it would not. Either the unbounded search exhausts the same resources as the bounded one, or it times out in some non-deterministic state, which will, given the same algorithm, be no better than the bounded one.
But unless you're running on bare metal, it's impossible to ensure real-time performance. Even if you're deterministic, the OS isn't, and that's not even considering the impact of cache usage, paging, etc. And there's no feasible way to do profiling on their server anyway.
I trust exceptions about as far as I can throw them.
Of course you can profile. You just have to profile on the fly, while you're running your algorithm. This is mostly an issue of algorithm design, as Antheus has pointed out.

Here's one possible approach: divide your process into iterative refinement steps. Determine how long step N+1 will take in relation to step N. Then, every time you compute some step Q on the actual server, use your theoretically sound relation between T(N+1) and T(N) to compute T(Q+1). If that time overflows your remaining time limit, then stop computing.
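Concretely, something like this (run_step and the growth factor are assumptions for illustration):

#include <time.h>

void run_step(int q);   /* assumed: one refinement step, cost grows each step */

void refine(double budget_ms, double growth) {
    clock_t start = clock();
    double last_ms = 0.0, used_ms = 0.0;
    for (int q = 0; ; ++q) {
        /* predict T(Q+1) from T(Q) via the known growth relation */
        double predicted = (q == 0) ? 1.0 : last_ms * growth;
        if (used_ms + predicted > budget_ms)
            break;                        /* next step would overflow */
        clock_t t0 = clock();
        run_step(q);
        last_ms = (clock() - t0) * 1000.0 / CLOCKS_PER_SEC;
        used_ms = (clock() - start) * 1000.0 / CLOCKS_PER_SEC;
    }
}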

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Another piece of advice: never use recursion.

Simple example:

int compute(...) {                // pseudocode: recursive search
    for (every i in input) {      // branching factor: size of input
        add(compute(i));          // recurses on every element
    }
    return best();
}
Simple, clean, elegant. And an absolute disaster, especially when doing heuristics. What is the complexity of the above?

Instead, compare this version:
int level0[MAX];
int level1[MAX*MAX];
int level2[MAX*MAX*MAX];

void compute0to1() {} //
void compute1to2() {} // all same for loop
void compute2to3() {} //


Something becomes painfully obvious: MAX*MAX*MAX*..., or n^m.

The latter style of algorithm is vastly preferred, especially given that input sizes are well defined. Since it's fully deterministic, it's also easy to profile - just use the running time of one step. If it's enough, it'll always be enough, and you can try increasing by one more level.
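A sketch of that on-the-fly profiling, assuming an expand_level() helper (the "same for loop" above) and an illustrative MAX:

#include <time.h>

void expand_level(int level);       /* assumed: fills level+1 from level */

enum { MAX = 8 };                   /* illustrative branching factor */

void search(int max_levels, double budget_ms) {
    double used_ms = 0.0, last_ms = 0.0;
    for (int lv = 0; lv < max_levels; ++lv) {
        /* each level is exactly MAX times larger than the last,
           so one measured level predicts the cost of the next */
        if (lv > 0 && used_ms + last_ms * MAX > budget_ms)
            break;
        clock_t t0 = clock();
        expand_level(lv);
        last_ms = (clock() - t0) * 1000.0 / CLOCKS_PER_SEC;
        used_ms += last_ms;
    }
}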

And there's no feasible way to do profiling on their server anyway.

It doesn't matter.

Develop your algorithm.
Write it.
Run it.

If it's fast enough, it will work. If it's not, the algorithm is too complex and not suitable for the constraints of the competition.

It's not about micro-optimizations. It's about quicksort vs. bubblesort.

