I have entities in the scene that update part of their state using the results of an external script. There's no upper limit (yet) on how many of these entities there can be, so in theory a scene could have five or five hundred. I say "external" because there is no interface from Java (or C, or anything else) to the script environment: it's a separate process that takes a script file and an inputs file from the terminal and produces an outputs file with the results of its processing.

I can handle running the scripts and translating data back and forth just fine. However, each execution takes long enough (around 10 ms on my dev machine) that I can't iterate through every entity in the update loop without causing significant delays once a scene has more than a couple of them running. I'm working on optimizing everything I do to communicate between the game and the script process, but (on this machine) there is a hard minimum of around 3 ms per execution. Hence I'm looking into how best to divvy up the work.
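For context, here's a simplified version of how one execution works: write the inputs file, launch the process, wait for it, and read back the outputs file. The command, the timeout, and the file handling here are placeholders; the real code does more data translation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.TimeUnit;

// Simplified wrapper for one run of the external script process.
public class ScriptRunner {

    // Writes inputData to the inputs file, runs the given command, and
    // returns the contents of the outputs file the process produced.
    public static String runOnce(List<String> command, Path inputs, Path outputs,
                                 String inputData) throws IOException, InterruptedException {
        Files.writeString(inputs, inputData);
        Process p = new ProcessBuilder(command)
                .redirectErrorStream(true)
                .start();
        // Don't let a stuck script hang the caller indefinitely.
        if (!p.waitFor(5, TimeUnit.SECONDS)) {
            p.destroyForcibly();
            throw new IOException("script process timed out");
        }
        return Files.readString(outputs);
    }
}
```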
My current ideas are something like this:
- Handle as many objects in the update loop as is possible within X ms.
- Use a cached thread pool ExecutorService.
- Use a fixed thread pool ExecutorService.
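For reference, the second and third options are just the two `Executors` factory methods; the pool size here is illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// The two pool flavors I'm weighing.
public class PoolSetup {

    // Fixed pool: at most `threads` script executions in flight at once;
    // the rest of the entities' requests queue up behind them.
    public static ExecutorService fixedPool(int threads) {
        return Executors.newFixedThreadPool(threads);
    }

    // Cached pool: reuses idle threads and creates new ones as needed,
    // with no upper bound on the thread count.
    public static ExecutorService cachedPool() {
        return Executors.newCachedThreadPool();
    }
}
```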
I'm not nuts about the first one because the scripts should run as close as possible to their given update rate (each script has a configurable clock rate) or things could, in theory, go awry.

To explain a little: the script process is actually a Verilog synthesizer and simulator. Each "script" is a Verilog file describing some hardware component that the process simulates. Each component might have a clock, and it helps if they run as close as possible to their expected clock rates or timing can get fussy. Obviously the OS scheduler doesn't allow things to happen in real time, nor anywhere near the accuracy one would hope for from real hardware, but it's a plus if they're delayed as little as possible. Pushing script updates to the next game update is something I'm hoping to avoid, though I'll point out that I haven't tested yet to see how much of a problem it actually is. Thorough testing will be difficult, since this game is meant to be used by others and I have no way of knowing what sort of hardware they might try to simulate.
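To illustrate the clock-rate requirement, one sketch I've considered is a shared `ScheduledExecutorService` that ticks each component on its own configured period. The pool size and period handling here are placeholders:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch: drive each script near its configured clock rate with a shared
// scheduler. Pool size is illustrative.
public class ClockedScripts {
    private final ScheduledExecutorService scheduler =
            Executors.newScheduledThreadPool(4);

    // periodMillis would come from the script's configured clock rate.
    public ScheduledFuture<?> register(Runnable simulateOneTick, long periodMillis) {
        // scheduleAtFixedRate tries to hold the average rate even if one
        // tick runs long, and never overlaps ticks of the same task.
        return scheduler.scheduleAtFixedRate(
                simulateOneTick, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdownNow();
    }
}
```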
I'm going to try implementing the third idea first because it feels the most "right" to me, though I have no real justification for that at the moment. My only concern with the second is this scenario: the game sits idle for a bit, the cached pool's threads are released, and then a wave of 150 update requests arrives at once. The documentation says a cached pool creates threads as needed. Does that mean it would try to create 150 threads at once?
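If the cached pool does behave that way, my fallback would be to build the pool by hand with `ThreadPoolExecutor`, so it's capped like a fixed pool but still releases idle threads like a cached one. The cap here is illustrative:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A middle ground between the two options: a hard cap on threads, a queue
// for any burst of requests beyond the cap, and idle threads that time out.
public class BoundedPool {

    public static ThreadPoolExecutor create(int maxThreads) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxThreads, maxThreads,        // grow to the cap, never past it
                60L, TimeUnit.SECONDS,         // idle threads die after a minute...
                new LinkedBlockingQueue<>());  // ...and excess tasks wait here
        pool.allowCoreThreadTimeOut(true);     // let even "core" threads time out
        return pool;
    }
}
```

(Keeping core size equal to max with an unbounded queue matters here: with a small core size and an unbounded queue, the pool would never grow past the core count, since extra threads are only created when the queue is full.)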
Does anyone have some experience with a situation like this, and if so, what worked for you?