Statistically guessing the next few frame lengths from past frame lengths

Hi,

I am designing a DirectX9 game, which contains a terrain system and a fly-over camera. For an algorithm I am designing at the moment, I need a way to guess, in milliseconds, how much time the next few frames (at most 10) will take, using the old frame length data. For example, I am holding the lengths of the last 200 frames in an array and I want to somehow guess the lengths of the incoming 4 frames in a logical way. I know there are many statistical estimation techniques, like the least squares method and the Kalman filter, but I am not great at statistical mathematics and I got easily lost and confused when I tried to google for next-point estimation techniques. There is a LOT of information to read and analyse, so I am trying my luck here. I just need a basic technique to do the job. Thanks in advance.
I'd assume just linearly extrapolating from the last two frames would be a reasonable estimate to start with. I'd only go further if that proves to be too inaccurate.
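Something like this minimal sketch, assuming frameTimes holds the recorded durations in milliseconds, newest last:

```cpp
// Minimal sketch: linear extrapolation from the last two frame times.
#include <vector>

double predictNextFrameTime(const std::vector<double>& frameTimes)
{
    const size_t n = frameTimes.size();
    if (n < 2)
        return n == 1 ? frameTimes[0] : 0.0;

    // Extend the line through the last two samples one step forward.
    double predicted = 2.0 * frameTimes[n - 1] - frameTimes[n - 2];

    // Frame times can't be negative; clamp a wild extrapolation.
    return predicted > 0.0 ? predicted : frameTimes[n - 1];
}
```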
Somehow I cannot submit edits to my last post...

What I wanted to add is that even just averaging some number of previous data points is probably a pretty good estimate.
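For example, a sketch (the window of 20 samples is an arbitrary starting point you would tune):

```cpp
// Minimal sketch: use the average of the last `window` frame times as
// the prediction for each of the next few frames.
#include <vector>
#include <algorithm>

double predictByAverage(const std::vector<double>& frameTimes, size_t window = 20)
{
    if (frameTimes.empty())
        return 0.0;

    const size_t count = std::min(window, frameTimes.size());
    double sum = 0.0;
    for (size_t i = frameTimes.size() - count; i < frameTimes.size(); ++i)
        sum += frameTimes[i];
    return sum / static_cast<double>(count);
}
```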
"I'd assume just linearly interpolating from the last two frames would be a reasonable estimate to start with. I'd only go further if that proves to be too inaccurate."

I tried this but unfortunately its results are way too inaccurate. I am thinking of applying the Kalman filter but I am not even sure whether it can handle a situation like this.
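Would even a scalar version be enough for something like this? Here is a sketch of what I have in mind from my reading (the noise constants q and r are pure guesses I would have to tune):

```cpp
// Sketch of a scalar Kalman filter treating frame time as a random walk.
// q = process noise (how fast the "true" frame time drifts),
// r = measurement noise (per-frame jitter); both would need tuning.
struct FrameTimeKalman
{
    double x = 16.0;   // state estimate: frame time in ms (initial guess)
    double p = 1.0;    // estimate variance
    double q = 0.05;   // process noise
    double r = 4.0;    // measurement noise

    // Feed in each measured frame time; returns the filtered estimate,
    // which also serves as the prediction for the next few frames.
    double update(double measuredMs)
    {
        p += q;                        // predict step (random walk model)
        double k = p / (p + r);        // Kalman gain
        x += k * (measuredMs - x);     // correct with the measurement
        p *= (1.0 - k);                // updated variance
        return x;
    }
};
```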
What are the accuracy requirements? Can you elaborate on what you are trying to do and why you need this prediction? The problem I see is that individual "frame times" can vary wildly depending on what your program does. Even worse, they can be flat-out non-deterministic, since things like your OS's process scheduler will also affect those times.

What are the accuracy requirements? Can you elaborate on what you are trying to do and why you need this prediction?


"I'd assume just linearly interpolating from the last two frames would be a reasonable estimate to start with. I'd only go further if that proves to be too inaccurate."

I tried this but unfortunately its results are way too inaccurate.


To echo japro's sentiment, it would be useful to see the data you are using and what 'too inaccurate' means to you.

As you say, there are lots of estimation techniques, but it would also be useful to know a little more about why you need this estimate. I understand that you need it for an algorithm you are working on, but I don't really know the constraints you have on this problem. I mean, it could be reasonable to assume a model where the frame duration is constant and the fluctuations are due to background processes (like japro suggested). Or maybe the frame duration is area dependent (in terms of the terrain), and maybe that could be precomputed.

-Josh

--www.physicaluncertainty.com
--linkedin
--irc.freenode.net#gdnet

First of all, thanks for the quick replies!
I will try to detail my situation as much as I can. I am rendering a heightmap-based terrain and calculating shadows for it. The sun is the only directional light source and is movable in my scenario; it follows a fixed trajectory across the sky. Since I assume the sun is a directional light source, it is represented only as a direction vector, like <2,-5,3> for the x, y and z coordinates respectively. The direction of the sun's light, let's say D, is a function of time, like D(t). For now, I only rotate the sun around the y axis with a fixed speed, say 100 degrees per second.

In my shadowing algorithm, I need to compute the direction of the sun rays for the current frame and a few incoming frames (like the next 4, 5 or 6 frames). I compute the shadows for those incoming frames in the current frame and then use the precomputed shadow information in the corresponding future frames. I am using CUDA acceleration for the shadow computation, so I can compute shadows for N frames in alpha*N time, with 0 < alpha < 1. Therefore, I increase the overall average performance when I do this precomputation in a single frame, in a single CUDA call, rather than on a per-frame basis.

Since the sun's direction is a function of time, I can evaluate it for the future frames in the current frame and do the shadow computations in a single CUDA kernel invocation. But while doing this, I made the following approximation: every frame of rendering takes approximately the same amount of time, so I can use the duration of a single frame as the unit time interval. This means that if a single frame takes time T to complete, then the sun's direction after N frames will be D(sT + NT), with s being the total number of frames rendered since the program started.
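Roughly, in code, the approximation looks like this (just a sketch; the base vector and the rotation math are simplified stand-ins for my actual setup):

```cpp
// Sketch of the constant-frame-time approximation described above.
// T is the duration of one frame in seconds, s the frames rendered so
// far, and sunDirection() stands in for my actual D(t).
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical D(t): rotate a base direction around the y axis at
// a fixed angular speed (100 degrees per second in my case).
Vec3 sunDirection(double tSeconds)
{
    const double degreesPerSecond = 100.0;
    const double angle = degreesPerSecond * tSeconds * 3.14159265358979 / 180.0;
    const Vec3 base = { 2.0f, -5.0f, 3.0f };   // example vector from above
    return Vec3 {
        static_cast<float>( base.x * std::cos(angle) + base.z * std::sin(angle)),
        base.y,
        static_cast<float>(-base.x * std::sin(angle) + base.z * std::cos(angle))
    };
}

// Direction N frames ahead under the constant-T assumption: D(sT + NT).
Vec3 predictedSunDirection(double T, long long s, int N)
{
    return sunDirection((s + N) * T);
}
```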

This assumption is actually not correct, since rendering a single frame can take different amounts of time depending on the position of the fly-over camera. If the camera is looking down at a small part of the terrain, the frame rate increases to 300-400 fps. If the viewing frustum of the camera contains too much terrain geometry to render, the frame rate drops to 100-150 fps. The camera is user controlled via keyboard and mouse input. My former assumption results in the sun moving too slowly when the frame rate is low and unnecessarily fast when the frame rate is high. What I would prefer is a sun movement speed that is as frame rate independent as possible. If I had a good guess of how long the incoming N frames (N is at most 10) will take to render, then I could compute the sun direction at each frame according to this guess, and that would result in a nearly frame rate independent sun movement.

A good part of this situation is that the frame rate won't change rapidly over successive frames, because there is a kind of "spatial locality": if the camera views a complex scene which renders slowly, then the next few frames will probably render at similar speeds, since we would still be viewing terrain of similar complexity.
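Given this locality, would something as simple as an exponentially weighted moving average be the right tool? Here is a sketch of what I mean (the smoothing constant of 0.2 is just a guess I would have to tune):

```cpp
// Sketch: exponentially weighted moving average as a frame-time forecaster.
// Recent frames dominate the estimate, which matches the "spatial
// locality" above. `smoothing` in (0, 1] controls how quickly old
// frames are forgotten.
class EwmaFrameTimePredictor
{
public:
    explicit EwmaFrameTimePredictor(double smoothing = 0.2)
        : m_smoothing(smoothing), m_estimateMs(16.0), m_initialized(false) {}

    void addSample(double frameTimeMs)
    {
        if (!m_initialized) { m_estimateMs = frameTimeMs; m_initialized = true; }
        else m_estimateMs += m_smoothing * (frameTimeMs - m_estimateMs);
    }

    // The same estimate is used for each of the next few frames.
    double predictMs() const { return m_estimateMs; }

private:
    double m_smoothing;
    double m_estimateMs;
    bool   m_initialized;
};
```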

So, I am looking for a method to forecast the duration of the next few frames by analysing the rendering durations of the old frames in this situation. Any help would be highly appreciated...
If a good frame is 2.5ms and a bad frame is 10ms, then you can avoid tackling the problem at all by fixing your frame rate to the refresh rate (e.g. 60Hz / 16.6ms).
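A rough sketch of that idea, assuming a 60Hz display and a device created with vsync enabled:

```cpp
// Sketch: lock presentation to the display refresh (vsync), then advance
// the sun by a fixed timestep. Prediction becomes trivial: every future
// frame is exactly one refresh interval away.
#include <windows.h>
#include <d3d9.h>

void setupPresentParams(D3DPRESENT_PARAMETERS& pp)
{
    ZeroMemory(&pp, sizeof(pp));
    pp.Windowed             = TRUE;
    pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
    pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;  // wait for vblank
    // ... fill in the rest and create the device as usual ...
}

const double kFixedFrameSeconds = 1.0 / 60.0;  // assuming a 60Hz display

// Called once per frame: with vsync on, sunTime stays in step with the
// wall clock as long as each frame actually finishes within 16.6ms.
void advanceSun(double& sunTime)
{
    sunTime += kFixedFrameSeconds;
    // D(sunTime + N * kFixedFrameSeconds) is now exact for future frames.
}
```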

