

constant velocity problem



hi. I've got the following problem: every time I'm coding an OpenGL program I'm adjusting all the values for rotations, translations etc. so that they work just fine on the machine I'm working at. Now let's say I take my program from my computer at work (P2-400, ATI graphics) to my computer at home (K7-600, GeForce DDR), or even better, I give it to some of my friends (they're running GHz machines). Now all my routines run way too fast, and it's very ugly to watch, or it's even so fast that you can't recognize what's happening on the screen.

Is there an easy way to get a constant timescale where you can specify things like "degrees per second" for rotations or "move x units per second" for translations? I mean: use the power of high-end machines to run things smoother, NOT faster. Of course this is possible, because all first person shooters etc. do such things, but I can't find anything about it, and it seems that even the tutorials and NeHe's stuff don't care about it.

Today I tried the following: with "time()" and "_ftime()" (you have to include <sys/timeb.h> for that) I measured the time a single frame needs to render. With a given condition like "move x units along the x-axis within one second" and the measurement that e.g. the frame needed 20 milliseconds to render, you can calculate the distance to translate in each render loop so that you actually arrive at x units after one second (the render time is re-measured each frame, so the step size adjusts dynamically). Sounds nice, but there's a problem: the minimum amount of time that can be measured is 1 millisecond (I have to avoid zero because the scene wouldn't move at all with that value). If the frames render in under 1 millisecond, the problem stays. I tried measuring the time for 10 rendered frames instead, but that becomes a problem once the scene gets more complex and the average framerate drops (in the worst case here, below 10/sec).

So, the idea of calculating the time one frame needs to render and then adjusting my values dynamically seemed quite nice to me, but I'm not sure it's the right thing. Maybe some of you have better solutions, or at least some websites where I can find information about it. Thanks for your help. Oh, one last thing: I'm something of a newbie to OpenGL and to this forum, so don't flame me if I've done anything wrong here, and sorry for my bad english too.
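The core of the approach described in the post above can be sketched in a few lines of plain C. This is only an illustration of the idea, not code from the thread: `advance()` is a made-up helper, and the frame times are hard-coded to stand in for a real timer.

```c
#include <assert.h>

/* Frame-rate independent movement: each frame advances the position by
   speed * dt, where dt is that frame's duration in seconds. However the
   second is sliced into frames, the total distance comes out the same. */
double advance(double pos, double units_per_sec, double dt_sec) {
    return pos + units_per_sec * dt_sec;
}
```

For example, four frames of 0.25 s at 5 units/s move the object the same 5 units as eight frames of 0.125 s; a faster machine just gets more, smaller steps.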

Yeah, that's basically what you want to do. Try getting a higher resolution timer. On Windows you should use QueryPerformanceCounter after you set it all up. One of the NeHe tutorials has a high performance timer in it. His approach will get you started.

Resist Windows XP's Invasive Product Activation Technology!

Yes, for a higher resolution timer you will want to use QueryPerformanceCounter() and QueryPerformanceFrequency(). On most computers this gives a resolution much finer than 1 millisecond. On my computer I get a frequency of 1193180, which is a minimum resolution of 8.38x10^-7 seconds.

It sounds as though you have the right idea. However, if you base the speed only on how long it takes to render a frame, that introduces some error, because a few things usually happen between calls to the render loop. To get around this, use two variables: one stores the time the last frame started, the other the time the current frame started, and the difference is the elapsed time between frames.
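The two-variable scheme this reply describes might look like the following sketch. On a real system `now` would come from QueryPerformanceCounter (divided by the frequency to get seconds); here it is just a parameter so the logic stands on its own.

```c
#include <assert.h>

/* Elapsed time between frame *starts*: subtract the previous frame's
   start time from the current one, then remember the current one.
   This captures everything that happened since the last frame, not
   only the render call itself. */
double frame_delta(double now, double *last_time) {
    double dt = now - *last_time;
    *last_time = now;
    return dt;
}
```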

Guest Anonymous Poster
Does anyone know what QueryPerformanceCounter() actually measures? I never used it myself (I don't use Windows), but I always thought it used the rdtsc instruction to measure time. That doesn't seem to be true, though, since all users report a frequency of around 1.2 MHz, but rdtsc ticks a lot faster than that and depends on the CPU speed.

Hmm, just wondering...


Guest Anonymous Poster
The QueryPerformanceCounter function retrieves the current value of the high-resolution performance counter, if one exists.

BOOL QueryPerformanceCounter(
    LARGE_INTEGER *lpPerformanceCount   // pointer to counter value
);

lpPerformanceCount: pointer to a variable that the function sets, in counts, to the current performance-counter value. If the installed hardware does not support a high-resolution performance counter, this parameter can be set to zero.

Return Values
If the installed hardware supports a high-resolution performance counter, the return value is nonzero.

If the installed hardware does not support a high-resolution performance counter, the return value is zero.


That's from the MSDN. I remember reading that the high performance counter is based on the uptime of the system.

"There is no reason to have math part of the curriculum in schools. Why should we have to know 2x = 3y - 5. We have computers now." - Rosie O'Donnell

How about using a lower resolution (1 ms or so) timer and only updating your positions in a separate function each time a millisecond (or however long) has passed? In GLUT I use glutTimerFunc; in Win32, using NeHe's code, in the update function I check whether 35 milliseconds have passed, then update and reset my "clock". Do something like that and you'll get nice smooth animation in any situation, with the exception of a machine that can't handle the program.
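The fixed-interval idea above is often written as an accumulator: add each frame's milliseconds to a running total and perform one state update per full interval. This is a generic sketch, not the poster's actual code; the 35 ms figure comes from the post and the function name is invented.

```c
#include <assert.h>

#define UPDATE_MS 35.0

/* Run game-state updates in fixed 35 ms slices regardless of how long
   each rendered frame took. Leftover time stays in the accumulator and
   is consumed by later frames. Returns how many updates to run now. */
int fixed_steps(double *accum_ms, double frame_ms) {
    int steps = 0;
    *accum_ms += frame_ms;
    while (*accum_ms >= UPDATE_MS) {
        *accum_ms -= UPDATE_MS;
        steps++;
    }
    return steps;
}
```

A slow frame of 100 ms simply triggers two or three updates at once, so the simulation stays on schedule even when rendering falls behind.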


Thanks for your help!
I'll try all these things this evening and maybe post my results here again. For now I've had a quick look at NeHe's tutorials and you're right: #21 is about timing (besides lines, antialiasing and sound) and uses a high resolution timer (not the way I thought of, but it does). This code is also used in tutorial #23 (DirectInput).

@ATronic: I think there are two problems (or at least things I don't want) if you update your positions only at a fixed interval:
- high end machines will idle; I think the extra power should be used to make things even smoother
- if you make the interval too short, low end machines will need more time to finish one render loop than you give them (of course they will still finish it), and on those machines the program will actually run slower than expected.
Thanks anyway!

I simply use timeGetTime(). It returns how long your machine has been on, in milliseconds. So the code to handle "move x steps every 10 ms" would start something like:

DWORD lastupdate = timeGetTime();
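The snippet above breaks off after the first line. A sketch of how the timeGetTime() approach might continue: the helper name is invented, and a caller-supplied `now` stands in for the real Windows call (which returns milliseconds of uptime), so the logic is portable.

```c
#include <assert.h>

typedef unsigned long DWORD;   /* as defined by the Win32 headers */

/* Move `step` units for every full 10 ms that has passed since the
   last update. lastupdate is advanced in 10 ms increments so no time
   is lost between calls. Returns the distance moved this frame. */
double move_every_10ms(DWORD *lastupdate, DWORD now, double step) {
    double moved = 0.0;
    while (now - *lastupdate >= 10) {
        *lastupdate += 10;
        moved += step;
    }
    return moved;
}
```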



I did not read all of the replies too thoroughly, but I believe what you want to do is make all of your animated objects operate on delta time (dt), where dt is the time since the last game cycle.

// calculate dt
t = YourGetTimeFunction();
dt = t - last_time;
last_time = t;

Now, all your objects should move according to this dt value. For this to work you either specify each object's velocity (linear or rotational) or use key frames. I like key frames. A key frame is basically a set of orientations at certain times, for example:

Frame 1: time = 0, (x,y) = (1.0,3.0)
Frame 2: time = 1, (x,y) = (4.0,3.0)
Frame 3: time = 2, (x,y) = (4.0,10.0)

Then the idea is to interpolate between these 'frames'. So for each object you keep track of how much time has passed; we will call this elapsed_time. Now, find out which two frames elapsed_time falls between. For example, let's say elapsed_time = 1.3: it falls between frames 2 and 3. Then you calculate where between those frames it is, like this:

nT = (elapsed_time - t2) / (t3 - t2);
if(nT > 1.0) nT = 1.0;
if(nT < 0.0) nT = 0.0;

So nT should be between 0.0 and 1.0.

now, interpolate the translation, like this:

x = x2 + ((x3 - x2) * nT);   // interpolation code
y = y2 + ((y3 - y2) * nT);

This is all done EACH cycle of your game. Once elapsed_time is past the last frame, you can loop the animation by setting elapsed_time back to 0, or you can clamp it (no more movement) by letting elapsed_time run on.

It is done basically the same way for scaling and rotation; the only thing that differs is the interpolation code. This will keep the objects going at a constant speed on any machine. Even if the machine is too slow, it will produce the smoothest animation it can.
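Putting the pieces of this post together, a self-contained version of the key-frame sampling might look like the following. The struct and function names are mine, not from the post, and the frames must be sorted by time.

```c
#include <assert.h>
#include <stddef.h>

typedef struct { double t, x, y; } KeyFrame;

/* Sample the (x, y) position at `elapsed`: clamp outside the frame
   range, otherwise linearly interpolate between the two surrounding
   frames, exactly as the post describes. */
void keyframe_sample(const KeyFrame *kf, size_t n, double elapsed,
                     double *x, double *y) {
    size_t i;
    if (elapsed <= kf[0].t)     { *x = kf[0].x;     *y = kf[0].y;     return; }
    if (elapsed >= kf[n - 1].t) { *x = kf[n - 1].x; *y = kf[n - 1].y; return; }
    for (i = 1; kf[i].t < elapsed; i++)
        ;   /* find the first frame at or past elapsed */
    {
        double nT = (elapsed - kf[i - 1].t) / (kf[i].t - kf[i - 1].t);
        *x = kf[i - 1].x + (kf[i].x - kf[i - 1].x) * nT;
        *y = kf[i - 1].y + (kf[i].y - kf[i - 1].y) * nT;
    }
}
```

With the three frames from the post, sampling at elapsed_time = 1.5 lands halfway between frames 2 and 3, giving (4.0, 6.5).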

And yes, on high end machines, the dt value will be smaller, so the incremental movements will be smaller, so the animation will be smoother.

Hope this is what you were looking for,


Edited by - BrianH on April 26, 2001 3:20:42 PM

Hi Brian!

You understood exactly what I was looking for.
I like the idea of using key frames; it seems very elegant. I'll try to do something with it as soon as possible.

For now I've modified the timer code from NeHe's tutorial #21 so that it works fine with the "easy" way of achieving what I wanted (not interpolating between key frames).

For those who are interested in it:
I built a struct where I store all timer-dependent values:

struct {

// timer frequency
__int64 frequency;

// timer resolution
float resolution;

// timer last Value
float timer_last;

// timer elapsed time
float timer_elapsed;

} timer; // Structure is named timer

Now there's a function that is called only once to initialize the timer:

// initialize timer
void TimerInit(void) {

// temp var to store current "time"
__int64 time;

// clear timer structure
memset(&timer, 0, sizeof(timer));

// get timer frequency
QueryPerformanceFrequency((LARGE_INTEGER *) &timer.frequency);

// calculate the timer resolution using the timer frequency
timer.resolution = (1.0f/(float)timer.frequency);

// get the current time and store it in timer_last
QueryPerformanceCounter((LARGE_INTEGER *) &time);
timer.timer_last = (float)time;
}

Finally, I made a function that calculates the elapsed time every time it is called and stores that value in the timer struct:

// get time difference to last query in milliseconds
void TimerGetTime() {

// temp var to store current "time"
__int64 time;

// get the current time
QueryPerformanceCounter((LARGE_INTEGER *) &time);

// store the elapsed time in milliseconds since last query
timer.timer_elapsed = ( (float)time - timer.timer_last)
* timer.resolution * 1000.0f;

// set timer_last to current time, so the time difference
// can be calculated properly next time
timer.timer_last = (float)time;
}

If I call the last function every game cycle, the step size for each translation or rotation etc. can be calculated from timer.timer_elapsed, just as I described in my first posting. So the program runs at the same speed on every machine, and the more power you have, the smoother the animations are. Sweet
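With timer.timer_elapsed in milliseconds, a "degrees per second" rotation becomes a single multiply. A minimal sketch, assuming an elapsed-milliseconds value like the one the struct above stores (the function itself is illustrative, not from the tutorial):

```c
#include <assert.h>

/* Advance an angle at a fixed rate in degrees per second, given the
   elapsed frame time in milliseconds, and wrap it back into [0, 360). */
float rotate_step(float angle_deg, float deg_per_sec, float elapsed_ms) {
    angle_deg += deg_per_sec * (elapsed_ms / 1000.0f);
    while (angle_deg >= 360.0f)
        angle_deg -= 360.0f;
    return angle_deg;
}
```

At 90 deg/s, a half-second frame advances the angle 45 degrees whether it was rendered as one slow frame or fifty fast ones.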

Oh, is there really a need to check whether this high resolution counter is available, like NeHe does in his tutorial? What is this timer based on? In which cases can a machine not provide this counter? (OS-related? CPU?)

Hmm, I'm curious about working with keyframes now

Edited by - broTspinne on April 26, 2001 5:17:01 PM

