[SDL] GetTicks is running slower than the application


I'm currently working on a game which relies on SDL_GetTicks(). However, on lower-tier computers (my home computer), the game runs at a very high FPS. What happens, however, is that it hogs all the CPU, and so the thread that is handling the SDL_GetTicks() calls is slowed down, I believe. This effectively means that arrows that are firing, for example, are slowed down. On faster machines, an archer can fire and only have one arrow on the screen at once; on my computer, an archer can fire three times before the arrow leaves the screen. The jumping relies on SDL_GetTicks() as well and is also affected, meaning you jump lower. I'm wondering how to fix this issue. I think I should use a timer inside the application, but I'm not quite sure how to go about this.


Check the actual results from SDL_GetTicks(). It might be that you're drawing too fast, so two successive calls to SDL_GetTicks() return the same value, since its precision is quite low (it only measures whole milliseconds). And there could be something in your code that isn't prepared to handle a zero delta time.
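As a quick sanity check (a rough sketch, not the poster's code; update() and render() stand in for whatever the loop already does), you could log the raw delta every frame and see how often it comes out as zero:

	Uint32 previous = SDL_GetTicks();
	while (running)
	{
		Uint32 now = SDL_GetTicks();
		Uint32 delta = now - previous;   // milliseconds; can be 0 at very high frame rates
		previous = now;

		SDL_Log("frame delta: %u ms", delta);

		if (delta == 0)
			continue;                    // nothing measurable has elapsed yet

		update(delta);
		render();
	}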

First and foremost, I would recommend using std::chrono instead of SDL_GetTicks(), as it is more accurate, more portable, and it's very easy to convert between different time measures (you could make a type alias for seconds: using Seconds = std::chrono::duration<double, std::ratio<1>>; and then implicitly convert to it from nanoseconds). However, SDL_GetTicks() should still be pretty reliable, so it's definitely weird if it slows down... What frame rate do you mean by "very high FPS"? SDL_GetTicks() only returns milliseconds, so if your frame rate is around or above 1000 I could imagine it being very inaccurate (std::chrono::high_resolution_clock::now returns nanoseconds, or the smallest unit your computer can handle).
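To illustrate the alias idea (a minimal sketch, not code from the original post), measuring one frame could look something like this:

	#include <chrono>

	using Seconds = std::chrono::duration<double, std::ratio<1>>;

	auto start = std::chrono::high_resolution_clock::now();
	// ... do a frame's worth of work ...
	auto end = std::chrono::high_resolution_clock::now();

	// The clock's native duration (often nanoseconds) converts implicitly to a
	// floating-point duration, so no duration_cast is needed here.
	Seconds elapsed = end - start;
	double dt = elapsed.count();   // frame time in seconds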

If that is not the cause of the problem, it could be helpful to see a bit of the code that relies on SDL_GetTicks() and maybe parts of your main game loop.

I'm currently working on a game which relies on SDL_GetTicks(). However, on lower-tier computers (my home computer), the game runs at a very high FPS. What happens, however, is that it hogs all the CPU

That is to be expected - slower computers have less free time available after doing the work.

and so the thread that is handling the SDL_GetTicks() calls is slowed down, I believe.

I don't see any reason why SDL_GetTicks would be using a thread, unless you made it do so, which sounds wrong.

And when you say 'I believe', do you have any evidence that you're getting the wrong values back from SDL_GetTicks?

The rest of the problem just sounds like your game loop is wrong and you're handling physics wrongly as a result. I suggest posting your basic game loop.


	while(!quit)  																				//if(currentKeyStates[SDL_SCANCODE_UP]) function[SDL_DEFINED_KEYCODE]
	{

		switch(GAME_STATE)
		{
			case MENU:
				main_menu(&e,currentKeyStates);
				break;
			case CHARACTERS:
				character_menu(&e,currentKeyStates);
				break;
			case GAME:
				menu_game(&e,currentKeyStates);
				break;
		}
		SDL_RenderClear(mainren);
		SDL_PollEvent(&e);
		if(TARGET_FPS > (SDL_GetTicks() - fps_time))
		{
			SDL_Delay(TARGET_FPS - (SDL_GetTicks() - fps_time));
		}
	}

That's the current main game loop. I can show you any other parts if you like. And yeah, I understand it should run slowly, of course, but I'd expect that not to change things like arrow speeds.


	if(last_travel + travel_rate < SDL_GetTicks())
	{
		last_travel = SDL_GetTicks();
		x += velocity;
	}
// This is the arrow timer that applies its velocity. Arrows travel at a snail's pace. EDIT: On low-tier machines.

Menu_Game handles all of the actual game processing. It has an SDL_RenderPresent(mainren) at the end of it. I appreciate the comments.

EDIT 2: I don't have any evidence and can't collect it tonight, sorry. I'll try that when I can change my code tomorrow. And I believe that is the rest of my problem, more than likely. The jumping works by increasing the y value until the timer reaches a certain limit. While that's definitely a hacky way to do it, the problem is definitely to do with the timers.

First and foremost, I would recommend using std::chrono instead of SDL_GetTicks(), as it is more accurate, more portable, and it's very easy to convert between different time measures (you could make a type alias for seconds: using Seconds = std::chrono::duration<double, std::ratio<1>>; and then implicitly convert to it from nanoseconds). However, SDL_GetTicks() should still be pretty reliable, so it's definitely weird if it slows down... What frame rate do you mean by "very high FPS"? SDL_GetTicks() only returns milliseconds, so if your frame rate is around or above 1000 I could imagine it being very inaccurate (std::chrono::high_resolution_clock::now returns nanoseconds, or the smallest unit your computer can handle).

How relevant are these 'new' time entities? I mean, are they fast enough? Are they reliable enough? Since they are in the standard library, it seems so... But I'm sure some people here have tested them in real conditions and might be able to let us know what they actually think about them.

Okay, seeing the code, it's got some significant problems.

Firstly, it's not clear that your 'delay if we're early' code is correct, since you've not shown how you calculate fps_time. It's also error-prone because SDL_Delay isn't accurate enough to hit a target frame rate in that way. That can affect physics. And if you're running so slowly that delays don't get called at all, the game will slow down.
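For reference, the usual shape of that kind of frame cap resets fps_time at the top of every frame, roughly like this (a sketch of the general pattern with an assumed 60 FPS target, not a drop-in fix for the code above):

	const Uint32 FRAME_TIME_MS = 1000 / 60;       // target frame duration in milliseconds

	while (!quit)
	{
		Uint32 fps_time = SDL_GetTicks();         // start-of-frame timestamp

		// ... poll events, update, render ...

		Uint32 frame_ms = SDL_GetTicks() - fps_time;
		if (frame_ms < FRAME_TIME_MS)
			SDL_Delay(FRAME_TIME_MS - frame_ms);  // coarse: SDL_Delay can easily oversleep
	}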

Secondly, your movement code is wrong. It'll only move the arrow on frames when SDL_GetTicks() has caught up to last_travel + travel_rate, but you only check it once per frame, so if you aren't getting frequent enough frames, the arrow won't move far enough. And when you do move it, you always move it the same distance, whether you only just caught up to last_travel + travel_rate or whether you exceeded it by 5 seconds.

Basically, you should rework your game loop so that you are measuring the frame time and factoring it into the amount of movement you apply to your objects, and do that every frame so that there is always some amount of movement. Trying to use timers and applying the changes discretely is not going to work when the times involved are small relative to the frame duration.

A trivial way to do that is to ditch the SDL_Delay stuff, remove the last_travel and travel_rate stuff, and change your movement code to something like x += velocity * this_frame_time_in_seconds. This factors the speed of your game into the amount of movement you apply, so that the distance covered per second is constant, whatever the duration of each frame.

There are more complex ways that can provide for even smoother movement (see http://gameprogrammingpatterns.com/game-loop.html), but this should suffice for you.

I'll work on fixing the game loop tomorrow, though I am partly confused about how you get this_frame_time_in_seconds.


double lastTime = getCurrentTime();
while (true)
{
  double current = getCurrentTime();
  double elapsed = current - lastTime;
  processInput();
  update(elapsed);
  render();
  lastTime = current;
}

Do I literally just multiply it by the number that comes out of elapsed if the velocity is 1? Or is there something else to be done there? I appreciate your answers.

Frame time in seconds is however long that current frame was, measured in seconds. If elapsed is measured in seconds, and it is being calculated based on how long it took to do everything since the last time around, then it is the same thing as elapsed. This value is not dependent on velocity, and velocity is not dependent on elapsed time. However, the change to position is dependent on both - velocity is "change of position over time", so you derive the movement amount by taking velocity and multiplying it by time.

Real world example: I drive at 50mph, so how far did I move in half an hour? That's 50mph x 0.5 hours = 25 miles

Game example: I move a sprite at 50 pixels per second, so how far does it move in a 30ms frame? 50pps x 0.030s = 1.5 pixels.
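Putting that together with SDL, a frame-time-based version of the loop above could look roughly like this (a sketch reusing the names from earlier in the thread; x should be a float or double so fractional movement accumulates):

	Uint32 lastTicks = SDL_GetTicks();

	while (!quit)
	{
		Uint32 nowTicks = SDL_GetTicks();
		double elapsed = (nowTicks - lastTicks) / 1000.0;   // frame time in seconds
		lastTicks = nowTicks;

		while (SDL_PollEvent(&e)) { /* handle input */ }

		x += 50.0 * elapsed;            // 50 pixels per second, scaled by this frame's duration

		SDL_RenderClear(mainren);
		// ... draw sprites ...
		SDL_RenderPresent(mainren);
	}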

How relevant are these 'new' time entities? I mean, are they fast enough? Are they reliable enough? Since they are in the standard library, it seems so... But I'm sure some people here have tested them in real conditions and might be able to let us know what they actually think about them.

Yeah, considering that it's in the standard library, I would say that it's at least relevant. Whether it's fast and reliable... well, it should be, but it's up to the implementation really. On newer Windows compilers, std::chrono::high_resolution_clock uses QueryPerformanceCounter to access time. AFAIK that's what most other libraries seem to use as well, but with different resolutions. std::chrono::high_resolution_clock is supposed to use the highest resolution your machine will give you, so there won't be anything more precise. The really good thing about it is that it's completely portable, as it's in the standard library, and it makes your code look very clean.

