floating sprite problem

6 comments, last by Irlan Robson 9 years, 2 months ago

I have a sprite that is supposed to stay in place while I move the camera around the level, but when I move the camera the sprite floats out of position. How can I fix this? I tried the 'fix your timestep' approach, but something tells me I'm doing it the wrong way.

Here are some screenshots I cropped to illustrate the problem.

This one shows the correct position when the camera is not moving: https://dl.dropboxusercontent.com/u/118608180/correct_position.png

And these show what happens depending on which direction I move the camera:

https://dl.dropboxusercontent.com/u/118608180/incorrect_position_1.png (if I move the camera up-left)

https://dl.dropboxusercontent.com/u/118608180/incorrect_position_2.png (if I move the camera down)

https://dl.dropboxusercontent.com/u/118608180/incorrect_position_3.png (if I move the camera right)

And here is the relevant portion of the code I used:


float dt = 0.0f;

// loop until a WM_QUIT message is received
msg.message = static_cast<UINT>(~WM_QUIT);
while(msg.message != WM_QUIT)
{
	sf::Event Event;

	if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
	{
		// Win32 part
		TranslateMessage(&msg);
		DispatchMessage(&msg);
	}
	else // SFML part
	{
		// start frame clock
		timer.start();

		RenderView.clear(sf::Color(64, 64, 64));

		// check mouse position
		Mouse = sf::Mouse::getPosition(RenderView);

		// set camera move speed
		cam->SetCamSpeed(1);
		dt = timer.msec;

		while(RenderView.pollEvent(Event))
		{
			if(Event.type == sf::Event::KeyPressed)
			{
				if(sf::Keyboard::isKeyPressed(sf::Keyboard::Up)) // move camera up
				{
					cam->CamOffsetY -= cam->CamSpeed * dt;
					StandardView.move(0.0f, -(cam->CamSpeed * dt));
				}
				if(sf::Keyboard::isKeyPressed(sf::Keyboard::Down)) // move camera down
				{
					cam->CamOffsetY += cam->CamSpeed * dt;
					StandardView.move(0.0f, cam->CamSpeed * dt);
				}
				if(sf::Keyboard::isKeyPressed(sf::Keyboard::Left)) // move camera left
				{
					cam->CamOffsetX -= cam->CamSpeed * dt;
					StandardView.move(-(cam->CamSpeed * dt), 0.0f);
				}
				if(sf::Keyboard::isKeyPressed(sf::Keyboard::Right)) // move camera right
				{
					cam->CamOffsetX += cam->CamSpeed * dt;
					StandardView.move(cam->CamSpeed * dt, 0.0f);
				}
			}
		}

		// find map tile that has mouse cursor on it
		for(unsigned int i = 0; i < spr.size(); i++)
		{
			if(Mouse.x + cam->CamOffsetX > spr[i].getGlobalBounds().left &&
				Mouse.x + cam->CamOffsetX < spr[i].getGlobalBounds().left + spr[i].getGlobalBounds().width &&
				Mouse.y + cam->CamOffsetY > spr[i].getGlobalBounds().top &&
				Mouse.y + cam->CamOffsetY < spr[i].getGlobalBounds().top + spr[i].getGlobalBounds().height)
			{
				ValidMarker = 1;
				marker.setPosition(spr[i].getPosition().x, spr[i].getPosition().y);
				break;
			}
			else
				ValidMarker = 0;
		}

		for(unsigned int i = 0; i < spr.size(); i++)
			RenderView.draw(spr[i]); // draw grid sprites
		
		RenderView.setView(StandardView);
		if(ValidMarker == 1) // draw marker if mouse is hovering over tiles
			RenderView.draw(marker);
		RenderView.draw(CharacterPawn);
		RenderView.display();
		
		// stop frame clock
		timer.stop();
	}
}


if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
{
// Win32 part
TranslateMessage(&msg);
DispatchMessage(&msg);
}
else // SFML part

Why are you mixing SFML with Win32? I believe SFML already implements the Win32 message-handling part for you.

You're doing it wrong. This is how you should update your game with a fixed time step:


while ( !QUIT_CONDITION ) {
        RENDER_TIME += ELAPSED_REAL_TIME;
        while ( UPDATED_TIME + FIXED_TIME_STEP <= RENDER_TIME ) {
                UPDATED_TIME += FIXED_TIME_STEP;
                UpdateEvents();
                UpdateGameState();
        }
        RenderGameState();
}
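
Concretely, with SFML's own clock that loop might look like this (a minimal sketch; FixedStep and the commented-out update/render placeholders are mine, not from your code):

#include <SFML/Graphics.hpp>

int main()
{
	sf::RenderWindow window(sf::VideoMode(800, 600), "Fixed time step sketch");

	const sf::Time FixedStep = sf::seconds(1.0f / 60.0f); // logic runs at 60 Hz
	sf::Clock frameClock;
	sf::Time accumulator = sf::Time::Zero; // RENDER_TIME - UPDATED_TIME

	while (window.isOpen())
	{
		accumulator += frameClock.restart(); // elapsed real time this frame

		while (accumulator >= FixedStep)
		{
			accumulator -= FixedStep;

			// poll events and advance the game state by exactly FixedStep
			sf::Event event;
			while (window.pollEvent(event))
				if (event.type == sf::Event::Closed)
					window.close();
			// UpdateGameState(FixedStep.asSeconds());
		}

		window.clear();
		// RenderGameState();
		window.display();
	}

	return 0;
}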

Why are you mixing SFML with Win32?

I need a level editor with multiple windows, a menu bar and so on, alongside the SFML rendering window.

I tried to do that by following this implementation:

and it didn't help at all; what's worse, it degraded the performance to the point of being unusable.

This is what I used to get my delta time in my original post:


#ifndef _TIMER_H_
#define _TIMER_H_

#include <ctime>

// adapted from a timer class found on the internet (no real changes, just coding style)
class TimerClass
{
public:
	void start(); // begin timing
	void stop(); // stop timing and compute the elapsed times
	void getTimes();

	// begin/end variables
	clock_t begin, end;
	// variable declarations used for time calculation
	double ticks, msec, sec, min;
};

#endif

////////////////////////////////////

#include "timer.h"

void TimerClass::start()
{
	begin = clock() * CLK_TCK;
}

void TimerClass::stop()
{
	end = clock() * CLK_TCK;
	getTimes();
}

void TimerClass::getTimes()
{
	ticks = end - begin;	// ticks elapsed between begin and end
	msec = ticks / 1000;	// milliseconds from begin to end
	sec = msec / 1000;		// seconds from begin to end
	min = sec / 60;			// minutes from begin to end
}

I've reached the point of total confusion, and I don't know how to combine the loop suggested above with my own.


I tried to do that by following this implementation:

You can't accumulate time in doubles/floats/ints/longs; you need to keep it in unsigned long long variables (64-bit unsigned integers), and the time measurement needs to be in microseconds (the most precise conversion we can do). Small time slices such as delta times can be converted once the time class has the time in microseconds, and those converted values are not accumulated:


#ifndef CTIME_H_
#define CTIME_H_

typedef unsigned long long UINT64;

class CTime {
public:
	CTime();

	void Update();
	void UpdateBy(UINT64 _ui64Ticks);

	inline UINT64 CurTime() const { return m_ui64CurTime; }
	inline UINT64 CurMicros() const { return m_ui64CurMicros; }
	inline UINT64 DeltaMicros() const { return m_ui64DeltaMicros; }
	inline float DeltaSecs() const { return m_fDeltaSecs; }
	inline void SetResolution(UINT64 _ui64Resolution) { m_ui64Resolution = _ui64Resolution; }
protected:
	UINT64 RealTime() const;
	
	UINT64 m_ui64Resolution;
	UINT64 m_ui64CurTime;
	UINT64 m_ui64LastTime;
	UINT64 m_ui64LastRealTime;
	UINT64 m_ui64CurMicros;
	UINT64 m_ui64DeltaMicros;
	float m_fDeltaSecs;
};

#endif

#include "CTime.h"
#include <Windows.h>

CTime::CTime() : m_ui64Resolution(0ULL), m_ui64CurTime(0ULL), m_ui64LastTime(0ULL), m_ui64LastRealTime(0ULL),
m_ui64CurMicros(0ULL), m_ui64DeltaMicros(0ULL), m_fDeltaSecs(0.0f) {
	QueryPerformanceFrequency( reinterpret_cast<LARGE_INTEGER*>(&m_ui64Resolution) );
	m_ui64LastRealTime = RealTime();
}

UINT64 CTime::RealTime() const {
	UINT64 ui64Ret;
	QueryPerformanceCounter( reinterpret_cast<LARGE_INTEGER*>(&ui64Ret) );
	return ui64Ret;
}

void CTime::Update() {
	UINT64 ui64TimeNow = RealTime();
	UINT64 ui64DeltaTime = ui64TimeNow - m_ui64LastRealTime;
	m_ui64LastRealTime = ui64TimeNow;
	
	UpdateBy(ui64DeltaTime);
}

void CTime::UpdateBy(UINT64 _ui64Ticks) {
	m_ui64LastTime = m_ui64CurTime;
	m_ui64CurTime += _ui64Ticks;

	UINT64 m_ui64LastMicros = m_ui64CurMicros;
	m_ui64CurMicros = m_ui64CurTime * 1000000ULL / m_ui64Resolution;
	m_ui64DeltaMicros = m_ui64CurMicros - m_ui64LastMicros;
	m_fDeltaSecs = m_ui64DeltaMicros * static_cast<float>(1.0 / 1000000.0);
}

Note that this class is reusable; you can use it wherever you want.

The game has two instances of the time class: a render time and a logic time.

The render time is the accumulated frame time, updated only by calling Update() (i.e., advanced by the real elapsed time).

The logic time is advanced by FIXED_TIME_STEP microseconds at a time (generally 16666ULL or 33333ULL); that is, you call LogicTime.UpdateBy(16666ULL), for instance.

Example:


void UpdateAndDraw() {
	m_tRenderTime.Update();
	UINT64 ui64CurMicros = m_tRenderTime.CurMicros();
	while (ui64CurMicros - m_tLogicTime.CurTime() > FIXED_TIME_STEP) {
		m_tLogicTime.UpdateBy(FIXED_TIME_STEP);

		// Input, game logic, physics, A.I., etc. go inside this block.
		// Everything outside it is not synchronized with the game's logical
		// time and will screw up the game simulation.
		UpdateGameState(); // update game logic (example)
	}

	RenderGameState();
}

Must I have two separate instances for this to work? Can I still make it work if I don't separate logic and rendering? I know it might not be recommended practice, but I don't want to make things overly complex.


Must I have two separate instances for this to work? Can I still make it work if I don't separate logic and rendering? I know it might not be recommended practice, but I don't want to make things overly complex.

#1: No. Another option is to keep the unsigned long long variables on the game class and increment those directly (see the sketch after this list); the time class is just there for efficiency and code reuse.

#2: Make what work? If you don't separate game logic from rendering you would be doing it wrong, and since it is wrong I would not recommend it.

#3: I gave you the code. I'm against copy-pasting code, but in a case this simple you can grab it and use it.
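
To illustrate #1, a minimal sketch (the CGame name and its members are hypothetical, not from the code above):

typedef unsigned long long UINT64;
#define FIXED_TIME_STEP 16666ULL // microseconds (60 Hz)

// Hypothetical game class keeping the accumulators directly, with no CTime.
class CGame {
public:
	void UpdateAndDraw(UINT64 _ui64ElapsedMicros) {
		m_ui64RenderMicros += _ui64ElapsedMicros; // accumulate real time
		while (m_ui64RenderMicros - m_ui64LogicMicros > FIXED_TIME_STEP) {
			m_ui64LogicMicros += FIXED_TIME_STEP;
			UpdateGameState(); // fixed-step game logic
		}
		RenderGameState();
	}
protected:
	void UpdateGameState() { /* game logic */ }
	void RenderGameState() { /* rendering */ }

	UINT64 m_ui64RenderMicros = 0ULL; // accumulated real time, microseconds
	UINT64 m_ui64LogicMicros = 0ULL; // accumulated fixed steps, microseconds
};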

In my time implementation, the dt in seconds (what you used) is the DeltaSecs() function, and it is already implemented (use that to move the camera, not the objects).
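
For example (a sketch reusing the cam and StandardView names from your original post; the other three directions follow the same pattern):

// inside the fixed time step block, right after m_tLogicTime.UpdateBy(FIXED_TIME_STEP)
float fDt = m_tLogicTime.DeltaSecs(); // constant: FIXED_TIME_STEP in seconds

if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right)) // move camera right
{
	cam->CamOffsetX += cam->CamSpeed * fDt;
	StandardView.move(cam->CamSpeed * fDt, 0.0f);
}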

Regarding your OP, make sure you're not moving the sprite but only the camera. Are you sure that you're rendering the sprites' positions relative to the camera position?

Example:


for (all sprites) {
    DrawSpriteAtPosition( Sprite.Position - Camera.Position );
}

That is why logic and rendering should be separated. You move the camera in the game logic block, and render everything relative to the camera position and rotation in the rendering block, after the fixed time step loop.
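
In SFML, that relative rendering is what the view gives you. A sketch using the objects from your original post (RenderView, StandardView, spr, marker, CharacterPawn; fDt is the fixed-step delta from the previous sketch):

// logic block: move only the view (the camera); sprites keep their world positions
StandardView.move(cam->CamSpeed * fDt, 0.0f);

// render block: select the camera view first, then draw every world-space
// object, so the grid, the marker and the pawn all share the same camera
RenderView.setView(StandardView);
for (unsigned int i = 0; i < spr.size(); i++)
	RenderView.draw(spr[i]);
if (ValidMarker == 1)
	RenderView.draw(marker);
RenderView.draw(CharacterPawn);
RenderView.display();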

Something I forgot to say:

You must set the logic time's resolution to one microsecond, otherwise you won't get the right conversion when updating the timer to microseconds or seconds: UpdateBy() receives raw microsecond counts, so one tick has to equal one microsecond.

This is how you set the resolution:


GameLogicTime.SetResolution(1000000ULL);

or even better:


#define ONE_MICROSECOND 1000000ULL

GameLogicTime.SetResolution(ONE_MICROSECOND);
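
Putting it together, the setup might look like this (a sketch assuming the CTime class above; the InitTimers function is hypothetical):

CTime m_tRenderTime; // advanced by Update() with real QPC ticks
CTime m_tLogicTime; // advanced by UpdateBy() with microsecond counts

void InitTimers() {
	// only the logic clock needs the override: the render clock keeps the
	// QueryPerformanceFrequency resolution assigned in its constructor,
	// while UpdateBy() is fed raw microsecond counts
	m_tLogicTime.SetResolution(ONE_MICROSECOND);
}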

This topic is closed to new replies.
