Major lag compensation

Started by Dom_152; 4 comments, last by Bimble Bob 13 years, 9 months ago
I've written a little networking test whereby a server is "ticking" at a certain rate and a client can connect to it, sync up, and begin ticking at the same rate. The way the client syncs up is by sending a request to the server asking what tick it is currently on and how far into it it is (i.e. the number of milliseconds since the current tick started). The client times the round trip and uses that data to compensate for lag, starting to tick at a corrected tick number that should be (and often is) whatever tick number the server is at.
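
Roughly, the correction I'm aiming for is something like this (just an illustrative sketch, not my actual code, which is further down; the names are made up and it assumes the one-way delay is about half the measured round trip):

struct ServerTime { int tick; int msIntoTick; };   // what the server sends back

// Estimate the tick the server should be on by the time its reply arrives.
int EstimateServerTick(const ServerTime &reply, float roundTripMs, float tickLengthMs)
{
    // Time that has passed on the server since the reported tick started:
    // roughly the reply's travel time (about half the round trip) plus the reported offset.
    float elapsedMs = roundTripMs * 0.5f + (float)reply.msIntoTick;
    return reply.tick + (int)(elapsedMs / tickLengthMs);
}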

My problem comes when simulating lag with high tick rates. I am doing this by simply calling Sleep on the client between sending the request to the server and receiving the response. It works well when the tick rate is higher (say, 80ms per tick and up) even with very high lag (5000ms), but once the tick rate gets below 80ms (I'm testing 10ms at the moment) the client has a tendency to either massively over-predict (if the lag is high, ending up way ahead of the server) or under-predict (if the lag is lower, ending up behind the server). The amount that it over- or under-predicts seems related to the amount of lag, i.e. very low lag values like 10ms produce an under-prediction of just one tick, whereas higher values like 1000ms can over-predict by 100 ticks.

I know that very high latency is unusual and an extreme case, but should real-time apps be expected to handle this? Even so, it's odd that it continues to over-compensate even with less lag and higher tick rates. Here is the code for the lag compensation on the client:

LARGE_INTEGER counterFreq;
LARGE_INTEGER startTime;
QueryPerformanceFrequency(&counterFreq);
QueryPerformanceCounter(&startTime);

// Request time from server
char timeReqMsg = 0;
send(mServerConnectionSocket, &timeReqMsg, 1, 0);

// Simulate some lag
Sleep(4000);

recv(mServerConnectionSocket, reinterpret_cast<char *>(&mTimer.TickCount), 4, 0);
recv(mServerConnectionSocket, reinterpret_cast<char *>(&mTimer.Milliseconds), 4, 0);

LARGE_INTEGER endTime;
QueryPerformanceCounter(&endTime);

float delta = ((float)(endTime.QuadPart - startTime.QuadPart) / (float)counterFreq.QuadPart) * 1000.0f;
mTimer.TickCount += (int)(delta / 10.0f);
delta = mTimer.Milliseconds;

while(AppStatus::IsRunning)
{
    std::cout << mTimer.TickCount << "\n";
    Sleep(10 - delta);
    delta = 0;
    mTimer.TickCount++;
}


And here is the code that responds to the client on the server:

SOCKET clientConnection = accept(mListenSocket, 0, 0);

char syncMsg = 0;
recv(clientConnection, &syncMsg, 1, 0);

char *tickCountMsg;
tickCountMsg = reinterpret_cast<char *>(&mTimer.TickCount);
send(clientConnection, tickCountMsg, 4, 0);

LARGE_INTEGER counterFreq;
LARGE_INTEGER endTime;
QueryPerformanceFrequency(&counterFreq);
QueryPerformanceCounter(&endTime);
mTimer.Milliseconds = (int)(((float)(endTime.QuadPart - mTimer.TickStartTime) / (float)counterFreq.QuadPart) * 1000.0f);

char *msMsg;
msMsg = reinterpret_cast<char *>(&mTimer.Milliseconds);
send(clientConnection, msMsg, 4, 0);


The actual ticking is done on another thread. It simply records the time (in counts) the tick started and sleeps until it needs to increase the tick count again.
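
Roughly, that thread looks something like this (a simplified sketch rather than my exact code; the function and parameter names are made up):

#include <windows.h>

void TickLoop(volatile long &tickCount, LARGE_INTEGER &tickStartTime,
              int tickLengthMs, volatile bool &running)
{
    while (running)
    {
        QueryPerformanceCounter(&tickStartTime);   // record the time (in counts) this tick started
        Sleep(tickLengthMs);                       // sleep until the next tick is due
        InterlockedIncrement(&tickCount);          // advance the tick count
    }
}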

Sorry for the wordy post, but this has been confusing me for a couple of hours now and I wanted to see if anyone has a more educated view on what's happening.

Cheers.

[Edited by - Dom_152 on July 21, 2010 10:44:50 AM]
It's not a bug... it's a feature!
Simulating lag by sleeping isn't all that great.

Instead, you should simulate lag by wrapping the recv() call. When you call recv(), check whether there are any packets to return that are older than (lag time). If not, return no data.

The code looks something like this:

struct recv_data {
    int value;
    long time;
    std::vector<char> data;
};

std::list<recv_data> queue;

int lagged_recv(int socket, void *buffer, int size, int flags) {
    std::vector<char> tmp;
    tmp.resize(size);
    int val = ::recv(socket, &tmp[0], size, flags);
    recv_data rd;
    long now = my_time_value();
    rd.value = val;
    rd.time = now;
    rd.data = tmp;
    queue.push_back(rd);
    if (queue.front().time < now - AMOUNT_OF_LAG) {
        int ret = queue.front().value;
        if (ret > 0) {
            memcpy(buffer, &queue.front().data[0], ret);
        }
        queue.pop_front();
        return ret;
    }
    return 0; // no data yet
}
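
Used this way, the client keeps running its normal loop and calls lagged_recv() wherever it currently calls recv(); the simulated lag then shows up as data arriving late rather than as the whole client stalling inside Sleep(). (The inner ::recv() will of course still block on a blocking socket, so this assumes the socket is non-blocking or that you only read when data is known to be waiting.)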

enum Bool { True, False, FileNotFound };
Out of interest, what is wrong with using Sleep to simulate lag in a simple test like this?
It's not a bug... it's a feature!
Three problems:

1) Your Sleep(10 - X) will sleep for a very long time if X > 10 (Sleep takes an unsigned value, so a negative argument wraps around to a huge delay).
2) You're still using the same clock for client and server.
3) Real lag doesn't look like that; you keep running the main loop, and packets arrive later (something like the non-blocking poll sketched below). Thus the test doesn't seem that useful.
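
For example, something like this (just a sketch; TryRecv is a made-up helper that polls with select()) lets the client keep ticking and pick the reply up whenever it actually arrives:

#include <winsock2.h>

// Poll the socket without blocking; returns true only once 'bytes' bytes have been read.
bool TryRecv(SOCKET s, char *out, int bytes)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(s, &readSet);

    timeval noWait = { 0, 0 };                    // don't wait at all
    if (select(0, &readSet, 0, 0, &noWait) <= 0)  // nothing readable yet (or an error)
        return false;

    return recv(s, out, bytes, 0) == bytes;       // sketch only: ignores partial reads
}
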
enum Bool { True, False, FileNotFound };
It's also not at all clear what purpose that while(AppStatus::IsRunning) loop is serving. Where's the compensation you speak of?
OK, perhaps I'm not using the right terminology; I'm new to network programming, after all. What I mean is that the client is supposed to adjust its counting of ticks based on the delay involved in getting the tick time from the server, so that by the time the client enters that loop and begins counting it will be in sync with the counting on the server.

It's only a stupid beginner experiment; I just wanted to see if it was possible. I know that Sleep(10 - X) can result in crazy long delays, but X can never be bigger than 10. I don't think I understand your point "2) You're still using the same clock for client and server."

Sorry for the nooby questions.
It's not a bug... it's a feature!
