My problem appears when simulating lag at high tick rates. I'm doing this by simply calling Sleep on the client between sending the time request to the server and receiving the response. It works well at longer tick lengths (say 80 ms per tick and up), even with very high lag (5000 ms), but once the tick length drops below 80 ms (I'm testing 10 ms at the moment) the client tends to either massively over-predict (if the lag is high, ending up way ahead of the server) or under-predict (if the lag is lower, ending up behind the server). The size of the error seems related to the amount of lag: very low lag values like 10 ms produce an under-prediction of just one tick, whereas higher values like 1000 ms can over-predict by around 100 ticks.
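To put rough numbers on it: the error seems to be on the order of lag / tick length, e.g. 1000 ms / 10 ms = 100 ticks, which is about the size of the over-prediction I'm seeing.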
I know that very high latency is unusual and an extreme case, but should real-time apps be expected to handle it? Even so, it's odd that it keeps over-compensating even with less lag at the higher tick rates. Here is the lag-compensation code on the client:
LARGE_INTEGER counterFreq;
LARGE_INTEGER startTime;
QueryPerformanceFrequency(&counterFreq);
QueryPerformanceCounter(&startTime);

// Request time from server
char timeReqMsg = 0;
send(mServerConnectionSocket, &timeReqMsg, 1, 0);

// Simulate some lag
Sleep(4000);

recv(mServerConnectionSocket, reinterpret_cast<char *>(&mTimer.TickCount), 4, 0);
recv(mServerConnectionSocket, reinterpret_cast<char *>(&mTimer.Milliseconds), 4, 0);

LARGE_INTEGER endTime;
QueryPerformanceCounter(&endTime);

float delta = ((float)(endTime.QuadPart - startTime.QuadPart) / (float)counterFreq.QuadPart) * 1000.0f;
mTimer.TickCount += (int)(delta / 10.0f);
delta = mTimer.Milliseconds;

while(AppStatus::IsRunning)
{
    std::cout << mTimer.TickCount << "\n";
    Sleep(10 - delta);
    delta = 0;
    mTimer.TickCount++;
}
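The intent is that delta measures the full round trip (including the simulated lag): the tick count received from the server is fast-forwarded by delta / 10 ticks, and the server's reported milliseconds into its current tick are used to shorten the client's first Sleep so the two tick loops line up.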
And here is the code that responds to the client on the server:
SOCKET clientConnection = accept(mListenSocket, 0, 0);

char syncMsg = 0;
recv(clientConnection, &syncMsg, 1, 0);

char *tickCountMsg = reinterpret_cast<char *>(&mTimer.TickCount);
send(clientConnection, tickCountMsg, 4, 0);

LARGE_INTEGER counterFreq;
LARGE_INTEGER endTime;
QueryPerformanceFrequency(&counterFreq);
QueryPerformanceCounter(&endTime);

mTimer.Milliseconds = (int)(((float)(endTime.QuadPart - mTimer.TickStartTime) / (float)counterFreq.QuadPart) * 1000.0f);

char *msMsg = reinterpret_cast<char *>(&mTimer.Milliseconds);
send(clientConnection, msMsg, 4, 0);
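So the reply is just the server's current tick count followed by how many milliseconds have elapsed since that tick started (TickStartTime is recorded, in performance-counter counts, by the tick thread described below).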
The actual ticking is done on another thread. It simply records the time (in counts) at which the tick started and then sleeps until it needs to increment the tick count again.
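I haven't included that code, but it boils down to something like this (simplified sketch, not the exact code; 10 is the tick length in ms, and TickStartTime is the value the sync code above reads):

// Simplified sketch of the server tick thread
while(AppStatus::IsRunning)
{
    // Record the counter value at the start of this tick so the sync
    // code can work out how far into the current tick we are
    LARGE_INTEGER tickStart;
    QueryPerformanceCounter(&tickStart);
    mTimer.TickStartTime = tickStart.QuadPart;

    Sleep(10); // Tick length in ms
    mTimer.TickCount++;
}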
Sorry for the wordy post, but this has been confusing me for a couple of hours now and I wanted to see if anyone has a more educated view on what's happening.
Cheers.