Is Something Wrong With "gettimeofday"


I use gettimeofday to take the time, and if it satisfies the condition I send the packet to the client:
#include <stdio.h>
#include <sys/time.h>

void sendPacket(void);   /* provided elsewhere by the application */

void Test()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);   /* seconds + microseconds since the epoch */
    printf("Time:%ld,%ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
    sendPacket();
}

The differences between the printed times under Linux are right, but when I recvPacket() in the client, which runs under Windows 2000 Pro, the difference between the packets' receive times is smaller than the difference between the printed send times under Linux. I also used tcpdump to capture the outgoing packets under Linux, and there too the time difference is smaller than the real one. The Linux OS is AS4, and I use ACE to send the packets. Can you tell me why? Thank you in advance!

Gonna need more information.

If the printed value on Linux is correct, then why would there be a problem with gettimeofday? The point at which you have a problem is only after sending it on Linux and receiving it on Windows.

-=[ Megahertz ]=-

It's quite possible that the scheduler introduces jitter such that you receive two packets closer together in time than they were sent.

If you need a specific interval for packets, you need a de-jitter buffer, which receives packets as soon as they come in, and schedules them for delivery at the predictable interval. Ideally, put a time-stamp in the packets on send, too.
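
A minimal sketch of such a de-jitter buffer, in C++ with gettimeofday as the clock; the class and function names here are only illustrative, not from anyone's actual code:

#include <sys/time.h>
#include <deque>
#include <string>

// Packets are queued the moment they arrive off the socket; deliverDue() hands
// out at most one packet per fixed interval, so the consumer sees a steady
// cadence even when the network or the scheduler bunches arrivals together.
class DejitterBuffer
{
public:
    explicit DejitterBuffer(long long intervalMs)
        : m_intervalMs(intervalMs), m_lastDeliveryMs(0) {}

    void onPacketReceived(const std::string& packet) { m_queue.push_back(packet); }

    // Returns true and fills 'out' when the next packet is due for delivery.
    bool deliverDue(std::string& out)
    {
        const long long nowMs = nowMilliseconds();
        if (m_queue.empty() || nowMs - m_lastDeliveryMs < m_intervalMs)
            return false;
        out = m_queue.front();
        m_queue.pop_front();
        m_lastDeliveryMs = nowMs;
        return true;
    }

private:
    static long long nowMilliseconds()
    {
        struct timeval tv;
        gettimeofday(&tv, 0);
        return (long long)tv.tv_sec * 1000LL + tv.tv_usec / 1000LL;
    }

    long long m_intervalMs;
    long long m_lastDeliveryMs;
    std::deque<std::string> m_queue;
};

The receive loop would push into this as fast as packets arrive and poll deliverDue() from its tick; the time-stamp carried in each packet can then be used to order or drop late arrivals.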

I tried all the methods!

Preconditions:
1. The send buffer is 16 MB.
2. The time difference between server and client, I mean between sending a packet and receiving it, is often about 100 ms.
3. I wrote a *Test* program to test the ACE framework, and it runs correctly.
4. I moved all the code from Linux to Windows and replaced gettimeofday with GetTickCount, and it also runs correctly.

I also searched for gettimeofday in Google Groups, and it's notorious! Can you tell me which function can replace gettimeofday? I will use milliseconds.


Thank U In Advance!
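
As an aside: if all that's wanted is a millisecond count on Linux comparable to GetTickCount on Windows, it can be built directly on top of gettimeofday; a minimal sketch, with the function name my own:

#include <sys/time.h>

// Millisecond wall-clock timestamp built on gettimeofday(); the 64-bit result
// keeps the seconds-to-milliseconds multiplication from overflowing.
static long long millisecondsNow()
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return (long long)tv.tv_sec * 1000LL + tv.tv_usec / 1000LL;
}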

It's not something silly, is it?

Remember that tv_usec is MICROSECONDS on Linux... This is from my timer class, which works OK on both Linux and Windoze.


float WSTimer::getSeconds()
{
#ifdef WSCORE_WIN32 // timeGetTime is MILLISECOND PRECISION
    m_iEndClock = timeGetTime();
#endif

#ifdef WSCORE_LINUX // gettimeofday is MICROSECOND PRECISION
    gettimeofday(&tv, 0);
    m_iEndClock  = tv.tv_sec * 1000000;
    m_iEndClock += tv.tv_usec;
#endif
    // m_fRate_inv on Windows = 1/1000
    // m_fRate_inv on Linux   = 1/1000000
    m_iDeltaTime = m_iEndClock - m_iStartClock;
    return (float)((double)(m_iDeltaTime) * m_fRate_inv);
}



Does this help?

Thank u, _winterdyne_!

The problem I've run into is an exception! I know how to use gettimeofday, but the fact is that when I send a packet I print the time, and when I receive it I also record the time (using tcpdump to capture the packet, or a short Winsock program that captures the packet and prints the time).

For example:

[Server]
// the difference is 1120ms
1222222222,123456 Sending the packet 1
1222222223,243456 Sending the packet 2

[Client]
// the difference is 1000ms
26555555,123 Receiving the packet 1
26555556,123 Receiving the packet 2


This is not the actual data, but the differences are right!

I don't know how to explain it. Can you explain it for me?

BTW: I traced the program, and it sends the packet to the client immediately!

I see.

You're saying the delay between packets on the server is longer than on the client.

What is the size of the packets? Is packet 2 larger than the MTU on the server but smaller than the MTU on the client?

I think you'll have to do what hplus has suggested and implement a receiving buffer to schedule delivery at a predictable time, if you really need to do so.


Thank u, _winterdyne_!

My send buffer is 16 MB, and each packet is about 0x39 bytes!
If something were wrong with transferring the packets, why would the interval between sending two packets often be 100 ms longer than the interval between the client receiving them?

I can't explain it. Can you?

Where are you printing the time difference? What is happening between the sends?
What socket implementation are you using to send and receive?
Is nagling (Nagle's algorithm) occurring on send? Try setting TCP_NODELAY if you're using TCP methods.
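
For reference, on a plain BSD-style TCP socket, disabling Nagle looks like the sketch below; if the sending side goes through ACE's socket wrappers, the same TCP_NODELAY option should be settable there, but check the ACE documentation for the exact call.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Disable Nagle's algorithm on an already-created TCP socket so that small
// writes are sent immediately instead of being coalesced with later data.
// Returns 0 on success, -1 on error (see errno).
int disableNagle(int sock)
{
    int flag = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
}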

