Bizarre clock() problem with connect (I think?)

Hey all,

First off, I've posted here (instead of under Multiplayer and Network Programming) because this might be a more general problem than just with connect()...however, if a mod feels differently, please move it [smile].

I've written a custom timer class that uses clock() to get the ticks elapsed since program start. I've also written a NetworkConnection class with a start() method that uses connect() to establish a connection with a remote server. My networking code works properly to the best of my knowledge.

I started noticing weird timing with clock() when I made connect() attempt a connection on a port it would time out on (my current timeout is set to 60 seconds), so I bailed on my custom timer class and just looked at clock() itself. I essentially have code that looks like this:

NetworkConnection conn;

printf("MS start:%f\n",((float)clock()/CLOCKS_PER_SEC)*1000);
conn.start();
printf("MS finish: %f\n",((float)clock()/CLOCKS_PER_SEC)*1000);

The milliseconds returned should be in the vicinity of 60000, since my timeout is 60 seconds. Instead, I get an MS start of roughly 0.0 ms and an MS finish of roughly 0.5 ms. This leads me to believe one of two things:

1. connect() is non-blocking for I/O, so the printfs just execute while the program is waiting for connect() to return its inevitable -1; or
2. the program is executing the clock() calls in the printfs as soon as it can, independent of conn.start(); the output is written to stdout after conn.start() finishes, but the clock() calls are made before conn.start() executes.

Or is it just really late and I'm being an absolute dunce about this?

Thanks,
Aviosity
We only have your word for it that conn.start() does what you say it does. Post the code for that function. It's almost certain to be nothing to do with clock() and entirely to do with what is in conn.start().
What? My word isn't good enough? [wink]

You're probably absolutely right. Here's conn.start():

NETERR NetworkConnection::start(char* host, unsigned int portNum)
{
  int result = 0;

  DEBUG("Starting TCP network connection to host %s on port %d...\n", host, portNum);

  //Attempt to create socket.
  sockFD = socket(PF_INET, SOCK_STREAM, 0);
  if(sockFD == -1)
    {
      DEBUG("Error creating socket.\n%s\n\n", strerror(errno));
      return NETERR_OPEN;
    }
  else
    {
      DEBUG("Socket opened (file descriptor %d).\n", sockFD);
    }

  //Attempt to look up host data.
  hostData = gethostbyname(host);
  if(hostData == NULL)
    {
      DEBUG("Error retrieving host data for %s.\n%s\n\n", host, strerror(errno));
      return NETERR_HOSTLOOKUP;
    }
  else
    {
      DEBUG("Host data retrieved:\n   Original hostname: %s\n   Resolved hostname: %s\n   Host IP: %s\n",
            host, hostData->h_name, inet_ntoa(*(struct in_addr*)hostData->h_addr));
    }

  //Setup host sockaddr structure.
  hostAddr.sin_family = AF_INET;
  hostAddr.sin_port = htons(portNum);
  hostAddr.sin_addr = (*(struct in_addr*)hostData->h_addr);

  //Attempt to connect to server on given port.
  result = connect(sockFD, (struct sockaddr*)&hostAddr, sizeof(hostAddr));
  if(result == -1)
    {
      DEBUG("Error establishing connection to %s on port %d.\n%s\n\n", host, portNum, strerror(errno));
      return NETERR_CONNECT;
    }
  else
    {
      DEBUG("Socket %d connected to %s on port %d.\n", sockFD, hostData->h_name, portNum);
    }

  //Return and declare as open.
  DEBUG("Socket %d ready to send/recv.\n", sockFD);
  open = 1;
  return NETERR_OK;
}


Note that variables such as hostData, hostAddr, and open are member variables of the NetworkConnection class. The last message I get before the long 60-second wait is the host data output, so I think it's fair to assume the actual pausing is taking place at connect().

As a side note, I placed DEBUG statements right around the connect() call (and, just to make sure it wasn't something bizarre with my DEBUG statements, I also checked with printf) and saw the same behavior. To me, it looks like connect() is almost pausing the clock()...which can't be right.

Let me know if there's anything you need me to clear up [smile].

Thanks so much,
Aviosity
By default, if you connect to a port with no listener, the remote TCP stack sends back a reset behind the scenes, which saves the client from waiting for a timeout.

If the port is hidden by a firewall or the IP is incorrect, then that won't happen and you should hit your timeout.
Hey Adam,

That makes sense, and I'm attempting to connect to a server (aviosity.org) on a port (5190) that I know isn't open and has no listener. I want to hit the timeout, because I'm using it to test my timer. In the output, it literally looks like the clock has only advanced roughly 100 ticks over the course of a 60-second timeout. The entire program hangs for 60 seconds while connect() is attempting to establish a connection, but the clock ticks seem to say it's only been hanging for fractions of a millisecond.

Any more ideas?

Thanks so much,
Aviosity
So I've discovered the problem isn't just related to connect(). When I write this particular piece of code:

printf("Clock start: %d\n",clock());getchar();printf("Clock end: %d\n",clock());


The printfs show only about a quarter of a millisecond of elapsed time (about 250 ticks), even though I've left the program sitting at the getchar() for upwards of 30 seconds (roughly 30 million ticks). If I weren't already bald, I'd be in trouble [wink]. My understanding is that clock() does not stop counting ticks when the program blocks or is waiting for an interrupt, but is that understanding flawed? And if it is, what should I be using instead of clock() for high-resolution timing?

I'm running on OS X 10.5 using g++ 4.0.1 on a Core 2 Duo, and CLOCKS_PER_SEC is defined as 1,000,000. I'm kind of grasping at straws here, so if anyone has any idea what's going on, I'd appreciate it [smile].

Thanks,
Aviosity
If you just want to control the timeout on the connect call, then might I suggest you instead change the socket to non-blocking, make the call to connect, follow that with a call to select (specifying the desired timeout), and then change the socket back to blocking.

Here is one example of doing that.
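Roughly, the idea looks like this (a sketch of my own, assuming a POSIX socket API; the error handling is trimmed and the helper name connect_with_timeout() is just illustrative, not anything from your code):

#include <sys/socket.h>
#include <sys/select.h>
#include <sys/time.h>
#include <fcntl.h>
#include <errno.h>

/* Returns 0 on success, -1 on error or timeout. */
int connect_with_timeout(int sockFD, const struct sockaddr *addr,
                         socklen_t addrLen, int timeoutSec)
{
    /* Switch the socket to non-blocking mode. */
    int flags = fcntl(sockFD, F_GETFL, 0);
    fcntl(sockFD, F_SETFL, flags | O_NONBLOCK);

    int result = connect(sockFD, addr, addrLen);
    if (result == -1 && errno == EINPROGRESS)
    {
        /* Connection attempt is in flight; wait for the socket to
           become writable, but only for timeoutSec seconds. */
        fd_set writeSet;
        FD_ZERO(&writeSet);
        FD_SET(sockFD, &writeSet);

        struct timeval tv;
        tv.tv_sec  = timeoutSec;
        tv.tv_usec = 0;

        if (select(sockFD + 1, NULL, &writeSet, NULL, &tv) <= 0)
        {
            result = -1;    /* timed out, or select() itself failed */
        }
        else
        {
            /* Writable: check whether the connect actually succeeded. */
            int err = 0;
            socklen_t errLen = sizeof(err);
            getsockopt(sockFD, SOL_SOCKET, SO_ERROR, &err, &errLen);
            result = (err == 0) ? 0 : -1;
        }
    }

    /* Restore blocking mode for normal send()/recv() use. */
    fcntl(sockFD, F_SETFL, flags);
    return result;
}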
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
I think the issue is that you are trying to print a clock_t, which the standard only defines as an arithmetic type capable of representing time -- you are not given any specific implementation details, which means your %d may be the wrong format specifier, and passing a clock_t straight to printf() is not portable anyway.

If you really need to, you could probably do some sort of casting magic, knowing that it is an arithmetic type, and making sure not to chop off any digits.

Or perhaps create an interval variable, declared as a double, and defined to be the difference between the two clock_t values...
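Something along these lines (just a sketch to illustrate the idea; nothing here is specific to your program):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    /* ... whatever is being timed ... */

    clock_t end = clock();

    /* Convert to seconds as a double; no need to know what
       clock_t actually is on this platform. */
    double elapsed = (double)(end - start) / CLOCKS_PER_SEC;
    printf("Elapsed: %f seconds\n", elapsed);
    return 0;
}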
Hey iMalc, visage,

I don't want to control the timeout on my connect call; I want to make sure I have accurate timing. The example I posted above using getchar() shows something really goofy going on with clock(), and not just in conjunction with connect(). This indicates to me that the issue is with clock() and not with connect() (I'm not sure whether or not that's a blessing [smile]).

On my system, clock_t is typedef'd as an unsigned long. I should have no problem just passing it to printf with a %d format specifier, but I cast it to an unsigned long to make sure; that compiles with no errors or warnings and produces the same result.

I appreciate everyone's help so far - this is driving me absolutely up a wall that I don't have an accurate timer (which is rather important for what I'm doing). Anybody else see something I might be goofing up?

Thanks so much,
Aviosity
Hmm, according to this page CLOCKS_PER_SEC on OS X is 100.

Also see these docs from the Apple site. Note that clock() counts processor time used, not real time elapsed. When the process is asleep waiting for something, very little processor time is used.
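
A quick way to see that for yourself (illustrative only, not your exact code):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    clock_t before = clock();
    sleep(5);                 /* process blocks; almost no CPU time is used */
    clock_t after = clock();

    /* Prints something close to 0, not 5, because clock() measures
       processor time, which barely advances while the process sleeps. */
    printf("clock() delta: %f seconds of CPU time\n",
           (double)(after - before) / CLOCKS_PER_SEC);
    return 0;
}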

That would explain your problem, but it doesn't get you a high-resolution timer.

The best option is probably an OS-specific way of reading the timer, but I've no idea what the equivalent of QueryPerformanceCounter() is on OS X.
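
If I had to guess, mach_absolute_time() from <mach/mach_time.h>, scaled with mach_timebase_info(), is the usual answer on OS X; something like the sketch below (untested on your setup, so treat it as a starting point). gettimeofday() would be a simpler, lower-resolution fallback.

#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>   /* OS X specific */

int main(void)
{
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);   /* numer/denom convert ticks to nanoseconds */

    uint64_t start = mach_absolute_time();

    /* ... work being timed, including time spent blocked ... */

    uint64_t end = mach_absolute_time();

    /* mach_absolute_time() counts wall-clock ticks, so time spent
       blocked in connect() or getchar() is included, unlike clock(). */
    double elapsedNs = (double)(end - start) * timebase.numer / timebase.denom;
    printf("Elapsed: %f ms\n", elapsedNs / 1.0e6);
    return 0;
}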
