fd_set under Unix

Just to be clear: as was said, the first parameter to select() (nfds) is ignored in Winsock. On Unix you have to set it correctly, and the correct value is the largest fd plus one, *not* the number of fds as would be logical.
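A minimal sketch of what that looks like in practice (sock_a and sock_b are hypothetical, already-created descriptors):

/* Wait up to 5 seconds for either socket to become readable.
 * The key point from the post: nfds is the highest-numbered fd
 * plus one, not the number of fds being watched. */
#include <sys/select.h>
#include <sys/time.h>

int wait_readable(int sock_a, int sock_b)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock_a, &readfds);
    FD_SET(sock_b, &readfds);

    /* Largest descriptor plus one -- ignored by Winsock, required on Unix. */
    int nfds = (sock_a > sock_b ? sock_a : sock_b) + 1;

    struct timeval tv = { 5, 0 };  /* 5-second timeout */
    int ready = select(nfds, &readfds, NULL, NULL, &tv);
    if (ready > 0 && FD_ISSET(sock_a, &readfds))
        return sock_a;             /* sock_a has data */
    if (ready > 0 && FD_ISSET(sock_b, &readfds))
        return sock_b;             /* sock_b has data */
    return -1;                     /* timeout or error */
}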

IMHO select wins the award for the worst API design in the entire computing universe.
-Mike
I've seen implementations that pass FD_SETSIZE as the first parameter. Would this be correct? The BSD fd.h header and the VxWorks docs I have here seem to agree, although the Windows fd's I've seen go far over the value of 64 (the value of FD_SETSIZE under both Windows and BSD).

quote:
If either pReadFds or pWriteFds is NULL, they are ignored. The width parameter defines how many bits will be examined in the file descriptor sets, and should be set to either the maximum file descriptor value in use plus one, or simply to FD_SETSIZE. When select( ) returns, it zeros out the file descriptor sets, and sets only the bits that correspond to file descriptors that are ready. The FD_ISSET macro may be used to determine which bits are set.


Would this have a performance impact?
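As I read the quoted docs, both widths are legal, and the cost of FD_SETSIZE is just that the kernel examines bits that can never be set. A sketch of the two choices side by side (fds and nsocks are hypothetical; they stand for whatever descriptor list you're watching):

/* Build the set, then pick a width for select(). */
#include <sys/select.h>

int poll_all(const int *fds, int nsocks, fd_set *readfds)
{
    FD_ZERO(readfds);
    int maxfd = -1;
    for (int i = 0; i < nsocks; i++) {
        FD_SET(fds[i], readfds);
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }

    /* Option 1: exact width -- select() only scans bits 0..maxfd. */
    return select(maxfd + 1, readfds, NULL, NULL, NULL);

    /* Option 2 (also correct, per the docs quoted above):
     *   select(FD_SETSIZE, readfds, NULL, NULL, NULL);
     * The kernel then examines all FD_SETSIZE bits, so with only a
     * handful of sockets it does a little more scanning per call. */
}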
STOP THE PLANET!! I WANT TO GET OFF!!
woohoo... it works... even though I've been chasing bugs all day that weren't there.

I'm compiling and running the app on a remote target through a telnet session, and it seems that only printf's in the main thread are shown in the telnet window. Printf's in other threads are only shown in the terminal at the target.
If I had known that, I wouldn't have spent all afternoon figuring out why I didn't see my "data thread started" messages.
I figured it out when the guy at the target said someone "has been spamming the target's terminal with weird printfs".

Oh well... Learned something new again today... Don't spam someone else's target
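If your target is VxWorks (the docs quoted earlier suggest it might be), my understanding is that a spawned task inherits the target console's stdio, which would explain why its printfs never reach the telnet shell. A hedged sketch of redirecting a task's output using ioTaskStdSet() from ioLib; dataTaskId is a hypothetical task ID for the data thread:

/* Point the data task's stdout/stderr at the calling task's
 * (e.g. the telnet shell that spawned it). Task ID 0 means "self". */
#include <ioLib.h>

void redirect_task_output(int dataTaskId)
{
    ioTaskStdSet(dataTaskId, 1, ioTaskStdGet(0, 1));  /* stdout */
    ioTaskStdSet(dataTaskId, 2, ioTaskStdGet(0, 2));  /* stderr */
}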

STOP THE PLANET!! I WANT TO GET OFF!!
