Winsock problem

Started by
7 comments, last by Dauid 15 years, 2 months ago
sup, I've been trying to get a socket server/client app working, and I got it working! But not the way I want it to. This is my problem: on the server I have an integer increasing at a high rate. This integer is the data I'm sending to the client, and what I want is for the client to only receive the latest available value of the integer, even if the client is receiving at a slower pace. I'm using socket( AF_INET, SOCK_STREAM, 0 ) for configuration. Code & output example, excluding all data conversions.

Server:
int i = 0;
while(true) {
    i++;
    send(i);
    cout << i << ' ';
    Sleep(10);
}

Client:
int buf;
while(true) {
    recv(buf);
    cout << buf << ' ';
    Sleep(100); // using a higher sleep to demonstrate my problem
}

Output over ~x seconds:
Server: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 ...
Client: 0 1 2 3

What I want it to output is:
Server: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 ...
Client: 0 5 11 14

I want the client to be as up to date as the server, or at least as close as possible. I'm having a hard time understanding this network programming, so could anyone give me some sample code? Thanks in advance.
With stream sockets you will automatically receive all data sent so far. So if your server sends '1', then '2', ... then '7' while the client is sleeping, the client will on wake-up receive '1234567' - not just '1', or just '7', but all of the data. This cannot be avoided by the network, but you can simply extract the last number in your client: recv() returns how many bytes were received, so you can read one number from the back of the buffer and disregard the earlier ones.
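A minimal sketch of that idea, assuming the server sends fixed-size 4-byte integers back to back (the helper name latestOf is made up for illustration, and a value split across two recv() calls would need carry-over handling not shown here):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Given the raw bytes returned by one recv() call, extract the most
// recent *complete* 4-byte integer and ignore the older ones.
// Assumes the server writes int32_t values back to back.
int32_t latestOf(const char* buf, int bytesReceived, int32_t fallback) {
    int complete = bytesReceived / static_cast<int>(sizeof(int32_t));
    if (complete == 0) return fallback;  // nothing whole arrived; keep old value
    int32_t value;
    std::memcpy(&value, buf + (complete - 1) * sizeof(int32_t), sizeof(value));
    return value;                        // trailing partial bytes are discarded
}
```

In a real client you would pass recv()'s return value as bytesReceived, and carry any trailing partial bytes over to the next call.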

If you do this with much higher data rates, such as sending 1 MB of data every second from the server, and the client's connection is too slow to actually receive it, then the TCP protocol will handle this automatically too: your server will get an error that the buffer is full once no more data can be sent or queued for sending.
Quote:Original post by Erik Rufelt
so you can simply read one number from the back of the buffer, and disregard the earlier numbers.


Wouldn't this be a drag on the performance of the app, receiving obsolete data?

No, it will most probably never be an issue. In any case, you can never avoid receiving all the data sent by the server; to avoid that, you must talk to the server and let it know how often it needs to send something. With stream sockets you will always receive all sent data, and if you fail to receive one byte you will never receive any byte after it; that is how streams are supposed to work. All data is received in the same order it was sent.
Then how about this:
the server issues a send() to the connected socket,
while the client waits a few seconds before calling recv(), which would block the server's send() call.
When the client finally calls recv(), the send() is no longer blocked and delivers its data. The problem is that the client only calls recv() once, while the server keeps calling send() until the buffer in the network stack is full.

I want the send() function to block until the client calls recv().

I can't seem to find any documentation on this issue, but I must have missed something.
The TCP and UDP protocols do not support such behavior. It must be implemented on top of them.

"Latest only" can work in two ways:
- If you can afford latency, have the client request data; the server then replies with the current value.
- If you cannot afford latency, the server has no choice but to keep sending. The client keeps reading from the socket, but the application runs independently and only uses the latest value.
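The second option can be sketched without any socket code at all: a hypothetical reader loop (standing in for the client's recv() loop, which would normally run on its own thread) publishes each parsed value into an atomic, and the application samples it at its own pace:

```cpp
#include <atomic>
#include <cassert>

// Latest-value mailbox: the network reader overwrites it as fast as data
// arrives; the application reads it whenever it likes and never falls behind.
std::atomic<int> g_latest{0};

// Stand-in for the client's recv() loop; in a real client this would run
// on a dedicated thread and store each integer parsed off the socket.
void readerLoop(int updates) {
    for (int i = 1; i <= updates; ++i) {
        g_latest.store(i, std::memory_order_relaxed);
    }
}
```

The design point is that receiving and consuming are decoupled: the reader drains the socket continuously so the stream never backs up, while the application only ever looks at the newest value.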

Quote:I want the send()-function to be blocked until the client calls recv().


Various remoting frameworks call this a blocking RPC call. It has its uses, but should generally be avoided. It's also not entirely trivial to implement correctly.
Quote:I can't seem to find any documentation on this issue, but I must've missed something.
This approach is generally highly undesirable due to various practical issues. For example, the client shutting down without notification will cause the server to hang indefinitely - neither TCP nor UDP supports detection of such cases in the example you describe. Solutions then require manual keep-alive management and whatnot.


If your server produces a large amount of data and you want to keep the client notified, then you will use all available bandwidth. This means that at some point, while trying to send data to the client, you will find you are out of bandwidth and cannot send. Bandwidth can be monitored manually by keeping track of how much has been sent, or by checking for the WSAEWOULDBLOCK return code.

Pseudo server loop then looks like this:
while (true) {
  if (canSend) {
    send(latestValue);
  } else {
    Sleep(...); // depends on actual send rate
  }
}

It's basically an infinite loop which keeps trying to send as much data as possible.

canSend can be implemented in several ways. One is to use select() to determine which sockets are writable. Another is to keep track of how much data you've sent and stop sending at some point, but this is fairly unreliable.
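A sketch of the select() variant (shown with BSD sockets so it can stand alone; Winsock's select() takes the same fd_set arguments, though it ignores the first parameter):

```cpp
#include <cassert>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

// canSend via select() with a zero timeout: poll whether the socket's
// send buffer currently has room, without blocking.
bool canSend(int sock) {
    fd_set writeSet;
    FD_ZERO(&writeSet);
    FD_SET(sock, &writeSet);
    timeval timeout;
    timeout.tv_sec = 0;    // zero timeout = return immediately
    timeout.tv_usec = 0;
    int ready = select(sock + 1, nullptr, &writeSet, nullptr, &timeout);
    return ready > 0 && FD_ISSET(sock, &writeSet);
}
```

With a non-zero timeout the same call doubles as the Sleep() in the loop above: it wakes up as soon as the socket becomes writable instead of sleeping a fixed interval.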

Another option, usually used with UDP but adaptable to TCP, is an ACK/NACK scheme. Such a scheme uses an in-transit buffer to keep track of how much data the client hasn't acknowledged. TCP has a scheme like this built in to prevent flooding the peer's buffer.
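A toy sketch of such an application-level window (names made up for illustration; real TCP does this with sequence numbers and a receive window advertised by the peer): stop handing data to the network once too much is unacknowledged, and resume as ACKs come back.

```cpp
#include <cassert>

// Application-level ACK window: track how many bytes the peer has not yet
// acknowledged, and refuse to send once that amount would exceed the limit.
struct AckWindow {
    int unacked = 0;
    int limit;
    explicit AckWindow(int byteLimit) : limit(byteLimit) {}

    bool trySend(int bytes) {   // false = window full, back off
        if (unacked + bytes > limit) return false;
        unacked += bytes;
        return true;
    }
    void onAck(int bytes) {     // peer confirmed receipt
        unacked -= bytes;
    }
};
```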


But the short version: you either need to flood the network with redundant data that the peer will never use, or accept round-trip latency, which can often make the process useless (character movement in games, for example).
Thanks a lot for that answer, Antheus!

I almost understand how this works now ^^

I guess it doesn't matter that the server sends obsolete data; what I don't understand is where this buffer is and how I can access it. I keep sending data using send(), so how can I use recv() to only fetch the end of the buffer?

And if this buffer gets full, how can I empty it so that send() can refill it with fresh, updated data?
Quote:Original post by Dauid
I guess it doesn't matter that the server sends obsolete data; what I don't understand is where this buffer is and how I can access it. I keep sending data using send(), how can I use recv() to only fetch the end of the buffer?


Each socket has two buffers associated with it - one for sending, one for receiving. When you establish a connection there are four buffers, two for each socket on each side of the connection.

In addition, the kernel or network stack may be buffering some data - this is irrelevant since it's not accessible to the user.

The last buffer is the network itself. Data in transit is stored in memory on routers between the two points, or exists as electrical signals on the wire. Again, irrelevant.

The important thing is that TCP has a flow-control mechanism, which signals that congestion has occurred somewhere along this line. It is reported by select(), which will not return a socket that cannot be written to; as a WSAEWOULDBLOCK error when calling send() on a non-blocking socket; by send() blocking when using blocking sockets; or via some other mechanism for non-Berkeley socket APIs.

But it's a simple bandwidth-vs-latency trade-off. Do you prefer to waste bandwidth by sending redundant data, or optimal bandwidth use and high latency when receiving data? That is a question of design, desires, requirements, technical constraints, etc...

Quote:And say if this buffer gets full, how can I empty it so that send can refill with fresh updated data?


Emptying a buffer doesn't make sense in network programming, due to the previously mentioned complexity of what the "buffer" actually is. Once you call send(), things are out of your hands.

Some APIs offer the ability to flush local socket buffer so that network sends the data immediately, but that's not related to your question.

The only buffering behavior that can be affected is, in the case of a TCP connection, disabling the Nagle algorithm (google it).
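For reference, disabling Nagle is a one-line socket option (sketched with BSD sockets; on Winsock the call is the same except the option value is passed as a const char*):

```cpp
#include <cassert>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

// Disable the Nagle algorithm so small sends go onto the wire immediately
// instead of being coalesced while waiting for outstanding ACKs.
bool disableNagle(int sock) {
    int flag = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof(flag)) == 0;
}
```

Note this only affects when already-queued data is transmitted; it does not let you discard or replace data once send() has accepted it.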
I see,

so the way to solve my problem of the client getting updated data from the server is to constrain the pace at which the server sends it, and to process it as fast as possible on the client, ignoring any reception delays on the client side.

Thanks again and that article you posted seems really useful!

This topic is closed to new replies.
