idinkin

Simple experiment


I have two processes running on one machine: one is the server and the other is the client. The client sends messages through a connectionless datagram socket like this:
	for (int i = 0; i < 5000; i++)
	{
		char buf[100];
		sprintf(buf, "%d\n", i);
		if ((numbytes = sendto(sockfd, buf, strlen(buf), 0,
				(struct sockaddr *)&their_addr, sizeof(struct sockaddr))) == -1) {
			printf("%d", WSAGetLastError());
			perror("sendto");
			exit(1);
		}
	}

	/* tell the server we are done */
	if ((numbytes = sendto(sockfd, "stop", 4, 0,
			(struct sockaddr *)&their_addr, sizeof(struct sockaddr))) == -1) {
		printf("%d", WSAGetLastError());
		perror("sendto");
		exit(1);
	}
The server receives the messages like this:
	int count = 0;
	while (1)
	{
		if ((numbytes = recvfrom(sockfd, buf, MAXBUFLEN - 1, 0,
				(struct sockaddr *)&their_addr, &addr_len)) == -1) {
			perror("recvfrom");
			exit(1);
		}

		buf[numbytes] = '\0';

		if (!strcmp(buf, "stop"))
			break;

		fprintf(log, "%s", buf); /* never pass received data as a format string */
		count++;
	}

	fprintf(log, "Messages received: %d", count);
The server is launched first. The problem is that no errors are reported from sendto and recvfrom, but the server always gets only the first 1860 messages and never breaks out of the loop, as it does not receive the 'stop' message from the client. I can run the client as many times as I want, and every time the server logs the first 1860 messages and keeps running. Any ideas why this might happen?

An interesting fact: if I send 10,000 messages instead of 5,000, the server gets all of the first 1860 messages and all of the last ones, from 8263 to 9999, for a total of 3598 messages. And this time it stops, because it receives the final 'stop' message too.

Is there a setting I can tune (a buffer size, perhaps?) to stop this from happening?

The problem is that you send the messages too fast, so some part of the UDP stack drops the packets. If you insert a sleep for a millisecond after each call to sendto(), chances are it will work fine, because the server thread will get scheduled often enough to drain the internal queues.
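
For example, the client's send loop could be throttled like this (a sketch based on the code above; Sleep() comes from <windows.h>):

	#include <windows.h> /* for Sleep() */

	for (int i = 0; i < 5000; i++)
	{
		char buf[100];
		sprintf(buf, "%d\n", i);
		if (sendto(sockfd, buf, strlen(buf), 0,
				(struct sockaddr *)&their_addr, sizeof(struct sockaddr)) == -1) {
			printf("%d", WSAGetLastError());
			perror("sendto");
			exit(1);
		}
		Sleep(1); /* give the receiver a chance to drain its socket buffer */
	}

Note that Sleep(1) on Windows typically sleeps for at least one scheduler tick (often around 15 ms), so 5000 sends will take a while; sleeping only once every 50 packets or so is a common compromise.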

Of course you don't get an error in these cases, as UDP does not guarantee delivery. Dropping packets on buffer overflow is "correct" behavior. If you need reliable in-order delivery, use TCP instead.
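
As for the buffer question: you can ask for a larger receive buffer on the server's socket with setsockopt() and SO_RCVBUF, which raises the threshold at which drops start, though it only postpones the problem rather than fixing it. A minimal sketch (the 1 MB size here is an arbitrary example, and the OS may cap what you actually get):

	/* on the server, before entering the recvfrom() loop */
	int rcvbuf = 1024 * 1024; /* request a 1 MB receive buffer; example value */
	if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF,
			(const char *)&rcvbuf, sizeof(rcvbuf)) == -1) {
		printf("%d", WSAGetLastError());
		perror("setsockopt");
	}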
