Winsock & Receive Timeout problem.


I have written a class that wraps some of the Winsock functions. The class creates a blocking socket and lets me send and receive data. The problem arises when I receive: I want the recv() call to time out after a few seconds, so I set that up with setsockopt() and the SO_RCVTIMEO option. It does not seem to work; recv() stays blocked forever. To test this I created a server application that accepts a connection request and then falls into an infinite loop. I know the rest of the code works, because if the server sends data back, recv() returns the number of bytes received and fills the buffer. Here is my class implementation:
  
// Required include files.

#include <windows.h>
#include <stdio.h>
#include <winsock.h>

// Application include files.

#include "socket.h"

Socket::Socket()
{}

bool Socket::init(WORD pVersion, bool pDebug)
{
	debug = pDebug;
	
	version = pVersion;

	int errorCode;
	
	if(WSAStartup(version, &WSAData) != 0)
	{
		errorCode = WSAGetLastError();
		
		if(debug)
			printf("Socket::init() - Error Code: %d\n", errorCode);

		return(false);
	}

	return(true);
}

bool Socket::create()
{
	int errorCode;
		
	if((sock = socket(AF_INET, SOCK_STREAM, 0)) == INVALID_SOCKET)	
	{
		errorCode = WSAGetLastError();
		
		if(debug)
			printf("Socket::create() - Error Code: %d\n", errorCode);
		
		return(false);
	}

	return(true);
}

bool Socket::open(char *pIPAddress, int pPort, int pTimeOut)
{
	IPAddress = pIPAddress;
	port = pPort;
	//timeOut = pTimeOut;

	timeOut = 5;

	int errorCode;

	struct sockaddr_in  serverAddress;

	memset(&serverAddress, 0, sizeof(serverAddress));
    serverAddress.sin_family = AF_INET;
    
	// Use the local host

    serverAddress.sin_addr.s_addr = inet_addr(IPAddress);
    serverAddress.sin_port = htons(port);

	
	char timeOutBuffer[5];
	itoa(timeOut, timeOutBuffer, 10);
	
/*	
	// Set the socket send timeout.
	if(setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, timeOutBuffer, sizeof(timeOutBuffer)) == SOCKET_ERROR)
	{
		errorCode = WSAGetLastError();

		if(debug)
			printf("Socket::open() - Error Code: %d\n", errorCode);
	
		return(false);
	
	}
*/
	// Set the socket read timeout.
	if(setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, timeOutBuffer, sizeof(timeOutBuffer)) == SOCKET_ERROR)
	{
		errorCode = WSAGetLastError();

		if(debug)
			printf("Socket::open() - Error Code: %d\n", errorCode);
		
		return(false);
	
	}
	
	if(connect(sock, (struct sockaddr *) &serverAddress, sizeof(serverAddress)) == SOCKET_ERROR)
    {
		errorCode = WSAGetLastError();
	
		if(debug)
			printf("Socket::open() - Error Code: %d\n", errorCode);
	
		return(false);
	}
	
	return(true);
}

bool Socket::write(char *pData, int pLength)
{
	int errorCode;
	int bytesSent = 0;

	bytesSent = send(sock, pData, pLength, 0); 
	
	if (bytesSent == SOCKET_ERROR)
	{
		errorCode = WSAGetLastError();		

		if(debug)
			printf("Socket::write() - Error Code: %d\n", errorCode);
	
		return(false);
	}

	if(debug)
		printf("Socket::write() - Bytes Sent: %d\n", bytesSent);

	return(true);
}

bool Socket::read(char *pData, int pBufferLength, BYTE pTerminator, int &pBytesReceived)
{
	int errorCode;
	int bytesReceived = 0;
	int accumulatedBytes = 0;
	
	bool terminate = false;

	printf("Before recv...\n");
	bytesReceived = recv(sock, pData, pBufferLength, 0);
	printf("After recv...\n");
	printf("Bytes received: %d", bytesReceived);

	if (bytesReceived == SOCKET_ERROR)
	{
		errorCode = WSAGetLastError();	

		if(debug)
			printf("Socket::read() - Error Code: %d\n", errorCode);
		
		return(false);
	}

/*
	while(!terminate)
	{		
		printf("Before recv...\n");
		bytesReceived = recv(sock, (pData + accumulatedBytes), (pBufferLength - accumulatedBytes), 0);
		printf("After recv...\n");
		printf("Bytes received: %d", bytesReceived);

		if (bytesReceived == SOCKET_ERROR)
		{
			errorCode = WSAGetLastError();	

			if(debug)
				printf("Socket::read() - Error Code: %d\n", errorCode);
		
			return(false);
		}

		accumulatedBytes += bytesReceived;

		if(bytesReceived == 0)
			break;
		
		// Look for the terminating character in the chunk of data received from
		// the socket. Inefficient: this rescans the entire buffer instead of
		// just the newly received chunk.
		for(int i = 0; i < accumulatedBytes; i++)
		{
			if(pData[i] == pTerminator)
				terminate = true;
		}
	}
	
	// Terminate the string.
	pData[accumulatedBytes] = '\0';

	pBytesReceived = accumulatedBytes;
	
	if(debug)
		printf("Socket::read() - Bytes Received: %d\n", accumulatedBytes);
*/
	return(true);
}

bool Socket::close()
{
	int errorCode;

	if(closesocket(sock) == SOCKET_ERROR)
    {
		errorCode = WSAGetLastError();

		if(debug)
			printf("Socket::close() - Error Code: %d\n", errorCode);
	
		return(false);
	}
	
	return(true);
}

Socket::~Socket()
{
}
  

In my experience thus far with Windows application programming, it's always been worth the extra effort to use asynchronous methods, even if you generally use them synchronously. They just always seem to work better, avoiding aggravating behavior such as you describe.

If you're using blocking sockets, just make two threads per socket: one for sending and one for receiving (plus one thread for accepting on the listening socket). When you call WSACleanup(), all blocking operations will terminate with an error.

The function you should look into if you want timeouts is select(). It lets you poll whether a socket is ready to send/recv, and you can set a timeout on how long to wait.

Most of my experience was with Unix sockets for assignments in school, but I've used select() several times in Windows. I don't know if there's a better way to do it with Winsock.

Jesse Chounard

I fixed the problem with setsockopt()...

Come on, MS can't be that stupid, providing functions that don't work... So I did some hunting around MSDN (loads of crap!) and found a small snippet of code.

The snippet calls setsockopt() right after the socket is created with socket(). So I moved my call to right after the socket() call, and voila, it works.

I don't think calling setsockopt() after populating the sockaddr_in structure should affect it, but then again, who knows.
