Question about stress testing IOCP server

Started by
5 comments, last by Ysaneya 17 years ago
Hi, I've been programming an IOCP server for several months. Now that I have finished most of the basic structure, I've begun stress testing the server. The method I am using is to launch about 500 client programs that connect to the server and loop, sending packets at a frequency of about 20 packets a second, each packet about 512 bytes in size. When the server receives a packet, it broadcasts it to all clients.

However, soon after the test begins (about 20 s in), I get a WSAENOBUFS error on the server side when sending packets, and the server crashes. I have read some articles on the web, and it seems that I am probably issuing too many WSASend calls, which causes this error. Now I am confused: in my understanding, IOCP should be able to handle thousands of concurrent network connections, and what I am doing is only 500 clients broadcasting messages. Does this mean my "stress test" has already reached the limits of IOCP? I hope someone can help.

PS: my hardware is an Intel Core 2 CPU x 2 with 1 GB RAM (should be enough).

flyking
I've had many more connections with less hardware than you have using IOCP. Are you sure you're not leaking memory in there?
Maybe you could post the code for your worker threads?

Also, it might be worth a look here, just in case :)
http://support.microsoft.com/kb/q196271/
Tried this?

http://www.gamedev.net/community/forums/topic.asp?topic_id=442461
Quote:Original post by Antheus
Tried this?

http://www.gamedev.net/community/forums/topic.asp?topic_id=442461


Thanks, I'll try increasing my send buffer now.

Quote:Original post by Soul
I've had many more connections with less hardware than you have using IOCP. Are you sure you're not leaking memory in there?
Maybe you could post the code for your worker threads?

Also, it might be worth a look here, just in case :)
http://support.microsoft.com/kb/q196271/



I am quite sure there is no memory leak; however, I am using a memory pool class which saves all the memory blocks for recycled use.

My code is bulky, so I'll just give an outline of my worker thread:

DWORD ioWorker()
{
    ClientContext* context;
    IOBuffer* buffer;          // This structure contains the OVERLAPPED
    LPOVERLAPPED lpOverlapped;
    DWORD dwIOSize;
    bool bStop = false;

    while ( !bStop )
    {
        BOOL bRet = GetQueuedCompletionStatus( hCompletionPort, &dwIOSize,
                                               (PULONG_PTR)&context,
                                               &lpOverlapped, INFINITE );

        if ( !bRet )
        {
            // Handle error
            ...
        }
        if ( lpOverlapped )
        {
            buffer = CONTAINING_RECORD( lpOverlapped, IOBuffer, mOL ); // mOL is the OVERLAPPED member

            switch ( buffer->getIOState() )
            {
            case IO_READ:
                // Here I assign every posted read packet a unique order number,
                // then call WSARecv with the IO_READ_COMPLETE state
                ...
                break;
            case IO_READ_COMPLETE:
                // Here I process the received buffer: first I reorder the buffer,
                // then copy the data into a per-customer data list,
                // trigger a data-processing thread to read all packets in that list,
                // and then send a sync packet to all other customers by posting an IO_SEND packet.
                // THE WSAENOBUFS OCCURRED HERE
                // Finally, I post another read request.
                ...
                break;
            case IO_SEND:
                // Here I assign every send packet a unique send order number,
                // then call WSASend with the IO_SEND_COMPLETE state
                ...
                break;
            case IO_SEND_COMPLETE:
                // Here I just check the size transferred against the buffer size
                ...
                break;
            }
        }
    }
}
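One common trigger for WSAENOBUFS in a loop like this is having too many overlapped WSASend operations outstanding at once, which exhausts the locked-page/non-paged pool budget described in the KB article linked above. A minimal, platform-neutral sketch of capping the in-flight count (SendThrottle and every name here are hypothetical, not part of the original server):

```cpp
#include <cstddef>
#include <deque>

// Hypothetical outgoing buffer; in the real server this would wrap
// the IOBuffer/OVERLAPPED pair handed to WSASend.
struct SendBuffer { std::size_t bytes; };

class SendThrottle {
public:
    explicit SendThrottle(std::size_t maxPending) : maxPending_(maxPending) {}

    // Called instead of posting WSASend directly: either "posts" the
    // send (returns true) or queues it until a completion frees a slot.
    bool trySend(const SendBuffer& buf) {
        if (pending_ < maxPending_) {
            ++pending_;           // the real code would post WSASend here
            return true;
        }
        queued_.push_back(buf);   // defer until onSendComplete()
        return false;
    }

    // Called from the IO_SEND_COMPLETE handler: frees a slot and
    // drains one deferred buffer, if any.
    void onSendComplete() {
        --pending_;
        if (!queued_.empty()) {
            queued_.pop_front();
            ++pending_;           // post the deferred WSASend here
        }
    }

    std::size_t pending() const { return pending_; }
    std::size_t queued() const { return queued_.size(); }

private:
    std::size_t maxPending_;
    std::size_t pending_ = 0;
    std::deque<SendBuffer> queued_;
};
```

With a cap like this, a burst of broadcasts degrades into a growing queue (which you can bound and shed) instead of an unbounded pile of locked overlapped buffers.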


[Edited by - flyking on April 10, 2007 10:01:47 AM]
Quote:when server received a packet, it broadcast to all clients.


There's some chance that this has to do with your problem. If you have 500 clients, and each client sends data 20 times a second, and you echo that packet to all clients each time, that means you receive 500*20 == 10,000 packets per second, but you're sending 500*499*20 == 4,990,000 packets per second!

In general, when going from general LAN games (where you can echo to all connected players) to MMO games, you will have to do interest and bandwidth management, and decide who sees what data. You also typically structure the output per client so that you schedule the updates, and send the N highest-priority updates each time slot (up to the allowed maximum bandwidth per client).
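The "N highest-priority updates per time slot" idea can be sketched roughly as follows; Update, scheduleTick, and the priority/budget fields are illustrative names I'm making up here, not anything from flyking's server:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One pending outgoing update for a given client.
struct Update {
    int         priority;  // higher = more important to this client
    std::size_t bytes;     // payload size
};

// Pick at most maxUpdates pending updates for one client, highest
// priority first, stopping once the per-tick byte budget is spent.
// Unsent updates stay in `pending` for a later tick.
std::vector<Update> scheduleTick(std::vector<Update>& pending,
                                 std::size_t maxUpdates,
                                 std::size_t byteBudget)
{
    std::sort(pending.begin(), pending.end(),
              [](const Update& a, const Update& b) {
                  return a.priority > b.priority;
              });

    std::vector<Update> toSend;
    std::size_t used = 0;
    while (!pending.empty() && toSend.size() < maxUpdates &&
           used + pending.front().bytes <= byteBudget) {
        used += pending.front().bytes;
        toSend.push_back(pending.front());
        pending.erase(pending.begin());
    }
    return toSend;
}
```

Run once per client per tick, this bounds the outbound rate to maxUpdates * byteBudget regardless of how many packets arrive, which is exactly what the broadcast-to-everyone design lacks.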

enum Bool { True, False, FileNotFound };
Quote:Original post by hplus0603
Quote:when server received a packet, it broadcast to all clients.


There's some chance that this has to do with your problem. If you have 500 clients, and each client sends data 20 times a second, and you echo that packet to all clients each time, that means you receive 500*20 == 10,000 packets per second, but you're sending 500*499*20 == 4,990,000 packets per second!

In general, when going from general LAN games (where you can echo to all connected players) to MMO games, you will have to do interest and bandwidth management, and decide who sees what data. You also typically structure the output per client so that you schedule the updates, and send the N highest-priority updates each time slot (up to the allowed maximum bandwidth per client).


Yes, I think this is the problem. I installed a bandwidth monitor and ran my server; the upload rate suddenly rose from 0 Mbps to 98 Mbps. It seems that's the limit of my server. I'll try modifying my code to limit the total output and test again. Thanks :)

Quote:Original post by flyking
Yes, I think this is the problem. I installed a bandwidth monitor and ran my server; the upload rate suddenly rose from 0 Mbps to 98 Mbps.


So I'm guessing your network interface is 100 Mbps. With 5 million packets at 512 bytes each per second, you're trying to send about 2.5 GB/s (roughly 20 Gbit/s)... no surprise the send buffers fill up quickly :)
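To spell out the arithmetic behind those figures (standalone helpers for illustration, not part of anyone's server code):

```cpp
#include <cstdint>

// Packets the server must send per second when every packet from
// every client is echoed to all *other* clients.
std::uint64_t broadcastPacketsPerSec(std::uint64_t clients,
                                     std::uint64_t packetsPerClientPerSec)
{
    return clients * (clients - 1) * packetsPerClientPerSec;
}

// Resulting outbound bandwidth in bits per second.
std::uint64_t broadcastBitsPerSec(std::uint64_t clients,
                                  std::uint64_t packetsPerClientPerSec,
                                  std::uint64_t packetBytes)
{
    return broadcastPacketsPerSec(clients, packetsPerClientPerSec)
           * packetBytes * 8;
}
```

For 500 clients at 20 packets/s and 512 bytes per packet this gives 4,990,000 packets/s and about 20.4 Gbit/s outbound, roughly 200 times what a 100 Mbit interface can carry.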

Y.

