MMORPG and the ol' UDP vs TCP

69 posts in this topic

Quote:
Original post by John Schultz
This makes sense for objects that are rapidly changing state when the system is not running lock-step (or don't require ordered state consistency). To date, I have not run into a problem where this method can provide significant bandwidth savings, but I'll keep it in mind as a future option.

We've found that this method is useful for almost all simulation objects in the 3D (or 2D) world. Players, projectiles, vehicles, etc, whose updates constitute a substantial amount of the server->client communication. In reality it doesn't "save" bandwidth, rather it allows you to more optimally use the bandwidth that you have. Because you're not guaranteeing that any particular data are being delivered, the network layer has the latitude to prioritize updates on the basis of "interest" to a particular client. This results in a substantially more accurate presentation to the client for a given bandwidth setting.
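The interest-based prioritization described above can be sketched as follows (a minimal illustration with made-up names, not TNL's actual code): each tick, sort the dirty objects by how interesting they are to this client and pack snapshots until the packet's byte budget runs out.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical per-object update descriptor.
struct Update {
    int objectId;
    float interest;     // higher = more relevant to this client
    std::size_t bytes;  // serialized size of the state snapshot
};

// Pick the most interesting updates that fit in this packet's byte budget.
// Anything left over is simply dropped: a newer snapshot of the same
// object supersedes it next tick, so no retransmission is needed.
std::vector<int> pickUpdates(std::vector<Update> dirty, std::size_t budgetBytes) {
    std::sort(dirty.begin(), dirty.end(),
              [](const Update& a, const Update& b) { return a.interest > b.interest; });
    std::vector<int> chosen;
    std::size_t used = 0;
    for (const Update& u : dirty) {
        if (used + u.bytes <= budgetBytes) {
            chosen.push_back(u.objectId);
            used += u.bytes;
        }
    }
    return chosen;
}
```

Because nothing here is guaranteed, the scheduler is free to starve low-interest objects for a while without breaking correctness; they just update less often.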

Quote:

Given that this thread is titled MMORPG..., have you tested TNL with 2000-3000 player connections, under real-world internet conditions?

The largest "real-world" games we tested with were 100+ player "single zone" Tribes 2 servers. The problem domain is slightly different - i.e. Tribes was a twitch shooter with substantially more interactivity than most MMO type games, but it should translate well into the MMO type domain.

Quote:

Worst case scenario analysis for a MMORPG and 3000 very active players:

3 kbytes/sec * 3000 players = 9000 kbytes/sec = 72,000 kbits/sec = 72 Mbits/sec.

This means you'll probably have many fat pipes, as well as extra routing capabilities to deal with varying internet conditions. Given the unpredictability of network conditions, if the server does not actively adapt its bandwidth output, the system is going to fall apart (lots of data, lots of connections, lots of unpredictability).

It seems to me that an MMO service provider is going to want to make sure that it has enough bandwidth to handle peak load, with a healthy margin. It would be a trivial addition to TNL to allow a server-wide maximum bandwidth setting (e.g., 50 Mbits/sec) and then adapt the fixed rates for all connections down as new clients connect.
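The suggested server-wide cap amounts to re-dividing the budget as clients connect; a sketch (names are illustrative, this is not TNL code):

```cpp
#include <algorithm>

// Each connection gets the smaller of its negotiated per-client rate and
// an equal share of the server-wide budget, recomputed whenever the
// client count changes.
int perConnectionRate(int serverBudgetBps, int clientCount, int negotiatedBps) {
    if (clientCount <= 0) return negotiatedBps;
    return std::min(negotiatedBps, serverBudgetBps / clientCount);
}
```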
Quote:

Thus, I hope it is clear why I have been defending TCP*: it really is an excellent protocol. Some of its features are not ideal for games/simulations, but the RTO calculation (see appendix A in Jacobson's paper) is required if a custom protocol is to be used in a large-scale, real-world internet environment (such as a MMORPG). It's probable that the UDP-based MMORPGs that fail or fall apart do so due to poor bandwidth management.

Yeah, TCP's got some good bandwidth adaptation features. But classifying all data as guaranteed makes for big simulation trouble when you inevitably run into packet loss.
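For reference, the RTO calculation being defended in the quote (appendix A of Jacobson's paper, with the constants later standardized in RFC 6298) can be sketched like this:

```cpp
#include <cmath>

// Jacobson/Karels retransmission-timeout estimator: an exponentially
// smoothed RTT (srtt) plus four times the smoothed mean deviation (rttvar).
struct RtoEstimator {
    double srtt = 0.0;
    double rttvar = 0.0;
    bool first = true;

    // Feed one RTT measurement (ms); returns the updated RTO (ms).
    double onRttSample(double rttMs) {
        if (first) {
            srtt = rttMs;
            rttvar = rttMs / 2.0;
            first = false;
        } else {
            rttvar = 0.75 * rttvar + 0.25 * std::fabs(srtt - rttMs);  // beta = 1/4
            srtt = 0.875 * srtt + 0.125 * rttMs;                      // alpha = 1/8
        }
        return srtt + 4.0 * rttvar;
    }
};
```

A custom game protocol can reuse exactly this estimator for its own retransmission and backoff decisions, which is the point being made about borrowing TCP's best features.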
Quote:

In summary, study the history and design of TCP, and use the best feature(s) for custom game/simulation protocols, while leaving out (or retuning) features that hurt game/simulation performance.

Good summary!
Quote:
Original post by markf_gg
Quote:
Original post by John Schultz
This makes sense for objects that are rapidly changing state when the system is not running lock-step (or don't require ordered state consistency). To date, I have not run into a problem where this method can provide significant bandwidth savings, but I'll keep it in mind as a future option.

We've found that this method is useful for almost all simulation objects in the 3D (or 2D) world. Players, projectiles, vehicles, etc, whose updates constitute a substantial amount of the server->client communication. In reality it doesn't "save" bandwidth, rather it allows you to more optimally use the bandwidth that you have. Because you're not guaranteeing that any particular data are being delivered, the network layer has the latitude to prioritize updates on the basis of "interest" to a particular client. This results in a substantially more accurate presentation to the client for a given bandwidth setting.


I send this type of data in the unreliable/non-guaranteed channel: it makes up most of the data transmitted as well. Data is added and compressed based on each player's viewspace (more compression for far away objects, really far away objects get updated less frequently, etc.). Classic dead reckoning tracks the simulation so that updates get sent only after a divergence threshold is met. That is, even if the data/object is marked 'dirty', an update is not sent unless the interpolated/extrapolated error is significant (the sender simulates what the receiver should be seeing).
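The divergence test described above can be reduced to one dimension for illustration (hypothetical names; the real version would use 3D position and orientation):

```cpp
#include <cmath>

// The sender runs the same extrapolation the receiver runs, and only
// queues an update once the receiver's predicted state has drifted from
// the true state by more than the threshold.
bool needsUpdate(float truePos, float lastSentPos, float lastSentVel,
                 float dtSinceSend, float threshold) {
    float predicted = lastSentPos + lastSentVel * dtSinceSend;  // receiver's view
    return std::fabs(truePos - predicted) > threshold;
}
```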

The original statement:

Quote:

If OOORS is of limited use, what other class of data beyond guaranteed and non-guaranteed do you see of value? I agree that the biggest problem is too much data sent as guaranteed, which is a network game design issue. I have not yet seen a strong argument for supporting other classes of data. Either the data absolutely has to get there, or it doesn't. Perhaps you can give an example where this is not true?


asked if there were other useful classes of network data beyond reliable/guaranteed and unreliable/non-guaranteed. Your example uses the unreliable channel: it's a management layer between the network layer and the game layer.

Quote:

Quote:

Worst case scenario analysis for a MMORPG and 3000 very active players:

3 kbytes/sec * 3000 players = 9000 kbytes/sec = 72,000 kbits/sec = 72 Mbits/sec.

This means you'll probably have many fat pipes, as well as extra routing capabilities to deal with varying internet conditions. Given the unpredictability of network conditions, if the server does not actively adapt its bandwidth output, the system is going to fall apart (lots of data, lots of connections, lots of unpredictability).

It seems to me that an MMO service provider is going to want to make sure that it has enough bandwidth to handle peak load, with a healthy margin. It would be a trivial addition to TNL to allow a server-wide maximum bandwidth setting (e.g., 50 Mbits/sec) and then adapt the fixed rates for all connections down as new clients connect.


Networks are unpredictable. You can also think of bandwidth adaptation as a form of fault tolerance. See Jacobson's paper (also see the graphs/data in the other papers I referenced). The original version of TCP used a more naive approach (see Cerf & Kahn's 1974 paper and their comments regarding bandwidth); it did not work well in practice, hence Jacobson's 1988 paper.

Quote:

Quote:

Thus, I hope it is clear why I have been defending TCP*: it really is an excellent protocol. Some of its features are not ideal for games/simulations, but the RTO calculation (see appendix A in Jacobson's paper) is required if a custom protocol is to be used in a large-scale, real-world internet environment (such as a MMORPG). It's probable that the UDP-based MMORPGs that fail or fall apart do so due to poor bandwidth management.

Yeah, TCP's got some good bandwidth adaptation features. But classifying all data as guaranteed makes for big simulation trouble when you inevitably run into packet loss.


I see the misunderstanding: I'm only proposing TCP/reliable-only when there is no choice (firewall issues, etc.). A TCP+UDP design can work fine, provided the UDP channel is bandwidth-managed as well. A custom UDP protocol should implement TCP-like bandwidth adaptation (customized for games/simulations) for both reliable and unreliable data, which will typically be sent in the same packet.
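One minimal form of such TCP-like adaptation for a custom UDP protocol is classic AIMD rate control applied to the combined reliable+unreliable send rate (a sketch under assumed constants, not any particular library's implementation):

```cpp
#include <algorithm>

// Additive increase while acks flow, multiplicative decrease on loss.
// The 1000 bps probe step and the clamp bounds are arbitrary tuning knobs.
struct RateController {
    int rateBps;
    int minBps;
    int maxBps;

    void onAck()  { rateBps = std::min(maxBps, rateBps + 1000); }  // probe upward
    void onLoss() { rateBps = std::max(minBps, rateBps / 2); }     // back off hard
};
```

Real implementations smooth this with the RTT estimator as well, but the increase/decrease asymmetry is the part that keeps a sender from collapsing a congested link.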

Quote:

Quote:

In summary, study the history and design of TCP, and use the best feature(s) for custom game/simulation protocols, while leaving out (or retuning) features that hurt game/simulation performance.

Good summary!


Thanks!
Guild Wars uses TCP exclusively. From what I know there have been no major issues with lag in GW.

That being said, obviously a perfect UDP solution would be better than TCP. The problem is that it is very, very hard to come up with a perfect UDP solution. In fact, it is quite tricky to come up with a UDP solution that is better than native TCP. TCP has had years of evolution applied to it, and on the face of it, it is fairly straightforward to implement a UDP protocol that seems to work - until you throw several hundred clients on it.

Unless you really, really need the benefits of UDP, I'd suggest just going with TCP. It shaves weeks off your development time (unless you go with a middleware package) and it is often "good enough". Hell, it's "good enough" for World of Warcraft and Guild Wars.

Great thread! I vote for it being stickied! :D

PS: Personally, I use TCP for my current WoW-killer project. It's fast (as long as you don't send packets every frame, and don't update player positions to players outside a player's zone), and very reliable. :)
First, I played AC (and AC2) for many years (months for AC2) and never once had a problem with lag. The only problem the game had was when too many players were in one area; then a portal storm would occur and take you out of the laggy area. Never a lag problem, so I don't know what AC you were playing.

Secondly, I'd go with UDP: build a reliable UDP protocol.

TCP has too much overhead for everything you need, not to mention the back-throttling issues talked about above.
You should use RTP or TCP.
If you decide between UDP and TCP, I suggest you take the following issues into consideration:
1. Packet loss: UDP cannot ensure delivery, so a client might send a critical message and it could drop on the way.
2. TCP is a relatively heavy protocol, so imagine a condition in which you have 1000 concurrent online users who, in the case of role-playing games, will send messages almost every second. Using UDP makes sense there since it never sends an ACK for each receive or send.

I never used RTP, but it seems to be a good replacement for TCP as it sends ACKs more lightly.
Quote:
Original post by PlayerX
Guild Wars uses TCP exclusively. From what I know there have been no major issues with lag in GW.

That being said, obviously a perfect UDP solution would be better than TCP. The problem is that it is very, very hard to come up with a perfect UDP solution. In fact, it is quite tricky to come up with a UDP solution that is better than native TCP. TCP has had years of evolution applied to it, and on the face of it, it is fairly straightforward to implement a UDP protocol that seems to work - until you throw several hundred clients on it.

Unless you really, really need the benefits of UDP, I'd suggest just going with TCP. It shaves weeks off your development time (unless you go with a middleware package) and it is often "good enough". Hell, it's "good enough" for World of Warcraft and Guild Wars.


Being "good enough" is what caused all the server problems with WoW. GW is not really an MMO; it spawns an instance of the world for each player/group, so I imagine a lot more is left up to the client in the case of GW, though I can't really be sure.

WoW is notorious for server problems.

Again, as a tribute to AC: AC servers were rarely down, and aside from over-populated places (sub in pre-marketplace days) you'd not notice any lag.

Quote:
Original post by John Schultz
Does RakNet use TCP? If not, how do you see IOCP helping a UDP-only based protocol, especially if the server is single-threaded (for maximum performance due to zero (user-level, network) context switching)?
...[snip]...
It would appear that thread context switching overhead might outweigh kernel (paging) advantages with IOCP, especially given the nature of UDP (not using memory/queues as with TCP).


Why would you make a single-threaded server to begin with? You've typically got a bunch of entities to process, AI to manage, etc, and throwing your UDP receive loop into the same thread would aggravate performance to an unmanageable level, wouldn't it?

I've tried several UDP scenarios, such as:

- Having a primary thread to process game data, and a worker thread to receive and dispatch UDP packets (no async I/O)
- Same as above, using overlapped I/O and wait handles
- Using IOCP and a thread pool

I basically wrote a front end to blast packets between two machines on a 100BT network to see how many I lost, how far behind my programs got, etc. I also used a dual-xeon receiver and a single CPU sender, and vice-versa.

By far, and without question, the IOCP app ran the breakneck fastest, with the least CPU usage, and the least number of lost packets. As a matter of fact, I was able to completely saturate a 100BT network with 1400-byte UDP packets to the dual xeon receiver with 0 lost packets and 0 backlog -- and using a fraction of the CPU's time.

None of the other methods I tried scaled up to utilize all available CPUs, nor did they keep up with massive throughputs over an extended period of time. They invariably began to backlog and lost tons of packets.

Oh, I also tried running the program on the same machine (used both the dual xeon and a single CPU machine) using the loopback address. With two programs running full-tilt (one receiving and one sending) only the IOCP solution was able to receive all the packets with 0 backlog and 0 lost packets.

The only "flow control" I implemented was to turn off the send buffer on the socket to ensure the network layer didn't discard my outgoing packet due to lack of buffer space to store it.
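The trick referred to here is setting SO_SNDBUF to zero; shown below with POSIX headers (the original tests used Winsock, where the call is the same apart from headers, and where a zero send buffer makes sends complete directly to the wire instead of being queued):

```cpp
#include <sys/socket.h>

// Shrink the socket's send buffer to zero so the stack cannot silently
// drop an outgoing datagram for lack of buffer space; returns 0 on success.
int disableSendBuffer(int sock) {
    int off = 0;
    return setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                      reinterpret_cast<const char*>(&off), sizeof(off));
}
```

Note that on Linux the kernel clamps SO_SNDBUF up to a minimum rather than truly zeroing it, so the "no buffering" behavior is a Windows-specific assumption worth verifying on your target platform.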

If anyone's interested, I'll dig out the source code for the IOCP method and toss up a link.

Robert Simpson
Programmer at Large
Quote:
Original post by rmsimpson
Quote:
Original post by John Schultz
Does RakNet use TCP? If not, how do you see IOCP helping a UDP-only based protocol, especially if the server is single-threaded (for maximum performance due to zero (user-level, network) context switching)?
...[snip]...
It would appear that thread context switching overhead might outweigh kernel (paging) advantages with IOCP, especially given the nature of UDP (not using memory/queues as with TCP).


Why would you make a single-threaded server to begin with? You've typically got a bunch of entities to process, AI to manage, etc, and throwing your UDP receive loop into the same thread would aggravate performance to an unmanageable level, wouldn't it?


In the case of 100% resource utilization, where processing incoming packets has the highest priority, it's clear that a single-threaded design should be the fastest: no thread context switching. This would be the limit case: it would not be possible to process packets more efficiently. When AI+physics+game code are factored in, if the incoming packet rate is very high, then packets will be dropped if the incoming buffer cannot be processed fast enough. If threads are used to process incoming packets, efficiency would be reduced due to context switching (unless the OS is doing something that improves efficiency).

Quote:
Original post by rmsimpson
I've tried several UDP scenarios, such as:

- Having a primary thread to process game data, and a worker thread to receive and dispatch UDP packets (no async I/O)
- Same as above, using overlapped I/O and wait handles
- Using IOCP and a thread pool

I basically wrote a front end to blast packets between two machines on a 100BT network to see how many I lost, how far behind my programs got, etc. I also used a dual-xeon receiver and a single CPU sender, and vice-versa.

By far, and without question, the IOCP app ran the breakneck fastest, with the least CPU usage, and the least number of lost packets. As a matter of fact, I was able to completely saturate a 100BT network with 1400-byte UDP packets to the dual xeon receiver with 0 lost packets and 0 backlog -- and using a fraction of the CPU's time.

None of the other methods I tried scaled up to utilize all available CPUs, nor did they keep up with massive throughputs over an extended period of time. They invariably began to backlog and lost tons of packets.

Oh, I also tried running the program on the same machine (used both the dual xeon and a single CPU machine) using the loopback address. With two programs running full-tilt (one receiving and one sending) only the IOCP solution was able to receive all the packets with 0 backlog and 0 lost packets.

The only "flow control" I implemented was to turn off the send buffer on the socket to ensure the network layer didn't discard my outgoing packet due to lack of buffer space to store it.

If anyone's interested, I'll dig out the source code for the IOCP method and toss up a link.

Robert Simpson
Programmer at Large


This forum's moderator, Jon Watte (hplus), who works on an MMOG (There), stated that they tested various scenarios, including MP+multithreaded, and found that single-threaded was the most efficient. It's not clear if their tests were from MMOG testing, benchmarks, or both.

Your test/benchmark sounds cool: if you could post your benchmark(s) showing that (overlapped I/O+) IOCP+threaded(+MP, etc.) does something extraordinary for UDP in Win32, including a means to compare with single-threaded standard UDP sockets, network developers would be interested in running the benchmark (I can test on various Intel/AMD/MP hardware).
Quote:
Original post by John Schultz
This forum's moderator, Jon Watte (hplus), who works on an MMOG (There), stated that they tested various scenarios, including MP+multithreaded, and found that single-threaded was the most efficient. It's not clear if their tests were from MMOG testing, benchmarks, or both.

Your test/benchmark sounds cool: if you could post your benchmark(s) showing that (overlapped I/O+) IOCP+threaded(+MP, etc.) does something extraordinary for UDP in Win32, including a means to compare with single-threaded standard UDP sockets, network developers would be interested in running the benchmark (I can test on various Intel/AMD/MP hardware).


It's been at least a year since I even looked at the code, but I'll blow the dust off and post a link to the benchmark program(s) I wrote.

Robert
Quote:
Original post by anonuser

Being "good enough" is what caused all the server problems with WoW. GW is not really an MMO; it spawns an instance of the world for each player/group, so I imagine a lot more is left up to the client in the case of GW, though I can't really be sure.

WoW is notorious for server problems.

Again, as a tribute to AC: AC servers were rarely down, and aside from over-populated places (sub in pre-marketplace days) you'd not notice any lag.


WoW has an order of magnitude more players than most of us, indie or not, can hope to get. I think it's premature to blame that on TCP.
Quote:
WoW has an order of magnitude more players


Be careful to separate number of players per shard ("server") from number of players for the game, total. Even a successful indie usually only runs a single shard, and can probably get the same number of players on that shard as you'd get on a single shard for a commercial game.

As an example, A Tale In The Desert is a fairly successful, indie MMOG.
Maybe it's their backbone network being overloaded. The trouble is we just don't have the information as to why their servers are struggling. They aren't saying. My point was, it's presumptive to blame all their problems on their use of TCP. Until they tell us, we just don't know.
Quote:
Original post by John Schultz
Your test/benchmark sounds cool: if you could post your benchmark(s) showing that (overlapped I/O+) IOCP+threaded(+MP, etc.) does something extraordinary for UDP in Win32, including a means to compare with single-threaded standard UDP sockets, network developers would be interested in running the benchmark (I can test on various Intel/AMD/MP hardware).


As promised, here's the link to the source/exes:
http://www.phxsoftware.com/files/udptest.zip

I ran the tests a few times, both local-loopback on 127.0.0.1 and between two machines (one an old Duron 950 and the other a dual 2.2 GHz Xeon). Someone with better machines and more time than me can fiddle around and come up with better tests, of course. Personally, I couldn't really find a scenario where one worked better than the other except on the dual Xeon, where IOCP performance should theoretically be superior to the single-threaded approach. However, since I was sending from the old AMD machine, the Xeon was able to happily keep up even on the 100BT network with minimal effort (total CPU usage in the 0-2 range) on both blocking and IOCP tests. I would've liked to run multiple senders on multiple machines to a single receiver to see how that fared, but I didn't have enough machines handy.

Still, the little 950 Duron was able to receive ~115 Mbits/sec on my 100BT network without any packet loss on both tests. Generally speaking, the larger the packet, the better the throughput (up to the fragmentation threshold, of course). The 115 Mbit/sec number was based on 1200-byte packets sent 100,000 times.

The readme.txt is as follows:

-----------------------------

UDP sender/receiver test

This project is a quickie project I wrote to make some rough comparisons
between IOCP and straight blocking winsock calls.

It was compiled with Visual Studio 2003.


UDPTEST_IOCP
This version uses an I/O Completion Port framework I wrote a long time ago
and am no longer using in my current IOCP socket projects. However, it was
the socket code and framework I published on CodeGuru and didn't have any
legal restrictions on it, so I decided to use it for this test.

UDPTEST_BLOCKING
I tossed an extremely rudimentary UDP class into this project, and implemented
only the bare necessities to build up and tear things down. It uses a single
thread which blocks on recvfrom() until the socket is closed and the
thread terminates.

USAGE

RECEIVING:
The command-line for receiving packets is:

UDPTEST_XXX <port # to listen on>

In receiver mode, the main thread sleeps for a fixed 15 second timeout. The
sender must be finished sending before the 15 seconds are up. Once they're
up, the program spits out how many packets it got and how long it took to get
them. The actual packet performance timer begins when the first packet is
received and ends when the last packet is received (or the 15 second timeout
occurs, whichever comes first).

Example (listens for incoming packets on port 2525)
UDPTEST_IOCP.EXE 2525


SENDING:
The command-line for sending packets is:

UDPTEST_XXX -c <count of packets> -b <byte size of packets> -s <ip addr dotted form> <port # to send to>

In send mode, the program immediate starts sending -c number of packets of
size -b bytes to the ip address and port specified in the command line. When
finished it will report the number of packets sent, the bytes sent, and how
long it took to send the packets.

Example (sends 50,000 packets 1,200 bytes long to loopback address on port 2525)
UDPTEST_IOCP.EXE -c 50000 -b 1200 -s 127.0.0.1 2525


Robert Simpson
mailto:robert@blackcastlesoft.com

Here's what I got on 3 test runs (I actually ran about 15 with similar results). At a glance, it looks like the IOCP version performs very poorly.

BLOCKING

Receiver

D:\Old D drive\udptest\udptest\udptest_blocking\release>udptest_blocking.exe 7000
Listening on port 7000
Received 100000 packets in 3758ms

D:\Old D drive\udptest\udptest\udptest_blocking\release>udptest_blocking.exe 7000
Listening on port 7000
Received 99703 packets in 3766ms

D:\Old D drive\udptest\udptest\udptest_blocking\release>udptest_blocking.exe 7000
Listening on port 7000
Received 99860 packets in 3769ms

D:\Old D drive\udptest\udptest\udptest_blocking\release>




Sender

D:\Old D drive\udptest\udptest\udptest_blocking\release>udptest_blocking.exe -c
100000 -b 1024 -s 127.0.0.1 7000
Sending 100000 packets size 1024 bytes to 127.0.0.1 port 7000
Sent 100000 packets (102400000 bytes) in 3758ms

D:\Old D drive\udptest\udptest\udptest_blocking\release>udptest_blocking.exe -c
100000 -b 1024 -s 127.0.0.1 7000
Sending 100000 packets size 1024 bytes to 127.0.0.1 port 7000
Sent 100000 packets (102400000 bytes) in 3766ms

D:\Old D drive\udptest\udptest\udptest_blocking\release>udptest_blocking.exe -c
100000 -b 1024 -s 127.0.0.1 7000
Sending 100000 packets size 1024 bytes to 127.0.0.1 port 7000
Sent 100000 packets (102400000 bytes) in 3769ms

D:\Old D drive\udptest\udptest\udptest_blocking\release>





IOCP

Receiver

D:\Old D drive\udptest\udptest\udptest_iocp\release>udptest_iocp.exe 8000
Listening on port 8000
Received 71247 packets in 3692ms

^C NOTE: HAD TO CTRL-C HERE CAUSE THE PROGRAM WAS HUNG.


D:\Old D drive\udptest\udptest\udptest_iocp\release>udptest_iocp.exe 8000
Listening on port 8000
Received 70699 packets in 3691ms

D:\Old D drive\udptest\udptest\udptest_iocp\release>udptest_iocp.exe 8000
Listening on port 8000
Received 88184 packets in 3733ms

D:\Old D drive\udptest\udptest\udptest_iocp\release>



Sender


D:\Old D drive\udptest\udptest\udptest_iocp\release>udptest_iocp -c 100000 -s 1200 -s 127.0.0.1 8000
Sending 100000 packets size 1024 bytes to 127.0.0.1 port 8000
Sent 100000 packets (102400000 bytes) in 3691ms

D:\Old D drive\udptest\udptest\udptest_iocp\release>udptest_iocp -c 100000 -s 1024 -s 127.0.0.1 8000
Sending 100000 packets size 1024 bytes to 127.0.0.1 port 8000
Sent 100000 packets (102400000 bytes) in 3689ms

D:\Old D drive\udptest\udptest\udptest_iocp\release>udptest_iocp -c 100000 -s 1024 -s 127.0.0.1 8000
Sending 100000 packets size 1024 bytes to 127.0.0.1 port 8000
Sent 100000 packets (102400000 bytes) in 3731ms

D:\Old D drive\udptest\udptest\udptest_iocp\release>
One of the reasons I ditched the old IOCP framework and rewrote it in subsequent code is that it was a little too bulky and had a couple of issues I was never able to satisfactorily resolve - hence the freeze you saw.

Also, it doesn't detect hyperthreading - so if you run the IOCP version on a hyperthreaded machine, it'll make too many threads and slow things down. I should probably modify the IOCP version to add a -t switch to control the number of threads that get created.

What are the stats on the machine you ran the programs on?

Robert
Intel P4 3.4GHz HT processor
Asus P4C800-e motherboard.
Netgear FA310TX ethernet card

Well, if you say there's an issue running on HT CPUs, then I think that probably explains the poor performance.


-=[ Megahertz ]=-
ASUS P4C800-E 3.2Ghz HT, Intel CSA (Gigabit, non-PCI bus)

(Used same params as MegaHertz)

Blocking:

2583ms, 2611ms, 2577ms (no loss)

IOCP

2728ms, 2658ms, 2730ms (no loss)

ASUS P2B-DS, Dual PIII 700

Blocking:

2343ms, only 8527 packets received (similar on other runs).

IOCP

4782ms, 85770 packets received (similar on other runs).

Across the wire, between machines, the IOCP receiver (MP box) received ~60K vs. ~40K packets for blocking.

Thus, it does appear that on a multiprocessor system, with this benchmark, IOCP can be a win for UDP (I would perform more varied benchmarks before committing to IOCP for a commercial project).

Thanks for posting the benchmark!
I generally don't sticky threads. However, I'll put a link to it in the FAQ. To avoid infinite necroing, I'll close it -- if you need to ask about any concept in here, I think that question deserves a new thread :-)
This topic is now closed to further replies.