C# Server/UDP and Lidgren

Started by Dzogchener - 10 comments, last by Dzogchener 15 years, 9 months ago
Hi, I am new to this group, and to games programming per se. I worked with TCP many moons ago, so I come to the challenge of network programming with some old ideas. I have a few questions, and I did scout the forum for answers and found many, but am still unclear. Please forgive any failure to spot the obvious already mentioned in this group.

I wish to create a layer that will allow multiple clients to connect to a server - not P2P. I expect to have to deal with high loads in the longer term, so performance is an issue, but not so much as to force me to use C++. I have come unstuck around two key issues - TCP or UDP, and then how to architect the server.

Regarding the TCP/UDP question, I have seen arguments both for and against, and I am not at all sure which one would be best. I appreciate packet sizes can be larger with TCP, but UDP then requires you to rebuild complexities that you get for free with TCP, AFAIK.

With the architecture of the server, I first looked at Lidgren, and while it is an impressive library, I couldn't work out how to use it in a more server-based fashion. In the past I have used I/O completion ports for this type of thing, and I couldn't see how Lidgren could be used that way. Does it need to be? Are there other C# libraries out there that I could look at? I did look around, but didn't find much.

Thanks for any advice on this subject. D.
Question 3 of the forum FAQ is about TCP vs. UDP and which to use. It mostly comes down to how much of an issue latency is. TCP and UDP usually call for different designs - you don't just switch between TCP and UDP while sending the same data in the same way. A typical TCP design sends state changes only when they happen, while with UDP you frequently send the state of all entities multiple times per second, even if they haven't changed. So decide how important latency is to you and which protocol best fits your situation. Action-oriented games are almost always UDP, turn-based and slower-paced games TCP, and RPGs usually fall in the middle, where in many cases either will work.
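To make that UDP style concrete, here is a minimal sketch of the "send everything every tick" pattern - the client list, the serializer, and the tick rate are all hypothetical placeholders, not anything from a real engine:

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class SnapshotBroadcaster
{
    private readonly UdpClient udp = new UdpClient();
    private readonly List<IPEndPoint> clients = new List<IPEndPoint>(); // filled as players join (hypothetical)

    public void Run()
    {
        while (true)
        {
            byte[] snapshot = SerializeAllEntities();
            foreach (IPEndPoint client in clients)
                udp.Send(snapshot, snapshot.Length, client);
            Thread.Sleep(50); // ~20 full-state updates per second
        }
    }

    private byte[] SerializeAllEntities()
    {
        // Placeholder: pack every entity's state into one datagram,
        // whether or not it changed since the last tick.
        return new byte[0];
    }
}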
NetGore - Open source multiplayer RPG engine
Hi Spodi,

Thanks for pointing me to the FAQ - I made a lame attempt earlier, and failed to find it.

Having read the FAQ, I have a C#/.NET-related question about a TCP implementation - what are the pros and cons of using the thread pool to service client requests versus using BeginSend/BeginReceive? I have a copy of the C# Network Programming book, and the author suggests one of two approaches: either create a new thread per connection (!) or use the ThreadPool. The first suggestion seems rather lame at best if many concurrent clients are expected. The second approach seems sound in that the pool manager will do its best to schedule the threads efficiently. But the other option would be to use BeginReceive/BeginSend, which *I think* will allow async I/O completion ports to be used. I'd be very interested to hear people's thoughts and experiences around this.
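For reference, this is roughly the BeginReceive pattern I mean - a minimal sketch, with the buffer size and message handler as placeholders (real code would also need exception handling around EndReceive):

using System;
using System.Net.Sockets;

class AsyncReader
{
    private readonly Socket socket;
    private readonly byte[] buffer = new byte[4096];

    public AsyncReader(Socket socket)
    {
        this.socket = socket;
        // Post the first read; the completion lands on an I/O pool thread.
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        int read = socket.EndReceive(ar);
        if (read == 0) { socket.Close(); return; } // remote side closed cleanly

        HandleData(buffer, read);
        // Re-arm so there is always exactly one receive pending.
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    private void HandleData(byte[] data, int length)
    {
        // Placeholder: reassemble and parse messages from the stream here.
    }
}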

Many thanks,

D.
Quote: Original post by Dzogchener
Regarding the TCP/UDP question, I have seen arguments both for and against, and I am not at all sure which one would be best. I appreciate packet sizes can be larger with TCP, but UDP then requires you to rebuild complexities that you get for free with TCP, AFAIK.

...

With the architecture of the server, I first looked at Lidgren, and while it is an impressive library, I couldn't work out how to use it in a more server-based fashion. In the past I have used I/O completion ports for this type of thing, and I couldn't see how Lidgren could be used that way. Does it need to be? Are there other C# libraries out there that I could look at? I did look around, but didn't find much.


Lidgren is single-threaded and uses polling on a single socket. It works reasonably well for games which send a steady stream of small packets; for services such as serving web pages it's probably not the best solution. If packet size is your only problem with TCP, then you should probably stick with it. What kind of application is it you're building?
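For illustration, the polling loop looks roughly like this - note this is a sketch against the Lidgren generation I have, so the exact API names may differ in your copy, and the app identifier and port are made up:

using System.Threading;
using Lidgren.Network;

class PollingServer
{
    public void Run()
    {
        NetPeerConfiguration config = new NetPeerConfiguration("MyGame"); // hypothetical app id
        config.Port = 14242;
        NetServer server = new NetServer(config);
        server.Start();

        while (true)
        {
            NetIncomingMessage msg;
            while ((msg = server.ReadMessage()) != null) // non-blocking poll
            {
                if (msg.MessageType == NetIncomingMessageType.Data)
                {
                    // Handle game data from msg.SenderConnection here.
                }
                server.Recycle(msg);
            }
            Thread.Sleep(1); // single thread: poll, then yield
        }
    }
}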
I can't exactly remember the details on the async I/O, but if I remember right, if IOCP is available on the OS, .NET will use it; otherwise it falls back on other async mechanisms. So it just uses the best method it can find.

Spawning a thread per connection and using blocking calls is also addressed in the FAQ, and it's a fine approach for a very small number of connections and a light server. For a game server, though, assuming you're making a more modern game, it isn't a wise approach.

I am currently using the .NET sockets in fully async TCP, and am very happy with the results. Everything performs well and smoothly. Just be prepared to deal with a little bit of thread safety.
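By thread safety I mostly mean that the completion callbacks run on I/O pool threads, so any state they share needs a lock. A minimal sketch of the kind of guard I mean (the shared client list is hypothetical):

using System.Collections.Generic;
using System.Net.Sockets;

class ClientRegistry
{
    private readonly object sync = new object();
    private readonly List<Socket> clients = new List<Socket>();

    // These get called from Begin* callbacks, which run on I/O threads,
    // so every touch of the shared list is serialized through one lock.
    public void Add(Socket s)    { lock (sync) clients.Add(s); }
    public void Remove(Socket s) { lock (sync) clients.Remove(s); }
    public int Count             { get { lock (sync) return clients.Count; } }
}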
NetGore - Open source multiplayer RPG engine
Hi fenghus,

Thanks for writing. I am writing a turn-based game, currently with a maximum of four players per game. The nature of the game means only one player can "go" at a time while the other players watch, so the communication volume is small. Having read a lot yesterday on this site, I am going to stick with async TCP sockets in .NET.

Thanks again,

D.

Hi Spodi,

I think you are right about the approach .NET takes with sockets and IOCP.

Yesterday I played around with multiplexing in a single-threaded server calling the backward-compatible Select() method, but even in the simplest of cases, when the server did very little, a number of concurrent connections coming in maxed out one of my cores. Converting the same thing to async sockets resulted in just 1% CPU, if that. That kind of sealed it for me. Of course, the major issue I can see with async I/O is that it gets pretty concurrent pretty quickly.
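For what it's worth, the shape of my Select() loop was roughly the following - and I now suspect the timeout argument was my problem, since a zero or near-zero timeout makes Select() return immediately and turns the loop into a busy-wait:

using System.Collections;
using System.Collections.Generic;
using System.Net.Sockets;

class SelectLoop
{
    public void Run(Socket listener, List<Socket> clients)
    {
        while (true)
        {
            ArrayList readable = new ArrayList(clients);
            readable.Add(listener);

            // The last argument is a timeout in microseconds. Passing 0
            // makes Select() return immediately and pins a core;
            // 100000 here means "block up to 100 ms".
            Socket.Select(readable, null, null, 100000);

            foreach (Socket s in readable)
            {
                if (s == listener) { /* Accept() the new connection */ }
                else { /* Receive() the pending data */ }
            }
        }
    }
}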

How are you handling maximum connections, Spodi? Are you simply checking a count in the BeginAccept callback? I have found it's not possible to reject a connection without first calling EndAccept, getting the socket, and then calling Close on it. So I have a client connect, and it gets its async callback and calls EndConnect. It is possible that even if the server decides the connection count is maxed out, the client's Connect callback is invoked first, EndConnect is invoked, and the client thinks it's connected. In the meantime the server decides it's maxed out, accepts, and then closes the socket (I am assuming that closing the socket after accepting is the right approach?). Here begin the woes of concurrency.
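For clarity, my accept path currently looks something like this sketch - the cap and the counter are placeholders:

using System;
using System.Net.Sockets;
using System.Threading;

class AcceptLoop
{
    private const int MaxClients = 4;  // hypothetical cap
    private int clientCount;           // updated as clients come and go
    private Socket listener;           // assumed already bound and listening

    private void OnAccept(IAsyncResult ar)
    {
        Socket client = listener.EndAccept(ar); // must accept before we can reject

        if (Interlocked.CompareExchange(ref clientCount, 0, 0) >= MaxClients)
        {
            client.Close(); // full: accept, then immediately tear down
        }
        else
        {
            Interlocked.Increment(ref clientCount);
            // kick off BeginReceive on client here
        }

        listener.BeginAccept(OnAccept, null); // keep the accept loop running
    }
}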

Thanks again,

D.
I more or less just hacked together my maximum-connections routine and have actually never had a chance to test it, but after EndAccept, I just check whether the connection is allowed to continue. If not, I dispose of the socket (close down the connection, etc.), so I guess the connection is made and then destroyed shortly after. That is probably exactly what you want, though. When someone connects to your server and you reject them because there are too many connections, you want to be able to tell them that, instead of a generic "Connection failed, the server may be down, we may be too busy, or we may just hate you" message. So the short period the connection is alive is the time to tell them why they were disconnected. If I remember right, the socket's Close method has an overload that waits for queued data to be sent before destroying the connection (along with an option to wait additional time just in case).
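In other words, something like this sketch - the reason code and the timeout value are arbitrary:

using System.Net.Sockets;

class Rejection
{
    private const byte ServerFull = 0x01; // hypothetical status code

    public static void RejectFull(Socket client)
    {
        client.Send(new byte[] { ServerFull }); // tell them why first
        // Close(timeout) waits up to that many seconds for queued
        // outgoing data to be sent before destroying the connection.
        client.Close(5);
    }
}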

As for whether this will invoke the client's Connect callback, I believe it will. If this is not the behavior you want, you can have the server send a "connection is OK" message, telling the client that it has connected successfully and will not be killed off. This would actually be a much better way to go than relying on the Connect callback alone: instead of trusting that you connected to the server, you rely on the server telling you the connection is good to go.
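Sketched from the client's side, that scheme might look like this (the status codes are hypothetical, matching whatever the server sends):

using System.Net.Sockets;

class ClientHandshake
{
    private const byte Welcome = 0x00;    // hypothetical status codes,
    private const byte ServerFull = 0x01; // agreed with the server

    // Trust the server's explicit go-ahead rather than Connect() succeeding.
    public static bool WaitForWelcome(Socket socket)
    {
        byte[] status = new byte[1];
        int read = socket.Receive(status);
        return read == 1 && status[0] == Welcome;
    }
}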

Sorry for the really scatter-brained post. It's 3:00 AM - way past my bedtime. ;)
NetGore - Open source multiplayer RPG engine
Hi Spodi,

Thanks for writing. I have gone with the more complete solution of returning a message to indicate the reason for not being allowed to connect, and it seems to work well. I have a problem today, though. I have a stress-testing console app that simulates clients connecting and disconnecting. Eventually I cannot open a socket on the port - I am told the value is invalid when I call Listen(). Some resource must be leaking, since my server shuts down when the client shuts down. Given that I get this Listen error even after completely stopping my programs and running them again, the issue seems to be Windows/resource related. Are there any good tools to debug this type of thing?

Many thanks!

D.
This falls a bit outside my scope of networking knowledge, unfortunately. One thing you might want to check is the connections currently open. This can be done by typing the following at the command prompt (Start -> Run -> "cmd"):

netstat -a

If it shows that the port you are trying to listen on is currently in the LISTENING state, you may not be closing the listen socket properly/completely - though I can't find anything more being required than calling Socket.Close().

Though why do you stop listening after the connection closes? Normally a server remains listening at all times. I guess you could stop listening while you are not actively accepting connections (such as when too many connections are already made), but I have never actually tried this in practice, since I prefer to give a fail message and a disconnect rather than make it look like the server is just offline.
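One more guess: if netstat shows the old connections sitting in TIME_WAIT rather than LISTENING, Windows is simply holding the port for a while after close. Assuming that is the cause, setting the reuse-address option on the listen socket before binding should get around it - a sketch, not something I've had to do myself, and the port and backlog are made up:

using System.Net;
using System.Net.Sockets;

class ListenerSetup
{
    public static Socket Create(int port)
    {
        Socket listener = new Socket(AddressFamily.InterNetwork,
                                     SocketType.Stream, ProtocolType.Tcp);
        // Allow rebinding to a port whose old connections are still
        // cooling down in TIME_WAIT.
        listener.SetSocketOption(SocketOptionLevel.Socket,
                                 SocketOptionName.ReuseAddress, true);
        listener.Bind(new IPEndPoint(IPAddress.Any, port));
        listener.Listen(10); // backlog
        return listener;
    }
}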
NetGore - Open source multiplayer RPG engine
