Server-Side Concept, Brief Explanation?

9 comments, last by cr88192 11 years, 1 month ago

Would someone be able to explain briefly/redirect me to some resources about how server-side programming works?


You're going to need to be much more specific.

How specific can I be? I want to know the process: for example, what would I need in order to handle inbound/outbound packets, and so on? I have no experience with server-side programming, so it's quite difficult to be specific; otherwise I probably wouldn't be asking in the first place.

And no, I just want an explanation of how the system works rather than a code-specific explanation.

rip-off is somewhat right insofar as it's very hard to answer such an open question. For example, it would be immensely helpful to know whether you plan to write a game server (and if so, what kind of game, what latency/bandwidth requirements, whether consistency is needed or not, etc.), or forum software, or a bug tracker. All these are "server applications", but the tools used and the overall approach are fundamentally different.

For example, an online RPG will typically need some sort of consistency, storing the world and the avatars and all significant things that happen in one or several databases. A shooter may get away without doing this, but on the other hand has much more stringent realtime requirements. A timestep of half a second or an occasional delay of 1-2 seconds is usually acceptable in an RPG, and totally acceptable for forum software, but not so for an FPS. Losing the session in an FPS is annoying but acceptable. Losing the contents of a forum or your avatar in an RPG is not acceptable, ever. Some server applications can live perfectly well with losing some network packets; others need reliable delivery.

All these are things that one should know before starting and that someone who is to give a qualified answer needs to know.

Generally, server-side programming is not fundamentally different from client-side programming. Since server and client are usually not within the same process or on the same machine (though X or OpenGL are counter-examples), you'll typically need to do networking and some more or less sophisticated form of data encapsulation or serialization, or some form of RPC. All that is more or less the same on both ends, however. The client must "speak the same language" as the server, or it's all good for nothing.

The three big differences between client and server are that the server typically binds to a port and listens (that is, waits for the client to connect), a server must typically handle many clients (a client often only accesses 1 or maybe 2-3 servers), and a server is normally expected to be a lot more robust and be programmed more defensively (though that's not necessarily so). The requirement to handle many clients usually means that you must use multithreading or forking, or a form of event multiplexing (poll/select). There exist specialized libraries for that, too.
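To make that "many clients" requirement concrete, here is a minimal sketch (my own illustration, not from any particular library) of select()-based multiplexing in C; the handle_client_data() call in the comment is a hypothetical placeholder for your own logic:

```c
/* Sketch: serving many clients from a single thread with select().
   Error checking is kept minimal; handle_client_data() in the comment
   below is a hypothetical placeholder for your own game logic. */
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void serve(int listen_fd)
{
    fd_set master, readable;
    int max_fd = listen_fd;

    FD_ZERO(&master);
    FD_SET(listen_fd, &master);

    for (;;) {
        readable = master;
        if (select(max_fd + 1, &readable, NULL, NULL, NULL) < 0)
            break;                              /* select failed: give up */

        for (int fd = 0; fd <= max_fd; ++fd) {
            if (!FD_ISSET(fd, &readable))
                continue;

            if (fd == listen_fd) {              /* a new client is connecting */
                int client = accept(listen_fd, NULL, NULL);
                if (client >= 0) {
                    FD_SET(client, &master);
                    if (client > max_fd)
                        max_fd = client;
                }
            } else {                            /* data (or disconnect) from a client */
                char buf[512];
                ssize_t n = recv(fd, buf, sizeof buf, 0);
                if (n <= 0) {                   /* client went away */
                    close(fd);
                    FD_CLR(fd, &master);
                } else {
                    /* handle_client_data(fd, buf, n);  <- your protocol/game logic */
                }
            }
        }
    }
}
```

Multithreading or forking follows the same overall structure, just with one thread or process per accepted connection instead of the select() loop.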

Depending on what you're writing and what tools you use, those tools already do much of the heavy lifting for you (say forum software written in Apache/PHP/MySQL), or you will need to either use a networking library or dive into socket programming (say using C or C++ to write a FPS).

Thank you, Samoth. At first I thought it would work that way, but I wasn't sure. Thanks for the answer; it was exactly what I was looking for.

As for ports, how would the interaction work? For example, is it client specific, meaning only a certain game client is eligible to connect to the port that the server has been programmed to run off of? Or is it port specific: do you have to register ports in order to utilize them?

There are always two (actually three) different ports involved in a connection (assuming you use TCP). The one the server binds to and listens on is a number between 1 and 65535 that you choose (more or less deliberately). Assuming you're using the socket API, you pass that number as part of your sockaddr structure (e.g. when calling bind or connect or sendto). The other one is the number that TCP secretly chooses on either end to identify the connection. This is a number that you don't need to know and that you can't influence (at least not easily). The network stack will choose something that works; nothing to worry about.
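As a rough illustration of where that chosen number ends up, here is a minimal sketch of a TCP server binding and listening in C (the port 5150 is an arbitrary example, and error checking is omitted):

```c
/* Sketch: a TCP server claiming a port you chose (5150 is an arbitrary
   example).  Error checking is omitted for brevity. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int make_listener(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* any local interface */
    addr.sin_port        = htons(5150);         /* the port *you* choose */

    bind(fd, (struct sockaddr *)&addr, sizeof addr);  /* claim the port */
    listen(fd, 16);                                   /* start waiting for clients */

    /* the client's own port is an ephemeral one picked by its network stack */
    return fd;
}
```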

With UDP, it is "somewhat easier" as there is no connection. Your client sends packets to the port that you choose, and your server receives them on that same port number (obviously you must take care that they're both talking on the same port). It then sends back packets on the same or some other port number (it is very much preferable to use the exact same one because of stateful firewalls!), and the client reads them from that port number.
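A corresponding UDP sketch might look like this (again just an illustration; the address 192.0.2.1 and port 5150 are arbitrary examples, and error checking is omitted):

```c
/* Sketch: UDP has no connection -- client and server simply agree on a
   port number (5150) and exchange datagrams.  192.0.2.1 is a documentation
   address used as an example; error checking is omitted. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

void udp_client_sketch(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in server;
    memset(&server, 0, sizeof server);
    server.sin_family = AF_INET;
    server.sin_port   = htons(5150);                     /* the agreed-upon port */
    inet_pton(AF_INET, "192.0.2.1", &server.sin_addr);   /* server's address */

    /* send a datagram to the server's port... */
    sendto(fd, "hello", 5, 0, (struct sockaddr *)&server, sizeof server);

    /* ...and read whatever comes back.  If the server replies from the same
       port, stateful firewalls and NATs will let the answer through. */
    char buf[512];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof from;
    recvfrom(fd, buf, sizeof buf, 0, (struct sockaddr *)&from, &fromlen);
}
```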

In theory, you can use any port you want, with the exception that binding to a port that is already in use by another program will fail. In practice, you don't want to do that because it may confuse people (or client programs) and it may cause undesired effects and/or reactions. For example, if you run a game server using the port number used by eDonkey or Kazaa, then users using those programs and scanning on those ports will attempt to connect to that service on your server. Further, it might be that your hosting company filters out these ports, so you spend days wondering why nothing works (when you think it should work). Or you might get a cease and desist letter from your local movie industry representative because you're obviously running an illegal file sharing network on a large scale. Or, something else. It's best to simply not take those chances.

Do a Google search for "well-known port numbers" and use something that's not already taken, or at least something that is very unlikely to cause trouble. Also note that ports below 1024 usually require a privileged process (this is not strictly a hindrance, but a possible security issue if you forget to drop rights after binding).

There is the ever-present question of whether to use TCP/IP or UDP/IP. UDP is faster and has less overhead. Still, I recommend going for TCP/IP instead, unless you have really high requirements on network throughput and optimization. With TCP/IP, you get packets delivered in order and the possibility to send big packets. With UDP, you have to do the housekeeping yourself. Of course, it all depends on your requirements.

A general note about server-side programming: I like to compare it with the Model-View-Controller pattern. The server side usually corresponds to the Model part.

Also, it is not unusual that the server side needs a database to communicate with. The database would then be used to store the state of the game (if it is stored).

Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

http://beej.us/guide/bgnet/

A great starting point.

Check out https://www.facebook.com/LiquidGames for some great games made by me on the Playstation Mobile market.

There is the ever-present question of whether to use TCP/IP or UDP/IP. UDP is faster and has less overhead. Still, I recommend going for TCP/IP instead, unless you have really high requirements on network throughput and optimization.

Quoted for truth. Most programs (including most games) work just fine, without any issues, with TCP, and it is a lot easier to get right for a beginner.

Small nitpick: UDP is not faster, at least not measurably. UDP has lower latency because there is no in-order delivery of a stream; instead, there are individual packets. A packet arrives or doesn't arrive. Or it gets delayed and arrives some time later. TCP will do its best to deliver everything in order, waiting and resending as needed.

Using UDP, you would instead just consider a missing packet "lost" and move on (most of the time even if it arrives later, because by then it is no longer useful). If your data is such that a lost packet doesn't matter but waiting on the packet is more detrimental (say, position updates in a fast-paced game, or 5 milliseconds worth of sound in a VoIP stream), this works out much better. It is often much more disturbing to have a 3 second hiccup than a short click that nobody really notices.

The UDP header is a little smaller, so technically, when the bits go through a modem, it's a little bit faster, too. However, on the internet and on your ethernet cable, it's all about frames per second. A router will forward, say, a million frames per second; it doesn't make any difference whether a frame is 40 bytes or 1280 bytes (or 7168 bytes if jumbo frames are enabled).

There is the ever-present question of whether to use TCP/IP or UDP/IP. UDP is faster and has less overhead. Still, I recommend going for TCP/IP instead, unless you have really high requirements on network throughput and optimization. With TCP/IP, you get packets delivered in order and the possibility to send big packets. With UDP, you have to do the housekeeping yourself. Of course, it all depends on your requirements.

FWIW: in my case, I am developing an FPS, and using TCP/IP.

(note: I typically also use non-blocking socket IO and have Nagle disabled).
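For reference, those two settings on a POSIX socket look roughly like this (a sketch; sock is assumed to be an already-created TCP socket):

```c
/* Sketch: non-blocking I/O plus Nagle disabled on an existing TCP socket. */
#include <fcntl.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void tune_socket(int sock)
{
    /* non-blocking: send()/recv() return immediately instead of waiting */
    int flags = fcntl(sock, F_GETFL, 0);
    fcntl(sock, F_SETFL, flags | O_NONBLOCK);

    /* TCP_NODELAY: disable Nagle so small updates go out right away
       instead of being coalesced into larger segments */
    int one = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}
```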

The main reason was mostly that it is less effort.

One minor downside of TCP is that it may make sense to try to avoid congestion, which is basically a scenario where data is being sent faster than it can be delivered to the client.

Usually, this isn't too much of a problem with plain game updates on modern/fast connections, but it can become a lot more of a concern if you are also streaming file or world contents (such as the server streaming map geometry, textures, 3D models, ... to the client over the socket).

So, for example, if large data is being sent, it may make sense to cut it up into little pieces and send those mixed in with the other content, and also keep track of how much data is being sent to each client.
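As a rough sketch of that idea (the 1 KiB chunk size, the MSG_FILE_CHUNK tag, and the send_message() helper are all made-up examples, using the same tag + length + value framing described below):

```c
/* Sketch: splitting a large blob into fixed-size chunks so ordinary game
   updates can be interleaved between them on the same TCP socket.
   CHUNK_SIZE and MSG_FILE_CHUNK are arbitrary example values. */
#include <stdint.h>
#include <sys/socket.h>

enum { CHUNK_SIZE = 1024, MSG_FILE_CHUNK = 42 };

/* minimal tag + 4-byte big-endian length + value framing (see the TLV
   note further down) -- a real server would queue this, not send inline */
static void send_message(int sock, uint8_t tag, const uint8_t *data, uint32_t len)
{
    uint8_t hdr[5] = { tag,
                       (uint8_t)(len >> 24), (uint8_t)(len >> 16),
                       (uint8_t)(len >> 8),  (uint8_t)len };
    send(sock, hdr, sizeof hdr, 0);
    send(sock, data, len, 0);
}

void stream_blob(int sock, const uint8_t *blob, uint32_t total)
{
    for (uint32_t off = 0; off < total; off += CHUNK_SIZE) {
        uint32_t n = total - off;
        if (n > CHUNK_SIZE)
            n = CHUNK_SIZE;
        send_message(sock, MSG_FILE_CHUNK, blob + off, n);
        /* between chunks, the normal entity updates get their turn,
           and you can track how much data is still queued per client */
    }
}
```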

Also, it may make sense to send messages (like world updates) in a reasonably compact format (like raw bytes, or Huffman-coded or Deflated data), mostly because using a plain-text serialization (especially something like raw XML, but even something like plain-text S-Expressions or JSON) is enough to bog down even fast internet connections (IOW: you don't want to be sending entity updates with something like SOAP or similar...).

So, basically, this means the low-level layer delivers tagged messages (say, TLV: tag + length + value), and then on top of this there is a protocol based around sending compressed messages or similar (maybe Deflated, maybe something more specialized).

Note that the TLV framing itself serves a role:

It both eases multiplexing of the data and makes it easier to determine when a complete message has arrived (if we don't yet have a complete message, we don't process it until the rest shows up), as well as directing various types of messages to be processed in different ways (such as an update message vs. part of a file or raw-data payload, ...).
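Here is a minimal sketch of the receive side of such a TLV scheme (the buffer layout and names are my own, assuming a 1-byte tag plus a 4-byte big-endian length as in the sending sketch above; on_message() is a stub standing in for your dispatch code):

```c
/* Sketch: reassembling TLV messages from a TCP stream.  A message is only
   handed to the game once the whole value has arrived; partial data simply
   stays in the buffer until the next recv().  A real implementation would
   also reject lengths larger than the buffer can ever hold. */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

#define INBUF_MAX 65536

struct inbuf {
    uint8_t  data[INBUF_MAX];
    uint32_t used;
};

/* stub dispatch hook -- in a real server this would route updates,
   file chunks, etc. to the right subsystem */
static void on_message(uint8_t tag, const uint8_t *value, uint32_t len)
{
    (void)tag; (void)value; (void)len;
}

void pump_messages(int sock, struct inbuf *in)
{
    ssize_t n = recv(sock, in->data + in->used, INBUF_MAX - in->used, 0);
    if (n <= 0)
        return;                            /* nothing new (or disconnect) */
    in->used += (uint32_t)n;

    for (;;) {
        if (in->used < 5)
            break;                         /* header not complete yet */
        uint8_t  tag = in->data[0];
        uint32_t len = ((uint32_t)in->data[1] << 24) | ((uint32_t)in->data[2] << 16) |
                       ((uint32_t)in->data[3] << 8)  |  (uint32_t)in->data[4];
        if (in->used < 5 + len)
            break;                         /* value not complete yet */
        on_message(tag, in->data + 5, len);
        memmove(in->data, in->data + 5 + len, in->used - 5 - len);
        in->used -= 5 + len;
    }
}
```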

...

This topic is closed to new replies.
