Best approach to Server - Client with Server Primacy

Started by
2 comments, last by hplus0603 9 months, 1 week ago

Hello there,

I'm considering changing (again) all the networking tech inside a project of mine. The thing is, I'm not certain whether any of the approaches I took in the past are good, or if there is some other approach out there that could be better than mine:

1) 2 x 2 socket comms with TCP, using a message class to package the data so it can be sent more cleanly and safely.

- I have two TCP sockets: one for sending data to the server and one for receiving it. That way data can move between the client and the server more freely, without having to wait for the channel to finish its last use before sending data in the other direction. That's good.

- However, this approach isn't perfect, because the way I collect the data is prone to fail. By that I mean that with this approach I'm forced to feed a ConcurrentQueue, BlockingCollection, or plain Queue in order to pass the instructions from the receive methods to the state machine that processes them, and both of those classes become a problem when a lot of data comes in.

- If anybody has any idea how to fix and maybe optimize that approach, be my guest. I tried to drive the state machine directly from the side that receives the instructions from the client, but that caused the issue that I need to process the received data in a non-response-dependent way, so the communication between the client input and what's going on needs some kind of reconciliation, which, as it happens, still needs a ConcurrentQueue, BlockingCollection, or Queue.
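To make the hand-off concrete, this is roughly the pattern I'm using today (heavily simplified; `Instruction` and all the names are placeholders, not my real types):

```csharp
using System;
using System.Collections.Concurrent;

// Placeholder for whatever a client instruction actually contains.
public record Instruction(string Payload);

public class ReceivePipeline
{
    // Bounded so a flood of incoming data blocks the receive loop
    // instead of growing the queue without limit.
    private readonly BlockingCollection<Instruction> _queue = new(boundedCapacity: 1024);

    // Called from the receive socket's loop.
    public void OnInstructionReceived(Instruction instr) => _queue.Add(instr);

    // The state machine drains the queue on its own thread.
    public void RunStateMachine(Action<Instruction> handle)
    {
        foreach (var instr in _queue.GetConsumingEnumerable())
            handle(instr);
    }

    // Lets RunStateMachine finish once the remaining items are drained.
    public void Stop() => _queue.CompleteAdding();
}
```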

2) Option 1, plus a superclass to send big messages split into several parts.

- This kind of works, but there was sometimes a noticeable delay on some messages, and it suffers from the same issues as option 1 (maybe a little worse).

3) 2x2 socket, BUT with a “Dungeon Master” approach: the sockets are only used for receiving client instructions and sending the answers back, while a gigantic REST API serves the state of the game on iteration cycles.

- So, this is similar to the method before, but through a gigantic REST API, and the costs are big. Do you consider this the best approach for the state of the map and such? I would think it would be poorly optimized, but the fact that it guarantees correct transfer makes it worth considering. I haven't gone this route yet, but I've been considering retaking it after finding a better way of serializing custom classes, and after working out how to implement it with a state machine running in parallel.

Consider C# (.NET 6, not Unity).


Also, if you can think of a better approach to asynchronous collection of data than the one in option 1, please let me know; I may consider it as a solution to my issues.

Any observations? Recommendations? Comments or questions?

Thank you for your time



My first thought is that you're re-inventing the wheel if you're starting at the socket level. That's fine if that's what you really want to do, especially if your goal is to learn how networking works. But for anything serious you'll need to deal with NAT devices, firewalls, and similar routing; you'll want encryption for security, which these days means TLS/DTLS implementations; you'll want data compression to minimize what you transmit; you'll need DDoS mitigation and defenses against similar attacks; and much more. It's faster and easier to go with any of the existing major libraries that give you support for that out of the box. Libraries like Steamworks.NET, enet-csharp, and others bridge the gap between C#'s managed code and the underlying native libraries.

Regardless of the approach you use, most major libraries and custom-built ones use a common data channel with individual message headers, basically creating your own sub-channels. Either way, you can build your own flow control, so something that sends “way more data” doesn't saturate communications because you've broken it up internally, and your “freaking gigantic” calls can similarly be buffered on the sender side and reconstructed on the client side.
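As a minimal sketch of what such sub-channels can look like, here is a tiny framing layer that puts a one-byte channel id and a two-byte length in front of each message so many logical channels share one stream. The layout and names are purely illustrative, not any particular library's wire format:

```csharp
using System;
using System.IO;

// One frame on the wire: [1 byte channel][2 byte length][payload].
// Demo uses BitConverter (host endianness); a real protocol should
// pick a fixed byte order.
public static class SubChannel
{
    public static void WriteMessage(Stream s, byte channel, byte[] payload)
    {
        if (payload.Length > ushort.MaxValue)
            throw new ArgumentException("split large messages before framing");
        s.WriteByte(channel);
        s.Write(BitConverter.GetBytes((ushort)payload.Length), 0, 2);
        s.Write(payload, 0, payload.Length);
    }

    public static (byte Channel, byte[] Payload) ReadMessage(Stream s)
    {
        int channel = s.ReadByte();
        if (channel < 0) throw new EndOfStreamException();
        var len = ReadFully(s, 2);
        var payload = ReadFully(s, BitConverter.ToUInt16(len, 0));
        return ((byte)channel, payload);
    }

    // Stream.Read may return fewer bytes than asked; loop until done.
    private static byte[] ReadFully(Stream s, int count)
    {
        var buf = new byte[count];
        int off = 0;
        while (off < count)
        {
            int n = s.Read(buf, off, count - off);
            if (n == 0) throw new EndOfStreamException();
            off += n;
        }
        return buf;
    }
}
```

The same reader loop works whether the stream is a `NetworkStream` over one TCP socket or a `MemoryStream` in a test; the channel byte is what lets you route each message to game state, chat, bulk transfer, and so on.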

Regarding the delay, some of it is inevitable. You cannot overcome latency to deliver messages faster than the speed of light or electric signals. You also cannot overcome the throughput of the connections.

Otherwise, send less data so you don't saturate your connections, and segment large data blocks so you can interleave them; otherwise the OS and hardware will do it for you, which you may not like. If you want to have “absolutely safe” packet transfer, you'll want UDP payloads of 508 bytes maximum, breaking up anything bigger than that. That's the 576-byte minimum size required by the Internet Protocol, minus 68 bytes for IP and UDP headers, giving a payload maximum of 508. Or, more typically, 1472 bytes in a payload, which is the common ethernet MTU size minus headers, risking losing packets due to fragmenting, which is uncommon but not impossible. And you'll also want to merge smaller messages to fill the entire payload, so you're not wasting bandwidth on unnecessary headers. If you're using TCP, your networking libraries are already doing that behind your back in implementing the data stream. It's more common to take control of it yourself so you're doing it intentionally, rather than just hoping the system's defaults work.
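As a sketch of the splitting side, assuming a hypothetical 8-byte chunk header (4-byte message id, 2-byte chunk index, 2-byte chunk count) inside that 508-byte payload limit:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Splits a blob into chunks that fit a 508-byte UDP payload, reserving
// 8 bytes for a tiny header. Header layout is illustrative only.
public static class Chunker
{
    public const int MaxPayload = 508;
    public const int HeaderSize = 8;                      // 4B id + 2B index + 2B count
    public const int ChunkData = MaxPayload - HeaderSize; // 500 data bytes per chunk

    public static List<byte[]> Split(uint messageId, byte[] data)
    {
        var chunks = new List<byte[]>();
        ushort count = (ushort)((data.Length + ChunkData - 1) / ChunkData);
        for (ushort i = 0; i < count; i++)
        {
            int offset = i * ChunkData;
            int size = Math.Min(ChunkData, data.Length - offset);
            var chunk = new byte[HeaderSize + size];
            BitConverter.GetBytes(messageId).CopyTo(chunk, 0);
            BitConverter.GetBytes(i).CopyTo(chunk, 4);
            BitConverter.GetBytes(count).CopyTo(chunk, 6);
            Array.Copy(data, offset, chunk, HeaderSize, size);
            chunks.Add(chunk);
        }
        return chunks;
    }

    // Reassembly, assuming every chunk of one message arrived
    // (loss detection and resend logic deliberately omitted).
    public static byte[] Join(IEnumerable<byte[]> chunks) =>
        chunks.OrderBy(c => BitConverter.ToUInt16(c, 4))
              .SelectMany(c => c.Skip(HeaderSize))
              .ToArray();
}
```

Merging small messages is the same idea in reverse: keep appending header+payload pairs into one buffer until the next message would push it past `MaxPayload`, then flush.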

@Obs-D There is no need to have two sockets open; a lost packet with data in direction X does not usually noticeably impact data in direction Y. There are some edge cases when you fill up the outstanding transmission window, but in that case, it might be better to just round-robin across multiple sockets in both directions – and why stop at 2? Go to 10! If you get one packet lost, that just means 10% throughput loss.

However, in general, packet loss is very seldom “just a single packet” – when there's packet loss, it's either some router burping, which will drop a whole bunch of packets in a row, or it's a constantly lossy connection, like a crappy wifi, where packets will be lost and re-transmitted all the time. In that case, each of the connections will have the same level of bad, so there's really not much benefit compared to a single TCP connection, and the single TCP connection is much simpler. Just make sure to set the send/receive buffer size options big enough for whatever outstanding packets you're interested in supporting.
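Setting those buffer sizes in C# is just two properties on the `Socket`; the sizes below are arbitrary examples, and the OS may round or cap whatever you request:

```csharp
using System.Net.Sockets;

// Ask the kernel for larger send/receive buffers so more data can be
// in flight before the sender blocks. 1 MiB here is only an example.
var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.SendBufferSize = 1 << 20;
socket.ReceiveBufferSize = 1 << 20;
```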

Every queue should have a limit. This goes for “messages from simulation to socket” as well as the socket itself, as well as any other messages (even incoming decoded messages.) If that queue fills up, you have the option of applying backpressure and blocking whatever the sender is until the recipient can clear the queue, or of breaking the connection. For gaming, generally, if the client can't keep up, or if the client is flooding the server, the best experience for everyone else in the game is to drop/kick that client if you back up too much.
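A sketch of that drop/kick policy using a bounded `System.Threading.Channels` channel, where a full per-client queue kicks the client instead of buffering forever (names and capacity are illustrative):

```csharp
using System.Threading.Channels;

// One bounded outgoing queue per client. With FullMode.Wait, TryWrite
// simply returns false when the channel is full, which we treat as
// "this client can't keep up: kick it."
public class ClientSendQueue
{
    private readonly Channel<byte[]> _outgoing;
    public bool Kicked { get; private set; }

    public ClientSendQueue(int capacity) =>
        _outgoing = Channel.CreateBounded<byte[]>(new BoundedChannelOptions(capacity)
        {
            SingleReader = true,                   // one send loop drains it
            FullMode = BoundedChannelFullMode.Wait
        });

    // Called by the simulation when it has a message for this client.
    public void Enqueue(byte[] message)
    {
        if (Kicked) return;
        if (!_outgoing.Writer.TryWrite(message))
        {
            Kicked = true;                // queue full: drop the client
            _outgoing.Writer.Complete();  // let the send loop wind down
        }
    }

    // The per-client send loop reads from here and writes to the socket.
    public ChannelReader<byte[]> Reader => _outgoing.Reader;
}
```

The same channel type also works for the OP's receive-side hand-off to the state machine; `ReadAllAsync` on the reader replaces the `ConcurrentQueue`/`BlockingCollection` polling entirely.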

Polling state over REST is not a good idea unless your game is a turn-based game, potentially with a web client.

enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
