Quote:Original post by chessmaster42
That doesn't make any sense at all. If the packet is a struct then the client cannot change any array sizes or anything of that nature. Each variable is static in size. Am I just missing something here perhaps?
The client can change anything it wants. One doesn't even need a client to do that; just run Wireshark and spoof some packets.
Also, how will you send a variable-sized list of objects?
Let's say you have two cluster nodes which need to register mutual interest. Each of them sends a list of objects. This list might be empty, have 10 members, or have 10,000. The theoretical upper bound is 2^32 objects. Even if that many cannot be sent (16 GB at four bytes per id), the upper limit is arbitrary. Would you always send a hard-coded packet n megabytes in size, just so you're safe?
Or, delta state. You need to send all the changes that have occurred on an object. The motivation for this is minimization of network traffic. So you send a list of tuples: (#1, 17)(#4, "NewName")(#58, 2.43)(#78, 0). This takes a dozen or so bytes to send.
If you send this as a fixed structure, you'll always need to send the entire state, thereby defeating the main reason for delta states, which can save 80-95% of network bandwidth. Or you'll need to specify the length of the data you're sending. And presto, you have your std::list in your packet.
And how do you send a string? Let's say my packet contains this:
struct NamePacket { char name[32]; };
What happens if the client sends 32 non-zero bytes? You'll have a buffer overrun.
Worse yet!
Using this is horrible for security. Look at the Eternal Lands post-mortem: since they were using such an approach, they exposed users' passwords in buffers that weren't cleared beforehand, something that should not happen with explicit serialization. That lesson is a real-life example, not some contrived scheme.
Yes, sending &struct works. But it should only be applied in rare circumstances, in trusted and controlled environments, for small-scale projects.
Everywhere else, the difficulties, both technical and managerial, make it impractical, especially since the only benefits (such as performance) are too small, if they exist at all.
BTW: in a mostly C-based project, statically allocated structures shared directly across the network by sending raw data can work great.
There used to be a library that used this approach for a shared-memory C++ allocator, and there was a project that used the same approach for distributed shared memory.
This type of approach has definite uses and several practical applications. But for any reasonably complex application, especially one where peers are not trusted or not controlled, it would be hard to convince me that sending raw memory has any benefits.