Online Server Architecture


Recommended Posts

Share on other sites
I didn't read the whole thing (didn't have enough time to), but I can reply to some of it. First off, the TCP header is typically 20 bytes, I believe, not 28. But you also have the IPv4 header, which is another 20 bytes. You also mentioned you have a ushort on your messages, so that's 42 bytes of overhead before even getting to the information.

Shooters typically use a completely different networking scheme than MMORPGs. A shooter (at least the typical kind) demands very low latency and a lot of changing entities. An MMORPG, on the other hand, is typically much slower and automated. Your character may be smacking away at that dragon, but if it's one of those many MMORPGs that have automated targeting, all the server really needs to do is say "A started attacking B" and send the occasional stat/damage notifications to those who are interested (which is usually just the one fighting). Also, since fights are often much slower and the enemy is usually an NPC, latency is nowhere near as critical.

So how do they get away with so little bandwidth? Well, for one, an FPS usually works by constantly sending the state of nearby entities, using UDP to do so. No data is retransmitted, and it's not uncommon to receive an update for something that hasn't changed, but it results in very low latency. TCP-based games, on the other hand, know that their data will eventually reach the target, so they can send updates only when something changes. So now you've got a slower, event-based scheme using TCP, and a constantly-sending, "shotgun"-update scheme using UDP, which has much lower latency at a much higher bandwidth cost. These hugely different schemes are why it can be so hard to merge the two.

Whoop, gotta run!

Share on other sites
Christ, lots of questions...

As Spodi says, the choice between TCP and UDP is, in a way, a choice between low bandwidth and low latency.

MMORPGs are event-based: they require little bandwidth and reliable transmission, and they are not built for accurate, fast-paced action. A bit of latency, although annoying, won't really damage the game mechanics.

FPS shooters are the opposite: they need to support fast world updates, instant input response, and minimum latency.

That's why an FPS MMORPG would be pretty hard to code; it has to satisfy both sets of requirements.

There is a third model, used for RTS games, that uses a deterministic engine, but you don't wanna go there.

For the player limit on broadband servers, you can work it out: take the average packet size for a player/entity update, multiply by the average network rate, multiply by the number of connections, and multiply by the number of 'visible' players/entities seen by each connection. So it will grow very fast as your connection count grows (as it adds more players and entities -- grenades, rockets, stuff spawned by players -- into the potentially visible set for each connection).
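The estimate above can be sketched as a one-liner. This is just the multiplication described in the post; the numbers plugged in at the bottom are illustrative assumptions, not measurements.

```python
def server_upstream_bytes_per_sec(avg_update_bytes, updates_per_sec,
                                  connections, visible_entities):
    """Rough server upstream load for a state-broadcast scheme.

    avg_update_bytes  -- average size of one entity update, incl. headers
    updates_per_sec   -- network tick rate
    connections       -- number of connected players
    visible_entities  -- average entities in each player's visible set
    """
    return avg_update_bytes * updates_per_sec * connections * visible_entities

# Assumed example: 24 players, 10 Hz, 48-byte updates, ~12 visible entities each
print(server_upstream_bytes_per_sec(48, 10, 24, 12))  # 138240 B/s, ~135 KiB/s
```

Note how every factor except the update size tends to grow together with player count, which is where the fast growth comes from.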

Typically, 16-24 is an acceptable limit. 64 would be really pushing it and would require a high spec broadband connection. Remember that it is upstream performance you are worried about, and it is very poor on Mr Anonymous' DSL.

That's why console games with no dedicated servers are more peer-to-peer, to spread the load among all the clients. And then again, it is still a challenge to support more than 16 players on a broadband connection. Having automated load-balancing is a must to keep the game (relatively) flowing smoothly.

As for authentication, I'm no expert, but the way you describe is pretty much how game sessions work. You advertise your game session on a master (matchmaking) server, which gives you a key and a unique session ID, and anyone wanting to connect to your server has to get the key off the matchmaking server. Then all the packets are encrypted using that key. So each packet will also have an encryption header, which will increase your packet size (by quite a lot).

The authentication can be three-way, if you have a separate, dedicated login server as well.

Consoles provide that functionality for you, but finding equivalent support for PCs is much more difficult (and costly). You'd be looking at GameSpy, DemonWare, etc. Implementing it yourself is a huge task too, but feasible nonetheless.

Also, do not forget NAT punch-through; a matchmaking server can help you with that (it does in a way, by providing you with the session host's internet address).

Thirdly, if you plan on having broadband users hosting, it can be useful to support host migration, but it is very hard to implement.

Oh, and as for a SQL database: it is usually redundant for FPS (fast-paced) games. You can use a database for registering game sessions, player stats, data mining, etc., but it is overkill for in-game data. The pace of the game is far too high to be querying a database for entity states. You want minimum latency, and the data is non-persistent.

Share on other sites
Quote:
 I am fairly certain that some illegal WoW private servers are running on DSL/Cable
Sure. They could also run on a 28.8k modem. That doesn't mean they scale or that they are playable. These types of MMORPGs also limit the bandwidth to 3 kilobytes per second (not 30) peak, and 1 KB average.

Quote:
 I was just wondering about what is a logical number of players that could play on a standard broadband connection?
That depends on the logic you're using. 1000, even 10,000 is not impossible. It just requires you to adopt a different simulation model.

Quote:
 and in an online shooter it isn't used a whole lot
Better yet, it's not used at all, since there's no persistent world. It would be used for either leaderboards, or account information or such.

Quote:
 so I think I can suffice with the four separate servers I have mentioned
Quote:
 Now I'm to the point where I've already included a MySQL database, separate web server, separate login server, and then an actual game server.

What exactly did you gain with this, apart from a configuration nightmare?
- Web server: bandwidth heavy, requires complex administration
- SQL server: memory/disk heavy, puts load on network and CPU
- Login server: will be sitting idly (128 logins are handled in around a second)

What would make sense is not having a "game server", but instead having "instance servers". So for each room you spawn a new instance. Then the login server takes care of managing them and assigning players to them.

You also drop the SQL server and use a proxy to access the data instead. This gives you the benefit of either using an embedded SQL engine or keeping all data cached. You mentioned your entire state isn't too large. You could even drop SQL entirely, since it doesn't sound like you need it.

Share on other sites
Quote:
 Additionally, I'm wondering about the game server. It isn't designed to be a massive world. It is designed to be a small room that contains a small number of players (My goal is to be able to put 4 teams of 32 or 128 players in one room).

This design could end up causing problems, as you'll have an n-squared bandwidth increase if all players are within 'relevancy' distance of each other. Most games with large numbers of players are usually designed to keep players in separate smallish groups; you can see the problems that happen when large numbers of players congregate in WoW.

(If you have 32 players all together, you need to send 32 updates to each player, so that's 32*32=1024. If you have 2 separate groups of 16 players on the same server but in different areas/rooms/whatever, that's 16*16*2=512. Now imagine how that scales if you can design your game so you can have 1000 players on a server but all in smallish groups, compared to 1000 players all in the same room (1 million updates!).)
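The grouping arithmetic above can be checked directly. This uses the post's own n*n approximation (per group, each of n players receives an update about each of n players):

```python
def updates_per_tick(group_sizes):
    # Each player in a group receives one update per player in that group.
    return sum(n * n for n in group_sizes)

print(updates_per_tick([32]))        # 1024    -- one room of 32
print(updates_per_tick([16, 16]))    # 512     -- two rooms of 16
print(updates_per_tick([1000]))      # 1000000 -- everyone in one room
print(updates_per_tick([10] * 100))  # 10000   -- 1000 players, groups of 10
```

Splitting 1000 players into groups of 10 cuts the per-tick update count by a factor of 100, which is the whole argument for interest management.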

Share on other sites
First, as has been said, the protocol overhead for a TCP packet is at least 40 bytes (more if certain options are enabled). UDP is 28.

Second, if you have 30 users who are all right next to each other, and you want twitch update rates, and you have limited upload speed, then something has to give. Either reduce the number of players, or increase the bandwidth, or remove the twitch requirement.

WoW is an example of the latter: it does not give twitch response to movement at all. If the link is heavily used, you may get one update a second or less for players you can see (and no updates for players you can't see).

When it comes to bandwidth, the larger systems are typically hosted in well-connected co-location facilities, where the physical link isn't much of a problem. Instead, you "commit" to a certain average throughput, and if your average goes over that, there's usually a surcharge. Typical commit prices are in the $5-10/Mbit/month range, so a Gbit/s commit is about $5k-$10k per month (very roughly -- each deal is different at these levels).

Share on other sites
I'm well aware of the n-squared problem; however, I'm also well aware of the saying "the more the merrier." Let's face it, a game with 32 players is more fun than a game with 16 players. 16 players is plenty, but I really want to get at least 32 in a room. So, I guess I was right when I said that 32 is pushing the max of a broadband connection.

The MySQL server is staying. I need it because this game is a mix between an RPG and a shooter (it has RPG rules, but ultimately plays like a shooter). The database is still very important; however, as I said, it is not really even used during game play. The shards make queries before the game starts and after it is over, and the login server makes queries to authenticate users.

Quote:
 What exactly did you gain with this, apart from configuration nightmare.
 - Web server: bandwidth heavy, requires complex administration
 - SQL server: memory/disk heavy, puts load on network and CPU
 - Login server: will be sitting idly (128 logins are handled in around a second)
 - Game server: still overloaded

The shard server is a single room -- I think my earlier post did not make this clear. However, the master server does not spawn these; there is a set number. The servers are split programs; they can obviously still be run on one computer, and the split makes the difference in speed minimal. However, when the game becomes much more popular, the split makes all the difference in the world. That said, I can see where you all are coming from when you say it seems a bit pointless.
I honestly don't expect anything good enough to need this architecture to come from this engine; however, I'm programming it this way because I'm optimistic, and it is helping me learn how these types of applications work.

Furthermore, I want to stay with TCP rather than using UDP. UDP does have a header that is 20 bytes smaller; however, 20*10*32 is only about 6KBps. That could serve a few more players, but it's not going to put 32 more players in a room. The problem is the n-squared growth of the player movement updates, which there isn't really a workaround for, other than simply having fewer players. Additionally, I don't know if it is just me, but the UDP programs I have created in the past have had problems with players' routers rendering them unable to play. The one thing I don't want is another reason for someone not to play my game. They already have WoW, Halo, GTAIV, etc... I don't need any help =p

Thank you all for your input. I would appreciate anything else you have to add, especially about the authentication and server architecture.

edit:
Quote:
 Second, if you have 30 users who are all right next to each other, and you want twitch update rates, and you have limited upload speed, then something has to give. Either reduce the number of players, or increase the bandwidth, or remove the twitch requirement. WoW is an example of the latter: it does not give twitch response to movement at all. If the link is heavily used, you may get one update a second or less for players you can see (and no updates for players you can't see).

Bah... Hplus, you are a genius. That is a trick I used in my first online game and had completely forgotten about. I'm wondering exactly how low I can make this in an online shooter, though. I'm thinking that if I put a cap on the amount of data a player can send per second, this could fix a lot of things. 48*10 is the average minimum, and roughly 32KBps is what I'm targeting for server bandwidth. I'm thinking that I could allow a max of 2KBps to 32 players.
Obviously that is a great deal more than the server can deal with, but I'll have other precautions in place to handle that. I understand I'll have to work all of this out myself until everything looks right. It is very hard for someone to arbitrarily throw numbers out there, but if you have anything to add, I would appreciate it. Once again, thank you all.

Share on other sites
If you play Counter-Strike: Source, you can see the adaptive update working when you spectate. You'll see players on the minimap moving at different rates, depending on their visibility from your camera viewpoint -- from 20 updates a second down to a few updates a second (maybe two or three). It is a must if you want to minimise bandwidth in a very busy game.

Secondly, have you thought of using peer-to-peer to reduce the server load? I know peer-to-peer isn't as secure as a pure client-server setup, but if you really want that number of players on DSL/cable lines, you should consider letting the players do some of the work themselves. They'll have bandwidth to spare (voice comms is a given).

128 players updating at 20 fps in a shooter environment is a LOT of data, and anything below 10 fps will feel laggy and unresponsive. I'm not sure the internet is mature enough for that sort of gaming requirement. You can prototype on a LAN, and that will give you an idea of the load for the servers and clients respectively. Then you may have to think about downscaling a bit. The worst-case scenario is pretty scary, to the point where the server could completely overload.

In any case, it's a design decision. If you spec your requirements against the constraints of a real-world scenario, you'll see if both can fit (which I doubt very much).
That, or the gameplay will have to be something more similar to an RPG. :/

Share on other sites
On a related note, I was recently (a few weeks ago) told about an MMORPG that was implementing FPS elements such as manual aiming with the bow, but I can't remember the name of it. And no, it wasn't Hellgate: London. When I went to look at it, it seemed that they had actually dropped the manual aiming since it was just way too problematic, and now you do the typical "click here to auto-shoot!".

The one game I can think of that has actually done something like this is Hellgate: London, but even that is just heavily instanced, which I don't consider being in the same world. If you go to location X, you should see everyone else in the world who is at location X, not just two other people who went there at the same time as you.

My point? You're up for a huge challenge by trying to mix the two. It's something everyone seems to want to do, but even with the pros, the results are less than desirable.

In any case, I don't think you'll be able to put 128 active players in one room on a home connection. Assuming you're using an event-based update system, if one person moves, that's 40*128 bytes of overhead on TCP/IPv4. Factor in the message length header (2 bytes) and the message itself (let's say about 6 bytes), and you've got yourself (40+2+6)*128, or 6144 bytes, for that one movement. Even backed by the most uber of connections, it's still going to take some amazing hardware and techniques to keep 128 players updated quickly.

...even if my math is off somewhere, which I wouldn't be surprised if it is (what can I say, it's early), the point remains that it will be difficult^2 to keep 128 players updated about 127 other players. ;)

Share on other sites
I'm trying to think of all the ways to reduce this load as much as possible.
I've come up with a few ideas in a short amount of time, and I'm sure some of you can come up with even better ones.

First, as Hplus mentioned, I'm going to put a limit on the amount of bandwidth each player can hog. I'm also going to update players farther away less frequently than players close in range. I think this is fairly standard practice.

Second, originally I was going to make the rooms fairly small; however, the smaller the area, the better the chance that more players are in viewing distance. So, I decided that I'm going to change the gameplay a little and make the rooms a great deal larger. I'm also going to give players incentives not to be in large groups. For example, the main play mode is a base-control game. Players get points for controlling bases for a certain amount of time, and after a team reaches a certain number of points, the team wins. If I spread these bases out a little and add more of them, players quickly become spread fairly thin. So, instead of average 8-on-8 to 16-on-16 encounters, players will be having 3-on-3 to 6-on-6 encounters. I feel that this could reduce the average player packet from having to cover 20-30 players to having to cover 6-12 players. I realize that that reduction isn't going to solve anything by itself, though.

Finally, I'm considering adding a peer-to-peer part to the game where p2p messages are sent through UDP. I was opposed to p2p and UDP running the system entirely, but I think that if I use the p2p UDP only to send player movements to their own team, everything will work out. I'm concerned about whether this can be made secure, though. I'm planning to let players send messages to players on their team to update their movement. If they want to mod their messages and send false coordinates to their own team, that really isn't cheating, since I don't see how that gives them an advantage.
It seems that I could fairly simply keep players from interfering with the server messages to mod their movement on the enemies' screens, but I'm wondering if I can do this as securely as I think. I'm planning to do this fairly simply, using a private/public key system. The shard server will send players the IPs they need to send to and are allowed to receive from. Is there a way these messages could be hijacked by players from the other team, allowing them to send messages that overwrite the server's messages for their movement?

I'm still planning to have the server control the movement of opposing teams. This will easily reduce the load on the server by a third; however, I think it could stress 56K players. They will be receiving tons of UDP messages rather than just one. So, even with only 32 players, 15 players could possibly be sending a connection 28*20*15 bytes per second. That is only about 8KBps, if I calculated correctly, which I probably didn't.

However, the scalability becomes more of an issue this way, I think. If you expand that to 128 players, which is really my biggest target, that leaves 63 players that could possibly be sending 20 messages a second. That becomes 31KBps, which has to go both ways, plus the additional load from the server. That becomes something that could clog up a crappy DSL line, and wondering whether you could get 256 players in a room becomes a joke.

So, I thought that I could modify that system a little so that only a few players per team would be sending the messages, and the server would only be sending messages to a few players per team. Those players must then turn around and disperse those packets to their team. This sounds like a very interesting idea to me; however, it seems like if it were as good an idea as I think, someone else would be doing it, and I don't think anyone is.
Furthermore, I'm unaware of a way to easily find the fastest connections, and I'm wondering what happens when one of the "important players" loses their connection to the server.

My intention was never to create something for commercial games (I'm not that good, so it would be stupid to even try), but when I start talking about rooms of 128 players, that sort of sounds like a commercial game. Am I just getting ahead of myself? Is it stupid to try to put such a small burden on servers and such a high burden on clients? Should I just stop caring about hosting on a broadband connection if I want to put 128 players in a room? I know I could do the coding for what I have described; I just don't know if it is logical and if it will be secure. So, is this goal illogical/unrealistic? If I go back to the 32-player target, will this goal be very difficult to achieve? I tested a while back with a different server on a faster connection with only 16 players, and the server handled it fine. However, the strain on the server grows quadratically with the number of players added.

Once again, you all have been extremely helpful and I really appreciate everything so far. If you have any ideas (even if they are a totally different approach), I would really appreciate it if you would share them. Thank you.

Share on other sites
Frankly, I really think 128 players is far more than a broadband connection can handle. 32 is a lot as well, but manageable on a very good broadband connection. However, if you want some sort of team-based thing, that will give you 4 teams of 8 players, which isn't bad.

Secondly, the adaptive update isn't just about distance, but more about visibility or culling. You can have a small room, provided it's heavily compartmentalized.

Thirdly, P2P is an option to spread out the network load, but there are issues, such as NAT router problems (whether two peers can talk to each other), cheating (modems with standby mode), and the general increased complexity.
You will still need a session host to control the game flow and whatnot, and to verify player positions. But if you want a large number of players on DSL lines, I don't think a pure client-server model can work, unless you know the host has an extremely fast connection. The clients will have to do a lot of the work themselves.

As for testing the quality of service (as it's known), you'd have to implement a server query that tests a client's bandwidth capabilities at his request. The process can take a few seconds.

Share on other sites
It is unrealistic to host even 32 players on a broadband uplink for a twitch game. If you tried hosting CS:S or similar on a broadband uplink, it wouldn't work, for this same reason. You need to get your game into a better-connected locale. I recommend renting a private server for about $100/month at a place like ServerBeach or 1and1 (there is a short list in the Forum FAQ, although you can find more online).

At the end of the day, \$100/month really isn't that much. It's about 10 hours of flipping burgers, or 4 hours of installing printers per month. If you really want to provide a good experience for your players, perhaps that's a trade-off you should make.

Share on other sites
I realize that 128 players is pretty unrealistic (at least for a single broadband connection). I just thought that other servers were pulling it off so I thought that maybe I was doing something wrong.

I want to focus on 32 players for now. The way I have things set up, 32 players will still be a decent bit over the upstream of a regular broadband connection (worst case scenario). I'm calculating that when it gets to about 26-28+ players all in the same viewing area the server will be exhausted. What do you think is the best way to work around this?

I'm starting to like the group idea, but I'm thinking there could be a lot of problems to go along with it. It seems that if this were a good idea, other games would be using it. However, I'm also thinking that I could just leave it like all other games and let players figure out that getting into a big blob is a bad idea and to try to avoid it. That seems much less fun, though.

Furthermore, I'm wondering what you think the best way to encrypt messages would be. I could use RSA to send an AES key and then use AES to encrypt all the messages; however, that seems overly secure and slow for an online game. I was thinking that generating a random string used for XOR encryption and sending that through RSA would be significantly faster and secure enough for an online game (I already have the username/password exchange secured).

Share on other sites
Don't worry too much about encryption, that's the least of your worries tbh.

The only balancing you can do to mitigate the worst-case scenario is throttling your bandwidth (i.e. reducing the network rate, or prioritising entities depending on their proximity to the listener). In any case, it will crappify the gameplay, that's for sure, but there is not much choice. Something's gotta give. Or again, go P2P to spread the load.

Also, careful level design could prevent that worst-case scenario. Big open areas are generally bad for performance, but good for gameplay. Small, compartmentalized areas are good for culling but usually bad for gameplay.

Share on other sites
I think I may have made a huge mistake in my calculations, or I'm just not thinking clearly right now.

My upload speed is 863Kbps, or about 107KBps; however, I've been assuming that since I have DSL, my upload speed is slower than cable. But I remembered that my cable upload speed was only 32KBps, which was the speed I was originally targeting. However, I just read that the average cable upload speed is around 18KBps. Furthermore, I wasn't figuring the IPv4 header into the speed that would be needed.

I wouldn't come here to ask this if I didn't already calculate this many times. This is my problem:

I'm working with 32 players and a refresh rate of 20 and the TCP/IP header is 48 bytes. So, just that alone is 32*20*48 or 30720 or 30.4KBs. Furthermore, the movement message has a 4 byte header and then each player that has changed position adds another 2 bytes to that. So, 4 + 2*31 is 66. After I factor the 66 in I get 32*20*(48 + 66) = 72960 = 72.1KBs.
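As a sanity check, the arithmetic above works out as follows (all sizes are the post's assumptions; the 48-byte figure presumably includes TCP options on top of the 40-byte TCP+IPv4 minimum):

```python
# Worst case: every tick carries a position delta for all 31 other players.
players   = 32
tick_rate = 20                                    # updates per second
header    = 48                                    # assumed TCP/IP header bytes
msg_hdr   = 4                                     # movement message header
per_peer  = 2                                     # bytes per moved player
payload   = msg_hdr + per_peer * (players - 1)    # 4 + 2*31 = 66 bytes
total     = players * tick_rate * (header + payload)
print(total)                                      # 72960 bytes/s, roughly 72 KBps
```

Note that the 48-byte header dominates the 66-byte payload here, which is why merging updates into fewer, larger packets is usually the first optimization for TCP games.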

72.1KBs is well within my capacity; however, I wasn't paying attention to the numbers when I was testing this and I just figured that cable would be faster than my DSL connection, which is not the case.

I'm feeling like I made a mistake somewhere, though. With these numbers, cable couldn't even host a 16 player game with 10 updates a second, which is not very much, really. Even if I use UDP, cable will still struggle if it is as slow as I'm reading. Even when I lower updates drastically when players are grouped, that only goes so far.

I really felt like 32 players was a small goal; however, I'm quickly proving myself wrong. If I want to have a decent refresh rate I'm going to have to drop down to 8 players.

So, have I calculated the 72.1KBs correctly? Is that a reasonable worst case scenario (I know there are other messages as well, but movement is the vast majority)? Is 72.1KBs much over the standard broadband connection's capabilities? What do you think is a decent target to aim for in terms of upload KBs? What do you think is a decent target to aim for in terms of players that a broadband connection can handle? At this point I'm laughing at the thought of 128 players and I'm starting to think 32 players may be almost as ridiculous.

Researching anything on game servers is quite difficult, since most people want to keep their tricks to themselves. This is going to be a much less conventional server network, so I'm willing to use p2p more than I was at first; however, I would still like to keep things secure. How hard is it to eliminate cheating if you allow p2p? As I've mentioned a few times, if you only allow p2p between team members, I think it would be much harder to cheat than if you allowed p2p between everybody. Am I going to have to incorporate p2p if I want to put 32 players on broadband?

Once again, I'll bring up the group idea I mentioned earlier. If I group the 32 players into groups of 4 and only send to one player in each group, that shrinks the upload from 30.4KBs to roughly 7KBs. Even if the server sends messages directly to all players every once in a while, the load is still reduced drastically. Can anyone give me reasons why I should not do this (remember, this is not an MMORPG; this is an online shooter)?
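The relay idea above reduces to simple division: the server sends one full stream per group instead of one per player. A sketch, using the assumed 48 bytes * 20 Hz = 960 B/s per stream from earlier in the thread:

```python
def relay_upstream(players, group_size, stream_bytes_per_sec):
    """Server upstream when one relay per group receives the stream
    and redistributes it to its group mates over p2p."""
    relays = players // group_size   # assumes players divides evenly
    return relays * stream_bytes_per_sec

direct  = relay_upstream(32, 1, 960)   # no relaying: 32 streams
relayed = relay_upstream(32, 4, 960)   # groups of 4: 8 relay streams
print(direct, relayed)                 # 30720 7680
```

The server's saving is exactly the group size, but each relay now pays the redistribution cost on its own uplink, which is the trade-off the thread goes on to discuss.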

Also, feel free to continue to comment on the encryption.

Once again, you all have been very helpful. Thank you for all the information.

Share on other sites
Yes, your calculations are reasonable. A tick rate of 20 is way too much when trying to cram lots of players into the connection -- lower it to 10. However, that will only give you a few more players (because of n-squared).

Quote:
 I just thought that other servers were pulling it off

They don't. They either use a tick rate that's really low (1/second; even less for far away objects); or they use a lot more bandwidth, and host servers in a colo.

Share on other sites
Quote:
 Original post by yahn
 Furthermore, I'm starting to worry about authentication from the login server to the game servers. I spent a lot of time working on a moderately secure login server and I would like the authentication to the game servers to be equally secure. Currently, I'm using what I think is private key security. When the game server joins the serverlist server, the serverlist/login server sends a private key to the game server and stores that. When a client sends a request to join a server from the login server, the login server generates a public key, concatenates that to the private key, and sends the hash of that and the public key to the client. The client sends this public key and the hash to the game server. If the game server gets the hash you sent it, then it is satisfied. That seems pretty secure to me, but I'm no expert. What do you think?

I believe I don't understand your security setup. Anything that sees your key and hash go by can pretend to be you. If you don't encrypt the message as it travels both ways, then it isn't secure. A player should generate his own public/private key pair so that no one ever knows his private key. If the server knows his private key, then it is no longer private.

A secure handshake goes like this: server and player both have a public/private key pair. When the player goes to log on, you send them a random number encrypted with their public key. The player decrypts the random number, encrypts it with their private key (this is called a digital signature), then encrypts that with the server's public key. When the server decrypts it with its own private key and then with your public key to finally get the original random number back (the random number is called a nonce), you are fully authenticated.
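The handshake above can be traced with textbook RSA and deliberately tiny keys. This is purely illustrative: the key pairs are toy numbers sharing one modulus for brevity, and real code must use a vetted crypto library with proper padding and key sizes.

```python
def rsa(m, key):
    # Textbook RSA: m^e mod n (encrypt/verify) or m^d mod n (decrypt/sign).
    exp, n = key
    return pow(m, exp, n)

n = 3233                                       # 61 * 53; lcm(60, 52) = 780
server_pub, server_priv = (17, n), (413, n)    # 17 * 413 % 780 == 1
player_pub, player_priv = (7, n),  (223, n)    # 7 * 223  % 780 == 1

nonce = 42

challenge = rsa(nonce, player_pub)             # 1. server encrypts the nonce
decrypted = rsa(challenge, player_priv)        # 2. player decrypts it...
signed    = rsa(decrypted, player_priv)        #    ...and signs it
wrapped   = rsa(signed, server_pub)            # 3. wrapped for the server
unwrapped = rsa(wrapped, server_priv)          # 4. server unwraps...
recovered = rsa(unwrapped, player_pub)         #    ...and verifies the signature
print(recovered == nonce)                      # True -> player authenticated
```

Because only the player's private key can produce a signature that the player's public key verifies back to the nonce, a passive observer who sees the traffic cannot replay or forge the exchange (unlike the hash scheme quoted above).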

Share on other sites
Also, you haven't mentioned: are you doing any motion estimation to help out a low tick rate?

Share on other sites
The password exchange is secure in that the only way to break it is to take the challenge and the salt the server sends the client and run a brute-force attack against them until you produce the hash that the client sends the server. Additionally, it all takes place over SSL. Perhaps if this were credit card information I was dealing with, I would want to be a little more secure, but I think that will suffice.

I'm sending the motion a little differently than I first described. At a refresh rate of 20 times a second, I send whether the player has changed speed or direction; however, on 4 of the 20 ticks, I send the player's actual coordinates to make sure the clients don't get out of sync. So, for example, if the player is holding down the left key, instead of updating the direction constantly, I only update it when the player presses and releases the key, because the rate at which the direction/speed changes is constant. However, every once in a while the clients and the server get out of sync, so I send the coordinates 4 times a second.

This has made the movement a little twitchy if you pay really close attention, but it isn't very noticeable unless you are looking for it. Additionally, it has effectively halved the worst-case scenario -- it is pretty hard to hit keys fast enough to make the server need to update 20 times a second. Oftentimes players need to be updated only a couple of times a second or even less. However, a packet goes to a client nearly 100% of the time, so it doesn't reduce that much. Even if the speed and direction change 6 times a second, which, honestly, is a bit much, it reduces 32*20*(48 + 2 + 4 + 31*6) (~154 KB/s -- no optimization) to nearly 32*20*(48 + 2 + 4 + 10*3) (~54 KB/s -- optimized as much as I can figure) (I still have to factor in when all of the coordinates are sent).
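The back-of-envelope numbers above, reproduced directly (units assumed: bytes per message, 32 clients, 20 ticks per second):

```python
# Bandwidth estimate from the post: clients * tick rate * bytes/message.
clients, tick_rate = 32, 20

naive = clients * tick_rate * (48 + 2 + 4 + 31 * 6)      # full updates
optimized = clients * tick_rate * (48 + 2 + 4 + 10 * 3)  # change-only

print(naive, optimized)   # 153600 and 53760 bytes/s, i.e. ~154 and ~54 KB/s
```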

Sorry for quickly changing topics from server architecture to handling movement.

Once again, you all have been very helpful. Thank you for your time. I will appreciate anything you have to add.

With P2P, it is obviously a lot easier to cheat, since the client is in control of some things. But packet encryption will make it a lot harder (modifying packets would be impossible, for example), which is one of the easiest ways to exploit a game.

However, P2P security is too game-specific to go into details here, but usually a server would have override privileges if it detects something fishy.

As for your P2P option, that's certainly possible.

Usually, game entities (or components of them) have a set authority that dictates who is in charge of that component/entity. It can be the server, the client himself, or some other remote entity (the squad leader). Whichever it is, that's up to you and your design. Again, the main problem with P2P is keeping connectivity and a solid connection graph.

However, you have to be careful when you have entities with multiple authorities controlling parts of them (for example, the server controlling the general player state and the client controlling his position/orientation). They can suffer a 'split personality', if you know what I mean.

As for optimising the data that needs to be sent, one technique we are currently developing, which has been used to great success in other games (Quake 3, UT, the Source engine...), is delta-compression.

This ultimately ensures that you send a client only the changes from his last acknowledged state. The main drawback is memory consumption, since you have to cache game states for a few seconds to be able to build delta-packets. Also, sending a full state would mean sending a LOT of data in one shot, hence the 'snapshot' reliable-state stages. There are other drawbacks as well: packet loss and increased latency introduce a negative feedback loop, forcing the host to send more data as the connection quality decreases, but that can be controlled with basic throttling.
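A rough sketch of that delta-compression idea, assuming dict-shaped game states (the state layout and function names are illustrative, not the actual engine code):

```python
# Server caches recent snapshots and sends each client only the fields
# that changed since the client's last acknowledged state.
history = {}   # snapshot_id -> full game state (entity -> field dict)

def record(snapshot_id, state):
    """Cache a snapshot so deltas can be built against it later."""
    history[snapshot_id] = state

def build_delta(last_acked_id, current):
    """Diff the current state against the client's last acked snapshot."""
    base = history.get(last_acked_id, {})
    delta = {}
    for entity, fields in current.items():
        changed = {k: v for k, v in fields.items()
                   if base.get(entity, {}).get(k) != v}
        if changed:
            delta[entity] = changed
    return delta   # an unknown base degenerates to sending the full state

record(1, {"player1": {"x": 10, "y": 5, "hp": 100}})
delta = build_delta(1, {"player1": {"x": 12, "y": 5, "hp": 100}})
```

Note how the memory cost shows up directly: `history` must keep every snapshot a client might still acknowledge, which is why states are cached for a few seconds.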

But the benefits far outweigh the drawbacks, especially when it comes to design and code dependencies.

As for encryption, whatever's your poison. I never really had to worry about that, as the network libraries we use take care of the security layer.

Quote:
But packet encryption will make it a lot harder (modifying packets would be impossible, for example), which is one of the easiest ways to exploit a game.

That's not true. If you have an active man in the middle, then he can decrypt the packet, change it, and then re-encrypt it. Similarly, an active man-in-the-middle can defeat a Diffie-Hellman key exchange, by simply pretending to be the client on one end, and the server on the other, hence generating two sets of keys. The only "solution" is to use private/public keys for signatures, where there is a secure out-of-band channel for exchanging the public keys.
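The Diffie-Hellman attack described can be shown with toy numbers. This is a sketch with tiny parameters for illustration; the private exponents are hardcoded where real peers would pick them randomly.

```python
# Toy Diffie-Hellman man-in-the-middle: Mallory runs one exchange with
# each side, so each "shared" secret is really shared with her.
p, g = 23, 5                       # toy public group parameters

a, b, m1, m2 = 6, 15, 13, 9        # private exponents (Alice, Bob, Mallory x2)

A = pow(g, a, p)                   # Alice's public value (intercepted)
B = pow(g, b, p)                   # Bob's public value (intercepted)
M1 = pow(g, m1, p)                 # Mallory, impersonating Alice to Bob
M2 = pow(g, m2, p)                 # Mallory, impersonating Bob to Alice

alice_secret = pow(M2, a, p)       # Alice thinks this is shared with Bob...
mallory_alice = pow(A, m2, p)      # ...but Mallory holds the same key
bob_secret = pow(M1, b, p)         # likewise on Bob's side
mallory_bob = pow(B, m1, p)
```

Alice and Bob each complete a valid-looking exchange, yet they end up with two *different* keys, both known to Mallory, who can now decrypt, modify, and re-encrypt everything in transit. That is why unauthenticated DH needs signatures over an out-of-band-verified public key.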

However, for games, it doesn't matter that much, because someone who wants to cheat can attach to your process using the debugger API, or DLL injection, and then poke the values he wants into your memory addresses. For a game, you simply should not worry that much about encryption for protection, because it doesn't work, and it's not necessary.
