About Rells

Community Reputation: 103 Neutral
  1. Same goes for knowing the player ID of another player, I would think. However, what you say is worth some thought. It would require a mapping from source IP/port (which is 6 bytes in IPv4) to the player's stream handler. Certainly doable. And since the port is likely to change each time the client creates its source socket, it might work. I would have to make sure the client creates a single outgoing datagram socket and uses that, rather than sending from a dozen ports. Interesting idea.

     It's still spoofable, of course; everything non-encrypted is. And whether I can layer encryption on top depends on a ton of other issues.
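     The source-address mapping described above could be sketched like this in Java. The class and method names are my own, not from the post; a real server would also need session setup and expiry:

     ```java
     import java.net.InetSocketAddress;
     import java.util.Map;
     import java.util.concurrent.ConcurrentHashMap;

     // Illustrative sketch: route each incoming datagram to the handler
     // registered for its source (IP, port) pair.
     public class AddressRouter {
         public interface PlayerHandler {
             void onPacket(byte[] payload);
         }

         // ConcurrentHashMap so the receive thread and a session-management
         // thread can both touch the table safely.
         private final Map<InetSocketAddress, PlayerHandler> handlers =
                 new ConcurrentHashMap<>();

         public void register(InetSocketAddress source, PlayerHandler handler) {
             handlers.put(source, handler);
         }

         // Called from the receive loop with the address the socket reported.
         public boolean route(InetSocketAddress source, byte[] payload) {
             PlayerHandler h = handlers.get(source);
             if (h == null) {
                 return false; // unknown sender: drop, or start a login flow
             }
             h.onPacket(payload);
             return true;
         }
     }
     ```

     Keying on the full `InetSocketAddress` (rather than just the IP) is what makes the client's "one outgoing socket" requirement matter: if the client sprayed from many ports, each port would look like a different player.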
  2. "Why do you think that the hash table the OS uses to distribute different ports' traffic to different sockets would be any more efficient than your own hash table? Why do you think that the number of ports would change how much buffering space is in the kernel, or how much capacity your network card has? There's only one network card, right? And if you had 10 sockets with 1 MB of buffer space each, why couldn't you have 1 socket with 10 MB of buffer space just as well? It *may* be that those questions actually have answers that say you should use multiple sockets, but I would bet against it. Start with the simple solution. Complexificate things only when you *know* that you need to. There are many servers today that do hundreds of thousands of packets per second over a single socket with a single thread. (Although most of the ones I know of use libevent and C/C++ rather than Java -- I don't know how much overhead Java adds.) Also, are you trusting a player ID provided in a packet? If so, players may cheat. The best identification of players is by source IP+port, which you get when you call recvfrom() or similar functions."

     I am not saying multiple sockets is the way to go; I was asking for opinions. It seems the tone of your reply is blasting me for that.

     As for the final deployment, I will probably use a data center managed by a third party rather than do my own network configuration, and in that situation I would assume there would be more than one network card on the rack.

     As for the overhead Java adds, I don't know that it is terribly relevant. If the clock cycles the compiled code adds are the app's biggest worry, I will have both a very powerful system and a very big smile on my face. I think the I/O rates across the net will introduce far more overhead. And I am using Java because I can take advantage of some tools rather than reinventing the wheel. I can also precompile the bytecode to native (rather than using JIT) if the need arises.

     As for the player ID, I will point out that it is not that hard to spoof a header in an IP packet and make it seem like it is coming from another source. If you can alter the player ID in the live stream, you can alter the IP and port. The question then becomes one of potentially encrypting the data stream. Is there an encryption technology fast enough not to introduce unmanageable overhead, yet still provide good enough protection of the stream? I believe there are some alternatives to choose from. But really, that is a worry I will tackle later. A journey must be walked a step at a time, not leaped to the end.
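     The quoted advice about identifying senders by source IP+port maps to Java's `DatagramChannel`, whose `receive()` reports the sender's address much as recvfrom() does. A minimal sketch, with loopback addresses and payload as placeholders:

     ```java
     import java.net.InetSocketAddress;
     import java.net.SocketAddress;
     import java.nio.ByteBuffer;
     import java.nio.channels.DatagramChannel;

     // One socket, one receive call: the sender is identified by the
     // source address that receive() returns.
     public class SingleSocketDemo {
         // Receives one datagram into buf and returns the sender's address,
         // i.e. the IP+port pair a server would route on.
         public static SocketAddress receiveOne(DatagramChannel server,
                                                ByteBuffer buf) throws Exception {
             SocketAddress source = server.receive(buf); // blocks until a packet
             buf.flip();
             return source;
         }

         public static void main(String[] args) throws Exception {
             DatagramChannel server = DatagramChannel.open()
                     .bind(new InetSocketAddress("127.0.0.1", 0)); // any free port

             // Throwaway client so the receive call has something to report.
             DatagramChannel client = DatagramChannel.open();
             client.send(ByteBuffer.wrap("hello".getBytes()),
                         server.getLocalAddress());

             ByteBuffer buf = ByteBuffer.allocate(1500); // MTU-sized buffer
             SocketAddress from = receiveOne(server, buf);
             System.out.println("got " + buf.remaining() + " bytes from " + from);

             client.close();
             server.close();
         }
     }
     ```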
  3. Oh I am aware of the other issues. :) But one issue at a time. Thanks for the feedback.
  4. Greetings, this is my first post here, so excuse me if I ask or say something stupid. :)

     I am working on a multiplayer game and am putting together a prototype implementation of the back end. I will be using UDP to support the realtime aspects of player interaction, and I am having something of an internal dilemma. There are a few different ways I could implement the realtime aspects, I am having trouble deciding between them, and I would appreciate some opinions, especially from those with experience.

     Option 1: A single server listener on a particular socket receives UDP messages and has a HashMap of actual destinations inside the server. As a message comes in, it gets routed to the correct processor for that player, using the encoded player ID to identify the destination. The pro is that the implementation is relatively straightforward on the server side. My concern is that the single port will become inundated with tens of thousands of packets per second and cause a processing bottleneck. The routing step itself, while very fast even in Java, also introduces some overhead; the HashMap lookup should be O(1), but it should not be entirely dismissed.

     Option 2: A more complex system whereby the client sends UDP messages to a particular port provided by the server, with a server listening directly on that port. This has the benefit of letting the underlying IP protocol do the routing, which could be faster than the HashMap implementation. Each player would have one of X dedicated ports to send UDP messages to, and no post-receive routing would have to take place.

     Option 3: A hybrid system with many open ports, where each open port handles n clients.
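     A minimal sketch of Option 1's routing step, assuming a hypothetical 4-byte player-ID header at the front of each packet (the actual layout isn't specified in the post):

     ```java
     import java.nio.ByteBuffer;
     import java.util.HashMap;
     import java.util.Map;

     // Single-listener routing: read an encoded player ID from the packet
     // header and hand the rest of the packet to that player's processor.
     public class PlayerIdRouter {
         public interface Processor {
             void process(ByteBuffer payload);
         }

         private final Map<Integer, Processor> processors = new HashMap<>();

         public void register(int playerId, Processor p) {
             processors.put(playerId, p);
         }

         // Expects [int playerId | payload]; returns false for malformed
         // packets or unknown IDs (which a server would just drop).
         public boolean dispatch(ByteBuffer packet) {
             if (packet.remaining() < 4) return false;
             Processor p = processors.get(packet.getInt());
             if (p == null) return false;
             p.process(packet.slice()); // payload without the ID header
             return true;
         }
     }
     ```

     The lookup is the O(1) HashMap step the post describes; note that, per the replies in this thread, trusting the in-packet ID rather than the source address is exactly what makes this variant spoofable without encryption.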
     Common: In either case, the server will send data to the client on a single UDP port that the client has opened through the firewall (though I haven't investigated how well this works with console systems). Since there would be no need for routing on the client side, this should be efficient. There is also a possibility, due to load balancing, that I might have to redirect clients to other servers in the cluster; that can be done with a UDP message changing the destination for client messages.

     Other concerns: I have already ruled out TCP as inappropriate for this application, since strong realtime concurrency is required and old packets are irrelevant.

     However, I would also like to find some way to encrypt the data stream between client and server, possibly using fast AES in a manner similar to the HTTPS handshake. This would introduce decryption overhead and put more pressure on the CPU resources of the single router.

     Any opinions on the preferred strategy (especially if you have actual experience with the downfalls and benefits of implementing these strategies) would be appreciated.
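     As a rough sense of what per-packet encryption could look like, here is a hedged sketch using the JDK's built-in AES-GCM. It assumes a session key has already been agreed in a TLS-like handshake (not shown), and all names are illustrative; GCM also authenticates each packet, which bears on the spoofing concern raised earlier in the thread:

     ```java
     import java.security.SecureRandom;
     import java.util.Arrays;
     import javax.crypto.Cipher;
     import javax.crypto.SecretKey;
     import javax.crypto.spec.GCMParameterSpec;

     // Hypothetical per-packet sealing with AES-GCM: each packet carries a
     // fresh 12-byte nonce followed by ciphertext plus a 128-bit auth tag.
     public class PacketCrypto {
         private static final SecureRandom RNG = new SecureRandom();

         public static byte[] seal(SecretKey key, byte[] plaintext) throws Exception {
             byte[] iv = new byte[12];          // fresh nonce per packet
             RNG.nextBytes(iv);
             Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
             c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
             byte[] ct = c.doFinal(plaintext);
             byte[] packet = new byte[iv.length + ct.length];
             System.arraycopy(iv, 0, packet, 0, iv.length);
             System.arraycopy(ct, 0, packet, iv.length, ct.length);
             return packet;                     // [nonce | ciphertext+tag]
         }

         public static byte[] open(SecretKey key, byte[] packet) throws Exception {
             Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
             c.init(Cipher.DECRYPT_MODE, key,
                    new GCMParameterSpec(128, Arrays.copyOfRange(packet, 0, 12)));
             return c.doFinal(packet, 12, packet.length - 12); // throws if forged
         }
     }
     ```

     The 28-byte overhead (nonce + tag) and one AES pass per packet is typically small next to network I/O costs, though that would need measuring under real load.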