hplus0603

Moderator
  • Content Count

    12238
  • Joined

  • Last visited

  • Days Won

    1

hplus0603 last won the day on April 15

hplus0603 had the most liked content!

Community Reputation

11509 Excellent

7 Followers

About hplus0603

  • Rank
    Moderator - Multiplayer and Network Programming

Personal Information

  • Role
    DevOps
    Programmer
    Technical Director
  • Interests
    DevOps
    Programming

Social

  • Twitter
    jwatte
  • Github
    jwatte


  1. hplus0603

    Pub/Sub for game Servers... Are you?

    I think you don't have enough experience yet to realize that the more things change, the more they stay the same. Yes, Redis wasn't available in 2010. But, you know what? Redis can't magically do something that code written in 2010 couldn't. You had to do a little more yourself, but given the primitives available in Erlang, not a lot more. And the solution in Erlang scales horizontally by adding more nodes, whereas Redis does not (Redis Cluster is a joke, technically speaking.) (Yes, you can do application-level sharding for particular use cases.)

    If you have a specific kind of game that you want to build, with specific use cases and requirements that aren't met by existing technology, you may be on to something! If you're just trying to "build a system," without any kind of strong evolutionary pressure on the features and implementation of that system, then experience (not just mine!) says you won't build something that's all that useful. The reason is that many billions of dollars, and tens of thousands of man-years, are spent on networked games every year, and the market will already have explored most implementation nooks and crannies for the kinds of games that have already been generally funded and delivered. It doesn't matter whether it's games, financial trading, car maintenance, or movie production -- if there's already a large market, and you don't have a very specific use case that's not currently served by that market, then you're quite likely not going to make something successful.

    Or to put it another way: "do things differently" is not particularly compelling and generally doesn't build successful projects. "Do different things" is where it's possible to truly innovate and push the envelope of what's possible. And sometimes you need to do things differently to be able to do the different things, but it's the different things that are the reason you win.

    That being said, building systems is fun and educational. As long as you learn the right lessons, and draw the right conclusions, testing a bunch of things and pushing them until they break is always a good way of gaining more experience. After all, good choices come from experience, and experience comes from bad choices 😄
  2. hplus0603

    Pub/Sub for game Servers... Are you?

    "horses for courses" also means "choose the right tool/engineer/solution for the job." (There are some other sayings that are more colorful on that topic.) Everything cool in computers today was probably invented by IBM for mainframes in the '70s. Virtual machines? Check. Container deployments? Check. RISC? Check. Message queueing systems? Check. Untyped prototyping scripting languages? No, that was invented in the '50s. Six years might feel like a long time in an area like, say, front-end JavaScript frameworks, where the platform you're building on, and the businesses doing the building, all evolve daily, because they're not settled yet. Physical networking, however, has had a lot more development going into it. It's not like the fundamental differences between TCP and UDP, or the use-cases they were designed for, have suddenly changed, because the speed of light or the propagation of electricity in wires seem to be exactly the same now, as then :-) So, crossbar. comes from electric engineering, where it's a "matrix" or "patch panel" that lets you connect any thing to any other thing. I've often seen it used for various kinds of rendez-vous primitives, such as the bottom level of a message queue broker implementation.
  3. hplus0603

    Pub/Sub for game Servers... Are you?

    The Erlang solution got started in 2010 or so, and shipped for real in ... 2012? It's still going strong and doing its thing, so I don't think it's "older technology." It's seen at least one major upgrade, to add redundancy in places it didn't have it. (Btw, I think the WhatsApp team is almost entirely on Erlang) It really is an example of "horses for courses."
  4. hplus0603

    Pub/Sub for game Servers... Are you?

    I've personally tried other things 🙂 The closest I got to something working was a TCP-based session protocol/crossbar for messaging, which ended up at < 100 ms latencies for hundreds of thousands of users (real, not simulated.) In the end, we ended up using it more for web service data invalidation than actual game data. (This was an Erlang cluster, connected to through a variety of WebSocket, plain SSL socket, and HTTP long-polling transports.) It used the topic-name subscription model, but had the benefit of being largely chat-group-like in ordering; no physical proximity there. Access control was by far the most annoying thing, using the most resources.

    So, when I say "I've seen only two systems actually work out," it's because, among all the systems I've seen, those are the two that have longevity. I've kept up with a variety of engines, libraries, post-mortems, and conference sessions. Note I'm not saying "UDP or bust" -- obviously, the StarCraft lockstep model over TCP is fine, although again, there the players are in discrete little clumps, much like chat rooms.

    If you're building Hearthstone-like games, or Clash of Clans-like games, or StarCraft-like games, then what you're suggesting sounds fine. But maybe now is the right time to actually tell us in a little more detail what particular kinds of games you're targeting? If you're targeting Unreal-style games, or PlanetSide-style games, or GTA-V-Online-style games, or Destiny-style games/matchmakers, then I don't see how your approach will be good enough to displace the task-specific libraries, for example. Hence: if you go with option 2, pick a genre!

    In the last two large games I helped build the engine for in some way:

    1) Social online 3D world: pub/sub on top of a custom Erlang cluster (started way before Redis was even a thing.) It worked out OK, mainly because there was very little in the way of physics/game rules -- the use case was very similar to "chat rooms" and "stock quotes," where obviously this model works well.

    2) Large multi-game world where player groups "surf" game instances, and each game instance is fully scriptable/customizable by users: pub/sub doesn't work for game mechanics; networking was built on top of RakNet with heavy customization. (Additional gnarl: some games allowed client-authoritative state.)
  5. hplus0603

    Pub/Sub for game Servers... Are you?

    Redis is not RAM; Redis is network-attached RAM, which is an order of magnitude less efficient than co-locating the RAM with the routing/signaling server. This is the layered, web-service-like view of the world, and I have never seen that work out for a significant game. The reason is that games have significantly lower latency bounds than web services, and also see significant gains from small efficiencies that end up letting you push more active players onto the same server ("RAM crossbar.") Hence, games that are dependent on their networking implementation always end up implementing their own system that lets you blend content, encoding, visibility, and transport into a smart cross-layer algorithm (a sketch of what that blending looks like follows below.)

    The two kinds of game networking libraries/frameworks/systems I've seen are:

    1. Very low-level libraries (ENet, RakNet, etc.) that deal with packet addressing, sequencing, NAT punch-through, and perhaps serialization.

    2. Full-stack libraries, where the definition and assumptions of the library have to match your game mechanics. The built-in networking in Unreal Engine, for example, falls in this bucket.

    When a game developer builds a networking system, it always ends up being option 2, and it may very well be built on top of some library from option 1, but there really isn't a layering or separation in between that actually makes sense and is efficient enough to provide as a library. These are the attractor states. If you want to play in option 2, you have to pick a particular networking "genre" and impose a lot of opinion on the user of your system.
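    To make the "cross-layer" point concrete, here is a minimal sketch (my own hypothetical illustration, not any real library; the field names, the relevance formula, and the MTU budget are all assumptions) of per-client packet assembly where visibility, prioritization, encoding, and the transport budget feed one decision instead of living in separate layers:

    ```python
    # Hypothetical sketch: build one outgoing datagram per client per tick,
    # mixing visibility, priority, and encoding choices in a single pass.
    import math
    import struct

    MTU_BUDGET = 1200  # bytes of payload allowed per datagram (assumption)

    def relevance(client, entity):
        """Closer entities are more relevant; 0 means 'do not send at all'."""
        dist = math.hypot(entity["x"] - client["x"], entity["y"] - client["y"])
        return 0.0 if dist > client["view_radius"] else 1.0 / (1.0 + dist)

    def encode_entity(entity, coarse):
        """Encoding depends on relevance: low-relevance entities get fewer bytes."""
        if coarse:
            # quantized position only
            return struct.pack("<Hhh", entity["id"], int(entity["x"]), int(entity["y"]))
        # full-precision position and velocity
        return struct.pack("<Hffff", entity["id"], entity["x"], entity["y"],
                           entity["vx"], entity["vy"])

    def build_packet(client, entities):
        """Fill one datagram with the most relevant entities for this client."""
        scored = sorted(((relevance(client, e), e) for e in entities),
                        key=lambda pair: pair[0], reverse=True)
        payload = bytearray()
        for score, entity in scored:
            if score <= 0.0:
                break  # everything after this is outside the view radius
            blob = encode_entity(entity, coarse=(score < 0.05))
            if len(payload) + len(blob) > MTU_BUDGET:
                break  # packet full; lower-priority entities wait for the next tick
            payload += blob
        return bytes(payload)
    ```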
  6. hplus0603

    Pub/Sub for game Servers... Are you?

    At the very highest level: as long as you meet your goals, it doesn't matter how you do so.

    At a slightly lower level: TCP-based systems and UDP-based systems have different scalability and performance concerns. Most of the "web services" people don't care about UDP because it's not a good match for what they need, which means there are fewer easily accessible open-source solutions for UDP than for TCP messaging crossbars, even though UDP may be better for some particular game (it depends on the game.)

    At a slightly lower level than that, implementation concerns like "are clients easily available for the platforms I care about" and "is there good support for expressing and efficiently encoding the kinds of data I care about" may point you towards one solution versus another, assuming that they both support the scalability goals that you need.

    And, at the bottom, you end up with "does this system have enough hooks for me to do content-based routing and optimization, and to optimize the mesh for the needs of my specific game as I grow." Most games never get to have that problem, though, because they die before they get that far, so it's usually not super high on the checklist of necessary properties.

    There are many cases where a messaging crossbar is useful, but they are all slightly different. A "chat room" has a well-defined set of people who all need to see each other, which means that a topic-based pub/sub system is a great match. A "game world movement" system has less of that, because in a world where player A only sees player B, player B sees A on one side and C on the other side, and player C only sees player B (because of distance-based cut-offs,) there is no single "topic" for getting the data you want; instead, what each player sees is custom to that player.

    You can build this as a set of topics and subscribe players to multiple topics -- A subscribes to B, B subscribes to A and C, and C subscribes to B, in the above case. Then you need to create/tear down subscriptions when people move around, which ends up creating a lot of churn that you have to optimize the system for. You also need to make sure the system doesn't allow some client to just subscribe to all possible topics and "cheat" by seeing the entire world.

    An alternative is to build the crossbar in RAM in a game server, where queries on "who can see what" are efficient and can be done very frequently, and just send the stream of "data this client needs to see" without worrying about establishing/tearing down different topic subscriptions (a small sketch contrasting the two approaches follows below.) This ends up generally being lighter weight long term, and is thus the architecture of many MMO games that focus on world position/movement as a game mechanic. (As opposed to, say, Hearthstone-style card games or whatever.)

    If you have more detailed requirements for all of the parameters that matter (encoding types, lookaside capability, subscription churn rates, and so forth,) then you can make a better call on whether to use a traditional topic-based broker architecture, or something more custom.
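    Here is a small sketch (my own illustration; the player names, positions, and view radius are made-up assumptions) of the in-RAM "who can see whom" query, plus the subscription churn a topic-based broker would have to absorb to mirror the same result:

    ```python
    # In-RAM crossbar vs. topic subscriptions, using the A/B/C example above.
    import math

    VIEW_RADIUS = 50.0  # distance-based cut-off (assumption)

    players = {
        "A": {"x": 0.0,  "y": 0.0},
        "B": {"x": 30.0, "y": 0.0},
        "C": {"x": 60.0, "y": 0.0},
    }

    def visible_pairs(players):
        """In-RAM model: recompute visibility every tick; nothing to subscribe to."""
        pairs = {}
        for name, p in players.items():
            pairs[name] = [
                other for other, q in players.items()
                if other != name
                and math.hypot(p["x"] - q["x"], p["y"] - q["y"]) <= VIEW_RADIUS
            ]
        return pairs

    def subscription_churn(old, new):
        """Topic model: subscriptions to add/remove whenever visibility changes."""
        adds, removes = {}, {}
        for name in new:
            adds[name] = sorted(set(new[name]) - set(old.get(name, [])))
            removes[name] = sorted(set(old.get(name, [])) - set(new[name]))
        return adds, removes

    before = visible_pairs(players)        # A<->B, B<->C visible; A and C are not
    players["C"]["x"] = 40.0               # C walks towards A and B
    after = visible_pairs(players)
    print(after)                           # {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
    print(subscription_churn(before, after))  # the broker must now add A->C and C->A
    ```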
  7. hplus0603

    Auth tokens

    The data in the token is potentially encrypted and signed, but that doesn't protect it against interception and re-use in transit, unless you also have transport security (such as HTTPS.)
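    As a concrete illustration (a hedged sketch of my own, with made-up secret and field names, not anything from the thread), an HMAC-signed token resists tampering, and an expiry only narrows the replay window; nothing in the token itself stops someone who intercepted it on a plaintext connection from presenting it again:

    ```python
    # Minimal signed-token sketch: verification proves integrity, not freshness.
    import hashlib
    import hmac
    import json
    import time

    SECRET = b"server-side-secret"  # assumption: known only to the auth/game servers

    def issue_token(user_id):
        body = json.dumps({"uid": user_id, "exp": int(time.time()) + 300}).encode()
        sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        return body + b"." + sig.encode()

    def verify_token(token):
        body, _, sig = token.rpartition(b".")
        expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(sig, expected):
            return None                    # tampered with
        claims = json.loads(body)
        if claims["exp"] < time.time():
            return None                    # expired -- this only limits the replay window
        return claims

    # An eavesdropper who captures this token in transit can replay it until "exp"
    # passes; transport security (TLS) is what protects it on the wire.
    token = issue_token("player42")
    assert verify_token(token) is not None
    ```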
  8. CORS is a layer-7 protocol concern. It sounds like the OP has a layer-3 or layer-4 networking issue. Other than "there are things that sometimes don't make data available where you think it should be," CORS has nothing to do with TCP/UDP/IP layer problems.

     UPnP is terrible. It requires users to trust that arbitrary devices on the inside should be able to arbitrarily configure rules for your firewall. This is not a good security model, which is why it's often off by default, or turned off by people who care about security. If your game depends on UPnP, then you're in trouble.

     I think there are a few basic networking rules that need to be clarified here:

     1) A typical home router implements NAT: it has one public IP address on the "outside," and a DHCP server that hands out private IP addresses to the inside.

     2) When a machine on the inside wants to talk to the outside, it goes through the router, which re-writes the packet's source IP:port from the inside values to the outside IP address and some randomly chosen source port (which will hopefully stay persistent for the given inside IP:port tuple.)

     3) When traffic comes back to the re-written IP:port, the router re-writes it going the other way, and forwards it to the inside IP:port pair.

     4) For IP protocols without ports, such as the ICMP used for "ping," it only re-writes the IP address, but because there's no other identifying information, it can't tell the difference between answers intended for more than one internal machine. Thus, if you have two devices on the inside which both "ping" the same outside IP at the same time, only one of them will typically see the responses (unless the firewall has heuristics to try to make that case work.)

     5) Port forwarding basically just lets you add another entry to the router's NAT table, saying "packets coming in to this port on the external IP should be re-written to this other internal IP:port." The main difference is that this entry is persistent, rather than created when the internal host tries to communicate with the outside. Connecting to your router's admin interface and setting up a persistent port forward to your host's internal IP address is more robust than UPnP. However, make sure your host doesn't change its internal IP between sessions! (This can happen when DHCP leases expire -- you may want to look into assigning static IP addresses, or "locking" DHCP leases, if your router supports this.)

     6) Because these forwarding rules are dynamically created based on observed connections, two clients, both behind different NAT firewalls, can actually talk directly to each other, if we can only arrange for the session IP:port table to have the right rewrite rules. This is what NAT punch-through (with an external introducer server) is for. Read up on the STUN and TURN protocols for one approach to make this work. (A small sketch of the introducer idea follows after this list.)

     7) (This may be one of the problems above.) Many routers don't like "hairpin" connections, where you try to connect to the external IP address of the router from the internal side. It's called "hairpin" because it requires a 180-degree turn: the packet arrives at the router from the inside, which should mean it gets re-written and forwarded to the outside interface, but it's targeting the actual address of the outside interface (your public IP,) and thus the packet needs to both go out on, and come back in on, the same interface, requiring a "sharp 180-degree turn" in the software. (Obviously it shouldn't actually send the packet on the hardware.)
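     As promised in point 6, here is a minimal sketch (my own illustration; the port number and message format are made up, and real systems use STUN/TURN/ICE, which handle many more cases) of the "external introducer" flow behind UDP NAT punch-through:

     ```python
     # Two clients register with a publicly reachable introducer; the introducer
     # tells each one the public IP:port it observed for the other; both then
     # send directly, which opens the NAT mappings on each side.
     import socket

     def run_introducer(port=40000):
         """Pairs up the first two clients that say hello and introduces them."""
         sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
         sock.bind(("0.0.0.0", port))
         waiting = None
         while True:
             data, addr = sock.recvfrom(1024)      # addr is the client's *public* endpoint
             if data != b"hello":
                 continue
             if waiting is None:
                 waiting = addr
             else:
                 # Tell each peer the other's observed public IP:port.
                 sock.sendto(f"peer {addr[0]} {addr[1]}".encode(), waiting)
                 sock.sendto(f"peer {waiting[0]} {waiting[1]}".encode(), addr)
                 waiting = None

     def run_client(introducer_ip, port=40000):
         sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
         sock.sendto(b"hello", (introducer_ip, port))  # creates our own NAT mapping
         data, _ = sock.recvfrom(1024)                 # "peer <ip> <port>" from introducer
         _, ip, peer_port = data.decode().split()
         peer = (ip, int(peer_port))
         # Both sides send first; whichever packet arrives after the far side has
         # also sent (and thus opened its own NAT mapping) will get through.
         for _ in range(5):
             sock.sendto(b"punch", peer)
         data, addr = sock.recvfrom(1024)
         print("direct packet from", addr, data)
     ```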
  9. Also, there are worse things than making a game that gets hacked. Making a game that nobody cares about is worse. Or not finishing the game at all, which is the most common problem :-)
  10. My experience (and following other people on this board over time) suggests that you'd probably be better off if you did include simulation frame numbers with data, but you do what you do :-) (A tiny sketch of what that tagging can look like follows below.)

     And, after all, Roblox defaults to client-authoritative, because that's an easier model for developers to get started with, although it allows you to move authority to servers if you choose to. And Roblox is now a $2.5 billion company, so it's clear that there's room for all kinds in the world! Unreal Engine also allows you to be client-authoritative if you want, and some games are. If those games get popular, they get hacked. Oh, well.

     Btw, if all you want to do is make a multiplayer game, you could do worse than downloading Roblox Studio and trying it out. They take care of all the physics/rendering/matchmaking/networking, and you write your game in Lua. It's not for everyone, but it's super simple to get something real written!
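     Here is the tiny sketch mentioned above (my own illustration; the header layout and field sizes are assumptions, not any particular engine's wire format) of tagging every update with the simulation frame it belongs to, so the receiver can order, deduplicate, and reconcile:

     ```python
     # Tag each state/input message with its simulation frame number.
     import struct

     HEADER = struct.Struct("<IHB")  # frame number, entity id, message type (assumption)

     def pack_update(frame, entity_id, msg_type, payload: bytes) -> bytes:
         return HEADER.pack(frame, entity_id, msg_type) + payload

     def unpack_update(datagram: bytes):
         frame, entity_id, msg_type = HEADER.unpack_from(datagram)
         return frame, entity_id, msg_type, datagram[HEADER.size:]

     # Receiver side: drop anything older than what has already been applied.
     last_applied_frame = {}

     def should_apply(frame, entity_id):
         if frame <= last_applied_frame.get(entity_id, -1):
             return False        # stale or duplicate packet
         last_applied_frame[entity_id] = frame
         return True
     ```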
  11. Riak has a lot higher overhead per transaction than Redis. It's a different kind of beast -- more of a document database (like Amazon Dynamo, or Cassandra/Scylla,) than a network-attached-RAM server. Redis is more like memcached, although with persistence. But still not durability! Redis will checkpoint, and you will lose anything changed after the last checkpoint when you crash. (It also has other modes, but they too have a variety of problems.)

     If you need actual persistence, just use MySQL or perhaps Postgres. Or one of the high-performance reliable key/value databases like RocksDB or FoundationDB if you don't need advanced schemas/indexing. (Just promise me you won't use MongoDB -- it promises many things, and ends up failing anyone who tries to scale a business and evolve an application with it.)

     Regarding shards that move with players: two players may want to go in different directions, so the only way you can avoid having to move players between shards would be to have one shard per player. There are network topologies that work exactly like that -- the original military DIS standard, as well as the over-engineered HLA replacement, are structured like that. They end up bottlenecking on the network, typically at far fewer simulated entities than you could squish into a single process if you used static geographic sharding. (A small sketch of static geographic sharding follows below.)

     Btw, I can see how the word "shard" is overloaded. In systems architecture, "vertical sharding" is when you separate different functional areas onto different systems that you can scale individually. "Horizontal sharding" is when you distribute the load within a single system onto many different machines (in games, typically geographically.) What players of MMORPGs call a "shard" is generally called an "instance" in systems architecture -- you spin up multiple full copies of whatever your system (game) is, and let players choose one instance to interact with. Instancing is kind of like another level of horizontal sharding, except not really, because the workload across instances is not interchangeable; a datum (player character) only makes sense within its instance.
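     Here is the sketch of static geographic sharding mentioned above (my own illustration; the cell size, grid dimensions, and border width are all assumptions): each fixed region of the world maps to one shard process, and entities near a border also get replicated into the neighboring shard.

     ```python
     # Static geographic "horizontal sharding": position decides which process owns an entity.
     CELL_SIZE = 256.0     # world units per grid cell (assumption)
     SHARD_GRID = (4, 4)   # 4x4 = 16 shard processes covering the world (assumption)

     def shard_for_position(x, y):
         """Return the (col, row) of the shard responsible for this world position."""
         col = max(min(int(x // CELL_SIZE), SHARD_GRID[0] - 1), 0)
         row = max(min(int(y // CELL_SIZE), SHARD_GRID[1] - 1), 0)
         return (col, row)

     def neighbor_shards(x, y, border=16.0):
         """Shards that also need a replica because the entity is near a border."""
         shards = set()
         for dx in (-border, 0.0, border):
             for dy in (-border, 0.0, border):
                 shards.add(shard_for_position(x + dx, y + dy))
         return shards

     # An entity at (250, 10) is owned by shard (0, 0) but sits close enough to the
     # vertical border that it is also replicated into shard (1, 0):
     print(shard_for_position(250.0, 10.0))   # (0, 0)
     print(neighbor_shards(250.0, 10.0))      # {(0, 0), (1, 0)}
     ```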
  12. My experience from massive simulations is that any attempt to "overlay" simulations (rather than clustering them geographically) will fail to scale, because the communications overhead (networking) is orders of magnitude higher latency and lower bandwidth than NUMA RAM. Giant persistent worlds are totally possible, and have been built for 20 years, but they all end up sharding by geography and replicating entities across shard borders in one way or another. If you don't need any kind of physical simulation or physically-based rules (enemy targeting, spells, melee combat, etc.) then you can probably go with a fully sharded "web style" network topology, but then, what would your game be? The world can only see so many Hearthstones or Clash of Clans.

     Anyway, I say Redis is great because I've used it for a number of different use cases, and I've seen several other companies use it, and it's great as long as you stay within a single machine. Once you need a cluster of machines, many of the things that are good about Redis, such as transactional cross-key updates, stop working. If all you use is pub/sub, and you shard the pub/sub key space across separate Redis instances, that can scale higher, but it has the drawback that you get NxM network connections -- each interested subscriber process needs to connect to each Redis instance, and because both the Redis instances and the subscribers scale with the number of players, the connection count ends up growing as O(N-squared,) albeit with a very small constant. (A sketch of that sharded pub/sub shape follows below.)

     Redis is less great at persistence, because once the data store gets big, you need machines with tons of RAM, and forking the Redis process to persist it will lock up the kernel for seconds at a time if you have enough RAM. You can somewhat alleviate this by sharding across tons of small Redis instances (say, 32 GB each or less,) but then you instead have a bigger "M" problem for connections. Use a real database for persistent data.
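     Here is the sketch of the sharded pub/sub shape mentioned above (my own illustration; the hostnames, shard count, and channel naming are assumptions, and it uses the redis-py client): note the NxM shape, where every subscriber process holds one connection per Redis instance.

     ```python
     # Sharding a pub/sub key space across several Redis instances (pip install redis).
     import zlib
     import redis

     REDIS_SHARDS = [
         {"host": "redis-0.internal", "port": 6379},   # hypothetical hosts
         {"host": "redis-1.internal", "port": 6379},
         {"host": "redis-2.internal", "port": 6379},
     ]

     def shard_index(channel: str) -> int:
         """Stable hash of the channel name picks which Redis instance owns it."""
         return zlib.crc32(channel.encode()) % len(REDIS_SHARDS)

     class ShardedPubSub:
         def __init__(self):
             # The NxM part: every process holds one connection per Redis shard.
             self.clients = [redis.Redis(**cfg) for cfg in REDIS_SHARDS]

         def publish(self, channel: str, message: bytes):
             self.clients[shard_index(channel)].publish(channel, message)

         def subscribe(self, channels):
             """Returns one PubSub object per shard, subscribed to its channels."""
             pubsubs = [client.pubsub() for client in self.clients]
             for channel in channels:
                 pubsubs[shard_index(channel)].subscribe(channel)
             return pubsubs
     ```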
  13. Redis is great, as long as you don't need to scale past a single machine per game instance. If you don't need persistence, you could just use pub/sub and leave it at that. Another option for pub/sub is ZeroMQ, btw. It's decidedly non-persistent, and you may need to build your own central broker that actually lets everyone else rendezvous for pub/sub, if you don't want everybody to connect to an existing server.

     But, more importantly: why do you have multiple different servers that need to know player location? Why not build a single server process? The constant-factor overhead of trying to build a "scalable" solution that distributes frequently-changing, CPU-decided data (like physics simulations) across multiple machines is gigantic. Game servers are not web applications. How about putting all the code that needs player location onto a single machine, and just reading the locations when you need them? (A sketch of that follows below.) Once you run out of vertical scalability on that machine (at 1,000 or 10,000 or 100,000 players,) then shard your world.
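     Here is the sketch mentioned above (my own illustration; class and player names are made up) of the "single server process" suggestion: every system that needs player locations lives in the same process and reads them directly from memory, instead of subscribing to them over a network.

     ```python
     # One process: reading a location is a dict lookup, not a pub/sub round-trip.
     class World:
         def __init__(self):
             self.positions = {}            # player_id -> (x, y)

         def move(self, player_id, x, y):
             self.positions[player_id] = (x, y)

     class ZoneChat:
         """Another in-process system that reads locations directly from World."""
         def __init__(self, world, radius=30.0):
             self.world, self.radius = world, radius

         def recipients(self, sender_id):
             sx, sy = self.world.positions[sender_id]
             return [
                 pid for pid, (x, y) in self.world.positions.items()
                 if pid != sender_id
                 and (x - sx) ** 2 + (y - sy) ** 2 <= self.radius ** 2
             ]

     world = World()
     world.move("alice", 0.0, 0.0)
     world.move("bob", 10.0, 0.0)
     world.move("carol", 100.0, 0.0)
     print(ZoneChat(world).recipients("alice"))   # ['bob']
     ```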
  14. hplus0603

    Games and blockchain.

    Two years ago, maybe. These days, not so much ...
  15. hplus0603

    Games and blockchain.

    Why would you use a blockchain to raise money when there is Kickstarter? Blockchains solve exactly one problem: how to reach consensus between multiple, disparate actors who do not inherently trust each other.