



About hplus0603

  • Rank
    Moderator - Multiplayer and Network Programming

Personal Information

  • Location
    Redwood City, CA
  1. You should look up the "priority queue" data structure, which is generally implemented as a heap. This means that insertion is O(log N), and peeking at the next event to run is O(1) (removing it is O(log N)). Even if the queue is just a big ol' vector, insertion will be O(log N) for searching (binary search) plus O(N) for the memcpy to make space for the event in the vector; it turns out memcpy is very fast compared to chasing pointers, so that O(N) is unlikely to dominate. Compared to scanning all players and other entities for all possible events every simulation tick, this will be a lot more efficient.
  2. Most game engines will have a priority queue of timed events. Thus, you won't check all the things that could happen, "have you happened yet?" Instead, you'd insert a record in the event queue saying "call this object function at this game time." The game loop will then, in each simulation tick, advance time, then run all the event records that have a time less than or equal to the new game time. That way, when the player dies, you simply insert a record saying "disconnect this player after 7.3 seconds" and it will happen at the right time. This mechanism is often used for other things, too, such as the expiration time of buff spells, expiration of invites, and so forth.
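A minimal sketch of such an event queue in Python (the `EventQueue` name and API are my own illustration, not from any particular engine):

```python
import heapq
import itertools

class EventQueue:
    """Priority queue of timed events, backed by a binary heap.

    Insertion is O(log N); peeking at the next event is O(1).
    A monotonically increasing sequence number breaks ties so that
    events scheduled for the same time run in insertion order.
    """
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def schedule(self, fire_time, callback):
        heapq.heappush(self._heap, (fire_time, next(self._seq), callback))

    def run_due(self, now):
        """Run every event whose time is <= the current game time."""
        while self._heap and self._heap[0][0] <= now:
            _, _, callback = heapq.heappop(self._heap)
            callback()

# Example: disconnect a player 7.3 seconds after death.
q = EventQueue()
log = []
q.schedule(7.3, lambda: log.append("disconnect player"))
q.run_due(5.0)   # nothing due yet
q.run_due(8.0)   # game time has passed 7.3, so the disconnect fires
```

The game loop would call `run_due(game_time)` once per simulation tick, after advancing time.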
  3. There are two kinds of tokens:
     1) A long, random string, which you use as a primary key on the server to look up user ID, expiration date, and so forth. A good way to generate this is to generate 32 bytes of random data (using /dev/urandom or CryptGenRandom) and then encode them as base64 or base85, so that the token consists entirely of printable ASCII characters.
     2) A token that consists of some "real" data and a signature. For example, the token could be "version:userid:generatedtime:signature", where "signature" is computed as HMAC(version:userid:generatedtime, secret-key) and only your servers know the secret-key. Your servers can then verify this token by looking at the generated time, making sure it's in range, and then verifying the signature. The signature is likely computed using SHA256 and encoded similarly to the long random string in the other case. You'd use "version" to be able to add more data in a compatible way later, or to rotate the secret every so often if you fear it leaking out.
     The benefit of 2) is that it doesn't require a database lookup to generate or validate. The benefit of 1) is that it's simpler, and there's no secret to share among your servers (and that someone could steal).
     These tokens are very similar to the session cookies that web sites store when you log in. Web browsers used to store cookies in plain files on the user's hard disk. With "browser sync," the mechanism is a little more involved these days, but in principle, if someone can already infect your computer to the point where they can read your files, you can't protect against them anyway, so encrypting on the local machine is probably not necessary.
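Both kinds can be sketched in a few lines of Python (the secret key and function names here are illustrative assumptions):

```python
import base64
import hashlib
import hmac
import os

SECRET_KEY = b"only-your-servers-know-this"  # assumption, for illustration

def make_random_token():
    """Kind 1: 32 random bytes, base64-encoded to printable ASCII."""
    return base64.urlsafe_b64encode(os.urandom(32)).decode("ascii")

def make_signed_token(version, user_id, generated_time):
    """Kind 2: "version:userid:generatedtime:signature" with HMAC-SHA256."""
    payload = f"{version}:{user_id}:{generated_time}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_signed_token(token):
    """Recompute the signature over everything before the last ':' and
    compare in constant time. (A real check would also validate the time.)"""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Note the constant-time comparison: comparing signatures with `==` can leak timing information.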
  4. Users also migrate between device families (I used to have a iPhone 6+, now I have a OnePlus 3T). Users also use multiple modes (I watch Netflix on PlayStation, laptop, phone, and media PC). Users may also use iCloud, or Dropbox, or OneDrive, or some other such mechanism on their computers. So, yes, the mobile platforms have things that can help a little bit. Similarly, the platforms also already have logins/accounts! You can use the Google account, or the iTunes account, as an identifier, even without using a specific device. It all depends on what your specific use cases are, and what kind of permissions/dialogs you want to have your user accept before using the app.
  5. Kylotan is right: you don't want to store the password itself on the client. Instead, have a table on the server of "userid -> issuetime -> issueip -> token" which stores strong random tokens (256 bit). Presenting this token to the server serves as authorization until the token is revoked (by being deleted from the table). Separately, store the bcrypt() or scrypt() of the salted password on the server. issuetime is useful for expiring very old tokens (your protocol may detect tokens that are about to expire and issue a newer token); issueip is useful for logging purposes, to ferret out whichever sock-puppet spammer ring is active today.
     I would probably not generate any of those things on the client, because the client still needs to upload the data to the servers, and the client can't know which names are unique. Instead, I would have a "play now" service on the server which generates a unique name (the server can know this; the client can't!), a password, and an authorization token, and returns the needed data to the client. You could also take this opportunity to set a cookie on the client with the appropriate username (and perhaps the token, too), assuming you use the client's web stack to connect to the server.
     You probably also want to design the tutorial such that the user has a little bit of fun, then pop up a dialog saying "to get your 100 free gems, enter an email address or phone number, and we will send you your reward code so you can keep playing!" Don't actually send the password in email/SMS; instead, just use the email/SMS to support a traditional password-reset flow. Finally, while "guest1234" is straightforward and somewhat traditional, I prefer auto-generated names like "BlueAppleDampPony", but that's just me :-)
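A sketch of the server-side pieces in Python: scrypt-with-salt password storage plus the token table (the table layout and scrypt parameters are my assumptions):

```python
import hashlib
import os
import secrets
import time

def hash_password(password):
    """Store salt + scrypt(password, salt) on the server, never the password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return secrets.compare_digest(candidate, digest)

# Server-side token table: token -> (userid, issuetime, issueip).
tokens = {}

def issue_token(user_id, issue_ip):
    token = secrets.token_urlsafe(32)   # 256 bits of strong randomness
    tokens[token] = (user_id, time.time(), issue_ip)
    return token

def revoke_token(token):
    """Deleting the row is all it takes to revoke authorization."""
    tokens.pop(token, None)
```

In production the table would live in a database rather than a dict, but the shape is the same.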
  6. Using tested libraries is good. Generally, an attack against a game focuses on the player and the player's device, or perhaps the servers, rather than the connection in between. Regarding returning "a few IP addresses" from the web server: I'd suggest that returning DNS names would be more standard than returning IP addresses. Other than that, what you describe can work fine.
  7. I'm assuming the question you're trying to solve in 1/2 is "how do clients find the lobby server to talk to?" Hard-coding IP addresses is a really bad idea, because even if you have a "static IP," you will sometimes be forced to change it because of ISP re-organization, or you may want to change ISPs, or whatnot. Returning raw IP addresses from POST requests is slightly less bad, but still has problems for users who require transparent proxies with DNS masquerading, or if you grow to the scale where you want to do regional IP balancing, for example. The solution the internet is designed around is option 3: you define a domain name for your lobby server, such as "lobby.mygame.com", and let clients connect to this name. The DNS infrastructure is robust, well understood, and highly compatible across the world.
     I'm assuming the question you're trying to solve in 3 is "how does a lobby server know who a player is and what their stats are?" Lobby servers are just another kind of application server. You can build a two-tier system (lobby servers talk to your databases) or a three-tier system (lobby servers talk to web API servers, which talk to databases) depending on how much effort you want to spend on it. If a user logs in on a website first, you may want that website to generate a session token: typically some tuple containing the user ID and token issue time, as well as an HMAC of user-id:issue-time:server-secret. When a client connects to another server in your cluster, it can provide this session token, and the server can verify that the time is reasonable and that the HMAC matches when given the server secret (which the client doesn't know), and thus that the user ID is also legitimate. This way, the client doesn't need to keep sending its password to the servers.
If the users log in to lobby servers right away, then you can make the lobby servers issue the same kind of ticket, or you can just structure your TCP protocol to require authentication up-front (name+password.) Presumably you use TLS for this.
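A sketch of verifying such a session token on another server in the cluster, assuming a "userid:issuetime:signature" layout (the names, one-hour max age, and exact format are my own choices):

```python
import hashlib
import hmac
import time

SERVER_SECRET = b"cluster-wide secret"   # assumption: known only to your servers
MAX_AGE = 3600                           # accept tickets up to an hour old

def make_session_ticket(user_id, issue_time):
    payload = f"{user_id}:{issue_time}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_session_ticket(ticket, now=None):
    """Return the user id if the ticket is fresh and untampered, else None."""
    now = time.time() if now is None else now
    try:
        user_id, issue_time_s, sig = ticket.split(":")
        issue_time = float(issue_time_s)
    except ValueError:
        return None
    if not (now - MAX_AGE <= issue_time <= now):
        return None  # too old, or claims to come from the future
    payload = f"{user_id}:{issue_time}"
    expected = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None
```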
  8. For Unreal Engine, with Blueprints, you will likely also want to read this: https://wiki.unrealengine.com/Steam,_Using_Online_Subsystem https://docs.unrealengine.com/latest/INT/Programming/Online/Steam/ Note that Steam won't host your actual game servers for you; they'll just host social features and matchmaking. You will have to code your game so that one player "hosts" the game server, and other players "connect" to that hosting player. Finding other players (who are hosting) can be done through the Steam servers.
  9. And why do you think the attacker would not patch the "get running processes" call to filter out the processes they don't want you to know about? Whatever you do on the client can be undone by the attacker, because they control the code running on their computer. If the attacker is only using pre-existing cheat tools, then you can prevent those tools from being effective by knowing what they do and implementing specific counter-measures. (Checksums and scrambled values are helpful against tools that scan the process for known values, for example.)
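As an illustration of the "scrambled values" counter-measure, here is a toy Python sketch; a real game would do this in native code, and it only raises the bar against naive memory scanners rather than stopping a determined attacker:

```python
import random

class ScrambledInt:
    """Store a value XOR'd with a per-instance mask, so a tool scanning
    process memory for the plain value (e.g. current health = 100) won't
    find it. The mask changes on every write to defeat diff-based scans."""
    def __init__(self, value):
        self._mask = random.getrandbits(32)
        self._stored = value ^ self._mask

    def get(self):
        return self._stored ^ self._mask

    def set(self, value):
        self._mask = random.getrandbits(32)  # re-mask on every write
        self._stored = value ^ self._mask
```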
  10. Yes, that's a totally fine way to serialize the board state of a game of chess. I'm assuming that when pieces leave the board, you'll mark their slots with something like "**" instead of a board position. You can also concatenate both of the strings into one long string; you don't need to separate each player's state into a separate string. To support pawn promotion (including to a second queen), you may also need to store which pieces are now "queens." This is necessary to render the board correctly, so it's part of "board state." Additionally, you need to store whether each player has castled or not (because you can only castle once) and which pawns have just moved two spaces (because of en passant capture). These states are NOT visible on the board, and are thus only GAME state, not BOARD state, so they are important for enforcing rules but not for rendering the board.
     Finally, "sockets" and "databases" are totally different things. Sockets are how remote players get messages. Databases are how you store persistent state so it stays around if the server reboots. All network traffic, no matter what the layer (UDP, ping, HTTPS, SSH, VoIP, ...), uses sockets as the programming API. When a player connects to your web server, that's a socket. When your web server makes a query to a database, that's typically also a socket connection, over which the persisted database data is transferred.
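A possible serialization along these lines in Python (the exact layout, two characters per piece with "**" for captured, is my own illustration, not a standard):

```python
def serialize_board(white, black):
    """white/black: lists of 16 squares like "e2", or None for captured
    pieces, which become "**". Both sides concatenate into one string."""
    def side(positions):
        return "".join(sq if sq is not None else "**" for sq in positions)
    return side(white) + side(black)

def deserialize_board(serialized):
    squares = [serialized[i:i + 2] for i in range(0, len(serialized), 2)]
    decode = lambda half: [None if sq == "**" else sq for sq in half]
    return decode(squares[:16]), decode(squares[16:])

# Starting position for white: pawns on rank 2, pieces on rank 1.
white = [f + "2" for f in "abcdefgh"] + [f + "1" for f in "abcdefgh"]
# Black with one piece captured.
black = [f + "7" for f in "abcdefgh"] + [None] + [f + "8" for f in "bcdefgh"]
state = serialize_board(white, black)  # 64 characters, 2 per piece slot
```

The promotion, castling, and en passant flags mentioned above would be appended as extra fields alongside this string.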
  11. In general, though, "players seeing players" is an N-squared problem. If all players are within view range of each other, then all players need to see all players, which requires generating state updates for N players, N times over. In general, you'll want to cache the actual state on the server rather than compute it when anyone wants to observe it. The simulation updates the actual values of each player -- this happens just once per player. Then the network view iterates through each player, and sends the state for each player that player can see. There's no re-computation at that point, just packing a bunch of states into a packet and sending it. For interest management (which is the formal name for the mechanism of "who can see what") you'll often end up needing a spatial index (quad tree, hash grid, etc.) to efficiently figure out who can see what. Otherwise, each time any player moves, that player has to check ALL other players for which ones are nearby. It's better to scan only a subset of players when you need to.
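A minimal hash-grid sketch in Python (the cell size, class name, and API are arbitrary choices for illustration):

```python
from collections import defaultdict

CELL = 50.0  # cell size on the order of the view range (assumed)

class HashGrid:
    """Spatial index mapping grid cells to the players inside them."""
    def __init__(self):
        self.cells = defaultdict(set)
        self.pos = {}

    def _key(self, x, y):
        return (int(x // CELL), int(y // CELL))

    def move(self, player, x, y):
        """Remove the player from its old cell, insert into the new one."""
        if player in self.pos:
            self.cells[self._key(*self.pos[player])].discard(player)
        self.pos[player] = (x, y)
        self.cells[self._key(x, y)].add(player)

    def nearby(self, player):
        """Players in the 3x3 block of cells around `player`: a superset of
        everyone within CELL distance, found without scanning all players."""
        cx, cy = self._key(*self.pos[player])
        found = set()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                found |= self.cells[(cx + dx, cy + dy)]
        found.discard(player)
        return found
```

A final exact-distance check on the small `nearby()` set replaces the all-pairs scan.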
  12. Sending position updates from the client means that the client can cheat, and send a position that's further away than it should be, and thus make the client run faster. The generally accepted solution is to send inputs (logical inputs: movement speed and turning speed, not explicit keyboard events) to the server, and the server then sends position to each of the clients when it sends snapshot updates of the entity states. Separately, typical send rates for games are 15-60 times a second. A typical update packet might be 10-20 bytes. Add 28 bytes for IP/UDP overhead, and you still end up with at most 48*60 bytes per second sent to the server, which even a dial-up modem can deal with, so when you say "a lot of data," what measurements have you actually been taking?
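For concreteness, the arithmetic above works out like this (using the upper ends of the ranges mentioned):

```python
# Back-of-envelope bandwidth check for client-to-server input packets.
payload = 20           # bytes per update packet, upper end of the 10-20 range
overhead = 28          # IP (20) + UDP (8) header bytes
rate = 60              # packets per second, upper end of typical send rates

bytes_per_second = (payload + overhead) * rate
bits_per_second = bytes_per_second * 8
print(bytes_per_second, bits_per_second)  # 2880 bytes/s = 23040 bits/s
```

That is well under even a 56 kbit/s dial-up uplink.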
  13. You are correct, Source does not rewind the entire simulation. It only rewinds for hit tests. The point of my answer was that you can mix and match the different approaches, until you get a solution that matches your particular game. And, because each game has different gameplay requirements, there can't be a one-size-fits-all solution to this problem. For 32 players, I would not use rewinding on the server. In general, I prefer to never rewind on the server, and instead discard late inputs, but it does depend on what kind of feel (and what kind of simulation cost) your game has.
  14. The methods are not mutually exclusive. In fact, the "source networking model" is a little bit of both. For an approach that uses entirely method 1, look at "GGPO" which is a networking library for fighting games. Because fighting simulation is typically cheap, rewinding and replaying simulation is not a big deal in that context. In method 2, you would add slightly more than just RTT/2 to the buffer time, so that packets generally don't arrive late. When packets arrive late, you simply discard them. If you assume that the player will do whatever they did last tick (if moving forward, keep moving forward, and so forth,) then you will guess right-ish enough. When packets are lost you will of course have to correct the player whose packets were lost. This may cause snapping on their screen, the magnitude of which depends on the time duration between loss and correction. If your simulation is not 100% deterministic, you will likely want to correct all clients on some regular schedule anyway, because otherwise they will drift out of sync over time.
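The buffer-and-discard scheme in method 2 might look roughly like this in Python (the repeat-last-input guess is as described above; the structure and names are my own sketch):

```python
class InputBuffer:
    """Per-player buffer of client inputs keyed by simulation tick.

    Inputs that arrive for a tick the server has already simulated are
    discarded; missing inputs are guessed by repeating the last known one."""
    def __init__(self):
        self.pending = {}
        self.last = None
        self.current_tick = 0

    def receive(self, tick, command):
        """Called when a packet arrives. Returns False if it was too late."""
        if tick < self.current_tick:
            return False            # late packet: simply discard it
        self.pending[tick] = command
        return True

    def consume(self):
        """Called once per server simulation tick."""
        command = self.pending.pop(self.current_tick, self.last)
        self.last = command         # remember, in case the next input is lost
        self.current_tick += 1
        return command
```

The client would stamp each input with a target tick chosen slightly more than RTT/2 ahead, so packets generally arrive before `consume()` runs for that tick.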
  15. If your runtime is JavaScript, you might not be able to get perfect determinism across all browsers and platforms, because JavaScript only has double-precision floating point numbers, no integers or anything like that. Double precision can be deterministic if everyone runs the same binary on the same CPU architecture, but that's not true of the web.
     It is also possible for peer-to-peer systems to have each client simulate the state of all the other clients and "vote" on which other clients may be attempting to cheat. This works until the cheaters outnumber regular users in some particular game instance, at which point the cheaters win the vote. If score (or resources like gold) matters in your game, cheaters will find a way to set up a ring where they all collude.
     The final nail in the coffin for large peer-to-peer games (where "large" means "N > 2," pretty much) is the growth in packet overhead. If you have 100 players, then you need to send your state, every tick, to 99 other players, and receive their state, every tick, from 99 other players. The packet overhead, and the overhead of keeping those connections open, is massive. Meanwhile, in client/server, you have only one connection open, to the server; you send to only one remote system, and you receive from only one remote system, but you receive 99 different states packed into a single packet, so the overhead is cut down a lot. For many games, the TCP packet overhead (40+ bytes) is bigger than the payload per player, so this really matters. Also, most residential internet has much faster downlink than uplink, so sending as much as you receive (99 streams times one packet per stream per tick) will run into the uplink limitation much sooner as the game scales up.
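A quick back-of-envelope comparison of the per-tick traffic (the 16-byte state size is an assumption; the 40-byte TCP overhead is from above):

```python
# Rough per-tick traffic for one client, N players total.
N = 100
state = 16         # bytes of state per player per tick (assumed)
tcp_overhead = 40  # TCP/IP header bytes per packet

# Peer-to-peer: each client sends its state to N-1 peers, one packet each.
p2p_upstream = (N - 1) * (state + tcp_overhead)

# Client/server: one packet up; the server packs all N-1 other states
# into one downstream packet, so header overhead is paid once, not 99 times.
cs_upstream = state + tcp_overhead
cs_downstream = (N - 1) * state + tcp_overhead

print(p2p_upstream, cs_upstream, cs_downstream)  # 5544 vs 56 up / 1624 down
```

Note how in the peer-to-peer case the headers alone (99 * 40 bytes) dwarf the actual payload, and all of it goes over the slower uplink.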