About hplus0603

  • Rank
    Moderator - Multiplayer and Network Programming

Personal Information

  • Industry Role
    Technical Director
  1. I would recommend against DynamoDB, as it makes fewer guarantees than SQL but doesn't actually scale much better. Amazon RDS is alright; it can use the MySQL or Postgres API, so if you're used to MySQL, you can even use MySQL for development, and then use RDS when deploying.
  2. The main problem with NoSQL overall is that it requires you to know all your access patterns ahead of time. And because secondary indices are usually cumbersome or expensive, if you need more than the plain "given a key, find me a bag of data," then NoSQL starts showing its weakness.

     Another thing that NoSQL got first, but SQL is now getting too, is in-RAM databases. Depending on performance needs and cost/operations specifics, you may want to look at databases built specifically for RAM. On the other hand, if you have data with a "long tail" (old data that's seldom accessed,) then all-RAM is almost certainly the wrong choice. Also: putting more RAM into a SQL database host, so it caches better, often reduces the cost difference between in-RAM and on-disk.

     If you're familiar with MySQL, there's nothing wrong with just using that. It works great, and can scale far. If you need specific features of Redis (atomic push/pop, sets, pub/sub,) then you might want to include that, too, with the caveat that you have to put a time-to-live on all data, because when Redis runs out of RAM, that's it -- no more data!

     Another interesting option is FoundationDB, which recently went open source after the company was bought by Apple. It's a key/value store that supports federation/distribution -- you can add nodes to get more capacity, without changing the semantics of the database. Redis, and most other NoSQL databases, by contrast, don't allow transactional work across multiple hosts.
  3. Good judgment comes from experience. Experience comes from bad judgment. Welcome to software development, where stuffing a double-precision value into a 16-bit integer blows up half a billion dollars of rockets (see the Ariane 5 maiden flight failure). The process of software engineering involves figuring out how to reduce the scope and frequency of failures. You can never fully eliminate them!
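     That failure mode is easy to demonstrate. Here is a small illustrative sketch (not from the Ariane code, obviously); it routes the cast through int32_t because casting an out-of-range double directly to int16_t is undefined behavior in C++, while integer narrowing wraps modulo 2^16 on two's-complement targets:

     ```cpp
     #include <cassert>
     #include <cstdint>

     // Force a wide value through a 16-bit slot. Going via int32_t keeps the
     // conversion well-defined wrap-around instead of the undefined behavior
     // of an out-of-range double-to-int16 cast.
     int16_t truncate16(double value) {
         return static_cast<int16_t>(static_cast<int32_t>(value));
     }

     int main() {
         // 100000 fits comfortably in a double, but modulo 2^16 it wraps
         // to 34464, which as a signed 16-bit value reads back as -31072.
         assert(truncate16(100000.0) == -31072);
         assert(truncate16(40000.0) == -25536);  // even "small" overflows corrupt
         return 0;
     }
     ```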
  4. Sure! Assuming the base clock is well chosen, that will work fine. I just indicated one possible implementation.

     Distributed systems are never simple. Real-time systems are never simple. Distributed, real-time systems (like games,) are never simple. That doesn't mean that you must build ultra-complex solutions, but I don't see what's complex about "establish a baseline, measure time since baseline, divide time into ticks using a consistent mechanism." Assuming client and server clocks proceed at a rate of one second per second (which is a fair assumption most of the time,) this will keep client and server in good sync, once they establish an appropriate offset. The necessary adjustments come from TimeInGameTicks() and MainLoopSimulation(), and the way that ticking the simulation from the main loop may simulate zero, one, or more ticks, based on what the current offset is.

     What does your profiler tell you happens during these frames? Do you measure the absolute time (in milliseconds) for each iteration through your main loop, each graphics frame, and each simulation frame, and log outliers?
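     The outlier logging can be as simple as timing each iteration with a steady clock. A minimal sketch, with an assumed 20 ms budget and a sleep standing in for the real frame work:

     ```cpp
     #include <cassert>
     #include <chrono>
     #include <cstdio>
     #include <thread>

     // Time each main-loop iteration and report any that blows past a budget.
     // The 20 ms threshold and the sleeps are assumptions for the demo.
     int main() {
         using clock = std::chrono::steady_clock;
         const double budgetMs = 20.0;
         int outliers = 0;

         for (int frame = 0; frame < 5; ++frame) {
             auto start = clock::now();

             // Stand-in for input/simulation/render work; frame 3 is made slow.
             std::this_thread::sleep_for(
                 std::chrono::milliseconds(frame == 3 ? 30 : 1));

             double ms = std::chrono::duration<double, std::milli>(
                 clock::now() - start).count();
             if (ms > budgetMs) {
                 std::printf("outlier: frame %d took %.1f ms\n", frame, ms);
                 ++outliers;
             }
         }
         assert(outliers >= 1);  // the deliberately slow frame is caught
         return 0;
     }
     ```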
  5. You need to use a high-accuracy clock to determine what the current time is. QueryPerformanceCounter() is fine. GetSystemTime() or GetTickCount() or timeGetTime() are not as good.

        int64_t baseValue;
        double multiplier;
        int32_t baseGameTime;

        void InitClock(int32_t timeNowInGameTicks) {
            ::QueryPerformanceCounter((LARGE_INTEGER *)&baseValue);
            int64_t pcc;
            ::QueryPerformanceFrequency((LARGE_INTEGER *)&pcc);
            multiplier = 1.0 / pcc;
            baseGameTime = timeNowInGameTicks;
        }

        double TimeInSeconds() {
            int64_t pcc;
            ::QueryPerformanceCounter((LARGE_INTEGER *)&pcc);
            return (pcc - baseValue) * multiplier;
        }

        int32_t TimeInGameTicks() {
            return (int32_t)(TimeInSeconds() * TICKS_PER_SECOND) + baseGameTime;
        }

        void AddDeltaTicks(int32_t deltaTicks) {
            baseGameTime += deltaTicks;
        }

     Once you can measure seconds accurately, you should establish a "baseline" timer value, for which you know the corresponding game tick number. Then you derive the tick number you're supposed to be at by measuring the distance in seconds from base time to now, multiplying by tick rate, and adding the base tick number. You do not want to re-initialize the base time too often, because each time you do, you may "miss" or "gain" some fraction of a tick. Instead, adjust the output tick clock by full ticks only, by adjusting the baseline tick value.

     Now, if the server tells you your input arrived X ticks too late, you should add X+2 to the base game time, and then set a flag that means "don't listen to the server for the next second." If the server tells you your input arrived X ticks too early, you should add 2-X to the base tick value, and again set a flag that means "don't listen to the server for the next second." This is to avoid oscillation in clock correction. The server should only tell you about being too early once you're more than 4 ticks early. The values 2 and 4 can be tweaked, as can the value "one second," but those are good baseline values for games that send messages 10-60 times a second and simulate 30-120 times a second.
     The game simulation loop is simple:

        int32_t simulatedTimeInTicks;

        void MainLoopSimulate() {
            int32_t ticksNow = TimeInGameTicks();
            if (ticksNow - simulatedTimeInTicks > 10) {
                warning("too big a time jump -- skipping simulation");
                simulatedTimeInTicks = ticksNow;
            }
            while (ticksNow - simulatedTimeInTicks > 0) {
                StepSimulationOneTick();
                simulatedTimeInTicks++;
            }
        }
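     The baseline-plus-offset idea isn't Windows-specific. A portable sketch of the same clock using std::chrono (the struct name and the 60 Hz tick rate are my assumptions, not from the post):

     ```cpp
     #include <cassert>
     #include <chrono>
     #include <cstdint>

     // Portable baseline tick clock: remember a baseline time point and a
     // baseline tick, derive "current tick" from elapsed seconds, and apply
     // server corrections in whole ticks via AddDeltaTicks().
     struct TickClock {
         static constexpr int32_t TICKS_PER_SECOND = 60;  // assumed tick rate

         std::chrono::steady_clock::time_point baseTime;
         int32_t baseGameTime = 0;

         void Init(int32_t timeNowInGameTicks) {
             baseTime = std::chrono::steady_clock::now();
             baseGameTime = timeNowInGameTicks;
         }

         double TimeInSeconds() const {
             return std::chrono::duration<double>(
                 std::chrono::steady_clock::now() - baseTime).count();
         }

         int32_t TimeInGameTicks() const {
             return static_cast<int32_t>(TimeInSeconds() * TICKS_PER_SECOND)
                 + baseGameTime;
         }

         void AddDeltaTicks(int32_t deltaTicks) { baseGameTime += deltaTicks; }
     };

     int main() {
         TickClock clock;
         clock.Init(1000);                        // server said we're at tick 1000
         assert(clock.TimeInGameTicks() >= 1000);

         clock.AddDeltaTicks(5);                  // "input arrived 3 ticks late": X+2
         assert(clock.TimeInGameTicks() >= 1005);
         return 0;
     }
     ```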
  6. Your if() chain will execute both the "snap time" and "tick slow" branches when diff < -60 (and similarly for diff > 60). Other than that, I don't quite see why packet loss would change your simulation rate. Typically, you use the local clock to drive local simulation, and just change the offset between "local clock" and "game tick," and the local clock moves ahead even when a network packet doesn't arrive.
  7. Sounds like you have a bug either in how you calculate the offset on the client, or how you print the offsets. This is very hard to debug without good test cases, so I suggest you apply debugging and logging to the problem until you get it where you want it to be.
  8. It's possible that your measurements are measurement artifacts, rather than anything inherently wrong. Do you run the server and client on the same machine? They might interfere. Do you run your networking over TCP or UDP? If TCP, have you turned on TCP_NODELAY?
  9. It sounds like you are trying to learn too many new things at once. You're trying to learn threaded programming, you're trying to learn network programming, and you're trying to learn game networking all at once. This won't work. You need to pick one of the three problems (say, threaded programming) and learn that first, until you really know it. Then, pick the second problem (say, network programming,) and learn that, until you really know it. Then you can learn the specifics of game networking without continually tripping over the challenges of the other requirements.
  10. In practice, you'll want sync to be such, that the packet from the client arrives at the server slightly ahead of when it's needed. The "slightly" value should typically be about three standard deviations of your jitter, so 99.7% of all packets arrive ahead of their being needed.
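     That rule can be sketched as follows (the function name is hypothetical; the one-way-delay samples in milliseconds are assumed to be collected elsewhere): estimate the jitter's standard deviation from recent samples and target mean + 3*sigma of lead, so roughly 99.7% of packets arrive ahead of being needed, assuming near-normal jitter.

     ```cpp
     #include <cassert>
     #include <cmath>
     #include <vector>

     // Given recent transit-time samples (ms), compute how far ahead of the
     // "needed" time to send: mean + 3 population standard deviations.
     double SendLeadMs(const std::vector<double> &samplesMs) {
         double mean = 0.0;
         for (double s : samplesMs) mean += s;
         mean /= samplesMs.size();

         double variance = 0.0;
         for (double s : samplesMs) variance += (s - mean) * (s - mean);
         variance /= samplesMs.size();

         return mean + 3.0 * std::sqrt(variance);
     }

     int main() {
         // Four samples: mean 50 ms, stddev 5 ms -> lead = 50 + 15 = 65 ms.
         std::vector<double> samples = {45.0, 55.0, 45.0, 55.0};
         assert(std::fabs(SendLeadMs(samples) - 65.0) < 1e-9);
         return 0;
     }
     ```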
  11. You should only listen() once on a socket. Separately, it's impossible for us to know what "works until the client disconnects" means. What happens when the client disconnects?
  12. Thanks for sharing! You might want to keep the timing information in your general packet headers, to be able to adjust to changing network conditions (both lower and higher pings, as well as changing variance) on the fly, without having to see an "impossible" event. I imagine you can actually do this even for the initial set-up -- if timing is part of the packet headers, you don't need to explicitly send separate timing messages.
  13. However much time is left before it's time to run the next simulation step. So, if you run simulation at 100 Hz, remember when the last simulation step started. Then, when going to read from the network, calculate how long until 10 ms after that time, and use that for the timeout.

        nextTime = clock()
        forever {
            nextTime += stepLength
            simulationStep()
            while ((diff = nextTime - clock()) >= 0) {
                pollNetwork(timeout=diff)
            }
        }

     If your simulation runs so slowly that it takes longer than one simulation frame to simulate, you will obviously be in trouble, so you may want to detect that and quit the game, let the user know, or something like that, if it happens a lot in the current game instance. After all, SOMEONE will try to run the game on an old Pentium MMX 120 MHz they have sitting in a closet.
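     That pseudocode can be made runnable with std::chrono; here a sleep stands in for the hypothetical pollNetwork(), which in a real client would hand the computed timeout to select()/poll(), and "forever" is trimmed to five steps for the demo:

     ```cpp
     #include <cassert>
     #include <chrono>
     #include <thread>

     using clk = std::chrono::steady_clock;

     int simulationSteps = 0;
     void simulationStep() { ++simulationSteps; }

     // Stand-in for polling the network with a timeout: a real version would
     // pass `timeout` to select()/poll() so networking uses the idle time.
     void pollNetwork(clk::duration timeout) { std::this_thread::sleep_for(timeout); }

     int main() {
         const auto stepLength = std::chrono::milliseconds(10);  // 100 Hz
         auto nextTime = clk::now();

         for (int i = 0; i < 5; ++i) {
             nextTime += stepLength;
             simulationStep();
             // Spend whatever time remains before the next step on the network.
             while (clk::now() < nextTime) {
                 pollNetwork(nextTime - clk::now());
             }
         }
         assert(simulationSteps == 5);
         return 0;
     }
     ```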
  14. Edit: Use "code" tags instead of spoiler tags to show quoted code! You get to those with the <> icon in the editor, or using brackets around the word code and /code.
  15. Not really. How much experience do you have in game programming? And in networking? Because game networking programming is a skill in itself, that requires that you first understand game programming, and network programming, separately. If you haven't built some networked application, and some game, already, perhaps you could build a non-networked game and a non-game network application first?