hplus0603 -- Moderator, Multiplayer and Network Programming
Member since 03 Jun 2003 -- Redwood City, CA
Posted by hplus0603 on Yesterday, 02:15 PM
All MMORPGs need servers, so general server-management infrastructure is probably useful -- think "docker" or "elastic beanstalk" or such. Totally general, not game specific.
Similarly for storage infrastructure -- pick something general and well known, either relational (MySQL or Postgres) or NoSQL (Redis or Cassandra, or the cloud-based BigTable or SimpleDB.)
All MMORPGs need some kind of graphics, and user input, and audio. There are APIs in browsers for these things, and those APIs have a bunch of warts, so compatibility helpers are useful, as are 2D and 3D renderers, as are even full-fledged game engines. That's a super-big, hundreds-of-man-years area.
Regarding quests, that's also kind-of generic, but also kind-of genre specific. Quests are found in most games -- campaign mode of RTS-es, single player FPS, single-player RPG, action-adventure, and a bunch of other game kinds need this just as much as "MMO" anything.
Now, once you start getting into any more details, you end up with genre specific requirements.
There are services around the periphery that can be useful to all modes (game matchmaking, server selection, account management, etc) but the gameplay does dictate the optimal architecture, and if you don't choose the optimal architecture, you will be at a cost disadvantage compared to your competitors.
Do you need server-side simulation? Is that simulation turn based or real time? Does it include physics simulation or just game rules? The approach you take matters. Travian is different from World of Warcraft is different from Planetside.
Calculate-forward-on-request (like Travian, etc) is very different from ticked simulation with physics (like Planetside.) Even the way that you represent state and marshal to/from network data may be different, based on requirements.
Posted by hplus0603 on 26 September 2016 - 09:10 AM
This also lets the client do all the animation and estimation of when the building is complete; once the progress bar on the client is complete, the client simply asks the server what the state of the building is, and updates it.
That way, you only need to query buildings by their object ID, not by time. If you need the full state of the world, query all objects that belong to the player in the game instance, and derive their done-ness at the current date.
The client might want to periodically ask for a dump of all state anyway, to make sure it stays in sync, but this can be pretty infrequent, depending on the pace of gameplay. Every 1, 2, 4, 8, 16, and 32 seconds after a command has been given seems alright, with the presumption that once a command has been accepted by the server, the chance of it somehow reverting is slim.
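To make the derive-at-query-time idea concrete, here's a minimal C sketch; the struct fields and function names are mine, not from any real schema:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical building record: the server stores only when construction
   started and how long it takes -- never a "percent done" field. */
typedef struct {
    int64_t start_time;    /* seconds, server clock */
    int64_t build_seconds; /* total construction time */
} Building;

/* Done-ness is derived on demand from the current time. */
bool building_is_done(const Building *b, int64_t now) {
    return now >= b->start_time + b->build_seconds;
}

/* Seconds left, clamped at zero; the client can animate from this. */
int64_t building_seconds_left(const Building *b, int64_t now) {
    int64_t left = b->start_time + b->build_seconds - now;
    return left > 0 ? left : 0;
}
```

The client runs its own progress bar from the same two numbers, so the server never needs a timer per building.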
I don't see how you're going to implement a traditional Tick method with PHP
I've seen PHP code that uses a process that hits a URL on a schedule to implement something like that.
I can't say that I like the pattern, because it wastes hardware resources a lot, but it can be made to work.
The benefit of PHP is that it scales horizontally very well, as long as you keep state in databases and network attached memory.
The drawbacks of PHP are that it's a home-grown hodge-podge of a language, which is a real problem for large projects, and its hardware resource requirements (constant factor) are very large.
Posted by hplus0603 on 15 September 2016 - 10:10 PM
Also record system state, such as the clock value each time through the main loop.
Then, and this is the really important bit, build the reverse -- a reader that, instead of reading from a socket and reading the system clock, reads from the file and returns those values to the program.
Now, you suddenly have a fully debuggable system, where you can pause/stop and single step as much as you want, without losing state.
And you can re-play as often as you like with the same state.
The replay files also make for great QA tools -- run an automated test at top speed without any graphics or delays, and make sure that the events you expect should be happening, do happen.
And, the final tip of that iceberg, is making record/playback available to players. But that's really just icing on the cake. The amount of time you save in development is the real win!
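A minimal C sketch of the reader side of this, assuming the recording is just the clock values written in order; the interface and names are illustrative, not from the post:

```c
#include <stdint.h>
#include <stdio.h>

/* The game asks this interface for inputs (here, just the clock).
   One implementation reads the live system; this one reads the
   recording, returning values in the same order they were captured. */
typedef struct InputSource {
    int64_t (*read_clock)(struct InputSource *self);
    void *ctx;
} InputSource;

typedef struct {
    FILE *f; /* the replay file */
} ReplayCtx;

int64_t replay_read_clock(InputSource *self) {
    ReplayCtx *r = (ReplayCtx *)self->ctx;
    int64_t v = 0;
    fread(&v, sizeof v, 1, r->f); /* same order it was recorded in */
    return v;
}
```

Because the game only ever calls through the InputSource, it can't tell whether it's running live or replaying -- which is exactly what makes single-stepping safe.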
Posted by hplus0603 on 13 September 2016 - 09:39 AM
1) If there are multiple players, the position of player B on player A's computer when running simulation step X will be different than on the server (because of latency)
2) If the server and client have different CPUs, slight implementation differences mean the last bit of some math function is different, and the butterfly effect makes the simulations diverge
3) Random generators used for simulation outcomes may end up with different seeds or being called in different order
4) If you use a software physics engine like ODE or Bullet, the order that the constraints go into the physics simulation may vary, leading to subtle math differences
5) If you use a GPU physics engine like PhysX, you will additionally get math bit mis-matches across different GPUs
Options 2 .. 5 can be solved with a carefully constructed simulation engine.
Option 1 is a killer for FPS-type games, because the only real solution is to wait to simulate until all commands/positions of remote players are known, which means that you have a round-trip time of latency between command and action.
This is why RTS games have a "Yes, Sir!" acknowledgement animation when you give units commands -- it hides the round-trip latency.
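For the random-generator item above, the usual fix is a small PRNG with explicit, passed-in state -- here's a sketch using xorshift64 (my choice of algorithm, not prescribed by anything in the post):

```c
#include <stdint.h>

/* Tiny explicit-state PRNG (xorshift64). Because the state is passed in
   rather than hidden in a global, client and server stay in lockstep as
   long as they seed identically and draw in the same order. Never seed
   with zero. */
uint64_t xorshift64(uint64_t *state) {
    uint64_t x = *state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    *state = x;
    return x;
}
```

The discipline that matters is the call order: every simulation-affecting draw must happen at the same point on both ends, and cosmetic randomness (particles, etc) must use a separate generator.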
Posted by hplus0603 on 09 September 2016 - 09:40 AM
(Also, InnoDB is faster than the Memory engine :-)
MySQL is super great at online read/write use cases, and as long as you can keep your load to a single machine (and then horizontally shard to scale) picking MySQL (or MariaDB, or PerconaDB, or whatever fork you want) is fine.
MySQL is not so good with large, complex queries -- an analytics database might be better as something else.
MySQL is not so good with online schema changes. We have a table of >150 million rows, which we cannot change, because all of the "online change" tools end up either failing, or live-locking and never finishing.
MySQL is not so good with advanced relational algebra -- there's really no such thing as auto-materialized views, advanced triggers with optimization, etc.
PostgreSQL is in many ways the dual of MySQL. It's amazingly strong on almost everything MySQL isn't. However, it is lower performance for heavy online load with simple read/write traffic.
I'd rather look at Microsoft SQL Server than at Oracle, and I'd rather look at IBM DB2 than SQL Server, if I had to look at enterprise SQL databases. But I'd probably rather look at Amazon SimpleDB and Google BigTable before I went that route, anyway -- if you really need scalability, those approaches are proven to scale much better.
Posted by hplus0603 on 08 September 2016 - 03:40 PM
If you store each player's data in a file named after the player, then the best way to update is to re-write all the data you have about the player to a temp file, then rename the player's current file to a backup file, then rename the temp file to the player file.
This is known as a "safe save" and avoids partially-overwritten player files when the server crashes. (flush/sync the file system to commit, though!)
On UNIX, you'd do this:
    char fn_new[300], fn_target[300], fn_old[300];
    snprintf(fn_new, sizeof fn_new, "%s.new", playername);
    snprintf(fn_target, sizeof fn_target, "%s.save", playername);
    snprintf(fn_old, sizeof fn_old, "%s.old", playername);
    int fd = open(fn_new, O_RDWR | O_CREAT | O_TRUNC, 0644);
    write_data_to(fd);
    fdatasync(fd);
    close(fd);
    link(fn_target, fn_old);   // ignore errors, if this is the first save
    rename(fn_new, fn_target); // check errors here
    sync();
Windows is slightly different, as it doesn't have the same kind of hard links and doesn't allow rename-with-replace, but it's the same idea.
Depending on how long and how tricky the usernames you allow can be, you of course want to quote the username before turning it into a filename :-)
(a username like "../../../../../../../../etc/passwd" would be pretty popular otherwise!)
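A toy sanitizer along those lines -- the whitelist is my choice; percent-encoding would preserve more names:

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Illustrative sanitizer: keep [A-Za-z0-9_], replace everything else
   (including '.', '/', '\\') with '_', so "../../etc/passwd" can never
   escape the save directory. */
void sanitize_username(const char *in, char *out, size_t outsz) {
    size_t i = 0;
    for (; in[i] && i + 1 < outsz; ++i) {
        unsigned char c = (unsigned char)in[i];
        out[i] = (isalnum(c) || c == '_') ? (char)c : '_';
    }
    out[i] = '\0';
}
```

Note this can map two different usernames to the same file, so a real system would also keep a collision-proof mapping (say, user ID) as the true key.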
SQLite is fine for storing local data on a local machine, single-player games, and such. It collapses under large parallel load, and may even corrupt the database on disk when doing so.
Posted by hplus0603 on 08 September 2016 - 02:10 PM
The NoSQL approach to storage may be able to support your high-write-rate use case better than traditional relational databases.
There are also some in-RAM SQL relational databases coming out that you could look into.
It may be possible to achieve those numbers with a single instance of MySQL or Postgres on a single server, but that would be hard and require very highly tuned queries and tables.
(For example Postgres has significant write amplification around updates to rows that have secondary indices.)
You will at some point need to horizontally shard your database across multiple hosts, and that will reduce your ability to do consistent transactions between users. (Commit both user A and user B in the same transaction.)
The main issue is one of cost. Building a game like you suggest will not work, economically, at scale. Unless the game deals with big money -- like real casino games, or whatever.
For games, I would assume that the cost isn't worth it. The simplest trade-off, which most games take, is to keep all game state in RAM in a game/app server, and only commit back to a database every so often (every 10 minutes, or when something really important changes.)
If the server crashes, players' state rolls back to whatever was last checkpointed. This should happen very seldom, and assuming important things affecting the economy, like trade between characters, is committed immediately, you're good.
An improvement on that, if you really want durable last-available-state, is to stream user state to a durable queuing system.
The queuing system could be configured to "last received state" for each topic (where a topic is a player,) and you'd have something that checkpoints the player from the queue into the database every so often.
Because message queues can run in durable mode, and generally have better throughput than relational databases, this might be cheaper to operate.
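The commit-on-timer-or-on-importance policy from above might be sketched like this; the interval and struct fields are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define CHECKPOINT_INTERVAL 600 /* seconds; "every 10 minutes" */

typedef struct {
    int64_t last_checkpoint; /* when this player last hit the database */
    bool dirty;              /* changed since then? */
} PlayerState;

/* Ordinary changes wait for the timer; economy-critical ones (trades,
   purchases) commit right away so a crash can't roll them back. */
bool needs_checkpoint(const PlayerState *p, int64_t now, bool important) {
    if (!p->dirty) return false;
    if (important) return true;
    return now - p->last_checkpoint >= CHECKPOINT_INTERVAL;
}
```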
Posted by hplus0603 on 08 September 2016 - 11:38 AM
You do not want to use a database as your main IPC or state distribution mechanism. It should not be used to communicate player movement updates, or player chat, or anything like that. Databases are used when you absolutely need transactional integrity and durability. What are you doing 3 times a second to 4,000 users that needs durability and transactional integrity?
Posted by hplus0603 on 06 September 2016 - 08:52 AM
There really is no simple solution to this game loop stuff.
For PCs, render times are too divergent to test and control for.
Consoles can just lock at 60 Hz and call it good.
Used to be, PAL locked at 50 and caused trouble for NTSC games and vice versa; with modern digital TV systems that's no longer as much of a problem.
Also: One of the main drawbacks of the Unreal Engine is that it doesn't let you fix the timestep. Certain kinds of physics will occasionally go "boing" in Unreal games on PC because of this, when a time step becomes longer for some reason.
So, it's possible to ship certain kinds of games with certain kinds of networking on variable time steps. I just wouldn't recommend it :-)
Posted by hplus0603 on 05 September 2016 - 11:32 AM
Use timestamps instead of ticks, and introduce time into all input values
And then take that to the point of counting all events in simulation timesteps, where each timestep is fixed. 60 per second, or 144 per second, or whatever.
I documented the canonical way to implement this a long time ago: http://www.mindcontrol.org/~hplus/graphics/game_loop.html
It still works very well!
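The core of that canonical loop is a fixed-step accumulator; a minimal sketch (60 Hz step, the names are mine):

```c
#include <stdint.h>

#define STEP_US 16667 /* one 60 Hz simulation step, in microseconds */

typedef struct {
    int64_t accumulator; /* wall-clock time not yet simulated */
} GameClock;

/* Render as fast as you like, but always simulate in fixed increments:
   each frame, bank the elapsed real time and pay it out in whole steps. */
int steps_to_run(GameClock *gc, int64_t elapsed_us) {
    int steps = 0;
    gc->accumulator += elapsed_us;
    while (gc->accumulator >= STEP_US) {
        gc->accumulator -= STEP_US;
        ++steps;
    }
    return steps;
}
```

A slow frame simply runs more than one step to catch up; a fast frame may run zero, and the leftover fraction in the accumulator is what you'd interpolate rendering with.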
Posted by hplus0603 on 26 August 2016 - 10:24 PM
That depends on what firewall you're using.
Which in turn depends on what OS you're using.
You're not giving us much to go on, here :-)
If this is Windows and you're running your game server as a service, then perhaps the problem is that the service runs under different credentials and the firewall doesn't apply your config -- that depends on which firewall you're using, I guess.
Posted by hplus0603 on 26 August 2016 - 10:23 PM
libcurl is pretty good.
You can then "log in" the C++ client by posting name/password to the web service, which might return a Set-Cookie header with a random, hard-to-guess session ID, and you can provide that Cookie header in subsequent requests to the web service.
The good news with that is that you can then use the same login mechanism if you build a web app to manage your data :-)
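The "random, hard-to-guess session ID" part could be generated like this -- a sketch only; reading /dev/urandom and the 32-hex-digit length are my assumptions, and a real service would also expire the IDs:

```c
#include <stdio.h>
#include <string.h>

/* Make a 32-character hex session ID from 16 bytes of OS randomness.
   Returns 0 on success, -1 if the randomness source isn't available. */
int make_session_id(char out[33]) {
    unsigned char raw[16];
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f || fread(raw, 1, sizeof raw, f) != sizeof raw) {
        if (f) fclose(f);
        return -1;
    }
    fclose(f);
    for (size_t i = 0; i < sizeof raw; ++i) {
        sprintf(out + 2 * i, "%02x", (unsigned)raw[i]);
    }
    return 0;
}
```

The important property is that it comes from a cryptographic randomness source, not rand() -- predictable session IDs are hijackable session IDs.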
Posted by hplus0603 on 24 August 2016 - 12:14 PM
how do I resolve the problem of sync between client and server?
You have two main options:
1) Temporarily display the player in an incorrect/extrapolated position, and keep updating the position to be "more correct" based on what you receive from the server. This gives immediate command response on the client, but will display the client slightly out-of-sync with the world.
2) Only send commands to the server, and update the player to "walking" on the screen only when the server sends the new state back. This will always display a world that's in sync, but it will cause command latency between "start moving button pressed" and "actually starts moving on screen."
Exactly how you implement each of these options varies based on game mechanics and other specifics.
In most cases, the client won't ever be out of sync with the server, because there is no discrepancy.
Only when something is different (a door has opened on the server, not on the client yet; another player is in the way on the server, not on the client yet; etc) is the correction actually visible.
You can choose to lerp that correction, or just snap the player.
The simplest way to correct the player is to store the commands the player has received, and "wind them forward" each time you receive a new state update from the server.
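A bare-bones sketch of that wind-forward step, using 1-D positions and illustrative types to keep it short:

```c
#include <stdint.h>

#define MAX_INPUTS 128

typedef struct {
    int64_t time; /* when the player issued this input */
    float move;   /* movement it produces */
} Input;

typedef struct {
    Input inputs[MAX_INPUTS];
    int count;
} InputLog;

/* Trust the server snapshot, then re-apply every local input issued
   after the time that snapshot pertains to. */
float reconcile(const InputLog *log, float server_pos, int64_t server_time) {
    float pos = server_pos;
    for (int i = 0; i < log->count; ++i) {
        if (log->inputs[i].time > server_time) {
            pos += log->inputs[i].move;
        }
    }
    return pos;
}
```

When client and server agree, the replayed result lands exactly where the client already was, so no visible correction happens -- which is the "no discrepancy" case above.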
Posted by hplus0603 on 23 August 2016 - 03:36 PM
The issue is that not everyone is that careful and looks at the url or the SSL certificate before typing this information in.
This is true, but there is no better solution :-(
Well, you can have players purchase a cryptographic token from you, but that's not a low-friction onboarding experience :-)
Or i use the current method which doesn't store personal userdata but is by no means secure due to the fact that the master-key is stored in the client
One option is to start there, and then give the user the option of signing in with a Steam ID if they want to be able to play from other places, able to upload files, etc.
Let the user decide.
Note: If you have that "easy onboarding" option, then if there is any kind of abuse possible of your system (using it to host file uploads, etc,) then that's what the bad people will use, so be sure to take that into consideration!
Are they (the majority of them) using SQL injections to do this?
There are about three attack vectors:
1) SQL injection, or other hosted-software vulnerability (WordPress, Drupal, ImageMagick, etc.) This generally gives you database dumps and perhaps admin interface access to the site.
2) Host vulnerability (Heartbleed, etc) This generally gives you command-line access to the site, which you can use to discover databases that you can then dump, insert payloads in hosted pages, etc.
3) Social engineering ("I am Robert, the County Password Inspector!") This generally gives you some kind of employee access to the site from the command line, again.
The question, then, is what the bad guys are after. If you store credit card details, absolutely that! If not, perhaps a list of emails and passwords to try on other sites. Or perhaps just another box they can run DDoS, email spam, and fraudulent web sites from.
If you keep your development code on another system, with tight security, and use good source control, and good automated deployment methodology, you will minimize the impact from most such events, once you can detect that they occurred.
Wipe the hard disk, re-deploy to new OS image, restart servers; done!
Posted by hplus0603 on 23 August 2016 - 03:16 PM
Just Say No to blocking network calls!
Second, for an RPG, I bet that TCP will work well enough. That way, you don't have to worry about whether a client got the update that there's a rock in a particular place or not.
Just send out a packet whenever the client desires to do something (move to a place, move in a direction, cast a spell, whatever) and on the server, collate these and send to all clients X times a second.
For an RPG, I bet 10 times a second would be plenty.
Just make sure to pack the entire set of updates into a single frame/packet/send() call, and turn on TCP_NODELAY.
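Turning off Nagle coalescing looks like this on POSIX sockets -- the option is TCP_NODELAY, set at the IPPROTO_TCP level:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm so small per-tick packets go out immediately
   instead of waiting to be coalesced with later writes. Returns 0 on
   success, -1 on error. Windows is the same call with different types. */
int enable_nodelay(int fd) {
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}
```

With the whole tick's worth of updates packed into one send(), you get one segment per tick, which is exactly what you want.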
If the client has the same map as the server, the client will 99% of the time predict correctly what the server does.
It can just go ahead and render whatever it believes should happen.
When it receives updates from the server, it can compare that response with what it, itself, thought the state was at that time (the time the update pertained to.)
It can then re-play whatever inputs the player provided between that time and now, to show the next player position.
For "important" actions, like the result of spells, the result of fighting, purchasing/transactions, etc, you should play a local "wind-up" animation when the player first initiates it, but you should wait to show the result until the server sends "fireball blasted here" or whatever back.
That way, you will never show the player "you succeeded" only to take it back again 300 milliseconds later.
Billions of rows in a database is generally a bad idea, not because you couldn't store it on a hard disk (a 4 terabyte hard disk can store a lot!) but because the height of the index becomes very tall. And one component of database performance is number-of-indices times height-of-indices.
For most game character data, I would just store the character stats/abilities as one big JSON blob that's stored as inline text (a long varchar or maybe text field.)
This reduces the index height (number of rows) to the number of characters, which is better.
If you really plan on having millions of players, though, at some point, you're going to want to horizontally shard your data -- characters with ID 0 - 999,999 live on instance A; ID 1,000,000 - 1,999,999 live on instance B; ...
Then keep a table of ID range to instance mapping, so you talk to the right database instance for the given customer.
You will of course need a central table that goes from "customer name" to "customer id" so that you know which database to look at; this may have to live in a central database, but should be a much smaller table with a single index.
If you have really quite a lot of customers, even the mapping from "customer name" to "customer id" will be too big to keep on a central server; at that point, you may shard that initial table based on "first character of customer name" or "hash of customer name modulo 20" or whatever. But you'll likely never get to that point, as pretty much only Google and Facebook have that problem :-)
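The range-shard lookup itself is tiny; an illustrative version of the ID-to-instance mapping described above:

```c
#include <stdint.h>

/* Range sharding: IDs 0-999,999 live on instance 0,
   1,000,000-1,999,999 on instance 1, and so on. */
#define SHARD_SIZE 1000000

int shard_for_character(int64_t character_id) {
    return (int)(character_id / SHARD_SIZE);
}
```

In practice you'd look the result up in the range-to-instance table mentioned above rather than hard-coding it, so ranges can be moved between hosts as they fill up.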