

Member Since 03 Jun 2003

#5312665 browser games - when to update server data?

Posted by on 26 September 2016 - 09:10 AM

For building games, storing the "finish time" (or perhaps the "start time" and "build duration") is absolutely the right thing to do.
This also lets the client do all the animation and estimation of when the building is complete; once the progress bar on the client is complete, the client simply asks the server what the state of the building is, and updates it.
That way, you only need to query buildings by their object ID, not by time. If you need the full state of the world, query all objects that belong to the player in the game instance, and derive their done-ness at the current date.
The client might want to periodically ask for a dump of all state anyway, to make sure it stays in sync, but this can be pretty infrequent, depending on the pace of gameplay. Every 1, 2, 4, 8, 16, and 32 seconds after a command has been given seems alright, with the presumption that once a command has been accepted by the server, the chance of it somehow reverting is slim.
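To sketch the "store start time and duration, derive done-ness" idea (all names here are illustrative, not from any particular engine):

```c
#include <time.h>

/* A building stores only when the build started and how long it takes;
   "is it done?" is derived from the current time, never stored. */
typedef struct {
    time_t start_time;      /* when the server accepted the build command */
    time_t build_duration;  /* build time in seconds */
} Building;

/* Returns 1 if the building is complete at time 'now', else 0. */
int is_build_complete(const Building *b, time_t now) {
    return now >= b->start_time + b->build_duration;
}

/* Fraction complete, for driving the client's progress bar. */
double build_progress(const Building *b, time_t now) {
    if (now <= b->start_time) return 0.0;
    double p = (double)(now - b->start_time) / (double)b->build_duration;
    return p > 1.0 ? 1.0 : p;
}
```

The client animates with build_progress(), and only asks the server for authoritative state once that hits 1.0.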

I don't see how you're going to implement a traditional Tick method with PHP

I've seen PHP code that uses a cron-like process hitting a URL on a schedule to implement something like that.
I can't say that I like the pattern, because it wastes a lot of hardware resources, but it can be made to work.
The benefit of PHP is that it scales horizontally very well, as long as you keep state in databases and network attached memory.
The drawbacks of PHP are that it's a home-grown hodge-podge of a language, which is a real problem for large projects, and its hardware resource requirements (constant factor) are very large.

#5311026 Syncing issues (algorithm description inside)

Posted by on 15 September 2016 - 10:10 PM

I find that building the networking such that you record every packet that comes in -- the game step at which it arrives, plus the full payload -- is super helpful.
Also record system state, such as the clock value each time through the main loop.
Then, and this is the really important bit, build the reverse: a reader that, instead of reading from a socket and the system clock, reads from the file and returns those values to the program.
Now, you suddenly have a fully debuggable system, where you can pause/stop and single step as much as you want, without losing state.
And you can re-play as often as you like with the same state.

The replay files also make for great QA tools -- run an automated test at top speed without any graphics or delays, and make sure that the events you expect should be happening, do happen.
And the final tip of that iceberg is making record/playback available to players. But that's really just icing on the cake. The amount of time you save in development is the real win!
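The shape of that record/replay reader might look something like this (a minimal sketch with made-up names; the real thing records packets and the actual clock, not a counter):

```c
#include <stdio.h>

/* Every input the simulation consumes goes through one interface:
   a "live" backend that also records what it saw, and a "replay"
   backend that reads the recording back instead. */
typedef struct {
    FILE *record;   /* non-NULL in live mode: append everything read */
    FILE *replay;   /* non-NULL in replay mode: read values from here */
} InputSource;

/* Read the "clock" for this frame. In live mode, 'live_value' stands in
   for the real clock read and gets recorded; in replay mode, the value
   comes from the recording, so runs are repeatable. */
long read_clock(InputSource *src, long live_value) {
    long v = live_value;
    if (src->replay) {
        if (fscanf(src->replay, "%ld", &v) != 1) return -1; /* end of recording */
    } else if (src->record) {
        fprintf(src->record, "%ld\n", live_value);
    }
    return v;
}
```

The simulation never knows which backend it's talking to, which is exactly what makes pause/single-step/replay debugging possible.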

#5310604 Syncing issues (algorithm description inside)

Posted by on 13 September 2016 - 09:39 AM

There are a number of reasons that simulations can diverge. Examples include:

1. If there are multiple players, the position of player B on player A's computer when running simulation step X will be different than on the server (because of latency)
2. If the server and client have different CPUs, slight implementation differences in the last bit of some math function, amplified by the butterfly effect, make the simulations diverge
3. Random generators used for simulation outcomes may end up with different seeds, or be called in a different order
4. If you use a software physics engine like ODE or Bullet, the order in which constraints go into the physics simulation may vary, leading to subtle math differences
5. If you use a GPU physics engine like PhysX, you will additionally get math bit mis-matches across different GPUs

Causes 2 through 5 can be solved with a carefully constructed simulation engine.
Cause 1 is a killer for FPS-type games, because the only real solution is to wait to simulate until all commands/positions of remote players are known, which means that you have a round-trip time of latency between command and action.
This is why RTS games have a "Yes, Sir!" acknowledgement animation when you give units commands -- it hides the round-trip latency.
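For the random-generator cause in particular, one fix is a deterministic, integer-only PRNG shared by client and server: same seed, same call order, same results on every CPU. A minimal sketch (xorshift64; names illustrative):

```c
#include <stdint.h>

/* Integer-only PRNG: identical output on every platform, unlike
   floating-point math or the platform's rand(). */
typedef struct { uint64_t state; } SimRng;

void sim_rng_seed(SimRng *r, uint64_t seed) {
    r->state = seed ? seed : 0x9E3779B97F4A7C15ULL; /* avoid the all-zero state */
}

uint64_t sim_rng_next(SimRng *r) {
    uint64_t x = r->state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    return r->state = x;
}
```

The other half of the discipline is making sure every simulation site calls it in the same order -- one shared generator per simulation, never per-thread or per-render-frame.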

#5310125 Could you recommend a lightweight database?

Posted by on 09 September 2016 - 09:40 AM

Also, you should use MySQL with InnoDB. All the other storage engines are less reliable, and MyISAM is down-right dangerous.
(Also, InnoDB is faster than Memory engine :-)

MySQL is super great at online read/write use cases, and as long as you can keep your load to a single machine (and then horizontally shard to scale) picking MySQL (or MariaDB, or PerconaDB, or whatever fork you want) is fine.

MySQL is not so good with large, complex queries -- analytics workloads might be better served by something else.
MySQL is not so good with online schema changes. We have a table of >150 million rows, which we cannot change, because all of the "online change" tools end up either failing, or live-locking and never finishing.
MySQL is not so good with advanced relational algebra -- there's really no such thing as auto-materialized views, advanced triggers with optimization, etc.

PostgreSQL is in many ways the dual of MySQL. It's amazingly strong on almost everything MySQL isn't. However, it is lower performance for heavy online load with simple read/write traffic.

I'd rather look at Microsoft IIS than at Oracle, and I'd rather look at IBM DB2 than IIS, if I had to look at enterprise SQL databases. But I'd probably rather look at Amazon SimpleDB and Google BigTable before I went that route, anyway -- if you really need scalability, those approaches are proven to scale much better.

#5310028 Could you recommend a lightweight database?

Posted by on 08 September 2016 - 03:40 PM

There are lots of ways to skin that cat.

If you store each player's data in a file named after the player, then the best way to update is to re-write all the data you have about the player to a temp file, then move the current player file to a backup file, then move the temp file into place as the player file.
This is known as a "safe save," and it avoids partially-overwritten player files when the server crashes. (Flush/sync the file to disk to commit, though!)

On UNIX, you'd do this:

  char fn_new[100], fn_target[100], fn_old[100];
  snprintf(fn_new, sizeof fn_new, "%s.new", playername);
  snprintf(fn_target, sizeof fn_target, "%s.save", playername);
  snprintf(fn_old, sizeof fn_old, "%s.old", playername);
  int fd = open(fn_new, O_RDWR | O_CREAT | O_TRUNC, 0644);
  // ... write the player data to fd, checking errors ...
  fsync(fd);                 // commit the data to disk before renaming
  close(fd);
  unlink(fn_old);            // ignore errors
  link(fn_target, fn_old);   // ignore errors, if this is the first save
  rename(fn_new, fn_target); // check errors here
Windows is slightly different as it doesn't have the same kind of hard links and doesn't allow rename-with-replace, but same idea.

Depending on how long and how tricky the user names you allow can be, you of course want to quote the username before turning it into a filename :-)
(a username like "../../../../../../../../etc/passwd" would be pretty popular otherwise!)
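One simple way to quote it (a sketch; a real implementation might escape rather than replace, so distinct names stay distinct):

```c
#include <ctype.h>
#include <stddef.h>

/* Turn a username into something safe to use as a file name.
   Anything outside [A-Za-z0-9] becomes '_', which defangs path
   tricks like "../../etc/passwd". */
void username_to_filename(const char *name, char *out, size_t outsize) {
    size_t i = 0;
    for (; name[i] && i + 1 < outsize; i++) {
        unsigned char c = (unsigned char)name[i];
        out[i] = isalnum(c) ? (char)c : '_';
    }
    out[i] = '\0';
}
```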

SQLite is fine for storing local data on a local machine, single-player games, and such. It collapses under large parallel load, and may even corrupt the database on disk when doing so.

#5310005 Could you recommend a lightweight database?

Posted by on 08 September 2016 - 02:10 PM

So, you can use a database that gives you 10,000 durable writes per second. I would suggest Redis, or perhaps MongoDB (which I'm not at all a fan of) hosted on a server with an SSD RAID and plenty of RAM.
The NoSQL approach to storage may be able to support your high-write-rate use case better than traditional relational databases.
There are also some in-RAM SQL relational databases coming out that you could look into.

It may be possible to achieve those numbers with a single instance of MySQL or Postgres on a single server, but that would be hard and require very highly tuned queries and tables.
(For example Postgres has significant write amplification around updates to rows that have secondary indices.)
You will at some point need to horizontally shard your database across multiple hosts, and that will reduce your ability to do consistent transactions between users. (Commit both user A and user B in the same transaction.)
The main issue is one of cost. Building a game like you suggest will not work, economically, at scale. Unless the game deals with big money -- like real casino games, or whatever.

For games, I would assume that the cost isn't worth it. The simplest trade-off, which most games take, is to keep all game state in RAM in a game/app server, and only commit back to a database every so often (every 10 minutes, or when something really important changes.)
If the server crashes, players' state rolls back to whatever was last checkpointed. This should happen very seldom, and assuming important things affecting the economy, like trade between characters, are committed immediately, you're good.
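The commit policy is simple enough to sketch (names made up for illustration):

```c
#include <time.h>

/* Keep game state in RAM; decide per change whether to write it back. */
typedef struct {
    time_t last_checkpoint;
    time_t interval;        /* e.g. 600 seconds = 10 minutes */
} CheckpointPolicy;

/* important != 0 means an economy-affecting change (a trade, etc.):
   commit immediately. Otherwise commit only when the interval elapsed. */
int should_commit(CheckpointPolicy *p, time_t now, int important) {
    if (important || now - p->last_checkpoint >= p->interval) {
        p->last_checkpoint = now;
        return 1;
    }
    return 0;
}
```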

An improvement on that, if you really want durable last-available-state, is to stream user state to a durable queuing system.
The queuing system could be configured to keep only the last received state for each topic (where a topic is a player,) and you'd have something that checkpoints the player from the queue into the database every so often.
Because message queues can run in durable mode, and generally have better throughput than relational databases, this might be cheaper to operate.

#5309978 Could you recommend a lightweight database?

Posted by on 08 September 2016 - 11:38 AM

1) You do not want to use a database server as a game server. Packing all the services on a single machine is okay for development and testing environments, but any proper deployment needs to factor database services into different hosts from game services.

2) You do not want to use a database as your main IPC or state distribution mechanism. It should not be used to communicate player movement updates, or player chat, or anything like that. Databases are used when you absolutely need transactional integrity and durability. What are you doing 3 times a second to 4,000 users that needs durability and transactional integrity?

#5309677 How to structure my pure client/server model properly?

Posted by on 06 September 2016 - 08:52 AM

There really is no simple solution to this game loop stuff.

On PCs, the render time is too divergent to test and control for.
Consoles can just lock at 60 Hz and call it good.
Used to be, PAL locked at 50 and caused trouble for NTSC games and vice versa; with modern digital TV systems that's no longer as much of a problem.

Also: One of the main drawbacks of the Unreal Engine is that it doesn't let you fix the timestep. Certain kinds of physics will occasionally go "boing" in Unreal games on PC because of this, when a time step becomes longer for some reason.
So, it's possible to ship certain kinds of games with certain kinds of networking on variable time steps. I just wouldn't recommend it :-)

#5309535 How to structure my pure client/server model properly?

Posted by on 05 September 2016 - 11:32 AM

Use timestamps instead of ticks, and introduce time into all input values

And then take that to the point of counting all events in simulation timesteps, where each timestep is fixed. 60 per second, or 144 per second, or whatever.
I documented the canonical way to implement this a long time ago: http://www.mindcontrol.org/~hplus/graphics/game_loop.html
It still works very well!
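The core of that loop is a time accumulator: render as fast as you like, but step the simulation in fixed increments. A minimal sketch:

```c
#define STEP_HZ 60
#define STEP_SECONDS (1.0 / STEP_HZ)

/* Given elapsed real time since the last frame, returns how many fixed
   simulation steps to run, carrying leftover time in the accumulator. */
int steps_to_run(double *accumulator, double frame_seconds) {
    int steps = 0;
    *accumulator += frame_seconds;
    while (*accumulator >= STEP_SECONDS) {
        *accumulator -= STEP_SECONDS;
        steps++;
    }
    return steps;
}
```

Each frame, the render loop calls this and runs that many simulation steps; a slow frame catches up by running extra steps, a fast frame may run none.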

#5308167 Windows Firewall Troubles

Posted by on 26 August 2016 - 10:24 PM

"What is the problem?"
That depends on what firewall you're using.
Which in turn depends on what OS you're using.
You're not giving us much to go on, here :-)

If this is Windows, and you're running your game server as a service, then perhaps this is a problem where it runs under different credentials and the firewall doesn't apply your config -- that depends on what firewall you're using I guess.

#5308166 Custom Level Upload-Server

Posted by on 26 August 2016 - 10:23 PM

Ah! You need an HTTP/HTTPS client library in C++.
libcurl is pretty good.

You can then "log in" the C++ client by posting name/password to the web service, which might return a Set-Cookie header with a random, hard-to-guess session ID, and you can provide that Cookie header in subsequent requests to the web service.
The good news with that is that you can then use the same login mechanism if you build a web app to manage your data :-)

#5307658 Questions about "reliable" UDP protocol and DB interface for MOG

Posted by on 24 August 2016 - 12:14 PM

how do I resolve the problem of sync between client and server?

You have two main options:

1) Temporarily display the player in an incorrect/extrapolated position, and keep updating the position to be "more correct" based on what you receive from the server. This gives immediate command response on the client, but will display the client slightly out-of-sync with the world.

2) Only send commands to the server, and update the player to "walking" on the screen only when the server sends the new state back. This will always display a world that's in sync, but it will cause command latency between "start moving button pressed" and "actually starts moving on screen."

Exactly how you implement each of these options varies based on game mechanics and other specifics.
In most cases, the client won't ever be out of sync with the server, because there is no discrepancy.
Only when something is different (a door has opened on the server, not on the client yet; another player is in the way on the server, not on the client yet; etc) is the correction actually visible.
You can choose to lerp that correction, or just snap the player.
The simplest way to correct the player is to store the commands the player has received, and "wind them forward" each time you receive a new state update from the server.
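"Wind them forward" can be sketched like this (1-D position, made-up names; a real game would track full movement commands):

```c
#include <string.h>

#define MAX_PENDING 64

/* An input remembered with the simulation step it was issued at. */
typedef struct { int step; double move; } Input;

typedef struct {
    double pos;                   /* predicted position */
    Input pending[MAX_PENDING];   /* inputs not yet acknowledged */
    int count;
} Predicted;

/* Apply an input locally and remember it for later re-play. */
void apply_input(Predicted *p, int step, double move) {
    p->pos += move;
    if (p->count < MAX_PENDING)
        p->pending[p->count++] = (Input){ step, move };
}

/* Server says: at 'server_step' the position was 'server_pos'.
   Adopt that, then re-apply every input issued after that step. */
void on_server_state(Predicted *p, int server_step, double server_pos) {
    p->pos = server_pos;
    int kept = 0;
    for (int i = 0; i < p->count; i++) {
        if (p->pending[i].step > server_step) {
            p->pos += p->pending[i].move;        /* wind forward */
            p->pending[kept++] = p->pending[i];  /* still unacknowledged */
        }
    }
    p->count = kept;
}
```

Most of the time the server's position matches the prediction, so the correction is invisible; when it doesn't, you lerp or snap as described above.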

#5307495 Create player accounts on custom server (for steam users)

Posted by on 23 August 2016 - 03:36 PM

The issue is that not everyone is that careful and looks at the url or the SSL certificate before typing this information in.

This is true, but there is no better solution :-(
Well, you can have players purchase a cryptographic token from you, but that's not a low-friction onboarding experience :-)

Or i use the current method which doesn't store personal userdata but is by no means secure due to the fact that the master-key is stored in the client

One option is to start there, and then give the user the option of signing in with a Steam ID if they want to be able to play from other places, able to upload files, etc.
Let the user decide.
Note: If you have that "easy onboarding" option, then if there is any kind of abuse possible of your system (using it to host file uploads, etc,) then that's what the bad people will use, so be sure to take that into consideration!

Are they (the majority of them) using SQL injections to do this?

There are about three attack vectors:

1) SQL injection, or other hosted-software vulnerability (WordPress, Drupal, ImageMagick, etc.) This generally gives you database dumps and perhaps admin interface access to the site.

2) Host vulnerability (Heartbleed, etc) This generally gives you command-line access to the site, which you can use to discover databases that you can then dump, insert payloads in hosted pages, etc.

3) Social engineering ("I am Robert, the County Password Inspector!") This generally gives you some kind of employee access to the site from the command line, again.

The question, then, is what the bad guys are after. If you store credit card details, absolutely that! If not, perhaps a list of emails and passwords to try on other sites. Or perhaps just another box they can run DDoS attacks, email spam, and fraudulent web sites from.
If you keep your development code on another system, with tight security, and use good source control, and good automated deployment methodology, you will minimize the impact from most such events, once you can detect that they occurred.
Wipe the hard disk, re-deploy to new OS image, restart servers; done!

#5307491 Questions about "reliable" UDP protocol and DB interface for MOG

Posted by on 23 August 2016 - 03:16 PM

First, any kind of "force" or "wait for response" function will end up blocking the UI of your game and be a terrible user experience.
Just Say No ™ to blocking network calls!

Second, for an RPG, I bet that TCP will work well enough. That way, you don't have to worry about whether a client got the update that there's a rock in a particular place or not.
Just send out a packet whenever the client desires to do something (move to a place, move in a direction, cast a spell, whatever) and on the server, collate these and send to all clients X times a second.
For an RPG, I bet 10 times a second would be plenty.
Just make sure to pack the entire set of updates into a single frame/packet/send() call, and turn on TCP_NODELAY.

If the client has the same map as the server, the client will 99% of the time predict correctly what the server does.
It can just go ahead and render whatever it believes should happen.
When it receives updates from the server, it can compare that response with what it, itself, thought the state was at that time (the time the update pertained to.)
It can then re-play whatever inputs the player provided between that time and now, to show the next player position.

For "important" actions, like the result of spells, the result of fighting, purchasing/transactions, etc, you should play a local "wind-up" animation when the player first initiates it, but you should wait to show the result until the server sends "fireball blasted here" or whatever back.
That way, you will never show the player "you succeeded" only to take it back again 300 milliseconds later.

Billions of rows in a database is generally a bad idea, not because you couldn't store it on a hard disk (a 4 terabyte hard disk can store a lot!) but because the height of the index becomes very tall. And one component of database performance is number-of-indices times height-of-indices.
For most game character data, I would just store the character stats/abilities as one big JSON blob that's stored as inline text (a long varchar or maybe text field.)
This reduces the index height (number of rows) to the number of characters, which is better.
If you really plan on having millions of players, though, at some point, you're going to want to horizontally shard your data -- characters with ID 0 - 999,999 live on instance A; ID 1,000,000 - 1,999,999 live on instance B; ...
Then keep a table of ID range to instance mapping, so you talk to the right database instance for the given customer.
You will of course need a central table that goes from "customer name" to "customer id" so that you know which database to look at; this may have to live in a central database, but should be a much smaller table with a single index.
If you have really quite a lot of customers, even the mapping from "customer name" to "character id" will be too big to keep on a central server; at that point, you may shard that initial table based on "first character of customer name" or "hash of customer name modulo 20" or whatever. But you'll likely never get to that point, as pretty much Google and Facebook have that problem :-)
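The two sharding schemes above are just arithmetic (constants and the hash choice here are illustrative; in a real system the ID ranges would come from the mapping table, not a compile-time constant):

```c
/* Range sharding: character IDs 0..999,999 on instance 0,
   1,000,000..1,999,999 on instance 1, and so on. */
#define CHARACTERS_PER_SHARD 1000000L

long shard_for_character(long character_id) {
    return character_id / CHARACTERS_PER_SHARD;
}

/* Hash sharding for the name -> id lookup table, if that ever needs
   sharding too: "hash of customer name modulo 20". */
#define NAME_SHARDS 20

unsigned long name_shard(const char *customer_name) {
    unsigned long h = 5381;                 /* djb2 string hash */
    for (; *customer_name; customer_name++)
        h = h * 33 + (unsigned char)*customer_name;
    return h % NAME_SHARDS;
}
```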

#5307032 Friends living abroad have really laggy connection to me, why?(using raknet,...

Posted by on 21 August 2016 - 10:28 AM

dropped TCP packets are just waited for at the router to be resent again

Routers, in general, do not wait for re-transmission before they forward, because they work at the IP layer, not the TCP layer.
For "reverse NAT servers" this changes, as they play the role of end-points, but when you talk about "router" in general internet parlance, you typically mean the boxes that sit in the middle of the network.

The re-transmission in TCP is entirely handled by the endpoints -- the machine doing the sending, and the machine doing the receiving on the other end.
And if one packet is dropped or delayed, ALL THE PACKETS AFTER THAT PACKET ARE HELD when they are received, waiting for the dropped/delayed packet to be re-sent.
By the time the dropped packet is detected, re-transmitted, and received on the other end, anything that was sent after that is also late, because it's been sitting in the kernel, received by the host, but not delivered to the application, because of the in-order guarantee.

Regarding spraying 100 packets at the same time, that doesn't really help, because the main reason a packet is dropped, is that there is congestion at some node on the network.
Once there is congestion, if the node receives 1 packet from you, or 100 packets from you, they will all be dropped at that time until the congestion recovers.
If you want to try duplication, consuming bandwidth in an attempt at higher robustness, it's much more robust to perhaps include the last 2-3 packets as copies in the next packets you send, so there is some time between each. This will add a bit of overhead, but if you structure your data right, it will RLE compress really well, and thus won't actually consume 200-300% of the bandwidth.
If the network drops three successive packets with a packet-send delay between each, the congestion is likely so bad that you're going to have a bad playing experience no matter what.
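That "carry the last few payloads along" scheme might look like this (fixed-size payloads and names are illustrative; real code would prefix lengths and sequence numbers):

```c
#include <string.h>

#define PAYLOAD_SIZE 32
#define REDUNDANCY 3   /* current payload plus two previous copies */

typedef struct {
    unsigned char history[REDUNDANCY][PAYLOAD_SIZE];
    int count;  /* how many payloads we have so far */
} RedundantSender;

/* Packs up to REDUNDANCY payloads (newest first) into 'out', so a
   single dropped datagram loses nothing. Returns payloads written. */
int build_datagram(RedundantSender *s, const unsigned char *payload,
                   unsigned char *out) {
    /* shift history down; newest always lives in slot 0 */
    memmove(s->history[1], s->history[0], (REDUNDANCY - 1) * PAYLOAD_SIZE);
    memcpy(s->history[0], payload, PAYLOAD_SIZE);
    if (s->count < REDUNDANCY) s->count++;
    memcpy(out, s->history, s->count * PAYLOAD_SIZE);
    return s->count;
}
```

Because consecutive payloads are usually near-identical, this redundancy compresses well, which is why it doesn't actually cost 200-300% of the bandwidth.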