hplus0603 last won the day on October 28

hplus0603 had the most liked content!

Community Reputation: 11383 Excellent


About hplus0603

  • Rank
    Moderator - Multiplayer and Network Programming

    They seem to claim that they only expect you to make web pages on their service; any kind of application or web service that makes API calls is unlikely to work well there.

    There are "shared web servers" -- these are the cheapest, and are the kind that the OP initially set up. These give you a directory to upload your web files to. Those files might be intended for some particular web development language (PHP, Ruby, Python, etc.) and typically use a "one request is one process" model. You receive an HTTP request, you do whatever queries and processing you need to (if any), and you respond with a web page or some other web resource (image, XML, JSON, ...). You cannot upload code that's not supported by the web server host -- you can't build your own C code for processing network requests -- and you cannot run processes that stay alive for long periods, talking to many players at the same time. These are okay for "my own photo album" or "Jimmy's pizza ordering page." If the page sometimes takes a second or two to load, that's not a big deal. All the traffic goes over HTTP, which is a request/response, bulk-transfer protocol on top of TCP, with no particular guarantees around latency. Also, there are typically thousands of "home directories" on each of these servers, serving a bunch of different, small web pages all at once. (The server knows which directory to look in based on the URL and the Host: header in the HTTP request.)

    Then there are "metal servers" -- a computer is plugged into the network, has an OS on it, and you get administrator/root access. It's up to you to make sure the hard disks are partitioned the way you want, that the software you need is installed, and that security exploits get patched. You can build whatever software you want and install it on these servers. For action games, which almost always use UDP networking, this is great, because you can spin up a game server on a known port and point game clients at that server/port. Drawbacks are that you need to be much more aware of how to administer servers in a safe, secure, and efficient manner. In addition to game servers, web sites that have outgrown a shared web host can use one of these servers (or more) and install their own web server and database server. This lets you avoid the "noisy neighbors" problem of shared web hosts. Because these servers are a full computer that costs a lot of money to buy, hosting companies charge significantly more per server instance -- typically a few hundred dollars per month, at least (more if you want fancy memory, RAID disks, multiple CPUs, and so on.)

    Then there's an intermediate step. You don't need a big server with 256 GB of RAM, a 10 TB flash RAID array, and dual 10 Gbps uplinks to the router. You just need some fraction of that: one or two cores, one or a few gigabytes of RAM, a dozen gigabytes of disk space for your software, and a terabyte or two of network transfer per month. You're OK with sharing the same physical hardware with others, as long as you get the guaranteed memory and processing power that you pay for. Various "virtual private server" ("VPS") solutions deliver this kind of server. For most intents and purposes, it "looks like" a small metal server (and has some of the same problems of needing security patches, administration, and so forth,) but the cost of actually sticking hardware in a rack in a data center is spread across many people who each share a defined portion of the server. Thus, this is cheaper per month than a full server. This is what Amazon Lightsail, Linode, Dreamhost VPS, and Interserver end up selling you.

    Amazon has the best name in the business and the most mature set of infrastructure tools, and thus charges the most. Dreamhost's business is built more around easy-to-manage web sites; they provide a lot of value-add if you use their software versions, and charge extra for that. (They also have shared web hosting.) Linode gives you a little more than Amazon for your money, with a smaller variety of tools around their images, but is still a well-known actor with some really big customers. And finally, places like Interserver are just "give me a server and get out of the way" bare bones, which gets the price down a few more dollars at the bottom end. On the other hand, their data centers and internet uplinks are more oversubscribed than Linode's or Amazon's -- you get what you pay for. There are of course many more online server providers of various kinds -- Heroku, 1and1, Peer1, ServerBeach, Rackspace, and the list goes on. The four above are the ones I have direct experience with that has been good enough to recommend if you need their particular kind of service.

    Once you outgrow VPS/cloud solutions and leased metal servers, you will build your own data centers in some co-location facility, negotiate for bandwidth with upstream providers like Cogent and Level3, and screw your own servers into your own (rented) racks, where you have to build your own in-datacenter network infrastructure (routed or switched? Overprovisioned or non-blocking?) and worry about things like power density. (130 W CPUs perform great, but if you try to cram two of them per motherboard times 42 motherboards in a rack, you will draw more power than most power distribution and cooling will let you get away with -- are water-cooled racks worth it?) Co-location is cool and all (or, if you're Facebook/Google, you build entirely your own data centers,) but very few indie games grow to the point where they need to worry about that.

    Find a VPS vendor you can trust (Linode or Amazon) and stay there for as long as you can.
  2. WindSlayer Private Server

    Based on that information, I would say that, to create a private server, you need to be very highly skilled in reverse engineering of software. If OP (DaddyEso) has those skills, then this might be a fun year-long project. If OP is looking for resources to do that work, I think that would be very hard to find, because people with those skills generally have more rewarding projects to choose from.

    I think this is the problem. Trying to use MySQL as a messaging layer is extremely inefficient. Cheap web hosts have thousands of people sharing the database, and trying to add more queries will not work. I don't think bandwidth is your main problem.

    First, you'll want to use a real web host. You can use Amazon Lightsail for $10/month, or Linode for $10/month, or something like that. You will have enough CPU power and memory to run small servers on those hosts under your own control. You will also get enough network bandwidth that you don't need to worry about that as a limitation.

    Second, you may be able to get to 10 players on such a server using the mechanism you have; you may even be able to throw more money at it and get to 100 players using MySQL (at potentially hundreds of dollars per month per server.) But that's a waste of money. You'll want to use a server that doesn't need a database to answer most questions. The way to make that happen is to use a server stack where the server has memory and persistence. PHP doesn't do this, but servers like Node.js or golang do (as do Java, C++, C#, and other such languages.) You can also get persistent servers in Python. You can still use HTTP for your protocol -- it's not very efficient, but it can be made to work. (Websockets would be better.) But make the queries go to data in RAM, and respond based on data in RAM, rather than the database. Only read from the database when the player first logs in, and write back to the database when data that really matters changes (inventory changes, trades, experience gains, those kinds of things.) This will keep the database much less overloaded.

    I doubt this is accurate information. Each web request will have more headers than just the request URL, and each response will have a lot more headers than just the payload data. And I've never seen a cheap web host that intentionally limits bandwidth per user, although they sure do oversubscribe their network interfaces, so how much data you can get through depends a lot on how busy other users on the same host are. I highly recommend starting with a virtual private server from Lightsail, Linode, Dreamhost VPS, or Interserver; you can scale up from there.
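The keep-state-in-RAM, write-back-on-meaningful-change idea above can be sketched roughly as follows. This is a toy sketch, not a real server: the database calls are stubbed out, and names like `World` and `flushDirty` are made up for illustration.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

// Authoritative player state lives in RAM; the database only sees logins
// and changes that "really matter" (gold, XP), not every read.
struct PlayerState {
    int64_t xp = 0;
    int64_t gold = 0;
    bool dirty = false;  // needs a database write-back
};

class World {
public:
    // On login, load the record once from the database (stubbed here).
    void login(const std::string& name) { players_[name] = PlayerState{}; }

    // Reads are answered entirely from RAM; the database is never touched.
    int64_t gold(const std::string& name) { return players_[name].gold; }

    // A change that matters marks the record for write-back.
    void addGold(const std::string& name, int64_t amount) {
        players_[name].gold += amount;
        players_[name].dirty = true;
    }

    // Periodically flush only dirty records to the database (stubbed).
    // Returns how many records were written.
    int flushDirty() {
        int written = 0;
        for (auto& kv : players_) {
            if (kv.second.dirty) {
                // UPDATE players SET gold=..., xp=... WHERE name=...  (stub)
                kv.second.dirty = false;
                ++written;
            }
        }
        return written;
    }

private:
    std::unordered_map<std::string, PlayerState> players_;
};
```

The point of the `dirty` flag is that a hundred reads per second cost the database nothing; it only pays for the occasional write-back.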
  4. WindSlayer Private Server

    What does disappear is the will to enforce said copyright. However, if the companies shut down but the assets were bought by someone or some other company, then a current owner actually exists. If you're building this just for fun (not for profit) you may have less to worry about, but the best way to get an idea of what's a good or bad idea is to talk to a licensed legal professional in your jurisdiction. Regarding building a server: that may be simple if the game is straightforward, if the protocols are well documented, and if the game doesn't use encryption. If there is no existing server to compare to, if you're not good at reverse engineering, and if the game uses encryption that makes it hard to even figure out what it's trying to do on the network, you may have much bigger challenges.
  5. They use Unreal 3. I believe they run physics on server and client, send control inputs from players to all players, and send round-robin updates of the server simulation to the clients. This means that someone may fall out of sync for a little bit (if some input is missed,) but will be brought back into sync at the next full update. Physics updates aren't particularly expensive -- you need position, orientation, velocity, and spin. You could easily encode that as 3 bytes * 3 axes for position, and 2 bytes * 3 axes each for orientation, velocity, and spin: 27 bytes per entity. It doesn't matter if it's a car, or a person, or a missile, or a piece of debris (assuming you care at all about that debris.) Generally, you'd only send the position/orientation/velocity/spin for the main chassis, and let the wheels and other animated bits animate based on the local physics, without making things like suspension extension be exactly the same on all clients. Finally, regarding server load: the clients are calculating all those cars, too -- AND all the sound and rendering that the server doesn't have to deal with. Why do you think the server wouldn't be able to keep up?
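The 27-byte figure works out to simple range quantization. Here is a minimal sketch; the ranges (`kPosRange`) are made-up values for illustration, and a real game would pick them per map and per quantity:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical world range: positions assumed to lie in [-4096, +4096].
constexpr float kPosRange = 4096.0f;

// Map a float in [-range, +range] onto an unsigned integer with `bits` bits.
uint32_t quantize(float v, float range, int bits) {
    const uint32_t maxVal = (1u << bits) - 1u;
    double t = ((double)v + range) / (2.0 * range);  // normalize to [0, 1]
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return (uint32_t)std::llround(t * (double)maxVal);
}

float dequantize(uint32_t q, float range, int bits) {
    const uint32_t maxVal = (1u << bits) - 1u;
    return (float)(((double)q / (double)maxVal) * 2.0 * range - range);
}

// Byte budget per entity, as described above:
//   position:    3 axes * 3 bytes (24-bit)  =  9 bytes
//   orientation: 3 axes * 2 bytes (16-bit)  =  6 bytes
//   velocity:    3 axes * 2 bytes (16-bit)  =  6 bytes
//   spin:        3 axes * 2 bytes (16-bit)  =  6 bytes
//                                    total  = 27 bytes
```

With 24 bits over an 8192-unit range, the quantization step is about half a millimeter, which is far below anything a player would notice.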
  6. You need to send a flow of state updates anyway, because you need to put the clients back in sync when they de-sync, even if they simulate. Only in the lockstep deterministic model do you get away with sending less. The only question is how much / how often you send the state updates for each entity, and how big the entity state is. Regarding talking about "game objects," that's fine as far as it goes, but you will also need to talk about "properties" of those objects, "serialization" for those properties, and "change notifications" (or "diffs") of those properties. Once you have that, you can build a reasonably efficient representation of what actually updates. It also matters whether you use TCP or UDP. If you use TCP, you can assume that the player has seen everything you send, although sometimes with a longer delay. If you use UDP, you can assume that players will see updates reasonably quickly, but may miss some of them, so you need some kind of acknowledgment (and perhaps negative-acknowledgment) scheme to tell the sender what the receiver actually got and didn't get.
  7. What do you mean by "network heavy"? What kinds of properties change about your game entities; how many entities are there; and what other mechanism would use much less data? Yes, all networked games know how to send property updates for their entities, and how to send the initial game conditions when a player joins. For games that don't support late drop-ins, everyone is just told "here's where the players are, and here's the level file you load from disk." For any game that supports late joining, the state of any entity that changed (or spawned, or died) compared to the initial level file will have to be sent on join. No way around that!

In general, game networking engines keep track of which player has seen which changes to which objects, and on every network tick (which is typically between once a second and 100 times a second, depending on the game) the engine will look at all the un-acknowledged changes for a particular player, select those that are "most urgent," and pack those into a packet that it queues to the player. An update that has been sent, but not yet acknowledged, may be given less urgency for a little bit, but if it remains un-acknowledged, it may pop up in priority again. Typically, priority for a particular update for a particular player will be based on what that player is close to, who/what that player is interacting with, how big/scary/dangerous the entity is, and so forth, in addition to how long ago the player last got an update for the entity. If you detect that the player suffers packet loss, you can assume that you are sending too much data, and reduce how often you generate-and-send packets, and also increase the tolerated time between entity updates. You can also introduce a "view distance," where entities too far away aren't shown to the player, and reduce this distance to reduce the amount of bandwidth consumed. Similarly, if you detect that there's no packet loss, there may be more network bandwidth available, and if you're not already sending everything you want at the frequency you want, you could turn up the frequency/size a little bit.

There are other models. Most importantly, the "lockstep" model, where players only send inputs, and the input from all players for step X is received on the server before the simulation is allowed to advance to that step. Similarly, each client must run the same simulation with exactly the same (deterministic) math, and will wait until it has the inputs from all players before it advances the simulation. Command latency will be high, and is usually hidden through some kind of "yes, sir!" acknowledgement animation (hence why this model is most common for real-time strategy games, though it also works for RPGs, turn-based games, and so forth.) Late joining (and dropping) in a deterministic game is often quite disruptive, and these games tend to have more lag around those types of events, not to mention the command latency involved, so this is not necessarily any better or less bad than the other models.
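The select-most-urgent-updates-per-tick loop described above might look roughly like this. The `PendingUpdate` fields and the priority formula (base priority plus time since last send) are hypothetical simplifications; real engines tune these curves heavily:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// One pending, un-acknowledged change for some entity, from one player's view.
struct PendingUpdate {
    uint32_t entityId;
    float    basePriority;      // closeness/importance heuristic (made up)
    float    secondsSinceSent;  // grows until the update is acknowledged
    uint32_t sizeBytes;         // serialized size of this entity's state
};

// Pick the most urgent updates that fit in one packet's byte budget.
// Urgency is a simple sum here; unsent urgent updates age upward until
// they win a slot. Greedy fill: smaller updates may slip in after a
// large one no longer fits.
std::vector<uint32_t> selectForPacket(std::vector<PendingUpdate> pending,
                                      uint32_t budgetBytes) {
    std::sort(pending.begin(), pending.end(),
              [](const PendingUpdate& a, const PendingUpdate& b) {
                  return a.basePriority + a.secondsSinceSent >
                         b.basePriority + b.secondsSinceSent;
              });
    std::vector<uint32_t> chosen;
    uint32_t used = 0;
    for (const auto& u : pending) {
        if (used + u.sizeBytes <= budgetBytes) {
            chosen.push_back(u.entityId);
            used += u.sizeBytes;
        }
    }
    return chosen;
}
```

A real implementation would also lower the urgency of updates that were just sent (but not yet acknowledged), and drop updates that have been superseded by a newer change to the same property.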
  8. Yes, you can run whatever simulations you want at whatever step rates you want. They will not be deterministic if the rates differ, though. And a player running at 40 Hz may be able to just barely jump onto a ledge where the 20 Hz version of the simulation barely misses, causing a fairly noticeable gameplay glitch. If the server runs at 20 Hz, there's no benefit to sending packets more often than that. You say you want to "save on server CPU" -- what measurements do you have showing that you are in any way limited on server CPU throughput right now? If your game works better with 40 Hz simulation on both sides, perhaps it would be useful to just run at that rate, and only worry about a lower rate if you ever measure and find that it's actually a problem?
  9. You need to send the latest time step from the server to each client somewhat frequently. If the client finds that it is about to step ahead of the server, it should delay for a small amount of time, and then resume regular ticking. Also: you cannot, in general, assume that JavaScript will be fully deterministic if you use floating-point numbers. And, it turns out, in JavaScript all numbers are floating point. So it's only deterministic if you don't use numbers.
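The don't-step-ahead-of-the-server rule can be sketched as a small helper. The `maxCatchUp` cap is a made-up tuning knob, there to stop a client that fell far behind from spiraling into ever-longer catch-up frames:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// How many fixed-rate simulation ticks the client may advance right now,
// given the latest tick number reported by the server. The client never
// steps ahead of the server; it simply waits for the next server report.
uint32_t ticksToAdvance(uint32_t clientTick, uint32_t lastServerTick,
                        uint32_t maxCatchUp = 4) {
    if (clientTick >= lastServerTick) return 0;  // would run ahead: wait
    // Behind the server: catch up, but capped per frame.
    return std::min(lastServerTick - clientTick, maxCatchUp);
}
```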
  10. Unreal Carrying data over from PC to mobile

    If you use an existing API (Google Play services, Apple Game Center, Facebook Game Room, Steam, or others) then you use the SDK that comes with that API. The SDK will also come with a bunch of examples and documentation showing how to use it. If you end up building your own, then you'll typically build a web service layer that can receive requests from games, log in players (using cookies or session tokens), and store/retrieve data -- and probably also sanity-check the data, to try to catch some cheaters. The specifics depend on the particular platform you end up choosing. Without knowing what you want to accomplish in more detail, and what specific SDK you're using, it's hard to give more specific advice. Get the SDK, try the samples, read the documentation, and then ask more if there's something in particular you don't understand!
  11. How do you handle it if a grenade explodes? Everyone who has a reference to that grenade needs to remove that reference. It's the same problem. Either option works, and each has benefits and drawbacks. If you don't do a lot of direct object/object interaction, the ID may be easier. If you do a lot of objects talking to objects, then a cached pointer is likely better.
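As a sketch of the ID option: a registry hands out integer IDs, and a lookup after the grenade explodes simply returns null instead of leaving a dangling pointer. The names here (`EntityRegistry`, `Grenade`) are made up for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <unordered_map>

struct Grenade { int damage = 50; };

// ID-based references: other objects store a plain uint32_t, and look the
// entity up each time they need it. After removal, find() returns nullptr,
// so stale references fail safely instead of dangling.
class EntityRegistry {
public:
    uint32_t add(std::unique_ptr<Grenade> g) {
        uint32_t id = nextId_++;
        entities_[id] = std::move(g);
        return id;
    }
    Grenade* find(uint32_t id) {
        auto it = entities_.find(id);
        return it == entities_.end() ? nullptr : it->second.get();
    }
    void remove(uint32_t id) { entities_.erase(id); }

private:
    uint32_t nextId_ = 1;
    std::unordered_map<uint32_t, std::unique_ptr<Grenade>> entities_;
};
```

The trade-off the post describes is exactly this lookup: with heavy object-to-object interaction, the hash lookup per access starts to cost, and a cached pointer (with some invalidation scheme) wins.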
  12. ZLib compression from UE4 to Boost ?

    If you "encode into ansi" then any binary byte that gets encoded into more than one output character will be bigger than it needs to be. You should send raw binary data. It sounds in your posts above as if you're trying to wedge your binary data into a std::string, and encoding as ansi/text/characters/utf8 to somehow avoid problems with std::string representations. This is the wrong way around. Marshal binary data into a std::vector<> instead. (Or some other binary buffer class of your choice.)
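A minimal sketch of marshaling into a binary buffer instead of a string. The length-prefix framing here is an assumed wire format for illustration, not anything from Boost or UE4; the point is that raw bytes go into a `std::vector<char>` unchanged, with no text-encoding step:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Append a 32-bit length prefix plus raw payload bytes to a binary buffer.
// The length is written byte-by-byte as little-endian, so the wire format
// does not depend on host byte order. Zero bytes and high bytes in the
// payload are perfectly fine -- nothing gets re-encoded or expanded.
void marshalBlob(std::vector<char>& out, const void* data, uint32_t len) {
    out.push_back((char)(len & 0xFF));
    out.push_back((char)((len >> 8) & 0xFF));
    out.push_back((char)((len >> 16) & 0xFF));
    out.push_back((char)((len >> 24) & 0xFF));
    const char* bytes = (const char*)data;
    out.insert(out.end(), bytes, bytes + len);
}
```

The same buffer can be handed directly to zlib or a socket send call; there is never a point where a '\0' byte or a non-ASCII byte causes trouble, because nothing ever treats the data as text.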
  13. ZLib compression from UE4 to Boost ?

    It still seems like you're needlessly adding size to your data.
  14. Creating a server for a shut down game

    Not unless the developers built some kind of test mode into the game. That's unlikely.
  15. ZLib compression from UE4 to Boost ?

    Good that you got it working, but that's still doing it wrong. std::string is not for binary data. You should be using std::vector<char> or some other binary buffer format rather than std::string. Encoding to ASCII neutralizes the entire idea of compression, because it expands the representation of the data again before it goes over the wire.