hplus0603
Member since: 03 Jun 2003
Group: Moderators
Member title: Moderator - Multiplayer and Network Programming
Active posts: 11,205
Location: Redwood City, CA
Posted by hplus0603 on 08 April 2015 - 10:21 AM
What tool do you use to test whether it "works" or not?
What does Wireshark say about outgoing UDP packets while you're trying to broadcast?
Also, there are more examples of this, such as: http://metronuggets.com/2013/03/18/how-to-send-and-receive-a-udp-broadcast-in-windows-phone-8-and-win8/
If you run this code, does that work? If so, what's different about your own code?
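For comparison, here is a minimal UDP broadcast sender in Python rather than the Windows Phone APIs from the linked article; the port number and the `addr` override are illustrative, but the `SO_BROADCAST` option is the detail that most often bites people:

```python
import socket

def send_broadcast(message: bytes, port: int, addr: str = "255.255.255.255") -> int:
    """Send a single UDP datagram; returns the number of bytes sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Broadcast is disabled by default; sending to 255.255.255.255
    # without this option fails on most platforms.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    try:
        return sock.sendto(message, (addr, port))
    finally:
        sock.close()
```

If an equivalent of this works on your network but your own code does not, diff what the two put on the wire in Wireshark.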
Posted by hplus0603 on 08 April 2015 - 10:17 AM
What are your goals?
And then, given those goals, how are you trying to accomplish those?
I take it from your post that your client is written in Unity3D. Is your server running Unity3D or not?
Posted by hplus0603 on 03 April 2015 - 12:02 PM
In general, to avoid jitter, you must either accept a guess about the future that will sometimes be wrong, or interpolate between two known-old positions. In this case, that means that when you receive the position that's 160 ms old, your entity will just have reached the position that is 320 ms old, and will start moving toward the 160 ms old position over the next 160 ms.
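Concretely, snapshot interpolation with a 160 ms render delay can be sketched like this (a minimal sketch, not engine code; positions are 1-D for clarity):

```python
def interpolate(pos_old: float, pos_new: float,
                t_old: float, t_new: float, t_render: float) -> float:
    """Linearly interpolate between two known-old snapshots.

    t_render deliberately lags the newest snapshot by one update
    interval, so we are always blending between two positions we
    actually received, never guessing about the future.
    """
    if t_render <= t_old:
        return pos_old
    if t_render >= t_new:
        return pos_new
    alpha = (t_render - t_old) / (t_new - t_old)
    return pos_old + alpha * (pos_new - pos_old)
```

The cost is that everything you display is one snapshot interval in the past; the benefit is that displayed motion never has to be corrected.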
Posted by hplus0603 on 02 April 2015 - 05:46 PM
To give you better information, we need to know:
- what kind of game server? Is this for an existing game, like Counter-Strike or Minecraft, or a game you wrote?
- is it for matchmaking, scoreboards, community, etc?
- is it for gameplay, as in a MMO, or for "secure" game servers?
- how many players?
- how much can it cost to operate per concurrent player?
- what country are you in, and does the country of the game server matter?
- do you prefer Mac, Windows, Linux, BSD, ...
... the list goes on.
Posted by hplus0603 on 26 March 2015 - 09:15 AM
Are you sure about status field? Isn't HTTP code enough?
If the data is transferred outside of an HTTP-aware wrapper, then the HTTP status will be lost.
This can happen in a few ways:
1) Inside the application, it's common to pass along a dict<string, object> or similar representing the decoded body
2) If you use on-disk caching separate from HTTP caching (which may be necessary when using out-of-band invalidation channels), you're reading a file, not an HTTP server, and would have to somehow synthesize an HTTP response if your unit of information is at the HTTP level
3) If you switch to other transports, such as message queues or push notifications, there are no HTTP status codes (see for example Facebook Mobile's use of MQTT)
Posted by hplus0603 on 25 March 2015 - 09:53 AM
You can treat a missing version header as "serve the latest."
Separately, for telling which client is making the request, you can look at the User-Agent: header -- you can arrange for your game to send a unique user agent which might be gamename/buildnumber, for example.
Also, there are a few things that you want to nail down ahead of time, and make them ALWAYS TRUE.
For example, if you use JSON, you may want the response to always contain the following fields:
status: "success" or "failure"
error: integer (if failure)
message: human-readable message (if failure)
data: actual-payload-data
Then wrap all the HTTP clients you use in a wrapper that can tell success from failure, and deliver the decoded payload to the application. A central library that does things right, and whose behavior you can easily reason about, goes a long way once the time comes to make changes.
You probably also want to make sure that your HTTP client always recognizes 300-level redirects and the Location: header, as well as a well-defined code (like 410) for when you want to tell a client you're no longer prepared to serve it. Friendly user-facing errors for 503 (temporarily unavailable) would be good, too.
Separately, you should build the client library to not blow up if the JSON fields come in a different order at different times. You should also not blow up if there are extra fields that you don't recognize -- just ignore them. Finally, if you expect a field, and it isn't there, you shouldn't blow up, but instead substitute some empty/null value in the decoded value; this may cause the game to detect an error, but should not throw an exception or cause a crash.
Using a version specifier is somewhat useful if your API evolves slowly. An alternative is to just tack the version number onto each service. https://api.yoursite.com/user3/ for users-version-3, and https://api.yoursite.com/purchase8/ for purchases-version-8. (This is similar to how we have interface names like IDirect3D12 -- it's a well-tested method that works.)
Oh, and always use HTTPS if at all possible.
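A minimal sketch of the envelope-decoding wrapper described above (Python for illustration; the field names match the envelope, the return shape is an assumption):

```python
import json

def decode_envelope(body: str):
    """Decode a JSON envelope, tolerating missing and extra fields.

    Returns (ok, payload, error_message). Unknown fields are ignored,
    and missing fields get empty/null substitutes instead of raising,
    so a malformed server response surfaces as a game-level error
    rather than a crash.
    """
    try:
        doc = json.loads(body)
    except ValueError:
        return (False, None, "malformed response body")
    if not isinstance(doc, dict):
        return (False, None, "unexpected response shape")
    ok = doc.get("status") == "success"
    payload = doc.get("data")          # None if absent, per the guidance above
    message = doc.get("message", "")   # empty string if absent
    return (ok, payload, message)
```

Routing every HTTP response through one function like this is what makes the behavior easy to reason about when the API changes later.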
Posted by hplus0603 on 22 March 2015 - 12:27 PM
So, I wanted to send data 4096 byte by 4096. Should I keep that method or any better?
No, that's fine. Also, you're likely to have less than a full buffer's worth of data to send each time you run your network tick, unless the user is in a very crowded area.
What should I do if WSASend returns with these errors
Wait until the next network tick and try again.
can WSASend operation complete with transferred bytes zero in any situation?
I suppose if you called on it to transfer zero bytes, it could complete successfully with zero bytes.
I would expect that if the connection was closed on the other end, you'd get an error, rather than a completion with zero.
If you don't call it with an argument of zero at any time, it's probably OK to treat this as an error condition.
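The same "buffer full, try again next tick" pattern shows up with POSIX-style non-blocking sockets, where Python raises BlockingIOError where Winsock would report WSAEWOULDBLOCK (a sketch of the pattern, not of WSASend itself):

```python
import socket

def try_send(sock: socket.socket, data: bytes) -> int:
    """Attempt a non-blocking send.

    Returns the number of bytes the kernel accepted (possibly fewer
    than len(data)), or 0 if the send buffer is full -- in which case
    the caller simply retries on the next network tick.
    """
    try:
        return sock.send(data)
    except BlockingIOError:
        # Equivalent of WSAEWOULDBLOCK: nothing accepted this tick.
        return 0
```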
Posted by hplus0603 on 21 March 2015 - 08:19 PM
You don't need to "first build the packet, and then copy into the ring buffer" -- just build the packet straight into the ring buffer.
Posted by hplus0603 on 21 March 2015 - 03:20 PM
First, there is a queue of "entities I need to update the player about." This is something like a list or set or whatever of pointers-to-entities, or, perhaps, ids-of-entities. These are dynamically allocated, but each record has a fixed small size, so allocation is efficient. Typical size of a record: 16 bytes (8-byte "next" pointer, 8-byte "entity" pointer.)
Second, there is a queue of "data to actually write on the socket." This is something like one fixed-size array of bytes per player, often used like a ring buffer. This is pre-allocated once the player connects, so it's contiguous and a known quantity.
The "update" loop for a player then looks at how much available space there is in the outgoing buffer, and if there's enough, takes some of the queued entities, generates updates for them into the outgoing ring buffer, and de-queues those entities from the update queue (or puts them last, or whatever mechanism you use.) Typical amount generated per tick: 1024-8192 bytes (too much may cause a lot of latency building up in the queue for slow connections)
Finally, the "send" loop for a player will, each time the socket comes back as writable, take the bytes between the current head of the ring buffer and the end of the pending data, and pass them to send(). send() will dequeue as much as it can fit in local kernel buffers, but no more, and tell you how much that was. You remove that much from the ring buffer.
Note that, when you send data, you send many commands/updates in a single packet. You want to pack as much as possible in a single call to send().
If you're using UDP, then you remove the ring buffer. Instead, when it comes time to send a packet, you generate enough updates to fill whatever packet space you have. Let's say you make the maximum UDP datagram size 1200 bytes; this means you attempt to fill 1200 bytes of updates from the head of the queue (minus whatever protocol overhead you're using for sequence numbers or whatnot.) You may also want to put the entity update links into an "outstanding updates" queue if you receive acks from the other end. This "generate data and call send()" happens in one go, and thus you only need a single static buffer that is the size of the maximal packet size, to share between all possible senders.
When there are 300 players in one area, you don't generate data for 300 players at once. Instead, you enqueue 300 players' worth of entity updates into the list of entities, and generate 1200 bytes at a time, and repeat this until there are no entities to update. (Although you will always have entities to update unless the player is alone in the world :-)
Also, when a socket is sendable, it means that a call to send() will not block; it will dequeue your data and put it in some kernel/network buffer. Thus, you don't ever need multiple outstanding send requests. If you use overlapped I/O, then you will typically keep one overlapped struct and one buffer per player, pre-allocated, just like the ring buffer in the TCP case (and, for overlapped I/O with TCP, that's exactly what you will be using.)
Posted by hplus0603 on 21 March 2015 - 03:10 PM
One is your physical networking topology: Does each of your players find other players on their own, or does the traffic go through a central server?
The second is your gaming topology: Does each of your players send/receive updates to/from all other players, or is one player elected "host/leader/server" and receives/collates/broadcasts for all others, or do you have a central server doing that?
The typical solution (that is most common, most robust, fits most cases, etc) is:
- discovery of other players (matchmaking) and making sure their firewalls cooperate (NAT punch-through) is coordinated by a central server
- one player is assigned "host" of the game, and the other players connect to that player
- once the match is made and the game is started, the central matchmaker server doesn't see more traffic from the players
This is, at the game level, a client/server set-up, but at the physical networking layer, a hybrid between central-server, and clients-talking-to-clients, so from a physical networking point of view, it counts as being "peer to peer" in that the central server isn't involved in gameplay traffic.
Posted by hplus0603 on 19 March 2015 - 09:02 PM
You can connect from that Unity server to another program running on the same machine by simply listening on a known TCP port (such as 12345) and having the Unity game server connect to localhost:12345 using regular sockets. Or, if the operations you do are batch-like, perhaps use HTTP, even though it's just local traffic.
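A sketch of that local link in Python sockets (the port and the payload are made up; the companion program listens, the game server connects):

```python
import socket
import threading

def start_helper(port: int, received: list) -> threading.Thread:
    """The companion program: listen on a known local TCP port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)

    def run():
        conn, _ = srv.accept()
        data = b""
        while True:                      # read until the peer closes
            chunk = conn.recv(1024)
            if not chunk:
                break
            data += chunk
        received.append(data)
        conn.close()
        srv.close()

    t = threading.Thread(target=run)
    t.start()
    return t

def game_server_send(port: int, payload: bytes) -> None:
    """What the game server does: a plain socket to localhost."""
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", port))
    c.sendall(payload)
    c.close()
```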
Posted by hplus0603 on 19 March 2015 - 10:35 AM
The question that then arises is: There is latency in sending player data along. If I see you on my machine, and I fire at you, it may look like a clean hit on my machine, but on your machine, you had already turned left, and would not have been hit by a bullet fired in the particular direction I fired. The truth on the server is somewhere in between. How do you resolve this? There are three basic options:
1) Always use the server view, and display remote clients "behind time," and players will learn to "lead shots" compared to where other players are running. What everyone sees is actually things that happen. Cheating is not possible. (Other than local assists, like transparency hacks, aimbots, etc.)
2) The client sends the position and last-received time-stamp of the hit target to the server. The server puts the indicated target back to that point in time, essentially emulating what the client "would have seen," and indicates hit if the client is not lying. Cheating can be done by acting on "old data," some players used a network cable with on/off switch to buy themselves a few seconds when they needed it.
3) Forward-extrapolate the display of remote players. This means players will aim "better" as long as the forward extrapolation is correct, but it means that players will "miss" when the entity they are aiming at actually turned so the extrapolation was wrong. This can also be combined with making hit geometry larger the faster someone is running, as well as reducing the possible turning radius the faster someone is running. This shares many properties with option 1, but displays differently and thus looks "much better" 90% of the time, and "much worse" 10% of the time.
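Option 2 (server-side rewind) boils down to keeping a short per-entity position history on the server and indexing it by the client's reported timestamp. A sketch under those assumptions (all names illustrative; positions are whatever your engine uses):

```python
from bisect import bisect_right

class PositionHistory:
    """Recent (timestamp, position) samples kept per entity on the server."""

    def __init__(self, max_age: float = 1.0):
        self.times = []          # sorted timestamps
        self.positions = []      # positions, parallel to self.times
        self.max_age = max_age

    def record(self, t: float, pos) -> None:
        self.times.append(t)
        self.positions.append(pos)
        # Drop samples too old to be a plausible "what the client saw";
        # this also caps how far back a cheater can claim to be lagged.
        while self.times and self.times[0] < t - self.max_age:
            self.times.pop(0)
            self.positions.pop(0)

    def position_at(self, t: float):
        """Rewind: the most recent recorded position at or before t.
        Returns None if t predates the history (stale data or a lie)."""
        i = bisect_right(self.times, t)
        if i == 0:
            return None
        return self.positions[i - 1]
```

The max_age cutoff is exactly the knob that limits the "network cable with an on/off switch" trick from the description above.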
Posted by hplus0603 on 17 March 2015 - 09:54 AM
should I create a Roomclient connect it to the server, on the same machine
That's one of many possible system architectures. That could work fine.
Should each message be timed or do I send out a heartbeat
If your game sends continuous updates (user input, entity updates) then that's heartbeat enough -- when you haven't received any data, or made any progress in the protocol (whatever that means for your protocol) for X seconds, consider the client dead and drop it.
If your game is turn-based or asynchronous (and "rooms" are more like "chat rooms") then you probably want to send a heartbeat message if you haven't sent another message in, say, the last 5 seconds. If you've gone 15+ seconds without receiving anything, drop the client.
Adjust as necessary.
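For the turn-based case, the bookkeeping is just two timestamps per connection; a sketch using the 5/15-second thresholds above:

```python
HEARTBEAT_INTERVAL = 5.0    # send a heartbeat if we've been quiet this long
DROP_TIMEOUT = 15.0         # drop the peer if they've been silent this long

class ConnectionTimers:
    """Per-connection send/receive timestamps for heartbeat bookkeeping."""

    def __init__(self, now: float):
        self.last_sent = now
        self.last_received = now

    def on_send(self, now: float) -> None:
        self.last_sent = now

    def on_receive(self, now: float) -> None:
        self.last_received = now

    def should_heartbeat(self, now: float) -> bool:
        return now - self.last_sent >= HEARTBEAT_INTERVAL

    def should_drop(self, now: float) -> bool:
        return now - self.last_received >= DROP_TIMEOUT
```

Every outgoing message (heartbeat or not) calls on_send(), every incoming one calls on_receive(), and the main loop polls the two predicates each tick.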
Posted by hplus0603 on 16 March 2015 - 07:36 PM
Am I allowed to dos attack my own server
It's called "load testing" as long as you're doing it on the up-and-up. Also, don't do this with any kind of "shared host" or "virtual slice" host -- that would be bad for others sharing the same host.
Things like this do they attack the router and isp or my server app
A distributed denial of service (flood attack) is all about choking the narrowest pipe between you and the world. Typically, that will be between you and your ISP. Typically, the ISP and others using the ISP will also suffer. One thing ISPs can do is to temporarily cut off any traffic to your IP address from their upstream/peers.
A non-distributed denial of service typically exploits known problems in a service, such that a smaller number of connections can consume disproportionate amounts of resources. For example, if logging in consumes a lot of database resources, someone might write a script that just logs in, over and over, and thus make the database choke up and not let (many) legit users log in. Or they may send a very, very long network packet to your server, which, if there's a bug, makes it run out of memory and crash the process.
I was thinking of just a time out value when to drop the client
That's usually a good idea. That being said -- I hope you're using select(), or I/O completion ports (on Windows,) or epoll/kqueue (on UNIX,) rather than synchronous read() or recv() from sockets. A client connection is typically a buffer, a pointer to some client state, and a socket. Each time the socket is readable, you call recv() once into the buffer, trying to fill the buffer up. Then you inspect the buffer and see if there's a full packet in there -- if so, decode that packet, dispatch it, and look for another packet.
Finally, if you already have a login server, you could just spawn a new process for the room, and let the login server tunnel/forward the data to the room process, using the login server as your proxy. That way, you only need one port open to the internet.
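The recv-into-buffer-then-frame pattern described above, sketched with a simple 2-byte length prefix (the framing scheme itself is illustrative; use whatever your protocol defines):

```python
import struct

def extract_packets(buf: bytearray) -> list:
    """Pull every complete length-prefixed packet out of buf.

    Each recv() appends raw bytes to buf; a packet is a 2-byte
    big-endian length followed by that many payload bytes. Partial
    packets stay in buf until more data arrives, which is exactly
    why you 'inspect the buffer and look for another packet.'
    """
    packets = []
    while len(buf) >= 2:
        (length,) = struct.unpack_from(">H", buf, 0)
        if len(buf) < 2 + length:
            break                      # wait for the rest of this packet
        packets.append(bytes(buf[2:2 + length]))
        del buf[:2 + length]
    return packets
```

The caller keeps one such bytearray per client, appends whatever recv() returned, and dispatches everything extract_packets() hands back.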