

hplus0603

Member Since 03 Jun 2003
Offline Last Active Yesterday, 09:38 PM

#5132842 UDP HTML5 JavaScript no plug-in approach? Browser JavaScript application to C...

Posted by hplus0603 on 19 February 2014 - 08:50 PM

UDP is not well supported in current browsers.

Given that WebRTC can do "browser to browser," it can also do "browser to server"; you "just" have to build the server side to look browser-like on the network.
You could, for example, use the WebRTC C++ API in your server.


#5131781 Real-time multiplayer in a browser with node.js and HTML5 is a myth?

Posted by hplus0603 on 16 February 2014 - 01:30 PM

Super-dynamic languages like JavaScript (or PHP, or Python) can be highly productive while developing, but end up costing not only some performance (a LOT for PHP, a little for JavaScript) but also some maintainability.
Check out this link, about how a single missing "var" statement ruined the launch of a product: http://blog.safeshepherd.com/23/how-one-missing-var-ruined-our-launch/
Sadly, all too common in JavaScript projects (although using jshint helps to some extent: http://jshint.com/ )
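
The failure mode, in a made-up sketch (lookupUser and db.save are hypothetical names, not from that post):

    function handleRequest(req, res) {
        user = lookupUser(req);             // missing "var": "user" leaks onto the global object
        db.save(user, function () {         // by the time this callback runs, another request
            res.end("hello " + user.name);  // may have overwritten the shared global
        });
    }

    function handleRequestFixed(req, res) {
        var user = lookupUser(req);         // local to this call, as intended
        db.save(user, function () {
            res.end("hello " + user.name);
        });
    }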


#5131773 Keeping my water simulation in sync over the network.

Posted by hplus0603 on 16 February 2014 - 01:11 PM

Personally, I think quantized 16-bit values (0 == bottom allowed position, 65535 == top allowed position) would compress better. And even better if you first do one level of lifting.
Lifting is when you encode the first value, and then encode the delta to the next value. This will typically compress much better because you'll get a lot of small values as long as you have smooth changes.
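
Roughly this kind of encoder (just a sketch; assumes the heights arrive as floats between known min/max values):

    // Quantize each height into 16 bits between the allowed bottom/top, then "lift":
    // store the first quantized value, followed by deltas to each following value.
    // Smooth water gives lots of small deltas, which a compressor likes.
    function encodeHeights(heights, minH, maxH) {
        var out = new Int32Array(heights.length);  // deltas can be negative
        var prev = 0;
        for (var i = 0; i < heights.length; i++) {
            var q = Math.round((heights[i] - minH) / (maxH - minH) * 65535);
            q = Math.max(0, Math.min(65535, q));
            out[i] = (i === 0) ? q : q - prev;     // first value raw, then deltas
            prev = q;
        }
        return out;  // feed this to your compressor (zig-zag + varint pack works well)
    }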

Looking forward to hearing how it goes!


#5131600 Keeping my water simulation in sync over the network.

Posted by hplus0603 on 15 February 2014 - 01:30 PM

Sending a randomly selected subset of samples each frame would work to make sure everybody will, over time, see similar-ish things.
Given the chaotic nature of fluid dynamics, I expect there to be some "minimum threshold" of sent data below which you will not see much benefit in sync.
Also, you probably want to make sure that the updates are volume preserving -- when you get an update that you're off, that volume has to come from/go to the neighboring cells.
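
Something along these lines for the subset selection (a sketch; the flat heights array is an assumption about your data layout):

    // Each network tick, pick a random subset of cells and send (index, height).
    // Over time every cell gets refreshed without paying for the full grid each tick.
    // When applying on the receiver, push the volume difference into neighboring
    // cells so the update stays volume preserving.
    function buildUpdate(heights, samplesPerTick) {
        var update = [];
        for (var n = 0; n < samplesPerTick; n++) {
            var i = (Math.random() * heights.length) | 0;
            update.push({ index: i, height: heights[i] });
        }
        return update;
    }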

Another option is to use compression. The state of the sea is likely to compress very well (be locally smooth.) You could look into something like sending a grayscale JPG of the entire sea every once in a while. This will quantize the update somewhat, but also get everybody on the same page.
Another trick when using lossy compression for updates is to re-initialize the master simulation with the output of the compressed data, to make sure everyone starts from the same state.


#5131596 Number of Packets & Fairness

Posted by hplus0603 on 15 February 2014 - 01:23 PM

The game is not "fair" if someone has a better experience than someone else.
This goes for network connections, GPU speed, mouse responsiveness, player skill, ...

That being said, I think "choking" the connection is the wrong answer. If the connection can't keep up, then it can't keep up, and you'd better tell the player what the problem is and then drop the connection.


#5131443 Lockstep RTS: in between updates for higher visual framerate?

Posted by hplus0603 on 14 February 2014 - 07:36 PM

You can make a distinction between "displayed position" and "simulated position."

Specifically, you can update the "displayed position" each time you render (at whatever the render frame rate is,) but update the "simulated position" only during your lockstep update tick.

You can interpolate the display position between the previous and current simulated positions. This will add some display latency, but won't display anything un-truthful. This is great for RTS-es.
Or you can use a forward extrapolator, like the Entity Position Interpolation Code, to predict what the position/orientation will be for an object, and display to that, which will perhaps somewhat overshoot positions etc, but give the illusion of less display latency.
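
The interpolating version is only a few lines (a sketch with made-up names):

    // Interpolate the displayed position between the previous and current simulated
    // positions. "alpha" is 0 right after a simulation tick and approaches 1 just
    // before the next one, so the display trails the simulation by up to one tick.
    function displayPosition(prevPos, currPos, timeSinceTick, tickInterval) {
        var alpha = Math.min(timeSinceTick / tickInterval, 1);
        return {
            x: prevPos.x + (currPos.x - prevPos.x) * alpha,
            y: prevPos.y + (currPos.y - prevPos.y) * alpha
        };
    }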


#5131339 Real-time multiplayer in a browser with node.js and HTML5 is a myth?

Posted by hplus0603 on 14 February 2014 - 12:03 PM

OpenGL ES 2.0 compliant -> LLVM -> Emscripten -> Browser


FWIW, at work, we do just this for our newer and upcoming projects!
We actually have a C++ engine that targets native for mobile, and targets Emscripten for browsers.
There is a cost, both in development and runtime, for using Emscripten over using a native JS engine, but that cost is shrinking, and the cost of building two engines (one for native, one for JS) would have been *much* higher.


#5131077 Real-time multiplayer in a browser with node.js and HTML5 is a myth?

Posted by hplus0603 on 13 February 2014 - 11:32 AM

Many routers are configured to prioritize tcp over any form of UDP


Every now and then, I hear this, but I have not seen any actual data or evidence of this being the case.
UDP is used for voice-over-IP, which is a large, important technology for the backbone carriers (that carry telephony and internet data interchangeably.)
UDP is also needed for DNS look-ups, which happen before most TCP connections even start; a slow DNS resolver is a more visible source of "slowness" to a user than almost any other reason!
I would be highly surprised if it was general behavior on the internet to drop UDP and favor TCP.


#5130940 Real-time multiplayer in a browser with node.js and HTML5 is a myth?

Posted by hplus0603 on 12 February 2014 - 07:38 PM

World of Warcraft uses TCP. A bunch of real-time-strategy games use TCP. Whether you *need* UDP or not is dependent on your gameplay.

If your gameplay is twitch-based, first-person-shooter style, then no, TCP probably won't be a good choice.
If your gameplay can accept some command latency or sync latency, then TCP can give you much better compatibility than UDP, because you can run it in the browser (websockets, etc.)
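
On the browser side that looks something like this (the URL and message format are made up for illustration):

    // A TCP-based game connection over WebSockets, straight from browser JavaScript.
    var socket = new WebSocket("wss://game.example.com/play");   // hypothetical endpoint
    socket.onopen = function () {
        socket.send(JSON.stringify({ cmd: "join", room: "lobby" }));  // made-up protocol
    };
    socket.onmessage = function (event) {
        var msg = JSON.parse(event.data);
        // apply the server's state update to the local game here
    };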

For most games, I'd rather have 10,000 players that play something because it's fun and easily accessible, and sometimes complain about an occasional lag spike, than 10 players that can download and play an installable game for some particular platform.

Another reason to use UDP is if you need player-hosted servers, in which case the UDP NAT punch-through story is better (higher compatibility) than TCP.


#5129933 Continuous Asset Data Streaming - issues/problems with

Posted by hplus0603 on 08 February 2014 - 04:33 PM

I would hope the required thru-put might average closer to 1Mbps


You said what you're doing is on a scale that hasn't been seen before. Second Life streams user-generated content, and uses at least that amount of bandwidth to do it. Whether user-generated streaming content will be good enough for the masses (and attract enough users who are good at generating content!) is a real challenge.

OK, so if you support 1,000 simultaneous online players, your first bandwidth bill will be $10k. I hope you have the budget needed to get that off the ground!
(Also: If you're buying less bandwidth, there's a point where the per-megabit cost will be pretty high because of fixed overhead costs.)
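
The back-of-the-envelope behind that number, assuming a nominal $10 per Mbps per month (transit pricing varies a lot by provider and volume):

    var players = 1000;
    var mbpsPerPlayer = 1;               // the ~1 Mbps average streaming rate from above
    var dollarsPerMbpsMonth = 10;        // assumed transit price, not a quote
    var monthlyBill = players * mbpsPerPlayer * dollarsPerMbpsMonth;
    console.log(monthlyBill);            // 10000 -> roughly $10k/month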


#5127764 Question about OS for server hosting

Posted by hplus0603 on 31 January 2014 - 10:35 AM

a good chunk of your resources go to running the GUI


That's not actually true, unless by "resources" you mean "power to run the GPU."

even on Microsoft's Azure cloud service most instances run some kind of Linux


I think that's a misleading statement. That's like saying that, just because Linux has WINE, most Linux installations run some form of Windows. True, you can run Linux as guests on Azure, but the implementation and management is based on Hyper-V and Windows Server.

There are a large number of systems that use all Windows servers, from original EverQuest to Xbox Live to Stack Overflow. It is totally possible to run a distributed, all-Windows server farm, and if you're much more familiar with Windows than with Linux, and people-time is more precious than server-licenses, then that's clearly a possible route.

Personally, I would never do it; each time I've been involved with using Windows for any kind of server for real, it's ended in frustration and a port to Linux. But it is absolutely doable and possibly the right choice for the right team.


#5127560 Async design

Posted by hplus0603 on 30 January 2014 - 03:04 PM

Transparent message passing is the way to go


Except when it isn't.

and I am quite amased why not everyone has transitioned to the paradigm already.


I can think of many reasons, such as perhaps having tried it and it turns out that "transparency" always comes at a cost. I gave a few other reasons in my original response, such as when you have a need for authentication over networks. As I also said in my original response, ZeroMQ has done this for a long time, and there are successful systems built on top of that.

the moderator gives obvious BS advice


Specifics would be much more useful.


#5127227 Async design

Posted by hplus0603 on 29 January 2014 - 10:25 AM

Why would you use a polling loop for main? This means that, after I decide to quit, "nothing" may be happening for up to a second. I'd presume that, if you have a good queuing API, the main loop could do a blocking wait on a queue where you send it a "quit" message (rather than setting a boolean to false.) And if you don't have blocking waits, you could at least use the OS-specific mutex/condition variable/event to achieve the same thing.
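
In a language with real blocking primitives you'd wait on a condition variable; in node-style JavaScript the same idea is an awaited queue instead of a polled flag (a sketch; handleMessage is a hypothetical dispatcher):

    // The main loop awaits the next message instead of checking a "keepRunning"
    // boolean once per second, so a "quit" message takes effect immediately.
    function makeQueue() {
        var items = [], waiters = [];
        return {
            push: function (msg) {
                if (waiters.length) { waiters.shift()(msg); }
                else { items.push(msg); }
            },
            next: function () {
                if (items.length) { return Promise.resolve(items.shift()); }
                return new Promise(function (resolve) { waiters.push(resolve); });
            }
        };
    }

    async function mainLoop(queue) {
        for (;;) {
            var msg = await queue.next();    // wakes as soon as something arrives
            if (msg.type === "quit") break;  // no up-to-one-second shutdown delay
            handleMessage(msg);
        }
    }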

 

Other than that, mostly-transparent network/memory messaging APIs have some nice properties. ZeroMQ for example took that to heart and had significant success with it. The two main drawbacks, in my mind, are that sometimes, you NEED to know, for example when implementing and requiring authentication. And sometimes, you NEED to know, for performance reasons. As long as those don't get in the way, the code can be very nice and clean.




#5126354 securing online web based game - HTML5

Posted by hplus0603 on 25 January 2014 - 01:22 PM

The client is under the control of the enemy (the cheater,) so there is NOTHING you can do on the client to avoid cheating. Obfuscation may make it a little harder for a casual cheater, but a determined attacker will absolutely have no trouble breaking through that. They can read your code. They can even build their own client that sends their own packets. You must assume that the hardware device on the other end is un-trusted.

 

Encryption, such as SSL, when implemented correctly, can protect against a man in the middle. That is, if the attacker is located on the same open WiFi as an innocent player, or if the attacker has a data tap in some router on the Internet, then SSL will help. SSL will not help against an attacker who controls the machine that the code is running on.

 

And, if money, or things of real-world worth, are involved, the incentive to cheat goes from "fun toy" to "people are doing it for a living."

 

So, what is a poor poker server to do? The answer is to not tell the client anything that it shouldn't know. In a real poker game, I only see my own cards, and how many cards another player has, not what those cards are. I don't see what the next card in the deck will be. Thus, the client should not be told what the deck is, or what the other players' hands are, until a card is drawn, or a player reveals their cards.
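
Concretely, the per-client state the server sends might look like this (field names and values are invented for illustration):

    // What the server tells *this* client: its own cards, but only card counts
    // (and public bets) for everyone else. The deck never leaves the server.
    var stateForAlice = {
        you:    { name: "alice", hand: ["Ah", "Kd"], chips: 950 },
        others: [
            { name: "bob",   cardCount: 2, chips: 1200, bet: 50 },
            { name: "carol", cardCount: 2, chips: 800,  bet: 50 }
        ],
        board:  ["7s", "7h", "Qc"]   // community cards, public to everyone
    };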

 

The best way to make sure your server is secure, is to design the server with an open API. Make it clear that ANYONE can build a client that plays poker on your server. You may provide one that conveniently runs in a browser, but you should design the server interactions as an open API that anyone could connect to and send messages to. Once the server is implemented such that it doesn't matter who wrote the client that talks the API, and it doesn't matter who formulates the API requests, then you have achieved the first layer of security.

 

Once that's done, you may need to worry about people who use bots -- and people who use multiple bots, to gain a collusion advantage against other players in the same game. A first option might be to never assign two players with the same remote IP address to the same table/game. Another might be to tie players to accounts that cost money, which raises the bar to cheating by the cost of an account. Unfortunately, games typically want as many players as possible, and thus want the cost of entry as low as possible, which works against this particular anti-cheat design.




#5124724 Network Tick rates

Posted by hplus0603 on 18 January 2014 - 05:03 PM

I am assuming a fixed time step here. Let's call it 60 simulation steps per second.

 

If the client cannot simulate 60 times per second, the client cannot correctly partake in the simulation/game. It will have to detect this and tell the user it's falling behind and can't keep up, and drop the player from the game. This is one of the main reasons games have "minimum system requirements."

 

Now, simulation is typically not the heaviest part of the game (at least on the client,) so it's typical that simulation on a slow machine might take 20% of the machine resources, and simulation on a fast machine might take 5% of machine resources. It is typical that the rest of available resources are used for rendering. Let's say that, on the slow machine, a single frame will take 10% of machine resources to render. With 20% already going to simulation, 80% are left, so the slow machine will display at 8 fps. Let's say that, on the fast machine, a frame takes 1% of machine resources to render. With 95% of machine resources available, the fast machine will render at 95 fps.

 

Some games do not allow rendering faster than simulation. These games will instead sleep a little bit when they have both simulated a step and rendered a step and there's still time left before the next simulation step. That means the machine won't be using all resources, and thus running cooler, or longer battery life, or being able to do other things at the same time. Also, it is totally possible to overlap simulation with the actual GPU rendering of a complex scene, so the real-world analysis will be slightly more complex.

 

Note that, on the slow machine, simulation will be "bursty" -- by the time a frame is rendered, it will be several simulation steps behind, and will simulate all of those steps, one at a time, to catch up, before rendering the next frame. This is approximately equivalent to having a network connection that has one frame more of lag before input arrives from the server.
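
In JavaScript-ish pseudocode, that catch-up is the classic fixed-timestep loop (a sketch, not anyone's actual engine; simulateOneStep and render are hypothetical):

    // Run all pending simulation steps before rendering the next frame, so a slow
    // renderer falls behind in display, never in simulation.
    var STEP_MS = 1000 / 60;              // 60 simulation steps per second
    var simulatedUntil = performance.now();

    function frame(now) {
        while (now - simulatedUntil >= STEP_MS) {
            simulateOneStep();            // advances the game by exactly one tick
            simulatedUntil += STEP_MS;
        }
        render();                         // draws the most recent simulated state
        requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);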

 

When a client sends data to the server such that the server doesn't get it in time, the server will typically discard the data and tell the client that it's sending late. The client will then presumably adjust its clock and/or lag estimation so that it sends messages intended for timestep T somewhat earlier. This way, the system will automatically adjust to network jitter, frame rate delays, and other such problems. The server and client may want a maximum allowed delay (depending on gameplay,) and if the client ends up pushed out to that amount of delay, the client again does not meet the minimum requirements for the game, and has to be excluded from the game session.
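
One way to do that adjustment (a sketch; the feedback message shape is an assumption):

    // When the server reports a command arrived late, stamp future commands for a
    // slightly later tick (i.e. send them earlier relative to their deadline);
    // when there is plenty of slack, slowly give some of the lead back.
    var sendLeadTicks = 2;   // how far ahead of the current tick we stamp commands

    function onServerFeedback(report) {   // hypothetical shape: { late: bool, slackTicks: number }
        if (report.late) {
            sendLeadTicks += 1;                                 // back off quickly
        } else if (report.slackTicks > 3 && sendLeadTicks > 1) {
            sendLeadTicks -= 1;                                 // recover slowly, e.g. at most once per second
        }
    }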





