It definitely goes into some of the networking concepts, algorithms, and formats they used for the game, but some aspects weren't immediately clear to me. I was wondering if someone could provide some additional insight.
1. They mention the server ticks at 10 fps. Do they mean that network updates are sent 10 times per second, or that the server simulation loop itself runs at 10 fps (with the network update coupled to it)?
2. I think the concept of "curves" is pretty cool... when applied to positions, for example, they only send *new* keyframes and avoid sending redundant data. However, after reading I wasn't sure whether they also generate new keyframes at 10 fps, or at a different rate. I know special events like collisions can also generate keyframes, but I didn't understand whether keyframe creation generally happens on each iteration of the game loop.
3. How does a client smoothly go from one position to the next? The server generates keyframes, but by the time the client receives them, they're old. Let's say the last packet on the client was generated by the server at t = 0.5 and received at t = 0.6. When the client's game loop runs, t = 0.67. How does the client compute the position at 0.67? Is it just extrapolating using the line equation (where the "true" line segment is from t = 0.4 to t = 0.5)?
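To make point 3 concrete, here is a minimal sketch of what I imagine the client doing, assuming keyframes are (server_time, value) pairs and the curve is piecewise linear between them. The function and the numbers are mine, not the article's:

```python
# Toy sketch (my assumption, not the article's code): extend the line
# through the two most recent keyframes out to the client's current time.
def extrapolate(k0, k1, t):
    t0, v0 = k0
    t1, v1 = k1
    slope = (v1 - v0) / (t1 - t0)
    return v1 + slope * (t - t1)

# Last known segment ran from t = 0.4 to t = 0.5 (hypothetical positions);
# the client's game loop runs at t = 0.67, past the newest keyframe.
pos = extrapolate((0.4, 10.0), (0.5, 12.0), 0.67)  # continues at 20 units/s
```

So the client would keep sliding the unit along that line until the next keyframe arrives and corrects it.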
4. What if a unit stops? Based on the client's keyframes, the unit would continue to move. I'm thinking they can send at most one redundant keyframe (so the curve is now a line with two identical y-values), or they could send a "stop" pulse curve (one of those immediate events they talk about). Thoughts?
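Sketching the redundant-keyframe idea from point 4 (my own toy code, again assuming keyframes are (server_time, value) pairs): if the server repeats the last value at a later time, a client-side sampler can detect the flat segment and stop extrapolating.

```python
# Hypothetical client-side sampler: keyframes is a time-sorted list of
# (server_time, value) pairs with at least two entries.
def sample_position(keyframes, t):
    (t0, v0), (t1, v1) = keyframes[-2], keyframes[-1]
    if v0 == v1:  # redundant keyframe: the unit has stopped
        return v1
    return v1 + (v1 - v0) / (t1 - t0) * (t - t1)  # linear extrapolation

moving  = [(0.4, 10.0), (0.5, 12.0)]
stopped = [(0.4, 10.0), (0.5, 12.0), (0.6, 12.0)]
sample_position(stopped, 0.95)  # holds at 12.0 instead of drifting onward
```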
5. Speaking of pulse curves (instantaneous events), it seems like the client would receive N of these during a server update and process all of them on the next tick (they're instantaneous on the server, but already old by the time the client receives them). Or is the server somehow generating these in advance? For example, when a unit is about to fire, the server knows the unit fires two shots, so it could send both shot events right away. But the first shot would still be old, wouldn't it? And, depending on lag and the time between shots, the second one might be old too.
6. Based on the previous "shooting" example, how would health stay in sync with the attack? If health is a separate curve, the health packets and the shot packets might reach the client at different times (or the client might play the shot animation after it has already applied the health keyframe), so the health number and the visual attack won't match up. Would they wait and send the health keyframe along with the attack keyframe? Or would they send both in advance, making the client responsible for "playing" them at the same time?
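Partly answering my own question here: one technique I've read about elsewhere (not necessarily what this game does) is to render a fixed delay behind the server and queue every incoming event by its server timestamp, so anything stamped with the same time plays on the same client frame regardless of which packet it arrived in. A rough sketch; the PlaybackBuffer class and the 0.2 s delay are made up:

```python
import heapq

class PlaybackBuffer:
    """Queues events by server timestamp and releases them once the
    client's (deliberately delayed) render clock catches up. Sketch only."""
    def __init__(self, delay=0.2):   # assumed tunable interpolation delay
        self.delay = delay
        self.events = []             # min-heap of (server_time, event)

    def receive(self, server_time, event):
        heapq.heappush(self.events, (server_time, event))

    def poll(self, estimated_server_time):
        # Render `delay` seconds in the past; return all due events in order.
        render_time = estimated_server_time - self.delay
        due = []
        while self.events and self.events[0][0] <= render_time:
            due.append(heapq.heappop(self.events)[1])
        return due

buf = PlaybackBuffer(delay=0.2)
buf.receive(0.50, "shot fired")   # the two packets may arrive separately,
buf.receive(0.50, "health -10")   # but they share the server timestamp
buf.poll(0.65)   # nothing yet: the delayed render clock is only at ~0.45
buf.poll(0.72)   # both events come out together on this frame
```

The cost is that everything the client sees is slightly in the past, which seems to match how the question of "old" pulse events would get absorbed.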
The original post is great, I'm just fairly new to networking (but fascinated by it) and would appreciate any help in making more sense out of this.
Overall, I would say there is no level of knowledge *required*. You might even learn something new (e.g. I got started with Unity at a jam). But I think it's important to communicate this to potential teammates. Usually there's a bit of mingling during the first 30-60 minutes of a jam, and you can use that time to find a team you fit into.
+1 on ByteTroll's request for topics like advanced rendering techniques... sometimes these are covered in different forum posts, but it would be great to see articles.
Other theme ideas:
-native code for mobile
-a specific OS
-build systems (assets/data/code)
-use of scripting languages
-script language compiler implementation
-scripting integration with game engines
-making a game in a tool not meant for games (like Excel or PowerPoint)
-"three-chord song" (many musicians write songs that just use different combinations of the same three chords... perhaps three simple mechanics could be mixed and matched to make a great game that seems complex, but is really made up of just those three basic elements)
-animation (UV, vertex, skeletal, etc.)
-texturing (programmers could chime in with procedural texture generation or articles about packing textures into an atlas)
-specific game genres (racing, shooters, etc)
-postmortem month (what went right/wrong/etc during someone's project)
-hardware (something you tried with Sifteo Cubes or Oculus Rift)
-motion control (gameplay opportunities, filtering motion input, etc)
What problems do you think someone would come across?
Unsupported features, losing access to a useful set of Java UI elements, possibility of getting into trouble with native memory management, etc
Are there any apps that somewhat bizarrely use this approach?
I bet there are a few utility/enterprise apps that use the NDK, but I've only seen games use it.
Is there really any speed benefit? (I would normally expect there to be, but apparently there are some weird quirks with how "native" code runs on Android)
I would love to measure this at some point. Some algorithms probably benefit more than others. I think there is a speed increase, but it may not be huge. A big reason why people use the NDK is ease of porting (not having to translate to Java from C/C++).
Wouldn't you lose support for x86 and MIPS handsets?