1) I redid all my cameras. Aside from GC, I have any number of other projects going. Little prototypes, testbeds, etc... For example, I've got a fledgling reboot of the ARPG version of GC that started this whole thing. Whenever I get sick of the turn-based nature of GC I work on that. (That happens quite a bit.) In all of these projects, I end up either reusing a hacked camera from something else or writing a new one. Since most of my projects tend to be third-person things (I hate the cramped feel of a first-person camera) it seemed to make sense to refactor all of my cameras and use a common setup. So that's what I did.
The new camera system is a lot easier to use than the various hacked cameras. It offers plenty of options to tweak it. The internals are more consistent and clean. The camera operates as a hierarchy of scene nodes, with each scene node serving a given purpose. You can specify orthographic or perspective, a scaling ratio for orthographic, the view angle above the horizon, the view angle around the vertical, whether or not these two angles are adjustable, a zoom distance, whether or not the camera is zoomable, etc...
The camera also has a number of customizable options, such as "soft" or "hard" tracking of the target object. The camera is designed to follow another object. In some applications, it is appropriate for the camera to lock solidly onto the target, such as in an ARPG like Diablo. In other applications, it is more appropriate for the camera to smoothly spring to the target location, such as when the target changes and an abrupt switch would be disorienting. This is what GC uses.
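The soft/hard distinction boils down to a simple per-frame update. Here's a minimal sketch (in Python for illustration, not the package's actual Lua code; the names are invented): hard tracking snaps the camera node to the target, while soft tracking springs a fraction of the remaining distance each frame.

```python
import math

def hard_track(cam_pos, target_pos):
    # "Hard" tracking: lock solidly onto the target, Diablo-style.
    return target_pos

def soft_track(cam_pos, target_pos, stiffness, dt):
    # "Soft" tracking: exponential spring toward the target. The
    # exp() form makes the smoothing framerate-independent.
    t = 1.0 - math.exp(-stiffness * dt)
    return tuple(c + (g - c) * t for c, g in zip(cam_pos, target_pos))
```

With a reasonable stiffness, the soft camera converges on the target over a handful of frames instead of snapping, which is what avoids the disorienting jump when the target changes.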
The camera has a special-purpose node in the chain that is the target of camera shake. By sending a special event, ShakeCamera, with three parameters (Shake Speed, Shake Magnitude, and Damping), you can trigger a camera shake that uses a random vector and a spring function to apply a damped oscillation to the shake node. Events such as explosions, earthquakes, and so forth can signal a shake, and the camera will respond accordingly.
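One common way to build that kind of shake is a sine oscillation along a random unit vector with an exponentially decaying amplitude. This sketch uses the three ShakeCamera parameter names from above, but the exact formula is my assumption, not GC's actual code:

```python
import math
import random

def random_direction(rng=random):
    # Uniform random unit vector for the shake axis (rejection sampling
    # inside the unit sphere, then normalize).
    while True:
        v = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        n = math.sqrt(sum(c * c for c in v))
        if 1e-6 < n <= 1.0:
            return tuple(c / n for c in v)

def shake_offset(elapsed, direction, speed, magnitude, damping):
    # Damped oscillation: amplitude decays exponentially with time,
    # so the shake node rings and settles back to rest.
    amp = magnitude * math.exp(-damping * elapsed)
    s = amp * math.sin(speed * elapsed)
    return tuple(d * s for d in direction)
```

Each frame, the offset is applied to the shake node's local position; once the amplitude decays below some epsilon, the shake can be deactivated.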
Also implemented is the ability to do a reverse raycast from the camera target toward the camera, testing against objects flagged with a special Solid flag and finding the nearest such intersection. The camera can be clamped to this nearest intersection, with a spring function to smoothly translate the camera back out to range once the intersection no longer occurs. This can be useful for over-the-shoulder third-person cameras, especially in tight places with lots of occlusion.
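The clamp logic itself is small once the engine's octree raycast has returned a hit distance. A sketch of the idea (function name and margin parameter are invented for this example): snap the camera in front of the nearest hit immediately, but spring back out smoothly when the ray is clear.

```python
import math

def clamp_zoom(desired_zoom, hit_distance, current_zoom, spring, dt,
               margin=0.1):
    # hit_distance is the nearest Solid intersection along the ray from
    # the target toward the camera, or None if the ray is clear.
    if hit_distance is not None and hit_distance < desired_zoom:
        # Clamp hard: pull the camera just inside the occluder so the
        # target is never hidden.
        return max(hit_distance - margin, 0.0)
    # No occlusion: spring smoothly back out toward the desired zoom.
    t = 1.0 - math.exp(-spring * dt)
    return current_zoom + (desired_zoom - current_zoom) * t
```

Clamping in instantly but springing out gradually avoids the camera popping when you back out of a doorway.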
The camera is implemented as an Urho3D component that is added to a scene node. Another component is added to any entity that wants to control the camera, with activation/deactivation functions to enable camera switching.
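The controller-component arrangement amounts to "at most one controller owns the camera at a time." A rough sketch of the pattern (class and method names are illustrative, not the actual Urho3D/Lua component API):

```python
class CameraController:
    # Added to any entity that wants to drive the camera.
    def __init__(self, name):
        self.name = name
        self.is_active = False

    def on_activate(self):
        self.is_active = True

    def on_deactivate(self):
        self.is_active = False

class CameraSystem:
    # Activating a new controller deactivates the previous one,
    # which is what makes clean camera switching possible.
    def __init__(self):
        self.active = None

    def activate(self, controller):
        if self.active is not None:
            self.active.on_deactivate()
        self.active = controller
        controller.on_activate()
```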
All of this is wrapped up in a highly customizable camera package that can be easily imported into any Lua Urho3D project. With the same camera code, I can create a camera system like World of Warcraft's (freely rotatable, intersection clipping, combined with a WASD-style character controller), Diablo 2's (orthographic, locked 30 degrees above the horizon and 45 around the vertical, constant zoom distance), or Torchlight 2's (perspective, locked angle above the horizon, locked around the vertical, zoomable), and so on.
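Those presets differ only in a handful of settings. Something like the following hypothetical config tables (the key names are invented, not the package's actual option names) captures the distinction:

```python
# Hypothetical preset tables for the camera styles mentioned above.

diablo2_style = {
    "orthographic": True,
    "pitch": 30.0,        # degrees above the horizon, locked
    "yaw": 45.0,          # degrees around the vertical, locked
    "rotatable": False,
    "zoomable": False,    # constant zoom distance
}

torchlight2_style = {
    "orthographic": False,  # perspective
    "rotatable": False,     # both angles locked
    "zoomable": True,
}

wow_style = {
    "orthographic": False,
    "rotatable": True,          # freely rotatable
    "zoomable": True,
    "clip_on_occlusion": True,  # reverse-raycast clamping
}
```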
2) Occluded object ghosting. If you've ever done an isometric, or isometric-ish, game then you've encountered the issue of objects being occluded by walls or other objects. Enemies hiding behind walls, treasure chests and items lying unseen behind other objects, and so forth. Some games choose to solve the problem by alpha-fading the occluding objects, usually with a progressive, soft fade-in and fade-out. Diablo does this. Other games choose instead to draw the objects such that they are visible through walls, usually by drawing them as some kind of outline or silhouette. This is the method that Torchlight 2 uses.
I spent some time implementing this latter setup:
I attempted it in two different fashions. The "obvious" means of doing it is to draw all solid world geometry first, then to draw the ghosts using materials that do not write depth and that use a depth function of GREATER. An additive blend mode makes the ghost stand out. Then the objects are drawn using their "normal" materials. However, if you are in an engine with pre-defined passes, it can be difficult to partition the geometry like this. I was able to implement it this way using two cameras in Urho3D, drawing the world and ghosts with the first camera, and the objects with the second. It "worked", but there were a couple of issues, chief among them being that shadows were calculated and drawn in the first camera only, so none of the objects drawn in the second camera were correctly shadowed.
The second method was to just draw the ghosts after all solid geometry was drawn, including the objects themselves. Of course, this means that the ghost will overdraw parts of the object itself; for example, when drawing the part of a ghost corresponding to an arm on the far side of the character, the arm's ghost will overwrite the nearer parts. To combat this, I apply a small negative bias value to the ghost's depth, which essentially offsets the ghost nearer to the camera, ensuring that the ghost only draws over pixels whose stored depth differs from the ghost's own by at least the bias amount. By choosing the depth bias appropriately for the size of the character and the zoom level, you can ensure that the ghost only overwrites things it is supposed to overwrite.
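The biased depth test can be modeled in a few lines. This sketch assumes a GREATER depth function (as in the two-camera approach above) and normalized depths that increase away from the camera; it is an illustration of the idea, not the engine's literal shader logic:

```python
def ghost_fragment_passes(ghost_depth, stored_depth, bias):
    # bias is negative: it pulls the ghost toward the camera, so the
    # stored depth must be nearer than the ghost by at least abs(bias)
    # before the ghost is allowed to draw. A wall well in front of the
    # character qualifies; the character's own nearby surfaces do not.
    return ghost_depth + bias > stored_depth
```

For example, with a bias of -0.02, a wall at depth 0.40 occluding a ghost at 0.50 passes (the ghost shows through the wall), while a near arm at 0.49 against a far-arm ghost at 0.50 fails (no self-overwrite).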
This method enables you to draw ghosts without the extra passes, but it can be finicky. If you choose an unsuitable depth bias, there can still be overwriting. Also, camera zoom or large objects can play hell with bias selection, since what might work for one particular frame might not work so well for another with a different depth range. Right now, I'm just using an experimentally determined depth bias with a camera whose zoom is constrained to a rather tight range, but it might be better to determine the bias algorithmically and change it on the fly.
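One way to "determine the bias algorithmically" would be to scale the character's world-space thickness into the current frame's depth range each frame. This is a hypothetical sketch under a linear-depth assumption (real projective depth is non-linear, so an actual implementation would need to account for that):

```python
def depth_bias_for(character_size, near, far):
    # Map the character's world-space extent into the normalized
    # [0, 1] depth range of the current near/far planes, so the bias
    # roughly covers the character's own thickness regardless of zoom.
    # Negative, since the ghost is offset toward the camera.
    return -(character_size / (far - near))
```

Recomputing this as the zoom changes would keep the ghost from breaking when the frame's depth range shifts.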
3) Blender Cycles now has baking! Well, it's still in development. It exists, though, and you can grab a build off graphicall.org if you want to try it out. (I use this one, as it is fairly regularly updated and includes a bare-bones UI for the baking, rather than relying only on command line Python.) The baking is still very much under development, so there are issues, but so far baking normal maps and straight diffuse maps works great. I'm hoping that this branch makes it into the trunk soon, but even if it doesn't it is awesome to have access to it right now. This is where a lot of my time has been spent lately.