That's an interesting paper. I wonder why developers haven't taken up that approach instead of sticking with the TBN basis and normal maps. I mean, it requires a LOT less storage, and it seems cheaper for animation too, since you have less data to transform.
It has a lot of strengths and weaknesses to it. I see it as an excellent addition to one's bag of tricks.
Anyway, it is slowly beginning to make its way into production and is also now implemented in Blender 2.57, which is available for download: http://www.blender.o...ad/get-blender/
It's used both in the off-line renderer and in the GLSL 3D view renderer, which means the bump you see in the 3D view is the same as what you see in the final render (impact-wise).
It's also available during texture paint, which allows you to paint bump maps straight onto the 3D model (or texture) and then view the lit result in real time as you paint.
The primary issue with this method is that, for textures, the texture filtering is not great on older cards (and from some vendors). Since you take the derivative of the filtered signal,
filtering artifacts affect the result more than when you sample a texture of precomputed derivatives, such as (essentially) a normal map.
The other issue is that a normal map's components are always in the [-1, 1] range regardless of amplitude. For a bump map it's a problem to fit both big low-curvature hills and fine high-frequency detail into the same map.
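To make the range problem concrete, here is a toy sketch (the bit depth and height range are illustrative values of mine, not numbers from the thread): a normal map spends its precision on a unit vector, while a height map has to encode absolute heights, so fine detail riding on top of a tall hill can fall below the quantization step.

```python
def quantize_height(h, bits=8, h_max=1.0):
    """Round-trip a height in [0, h_max] through a fixed-point
    encoding, the way a texture would store it."""
    levels = (1 << bits) - 1
    code = round(h / h_max * levels)
    return code / levels * h_max

# With 8 bits over a 1.0-unit range, the quantization step is 1/255
# (about 0.0039), so a 0.001-unit detail on top of a big hill
# quantizes away entirely:
flat = quantize_height(0.5)
bumped = quantize_height(0.501)
# flat == bumped: the fine detail is gone.
```

Normal maps sidestep this because direction, not magnitude, is stored; the cost, as noted above, is the extra storage and the tangent frame.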
That being said, you're correct that there are also lots of advantages: no tangent space, it works under any kind of deformation, and it's an easy drop-in.
It also works well with proceduralism and with mirrored textures (without extra effort). For textures in BC4 you're only using 4 bits per texel and still getting relatively good quality.
It works trivially under adaptive tessellation. It's highly practical and will become more useful as texture filtering gradually gets better.
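For reference, the core trick being discussed here, perturbing the interpolated normal directly from height derivatives with no tangent space, can be sketched outside a shader like this. This is my own Python transliteration of the construction as I understand it; it assumes you already have the screen-space derivatives of the surface position (`sigma_s`, `sigma_t`) and of the height (`db_s`, `db_t`) that ddx/ddy would give you in a pixel shader.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def perturb_normal(n, sigma_s, sigma_t, db_s, db_t):
    """Perturb unit normal n using screen-space derivatives of the
    surface position (sigma_s, sigma_t) and of the height
    (db_s, db_t). No tangent basis is required."""
    r1 = cross(sigma_t, n)
    r2 = cross(n, sigma_s)
    det = dot(sigma_s, r1)          # signed area term
    sgn = -1.0 if det < 0.0 else 1.0
    # Surface gradient of the height field, expressed in world space.
    grad = tuple(sgn * (db_s * a + db_t * b) for a, b in zip(r1, r2))
    v = tuple(abs(det) * c - g for c, g in zip(n, grad))
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

# Flat screen-aligned surface with height rising along +x:
# the perturbed normal tilts away from +x.
n = perturb_normal((0.0, 0.0, 1.0),
                   (1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
                   1.0, 0.0)
# n ≈ (-0.707, 0.0, 0.707)
```

Note how everything is built from quantities the rasterizer already provides, which is why it's the easy drop-in described above.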
I even did a version that operates as a post-process (but not for Blender). It works very well. During the in-process (rasterization) pass I output the depth, the interpolated vertex normal, and the height.
In the post-process I compute the derivative of the surface position analytically (exactly), since ddx and ddy are not available there and numerical approximation isn't good enough
for the derivative of the surface position. For the height, on the other hand, it is, so I compute the derivative of the height by looking at a 3x3 neighborhood.
I'm basically using a Sobel filter, but before applying it I bleed/lerp the center height into the neighbor heights using the difference in view-space Z.
The blend value is essentially t = exp(-K * abs(Zdiff)). This bleed is just a temporary in the calculation; I don't actually modify the height values in the main buffer.
This bleed makes it fail gracefully when the neighboring pixels aren't connected to the center pixel.
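A minimal sketch of that neighborhood step (the value of K and the row-major 3x3 layout are assumptions of mine, not from the post): each neighbor's height is first lerped toward the center height with t = exp(-K * abs(Zdiff)), then a standard Sobel filter produces the height derivatives.

```python
import math

def bump_gradient(height, z, k=16.0):
    """Estimate (dh/dx, dh/dy) over a 3x3 neighborhood with a Sobel
    filter, after a depth-aware bleed of each neighbor toward the
    center height. height and z are 3x3 row-major lists of heights
    and view-space depths; k is an illustrative falloff constant."""
    hc, zc = height[1][1], z[1][1]
    h = [[0.0] * 3 for _ in range(3)]
    for r in range(3):
        for c in range(3):
            # t -> 1 when depths match (use the neighbor as-is);
            # t -> 0 for disconnected pixels (collapse to the center
            # height so they contribute no gradient).
            t = math.exp(-k * abs(z[r][c] - zc))
            h[r][c] = hc + t * (height[r][c] - hc)
    # Standard 3x3 Sobel kernels, normalized by the weight sum.
    dhdx = ((h[0][2] + 2*h[1][2] + h[2][2]) -
            (h[0][0] + 2*h[1][0] + h[2][0])) / 8.0
    dhdy = ((h[2][0] + 2*h[2][1] + h[2][2]) -
            (h[0][0] + 2*h[0][1] + h[0][2])) / 8.0
    return dhdx, dhdy
```

On a connected ramp this returns the true slope; push one column far away in Z and its contribution fades out rather than producing a spurious edge.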
As I was saying, I don't use this numerical approximation for the derivative of the surface position, because that needs to be more precise, so I compute it the exact way.
It's a little complicated to explain, but I hope the overall idea comes across. It works well, anyway.