I recently read the Shader X5 article and I tend to agree. It was not obvious to me what difference it makes to calculate the tangent space frame in the fragment shader rather than calculating it per vertex. I would certainly appreciate further discussion of the two techniques.
We need to back up a bit.
The idea of normal mapping is:
1) Artist creates super-high-poly mesh.
2) Artist creates low-poly mesh.
3) The normals from #1 are "baked" into a texture, using the UVs of mesh #2.
4) When drawing mesh #2, instead of using its actual per-vertex normals, use its texture coordinates to sample the normal-map.
This allows mesh #2 to appear to have the same detail that mesh #1 originally had (except around the silhouette).
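As an aside, step #4 also involves an unpacking step: a normal map stores each component of the unit normal remapped from [-1, 1] into the texture's [0, 1] range, so after sampling you undo that remap and re-normalize. A minimal sketch in Python (helper name is mine, not from any particular engine):

```python
import math

def decode_normal(rgb):
    """Map a sampled texel (r, g, b) in [0, 1] back to a unit normal."""
    # Undo the [0, 1] -> [-1, 1] remap that was applied when baking.
    x, y, z = (2.0 * c - 1.0 for c in rgb)
    # Re-normalize, since texture filtering/quantization denormalizes.
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# The "flat" texel (0.5, 0.5, 1.0) decodes to the straight-up normal:
print(decode_normal((0.5, 0.5, 1.0)))  # -> (0.0, 0.0, 1.0)
```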
However, this has some issues. If you animate/transform mesh #2, the normal map/texture will be incorrect -- its normals will no longer match the animated/transformed mesh.
A nice, flexible, robust solution is to add some more steps -- "tangent-space normal mapping" (which is so common that most people just call it "normal mapping"):
3.B) Transform the normals in the normal-map from model-space to tangent-space. This requires that every vertex defines what tangent-space is, so every vertex needs a tangent, a bitangent, and a normal.
4.B) After sampling the normal map, transform the sampled value from tangent-space back to model/world/view/etc space. This requires that the exact same normal/tangent/bitangent values from step 3.B are used.
Now the mesh can be animated/transformed without any issues.
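Step 4.B boils down to a 3x3 matrix multiply: the per-vertex tangent, bitangent, and normal are the columns of the tangent-to-model matrix. A sketch in Python (a shader would do the same thing with a mat3; the helper name is mine):

```python
def tangent_to_model(n_ts, tangent, bitangent, normal):
    """Transform a sampled tangent-space normal n_ts = (x, y, z) into
    model space, using the vertex's tangent/bitangent/normal as the
    columns of the transform: result = x*T + y*B + z*N."""
    x, y, z = n_ts
    return tuple(
        x * tangent[i] + y * bitangent[i] + z * normal[i]
        for i in range(3)
    )

# With the identity basis, the sampled normal passes through unchanged:
t, b, n = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(tangent_to_model((0.2, 0.3, 0.9), t, b, n))  # -> (0.2, 0.3, 0.9)
```

In a real renderer the T/B/N vectors are interpolated across the triangle and usually re-normalized (and often re-orthogonalized) per fragment before being used here.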
So when it comes to different methods for "inventing" a tangent-basis per vertex, it doesn't matter which one you pick, as long as the exact same values are used when authoring the normal map and when rendering at runtime.
You can think of tangent-space normal maps as a kind of compression -- if you use the same basis during encoding and decoding, then it can be (nearly) lossless. If you use a different basis for each step, then you'll rotate, flip, scale, and/or skew your normals...
If your art is coming from a regular art tool, then it will create the normals/tangents/bitangents for you -- just make sure to preserve the exact values that have been generated by it. Otherwise your artists will forever complain about normal maps being "not quite right". Your artists also need to be educated about how tangent-space "compression" works, so that they can successfully retain the same tangent-space data all the way through their workflow into your game.
If your art is being made by your own tools/procedures, I hear that Morten S. Mikkelsen's "mikktspace" algorithm is somewhat of a standard these days.
https://svn.blender.org/svnroot/bf-blender/trunk/blender/intern/mikktspace/mikktspace.h
https://svn.blender.org/svnroot/bf-blender/trunk/blender/intern/mikktspace/mikktspace.c