Why? I would think spherical, using the same number of bits as a competing method, would be superior.
8-bit-per-channel images from the knarkowicz link, spherical and octahedral:
Spherical bunches up precision at certain parts of the sphere. e.g. Look closely at the top of both of them -- the first has circular banding, which is very obvious in some cases. Octahedral distributes its precision more evenly across the entire sphere, so you get the same quality in all directions (instead of some directions being great and some being shite).
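For reference, the octahedral mapping itself is tiny -- project onto the octahedron, then fold the lower hemisphere over the upper one. Here's a sketch in Python/numpy rather than shader code (function names are mine, not from the knarkowicz post); in a real g-buffer you'd quantize the 2D result to 8 or 16 bits per channel:

```python
import numpy as np

def sign_not_zero(v):
    # like sign(), but maps 0 to +1 so the fold below never collapses
    return np.where(v >= 0.0, 1.0, -1.0)

def oct_encode(n):
    # unit-length 3D vector -> 2D point in [-1, 1]^2
    n = n / np.sum(np.abs(n))  # project onto the octahedron |x|+|y|+|z| = 1
    if n[2] < 0.0:             # fold the lower hemisphere over the upper one
        return (1.0 - np.abs(n[[1, 0]])) * sign_not_zero(n[:2])
    return n[:2]

def oct_decode(e):
    # 2D point in [-1, 1]^2 -> unit-length 3D vector
    z = 1.0 - np.abs(e[0]) - np.abs(e[1])
    if z < 0.0:
        xy = (1.0 - np.abs(e[[1, 0]])) * sign_not_zero(e)
    else:
        xy = e
    v = np.array([xy[0], xy[1], z])
    return v / np.linalg.norm(v)
```

The round trip is exact up to float error -- all the loss comes from quantizing the 2D point, and because the mapping covers the whole [-1,1]^2 square, that loss is spread evenly over the sphere.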
At 16 bits per channel, you can use almost any encoding you like and get good results, so just pick a cheap one.
But is it faster than the (IIRC) cube-map lookup used for Crytek's BFNs?
BFN has a super-cheap decode function, but a moderate encode function (which, yep, involves a texture fetch -- I'm currently using this with a 1024*1024 2D lookup texture). The relative costs will depend on the GPU model, and what else your gbuffer shaders are doing at the time...
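To illustrate the asymmetry: decode is just unpack-and-normalize, while encode has to find the scale that makes the quantized vector renormalize back closest to the input. A sketch in Python/numpy -- note the encode here is a hypothetical brute-force search standing in for the precomputed lookup texture, not Crytek's actual implementation:

```python
import numpy as np

def bfn_decode(texel):
    # the super-cheap part: unpack 8-bit channels and renormalize
    v = texel.astype(np.float64) / 255.0 * 2.0 - 1.0
    return v / np.linalg.norm(v)

def bfn_encode(n, steps=256):
    # brute-force stand-in for the lookup texture: try many scales and
    # keep the quantized vector that renormalizes closest to n
    best, best_err = None, np.inf
    for i in range(1, steps + 1):
        s = i / steps
        q = np.round((n * s * 0.5 + 0.5) * 255.0)
        d = q / 255.0 * 2.0 - 1.0
        length = np.linalg.norm(d)
        if length == 0.0:
            continue
        err = np.linalg.norm(d / length - n)
        if err < best_err:
            best, best_err = q, err
    return best.astype(np.uint8)
```

Since decode ignores the stored vector's length, the encoder is free to scale it to whatever length quantizes best -- that's where the extra precision over plain 8:8:8 comes from, and why the expensive search can be baked into a lookup.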
What about HDR?
You don't generally put HDR colours into a gbuffer.
11_11_10 / 10_10_10 fixed point in linear colour space is about equivalent to 8_8_8 in sRGB space, so IMHO it's not usable for HDR unless you've got a very small range and are OK with banding artifacts. I've shipped one game where we used 10_10_10 fixed point with a pow-2 gamma space to minimize banding, but it wasn't good.
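Quick back-of-envelope on why 10-bit linear is no better than 8-bit sRGB: compare the size of one quantization step near black, where banding is most visible. The 12.92 slope is the linear segment at the bottom of the standard sRGB transfer function:

```python
# Size of one code step near black, expressed in linear light.
# 8-bit sRGB: the bottom of the sRGB curve is linear with slope 12.92,
# so the first code step spans (1/255)/12.92 of the linear range.
srgb8_step = (1.0 / 255.0) / 12.92   # ~0.0003 linear
# 10-bit fixed point storing linear values directly:
linear10_step = 1.0 / 1023.0         # ~0.001 linear -- roughly 3x coarser
```

So in the darks, where the eye is most sensitive, 10-bit linear actually has coarser steps than 8-bit sRGB -- the two extra bits are spent in the brights where they're wasted.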
11_11_10 / 10_10_10 floating point is just barely enough to get HDR working, maybe (if your API/GPU supports it).
16_16_16 fixed or floating point is good. Floating point wastes a bit on storing the sign, plus you have to deal with NaNs and infinities, but it gives you a range from 0 to ~60k with logarithmic precision distribution, which is actually good for HDR data (as your tone-mapper probably has logarithmic weighting). Fixed point means you get that extra bit and don't have to worry about inf/nan, and you get to choose your own maximum scale value (which is a pro and a con).
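You can poke at those half-float properties directly with numpy's float16 (a stand-in here for GPU half -- same IEEE 754 binary16 layout):

```python
import numpy as np

# largest finite half value -- the "0 to ~60k" range
print(np.finfo(np.float16).max)    # 65504.0

# precision is relative, not absolute: steps near 1.0 are ~0.001 wide,
# but near the top of the range they're 32 units wide
print(float(np.float16(60001.0)))  # snaps to 60000.0

# the specials you have to deal with that fixed point doesn't have
print(np.float16(1e6))             # overflows to inf
```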