Then "we" aren't helping -- theagentd posted the thread and is reiterating that the original (unanswered) question of the thread was about compressing world-space normals in a vertex buffer...
We are not talking about world space here.
Also, view-space normals can have both + and - z values with a perspective projection! Making the assumption that z is always positive will cause strange lighting artefacts on surfaces at glancing angles.
You can drop one component and reconstruct it with this trick (z = ±sqrt(1 - x² - y²)), but you do need to store a sign bit somewhere to correctly reconstruct the missing component.
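For example, a minimal CPU-side sketch of that decode (the function name and where you stash the sign bit are placeholders -- in practice you'd do the equivalent in your vertex shader):

```cpp
#include <algorithm>
#include <cmath>

// Rebuild the dropped z component from x/y plus a stored sign bit.
// Unit length means x^2 + y^2 + z^2 = 1; the max() guards against
// quantisation pushing x^2 + y^2 slightly above 1.
float reconstructZ(float x, float y, bool zIsNegative) {
    float z = std::sqrt(std::max(0.0f, 1.0f - x*x - y*y));
    return zIsNegative ? -z : z;
}
```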
32 bits per component for normals is definitely overkill; 16 bits per component will definitely suffice. You can either use a fractional-integer (SNORM) format (where -32768 unpacks to -1 and 32767 unpacks to +1, automatically and for free, in the input assembler before the data is fed into the vertex shader), or half-floats (which are also unpacked automatically for free).
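e.g. a quick sketch of the build-time packing for the fractional-integer route (function name is mine, and this assumes D3D-style SNORM semantics):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Pack a float in [-1, 1] into a 16-bit fractional integer (SNORM16).
// The input assembler undoes this for free (divide by 32767, with -32768
// clamped to -1) before the attribute reaches the vertex shader.
int16_t packSnorm16(float v) {
    v = std::min(1.0f, std::max(-1.0f, v));     // clamp to the valid range
    return (int16_t)std::lround(v * 32767.0f);  // round to the nearest step
}
```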
Even 8 bits per component may be enough if you actually make use of all 24 bits of data. When storing normalized values, most of the possible values in that 24-bit space are unusable (because they don't represent unit-length vectors), but there's a Crytek presentation somewhere (search for "best fit normals") that makes the observation that these unusable, non-normalized values can be normalized in your vertex shader to re-create the original input value fairly accurately. The theory is that at data-build time you take your high-precision normal, then search the full non-normalized 3×8-bit space for the value that, when normalized, best recreates the input value. At runtime the decoding cost is one normalize in the vertex shader (again, the input assembler should be able to do the int8-to-float scale/bias for free). This increases the precision of 8-bit world-space normals by about 50x.
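A rough sketch of that build-time search -- instead of brute-forcing the whole 256³ space it only walks candidate lengths along the ray through the input normal, which I'm assuming is close to what the Crytek version does (treat it as an illustration, not their actual code):

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Build-time: for each candidate length along the ray through the input
// normal, quantise to the int8 lattice and keep whichever candidate
// normalizes back closest to the input. The runtime decode is just the
// free int8->float scale/bias plus one normalize() in the vertex shader.
void bestFitNormal(Vec3 n, int8_t out[3]) {
    n = normalize(n);
    float maxComp = std::fmax(std::fabs(n.x), std::fmax(std::fabs(n.y), std::fabs(n.z)));
    float bestDot = -1.0f;
    for (int step = 1; step <= 127; ++step) {
        float scale = (float)step / maxComp;  // largest component maps to +/-step
        int8_t qx = (int8_t)std::lround(n.x * scale);
        int8_t qy = (int8_t)std::lround(n.y * scale);
        int8_t qz = (int8_t)std::lround(n.z * scale);
        Vec3 d = normalize({ (float)qx, (float)qy, (float)qz });
        float dot = d.x*n.x + d.y*n.y + d.z*n.z;  // cos(angular error)
        if (dot > bestDot) { bestDot = dot; out[0] = qx; out[1] = qy; out[2] = qz; }
    }
}
```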
Also, you may be able to get away with using half-floats for your positions if you break the mesh up into several local chunks -- each chunk with its own local origin at its center -- and use an appropriate transformation matrix for each chunk to move it back to the correct world location.
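Something like this at build time (names made up; the real version would feed `center` into each chunk's world matrix):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Rebase a chunk's vertices around the chunk's own centre so the stored
// positions stay small (where half-floats are precise). The returned centre
// becomes the translation part of that chunk's world matrix.
Vec3 rebaseChunk(std::vector<Vec3>& verts) {
    Vec3 c = { 0, 0, 0 };
    if (verts.empty()) return c;
    for (const Vec3& v : verts) { c.x += v.x; c.y += v.y; c.z += v.z; }
    float inv = 1.0f / (float)verts.size();
    c = { c.x * inv, c.y * inv, c.z * inv };
    for (Vec3& v : verts) {  // store chunk-local offsets only
        v.x -= c.x; v.y -= c.y; v.z -= c.z;
    }
    return c;  // the chunk's origin in world space
}
```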
Half-floats have amazing precision between 0 and 1, decent precision between 1 and 2, but get pretty inaccurate for large values, so you'd have to experiment with scaling factors too. For vertices a large distance from the origin, the visible artefact will be quantisation of positions, which will make originally-straight lines look wobbly and smooth curves look faceted.
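To put numbers on that falloff: half-floats carry a 10-bit mantissa, so for a value in [2^e, 2^(e+1)) the quantisation step is 2^(e-10). A little sketch that prints the step size at a few magnitudes:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const float values[] = { 0.5f, 1.0f, 10.0f, 100.0f, 1000.0f, 10000.0f };
    for (float v : values) {
        int e2;
        std::frexp(v, &e2);                            // v = m * 2^e2, m in [0.5, 1)
        float step = std::ldexp(1.0f, (e2 - 1) - 10);  // step = 2^(e-10)
        std::printf("near %7.1f the half-float step is %g\n", v, step);
    }
}
```

e.g. near 1000 units the step is 0.5 units, so any vertex out there snaps to a half-unit grid -- hence the wobbly lines.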