Possible to pack a normal vector into a diffuse one?



Say you have two vectors, one for color and another for the normal: <dx,dy,dz> and <nx,ny,nz>

Is there a way you could pack this into a vector <vx,vy,vz,va> such that you could recreate the two original vectors from the packed one? You can make an assumption that the normal vector is normalized, if that helps.

Some have tried taking dot products, but I don't think that would work in three dimensions.

Thank you!

irrelevant and wrong. Pardon Edited by Tournicoti

If you transform it in view space you can assume that its z component is always negative
Not true -- imagine standing in a room where the roof slopes upwards away from you, and the floor slopes downward away from you. If rendering with perspective, the floor and roof are both visible, but their view-space normal is pointing away from you.
When storing only x and y, you also need an extra bit to store the sign of the z value.
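To make the sign requirement concrete, here is a sketch of storing x, y and a single sign bit, then rebuilding z from the unit-length constraint. Plain Python stands in for the shader since the maths is the same; the helper names are mine, not from the thread:

```python
import math

def encode_normal(nx, ny, nz):
    """Keep x, y and a single bit for sign(z)."""
    return nx, ny, 1 if nz >= 0.0 else 0

def decode_normal(nx, ny, sign_bit):
    """Rebuild z from the unit-length constraint x^2 + y^2 + z^2 = 1."""
    z = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    return nx, ny, z if sign_bit else -z

# A view-space normal pointing slightly *away* from the camera (positive z):
# without the stored sign bit, decoding would wrongly flip it toward the viewer.
n = decode_normal(*encode_normal(0.6, 0.0, 0.8))
```

Without the sign bit you'd have to pick one sign for z, which is exactly the "always negative" assumption being debunked above.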

I don't really understand ... Are you talking about the case of showing the back side of a triangle? Because I always invert the normals of these faces. I do this in my project and I don't see any problem at all.

I don't really understand ... Are you talking about the case of showing the back side of a triangle ?
No. I'm talking about triangles where the front is visible, even though its normal is pointing away from the viewer.
e.g. A smaller corridor connecting to a larger one, via a sloped floor/roof.
Red is the player's FOV, green are the surface normals. Blue is the incorrect surface normal that you'll reconstruct if you don't account for the fact that view-space Z can be positive.

Is there a way you could pack this into a vector such that you could recreate the two original vectors from the packed one? You can make an assumption that the normal vector is normalized, if that helps.
As above, you can discard one of the components of the normal, but keep its sign (you can encode the sign into the diffuse colour, e.g. make the red channel only 7 bits for colour, and 1 bit for sign(n.z)). This gets you down to 5 components instead of 6.
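One way the 7-bit-red-plus-sign idea could look, sketched in Python (the function names are mine, and quantising red to 128 levels is lossy by design):

```python
def encode_red_and_sign(red, z_is_positive):
    """Quantise red (0.0-1.0) to 7 bits and pack sign(n.z) into the low bit."""
    red7 = int(red * 127.0 + 0.5)            # 0..127
    return (red7 << 1) | (1 if z_is_positive else 0)

def decode_red_and_sign(byte):
    return (byte >> 1) / 127.0, bool(byte & 1)

red, z_pos = decode_red_and_sign(encode_red_and_sign(0.5, False))
# red comes back with only 7-bit precision (~0.504 here); the sign is exact
```

The cost is one bit of colour precision in one channel, which is usually invisible for diffuse albedo.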
Alternatively, you can transform colour into 2 chromaticity components and 1 luminance component. You could then scale the vector by the luminance component and throw it away (again getting you down to a total of 5).
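A Python sketch of the chromaticity idea, assuming luminance is never exactly zero (a black pixel would destroy the normal) and using Rec. 601 luma weights as an illustrative choice; the helper names are mine:

```python
import math

def pack_colour_normal(colour, normal):
    r, g, b = colour
    y = 0.299 * r + 0.587 * g + 0.114 * b      # luminance (Rec. 601 weights)
    cb, cr = b - y, r - y                      # two colour-difference components
    # Scale the unit normal by luminance: 3 + 2 = 5 stored values total.
    return tuple(n * y for n in normal) + (cb, cr)

def unpack_colour_normal(packed):
    sx, sy, sz, cb, cr = packed
    y = math.sqrt(sx * sx + sy * sy + sz * sz)  # |normal| was 1, so length = luminance
    normal = (sx / y, sy / y, sz / y)
    r, b = cr + y, cb + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587     # invert the luma formula for green
    return (r, g, b), normal
```

Because the normal was unit-length, its length after scaling *is* the luminance, so normalizing recovers both pieces at once.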

Actually, if these are floating point vectors, and your diffuse colour is 8-bit, you might be able to pack them together.
If we assume that your normal components are always less than 1.0, then you can store the normal in the fractional part, and the colour in the integer part of the float.
Something like:

float4 packed = (normal * 0.5 + 0.5) + (floor(colour * 255) + 1); // remap normal into [0,1) so negative components survive frac()
float4 decodeNormal = frac(packed) * 2 - 1;
float4 decodeColour = (floor(packed) - 1) / 255.0;

Edited by Hodgman
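A quick numeric check of the integer/fraction idea, with plain Python standing in for the shader. Note the [-1,1] to [0,1) remap: frac() of a value built from a negative normal component would not round-trip without it. The helper names are mine:

```python
import math

def pack_channel(colour_byte, n):
    """Integer part carries an 8-bit colour value (0..255, offset by 1);
    fraction carries one normal component remapped from [-1,1] to [0,1)."""
    f = n * 0.5 + 0.5
    f = min(f, 0.9999)            # n = +1.0 would otherwise spill into the integer part
    return (colour_byte + 1) + f

def unpack_channel(packed):
    i = math.floor(packed)
    colour_byte = i - 1
    n = (packed - i) * 2.0 - 1.0  # undo the [-1,1] -> [0,1) remap
    return colour_byte, n
```

Precision-wise this is plausible for 32-bit floats: the significand has 24 bits, the integer part uses about 9 of them (values up to ~257), leaving roughly 15 bits for the normal component, which is comparable to common G-buffer normal precision.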

Ok, thank you for the explanation, I hadn't noticed my normals were wrong in this case. A last question before I really add this extra bit to my G-buffer: couldn't it work in clip space? I mean, is it right to assume this (normal.z < 0) in clip space instead of view space?


Actually, if these are floating point vectors, and your diffuse colour is 8-bit, you might be able to pack them together.
If we assume that your normal components are always less than 1.0, then you can store the normal in the fractional part, and the colour in the integer part of the float.

This is clever, and may work. I'm actually packing these not for the purpose of knowing the direction of the vector (and reversing it for the back sides of triangles), but to save several texture samples in the shader. I'm using trilinear sampling, and also using XNA, so texture arrays are out. A vertex could sample from 3 different textures at most; combined with trilinear sampling, that's 9 texture samples right there. Add in normals and that's 18 samples just for one pixel.

The issue with this then becomes: you'd need to load a processed HalfVector4 texture onto the GPU, rather than a DXT-compressed one, which will take more time. Not sure if the trade-off is worth it. It would be best if you could stick to the diffuse being only 32-bit, 8 bits for each of RGBA. I'm wondering if there is some way to combine them such that fa(<vx,vy,vz,va>) = <dx,dy,dz> and fb(<vx,vy,vz,va>) = <nx,ny,nz>? Some clever math trick, or something using the extra alpha channel. Edited by vant

It doesn't sound like it's worth it. Sampling one 16-bit-per-channel RGBA texture is going to have a similar cost to sampling two 8-bit-per-channel RGBA textures. If those 8-bit textures are DXT-compressed, then the 16-bit fetch will probably be much slower than the two DXT fetches.
