DutchRhino

Storing Normals faster/easier Idea


Hi, I was wondering. With typical 3D models one stores not only the vertices but also the vertex normals and the face normals. These normals are normalized once so they can be used much more easily in the rendering process. My question is the following: I currently store all my normals as vectors, i.e. with X, Y and Z components, but I'm wondering if this is optimal.

If one visualized all the normals in a model with their origins at the (0, 0, 0) coordinate, a ball of radius 1 would emerge; the more normals, the better you'd see the ball. Rotating the object amounts to rotating all the normals, which would produce a similar-looking ball. This leads me to think there should be an easy way to retrieve a rotated normal quickly via lookup tables.

Wouldn't it be much smarter to somehow store a normal as an XYZ rotation (not sure how to obtain this)? Then, if you rotate it about the XYZ axes, the new normal would be a simple matter of sine-table lookups instead of a costly transformation matrix.

Does anyone have experience with this and know whether it is flawed or could actually work? And does anyone have hands-on experience with the optimization results?

Thanks!

Rhino

If you have space issues you can store normals in spherical form, that is, as two angles; this saves you one float. But every time you need to use them you'd have to convert them back (spherical to Cartesian), which requires a sin and a cos, a few multiplies and a couple of additions. Since normal information is used so frequently in lighting, it seems like an inefficient way to store them; in fact, I've never even seen it done that way. You could however use this method to decrease your disk file size, if that's what you're looking for, though that's not something you really need to worry about on today's PCs.
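The two-angle storage described above can be sketched like this; an illustrative Python sketch, where measuring PHI from the +z axis is an assumed convention and the function names are made up:

```python
import math

def to_spherical(x, y, z):
    """Encode a unit normal as (phi, theta): phi measured from +z, theta the azimuth."""
    phi = math.acos(z)          # latitude angle, 0..pi
    theta = math.atan2(y, x)    # longitude angle, -pi..pi
    return phi, theta

def to_cartesian(phi, theta):
    """Decode back: a handful of trig evaluations plus a few multiplies."""
    sp = math.sin(phi)
    return sp * math.cos(theta), sp * math.sin(theta), math.cos(phi)
```

This makes the per-use conversion cost concrete: several trig evaluations every time the normal is needed, which is the inefficiency the post points at.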

Quote:
wouldn't it be much smarter to somehow store a normal as an XYZ rotation (not sure how to obtain this) and then if you rotate this thing about the XYZ plane the new normal would be a simple matter of sine-table lookups instead of a costly transformation matrix.

(btw, how do you quote, guys?)

Actually, no. The matrix you use to rotate normals is (usually, assuming rigid orientation-preserving transformations, i.e. rotations, translations and uniform scales) just your rotation matrix, which is something you need to calculate anyway, so it requires no costly sin or cos. I have to say, though, I can't see a table approach working; I think the data stored would be huge. Things that can take on a near-infinite range of values are poor candidates for a lookup table, and even if you introduced some sort of interpolation function, that would just add overhead. On top of that, if you needed to rotate the model, you'd have to rotate this huge table. With the matrix, these multiplications are usually done on your graphics card, which is specially designed for doing this kind of thing in a massively parallel manner.

Tim

Quote:

If you have space issues you can store normals in spherical form, that is, as two angles; this saves you one float.

I think it is easier to store just n.x and n.y and compute n.z when needed, since x^2 + y^2 + z^2 = 1... :-)

I just thought about that for a sec and realized that you'd have to do something like this:

z = sqrt(1 - x^2 - y^2);

This equation has both a positive and a negative solution, so you wouldn't be able to tell which direction z is facing (+/-).

Tim
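The ambiguity Tim points out can be demonstrated in a couple of lines; a minimal Python sketch with a made-up function name:

```python
import math

def reconstruct_z(x, y):
    # z*z = 1 - x*x - y*y, but sqrt only returns the non-negative root,
    # so a normal with negative z decodes with the wrong sign.
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

# Both of these unit normals compress to the same (x, y) pair:
front = (0.6, 0.0, 0.8)
back = (0.6, 0.0, -0.8)
```

Both `front` and `back` decode to z = +0.8, so the two-component scheme would need at least one extra sign bit to be lossless.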

Hi,

My query wasn't about space optimization but rather about processor-power optimization.

I'm not using hardware acceleration (since the stuff has to work on mobile devices), and that's when I started thinking about these normals.

For basic occlusion and lighting you don't even need to transform your normals, but for things like 'fake phong mapping', where you transform the vertex normals and use them as UV coords into a phong texture map, I was wondering whether transforming these vertex normals couldn't be done much faster.
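The 'fake phong mapping' lookup mentioned above might look roughly like this; a Python sketch assuming a square highlight texture, with the 256-texel size and the function name being assumptions:

```python
def normal_to_uv(nx, ny, tex_size=256):
    """Map a view-space normal's x and y from [-1, 1] to texel coordinates
    in a tex_size x tex_size 'phong' highlight texture."""
    u = int((nx * 0.5 + 0.5) * (tex_size - 1))
    v = int((ny * 0.5 + 0.5) * (tex_size - 1))
    return u, v
```

Each shaded pixel then becomes a texture fetch instead of a lighting evaluation, which is why the cost of transforming the vertex normals dominates.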

Any thoughts on this are welcome!

Cheers

R

Hmmm

I just checked a math site, and according to it one can convert from spherical to Cartesian coordinates with these formulas:

x = RHO * Sin(PHI) * Cos(THETA)
y = RHO * Sin(PHI) * Sin(THETA)
z = RHO * Cos(PHI)

where RHO = the distance from the origin (in our case 1, so we can skip it altogether),
PHI = latitude,
THETA = longitude.

If one has a nice sine lookup table, I think storing normals this way would be much faster than the typical approach of storing XYZ and transforming it via a matrix.

The only extra thing I have to do is calculate the PHI and THETA values for a given normal transformation matrix. But that is just a conversion from Cartesian to spherical, and it only has to be done once.
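With those formulas, the decode step with a sine lookup table might look like this; a Python sketch where the table size, the power-of-two wrap, and the names are all assumptions:

```python
import math

TABLE_SIZE = 1024  # assumed resolution; trade accuracy against memory
SIN = [math.sin(2.0 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def tsin(a):
    """Table sine; a is an angle in table units, wrapping around."""
    return SIN[a % TABLE_SIZE]

def tcos(a):
    return SIN[(a + TABLE_SIZE // 4) % TABLE_SIZE]  # cos(x) = sin(x + pi/2)

def normal_from_angles(phi, theta):
    """PHI and THETA given as table indices; RHO = 1 so it drops out."""
    sp = tsin(phi)
    return sp * tcos(theta), sp * tsin(theta), tcos(phi)
```

Per normal this is four table fetches and two multiplies, which is the saving being proposed over a full matrix multiply.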

Could this work or am I missing something obvious?

R

Hmmmmm. If you pass normals into the rasterizer in their two-component spherical-coordinate form and then use them for a 2D texture lookup to restore the normal in the pixel shader, I think you'd get an improvement in the quality of the interpolated normals over interpolating the 3-component vector directly, because interpolating spherical coordinates ought to produce a more even distribution of angles than a straight lerp-and-normalize. If you were already using a normalization cube map, it'd be free too, because you were doing the lookup anyway.

I don't understand: how do you intend to rotate the normals when the object is rotated? It seems to me any reasonable lookup table like superpig suggests would be too large for the cache, potentially outweighing any savings. A matrix multiplication isn't all that bad, really. If you had a lookup table, you'd definitely have to define an interpolation function to get reasonable results; assuming bilinear interpolation of the 4 closest normals in your map, that's basically just as many multiplications as a matrix.

Tim

Since the normals are stored as two values, latitude and longitude, rotating the normals to any new position is basically two additions. This is really fast and doesn't require lookup tables.

The only thing that requires a lookup table is converting the normal back to X, Y and Z components, but this can be done using a sine lookup table, as the equations a few posts up show.

You might be right that, due to the cache situation on a mobile device, the whole matrix-multiplication approach might be faster in the end. On the other hand, the sine lookup table doesn't necessarily have to be that big to get decent results; I might even get away with a small table without using interpolation to get the correct value. It depends on what you intend to use the resulting normal vector for.

Anyhow, I'll try to implement the method just for the fun of it when I have time today or tomorrow, and I'll post the results here!

Cheers

Reinier

Guest Anonymous Poster
Quote:
The only thing that requires a lookup table is converting the normal back to X, Y and Z components, but this can be done using a sine lookup table, as the equations a few posts up show.

With what superpig suggested, the conversion to X, Y and Z components comes down to a texture lookup plus a remap from the 0.0-1.0 range to the -1.0-1.0 range, and you get the renormalization done "for free". But this seems perhaps more interesting for graphics hardware than for software rasterizers on mobile devices... dunno...

You could pass the two normal coordinates to the vertex shader, which would rotate them with two additions (like DutchRhino said), then pass the two rotated coordinates to the pixel shader, which would do the conversion back to Cartesian coordinates and the renormalization all in one texture lookup (and you could trash your normalization cube map if you had no other renormalizations to do).

Sounds quite nice in theory... neat idea, superpig. I wonder what results it produces; got to try it out :)

If you have a matrix that represents a rotation for the object, how do you extract the necessary information to do the two additions for this "simple" rotation of the normal? It seems to me there's as much math needed there as in the matrix multiplication it's meant to replace. A 2D lookup table seems nice, but like I said before, I'd think it would need to be rather large to get good results, and for mobile devices space is at a premium. I guess you'd have to implement it to be sure.

Guest Anonymous Poster
Quote:
if you have a matrix that represents a rotation for the object, how do you extract the necessary information to do the two additions for this "simple" rotation of the normal? seems to me there's as much math needed there as in the matrix multiplication it's meant to replace


Clearly not. With the normal method, you need one 3x3 matrix/vector multiplication per vertex to rotate the normals.
Here, you only need to convert the matrix to the theta/phi offsets once per transformation matrix (that is, once per _object_ for non-skinned meshes, once per bone for vertex-skinned ones), and then perform two additions per vertex.

And the amount of math needed is quite small indeed: a matrix-to-Euler conversion, after which you can go from Euler angles to theta/phi directly with no further computation.
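The once-per-object step might look like this; a minimal Python sketch assuming a Z-Y-X rotation order and a row-major 3x3 matrix, and ignoring the gimbal-lock special cases (the euclideanspace.com conversion pages cover those):

```python
import math

def matrix_to_euler(m):
    """Extract Z-Y-X Euler angles (yaw, pitch, roll) from a 3x3 rotation
    matrix m, given as a row-major list of lists. One convention among
    several; pick the one that matches your engine."""
    pitch = math.asin(-m[2][0])
    roll = math.atan2(m[2][1], m[2][2])
    yaw = math.atan2(m[1][0], m[0][0])
    return yaw, pitch, roll
```

These angles (or their theta/phi equivalents) are computed once per object, after which each vertex normal costs only the two additions.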

Guest Anonymous Poster
oops, sorry, there was supposed to be a link in the previous post: http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToEuler/

You're right, I wasn't thinking; I assumed you'd have to do it for every normal. I guess the real question here is: how big a lookup table is necessary to get reasonable results? 360x180 would definitely suffice, I'd think. You could perhaps make it even smaller by introducing interpolation, but like I said before, any interpolation might offset the speed gain.
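A 360x180 table built the straightforward way gives a feel for the size question; a Python sketch (with three 4-byte floats per entry the table is about 360 * 180 * 12 bytes, roughly 760 KB):

```python
import math

def build_normal_table(width=360, height=180):
    """Precompute one unit normal per degree of longitude (theta) and
    latitude (phi). table[phi][theta] -> (x, y, z)."""
    table = []
    for phi_deg in range(height):
        phi = math.radians(phi_deg)
        sp, cp = math.sin(phi), math.cos(phi)
        row = []
        for theta_deg in range(width):
            theta = math.radians(theta_deg)
            row.append((sp * math.cos(theta), sp * math.sin(theta), cp))
        table.append(row)
    return table
```

Packing the entries into signed bytes instead of floats would cut that to about 190 KB at the cost of precision, which is the kind of trade-off interpolation is meant to soften.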

If you're doing software rendering and you're using the normals for lighting, I'd suggest rotating the light into object space and then performing the lighting there.
The number of lights is usually a lot smaller than the number of vertices, so it's faster to transform the light once per frame per object.
Just a tip.
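The tip can be sketched in a few lines; for a pure rotation the inverse is the transpose, so moving the light into object space is one small transform per object per frame (helper names are made up):

```python
def transpose3(m):
    """Transpose a 3x3 row-major matrix."""
    return [[m[j][i] for j in range(3)] for i in range(3)]

def transform(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def light_to_object_space(rotation, light_dir):
    # For an orthonormal rotation matrix, inverse == transpose.
    return transform(transpose3(rotation), light_dir)
```

With the light in object space, per-vertex lighting can use the stored normals directly, with no normal transformation at all.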

