
which way a polygon is facing...(for cubemapping)




hi! I'd just like to know how to properly find out which way a polygon is facing, so that I can cubemap an object correctly. I'm currently trying my own implementation of the algorithm. So far, I get the normal at every vertex and check which component is largest (i.e. comparing the absolute value of the x component with the absolute values of the y and z components; if the x component is largest, the vertex is mostly facing left or right, etc.), then divide the other two components by that value to get the texture coordinates. What happens, however, is that the y component always comes out largest no matter what (execution always falls into the "if" branch for that case). I'm thinking it may have something to do with how I figure out which way a vertex of a polygon is generally facing; I might be doing it wrong. Any suggestions/corrections?

1. Are the normals normalized?

2. Have you verified that the normals are actually being generated correctly? I.e. try drawing each one as a coloured line with one end a different colour from the other. I find that invaluable as a debugging aid for normals, because you can "see" whether they point out from the vertex correctly, etc.

3. You shouldn't need to do any per-vertex or per-polygon testing to apply a cube map! You just use automatic texture coordinate generation (texgen) to turn the normals into 3D texture coordinates. The graphics card then checks which component of the texture coordinate is largest to determine which face to select. You don't need to do that yourself!

4. If this is some sort of software engine: the hardware cube mapping drops the largest component (that component decides which face to use) and divides the remaining two components by its magnitude to look up the map for that face, so the only divide involved is by the major axis. IIRC there's actually pseudocode for the method hardware uses on one of the ATI/nVidia/Matrox sites.

5. The fact that your Y component is always coming out largest possibly hints at a problem with your normal creation, or at the geometry being something like terrain where the normals mostly point upwards. Alternatively, it might be that you have the Y axis going into or out of the screen and you've (possibly incorrectly) transformed into camera space...

6. How are you transforming your normals? I.e. if you're doing this in any space other than object space, make sure they're transformed correctly (normals need the inverse-transpose of the model matrix under non-uniform scaling) and not screwed up by the transform itself.

7. Take the dot product of the normal and another vector in the same space to determine how much the normal faces that direction. For example, the dot product of the (inverted) view vector and the normal tells you whether the normal faces the camera.
