Why are vertices represented as 4f, but normals and light positions as 3f?

2 comments, last by haegarr 10 years, 4 months ago

I'm using GLSL specifically and am attempting to implement the diffuse term of Phong shading. In all of the example code I've seen, positions are represented as a 4-component vector (vec4), but normals and light positions are stored as vec3. Why do positions need the fourth component while normals and lights do not?

Thank you for your help.

-Nick

Normals and such can perfectly well be represented with 4 components, and positions with 3. The fourth component becomes necessary when working with affine transformation matrices: vectors should have w=0 and points should have w=1 in order for the affine transformations to work. If you aren't dealing with any affine transformations - as is typical for many vector operations - then you can just use a 3-component vector instead. Points typically have to deal with translation (an affine transformation) while normals generally do not, which allows a lot of shortcuts.
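To make that concrete, here is a minimal GLSL vertex-shader sketch; the uniform and attribute names are placeholders of my own choosing, and it assumes GLSL 3.30:

```glsl
// Minimal sketch (GLSL 3.30 assumed); uniform/attribute names are placeholders.
#version 330 core

uniform mat4 uModelView;    // affine model-view transform
uniform mat4 uProjection;   // projection transform

in vec3 aPosition;          // object-space position (a point)
in vec3 aNormal;            // object-space normal (a direction)

out vec3 vNormalView;

void main()
{
    // Point: w = 1, so the translation part of the matrix applies.
    vec4 positionView = uModelView * vec4(aPosition, 1.0);

    // Direction: w = 0, so the translation part is ignored.
    // Caveat: for normals this is only correct when the upper-left 3x3 is a
    // rotation (plus re-normalization if there is uniform scale); see the next reply.
    vNormalView = (uModelView * vec4(aNormal, 0.0)).xyz;

    gl_Position = uProjection * positionView;
}
```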

http://gamedev.stackexchange.com/questions/64081/what-is-w-componet

Sean Middleditch – Game Systems Engineer – Join my team!

I should mention that using w=0 is fine for directional lights (they are at a point at infinity, which is what w=0 means), but for normals this doesn't always work. The 4x4 matrix that describes the affine transformation has a 3x3 submatrix that describes a mapping from vectors to vectors, but in general you should apply the inverse transpose of that matrix to a normal vector. If your 3x3 matrix is a rotation, it is its own inverse transpose, so plugging in w=0 will work. But it's good to know why it works and in which circumstances it might not.
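As a sketch of the normal-matrix approach (assuming GLSL 3.30 or later so that inverse() is available in the shader; in practice you would normally compute the inverse transpose once on the CPU and upload it as a uniform, and the names below are placeholders):

```glsl
// Sketch only: the inverse transpose ("normal matrix") is computed per vertex
// here for brevity; real code usually computes it once on the CPU.
#version 330 core

uniform mat4 uModelView;
uniform mat4 uProjection;

in vec3 aPosition;
in vec3 aNormal;

out vec3 vNormalView;

void main()
{
    // Inverse transpose of the upper-left 3x3 handles non-uniform scale correctly.
    mat3 normalMatrix = transpose(inverse(mat3(uModelView)));
    vNormalView = normalize(normalMatrix * aNormal);

    gl_Position = uProjection * uModelView * vec4(aPosition, 1.0);
}
```

If the matrix is a pure rotation the two approaches give the same result, which is why the w=0 shortcut so often appears to work.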

To develop the above posts a bit: vectors can be used for positions, differences between positions, directions, and normals/tangents/binormals/bitangents.

* A position vector denotes, well, a point in space.

* A difference vector has no beginning and no end in the sense of positions; it has just a direction and a length. This may be confusing, but look at it this way: it is easy to find two different pairs of positions whose difference vectors are identical, and you cannot tell which vector resulted from which pair of positions by looking at the vector's components.

* A direction vector is a vector with its length set to 1 (unit length), so that the vector still has a direction but no distinguishable length. A difference vector can be turned into a direction vector by "normalization".

* A normal/tangent/... vector is a direction vector with the additional constraint that it makes a specific angle with a line, a surface, and/or other vectors.

In a homogeneous co-ordinate system a position vector has the homogeneous co-ordinate, say w, set to a value not equal to zero, where w==1 denotes the normalized case (all other cases can simply be converted to the normalized case by dividing by w). All other kinds of vectors have w==0. In an affine co-ordinate system the w is implicit: you as the programmer have to remember which kind of vector you're dealing with. In a homogeneous co-ordinate system you have w as a helper, but you still need to remember the special constraints on normals/tangents/... mentioned by Álvaro.
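A small sketch in GLSL syntax, with arbitrary values, of how w separates those cases:

```glsl
// Illustrative snippet (arbitrary values) of the w semantics described above.
void example()
{
    vec4 p  = vec4(2.0, 4.0, 6.0, 2.0);  // homogeneous point, w != 0
    vec3 pn = p.xyz / p.w;               // normalized position: (1, 2, 3)

    vec4 a = vec4(1.0, 2.0, 3.0, 1.0);   // point, w == 1
    vec4 b = vec4(4.0, 6.0, 8.0, 1.0);   // point, w == 1
    vec4 d = b - a;                      // difference vector: w == 0 falls out automatically

    vec3 dir = normalize(d.xyz);         // direction vector: unit length
}
```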

I'm sure (not to say I'm hoping) that the examples you found on the internet do consider this in one way or another.

