Cube mapping

Hello. I've been trying to understand an implementation of cube mapping in GLSL, but I have trouble following the reasoning behind one part of it. My problem is with the theory rather than the programming, so I thought I could ask about it here.

From what I remember reading a couple of months back in a 3D computer graphics book, cube mapping is simply a means of reflecting the environment on an object to get some nice results at a much lower performance cost than a global illumination method such as ray tracing. Skipping the part where you have to create the actual textures, the concept is as follows:

Calculate the view vector (the vector going from the camera TO the vertex), calculate the surface normal, and from these two compute the reflected view vector, which is used to index into the cubemap texture.

I'm currently reading OpenGL SuperBible 5th Edition, and the implementation in the vertex shader there is as follows:

The view vector is calculated by merely transforming the incoming vertex with the modelview matrix (which makes sense, since we're then in eye space and the camera is always at (0.0, 0.0, 0.0)). The normal is transformed by the normal matrix to get it into eye space, and the reflected view vector is then calculated from these two.
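Roughly, that part of the vertex shader looks like this (a minimal sketch with my own names, not necessarily the book's exact code):

```glsl
#version 120

uniform mat4 mvMatrix;      // modelview matrix
uniform mat4 mvpMatrix;     // modelview-projection matrix
uniform mat3 normalMatrix;  // inverse-transpose of mvMatrix's upper 3x3

attribute vec4 vVertex;
attribute vec3 vNormal;

varying vec3 vReflectDir;   // cubemap lookup vector (still in eye space here)

void main()
{
    // The eye-space position doubles as the view vector: the camera
    // sits at the origin in eye space, so (position - origin) == position.
    vec3 viewDir = normalize(vec3(mvMatrix * vVertex));

    // Bring the normal into eye space.
    vec3 n = normalize(normalMatrix * vNormal);

    // Reflect the view vector about the normal (still in eye space).
    vReflectDir = reflect(viewDir, n);

    gl_Position = mvpMatrix * vVertex;
}
```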

But after this part, the reflected view vector is multiplied by the inverse of the camera rotation matrix to account for the camera's orientation, so that the reflection stays correct when moving the camera around the scene (which is mentioned in the book). If this is not done you get the same reflection wherever you move the camera, which I verified by removing the part with the inverse matrix.
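That extra step amounts to two additions to the sketch above (again my names; the inverse camera rotation is presumably computed on the CPU side and passed in as a uniform):

```glsl
uniform mat4 mInverseCamera;  // inverse of the camera's rotation

// ...at the end of main(), after computing vReflectDir:
// w = 0.0 so only the rotation is applied, never any translation.
vReflectDir = vec3(mInverseCamera * vec4(vReflectDir, 0.0));
```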

What I don't have a clear understanding of is why it is the inverse of the camera rotation matrix that is needed for this. And what is the reason the reflection is wrong when moving the camera around if the camera rotation matrix isn't taken into account? Is it due to the fact that wherever the camera is moved, it's always at (0.0, 0.0, 0.0) in eye space?
The cubemap lookup is done in world space, so you need to apply the inverse of the rotation part of the modelview matrix to get the calculated lookup vector from view space (eye space) into world space.
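As a sketch (assuming the modelview is a rigid transform, i.e. rotation plus translation with no scaling), the transform is cheap: directions ignore the translation, and the inverse of the remaining 3x3 rotation is just its transpose:

```glsl
uniform mat4 mvMatrix;  // modelview; assumed rigid (no scale/shear)

// Take a view-space direction back to world space. Note: if the model
// itself is rotated, this actually lands in object space (see below).
vec3 viewToWorld(vec3 dir)
{
    mat3 rot = mat3(mvMatrix);    // upper-left 3x3 (GLSL 1.20+)
    return transpose(rot) * dir;  // transpose == inverse for a rotation
}
```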
What I don't get then is why you are able to get a proper reflection on the object if you only use the reflected view vector in eye space. Yes, it will be view-dependent then, but if the indexing of the cubemap is done in world space then it shouldn't work either, no? Or am I missing something here?
I'm not 100% sure what you mean. I've drawn a diagram which might help.

As you can see, the red line is the reflected view vector in eye space, pointing towards z+. If we use this as a cubemap lookup without transforming it, we end up accessing the wrong face of the cubemap (the red line in the bottom part of the diagram). If we transform it using the inverse of the eye-space transformation we get the pink line, which is correct; it points towards the z+ face of the cubemap.

The result isn't completely perfect as it doesn't take into account the reflecting point's position relative to the cubemap centre. There's some discussion on that issue in this thread.
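For reference, the usual fix there is the "local" (box-projected) cubemap trick: intersect the reflection ray with a proxy box around the environment, then look up the direction from the capture point to that intersection. A rough world-space sketch, with hypothetical uniforms:

```glsl
uniform vec3 boxMin;      // proxy volume bounds (hypothetical)
uniform vec3 boxMax;
uniform vec3 cubeCentre;  // point the cubemap was captured from

vec3 localCorrect(vec3 reflDir, vec3 worldPos)
{
    // Ray/AABB intersection: distance along reflDir to the exit face
    // (assumes reflDir has no exactly-zero components).
    vec3 t1 = (boxMax - worldPos) / reflDir;
    vec3 t2 = (boxMin - worldPos) / reflDir;
    float t = min(min(max(t1.x, t2.x), max(t1.y, t2.y)), max(t1.z, t2.z));

    // Lookup direction from the capture centre to the hit point.
    return (worldPos + t * reflDir) - cubeCentre;
}
```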
What you said does make sense. Since the cubemap is in world space and we need everything in the same space, we transform the lookup vector by the inverse view matrix to get back to world space. But here comes a stupid question then... how do you know that the actual cubemap is in world space? This is basically what got me confused about why they did that inverse operation in the shader.

At first I thought the reason you don't get correct reflections when moving the camera around the scene was that you never consider the position of the camera. Regardless of where we move the camera in the scene, its position is always (0.0, 0.0, 0.0) in eye space. Basically what I mean is that you can place the camera at (3.0, 3.0, 3.0) or at (15.0, 15.0, 15.0) in world space, but it will still be at (0.0, 0.0, 0.0) in eye space.

Also, what I meant in my previous reply was that if I skipped the part of transforming the lookup vector back to world space and sampled the texture with the eye-space lookup vector, I would get a correct reflection on my sphere as long as the camera remained static. But if I moved the camera around, basically nothing would happen. I tried moving the camera to the other side of the sphere, but it kept showing the same reflection I saw from the starting position. This confused me, since I thought I would get a wrong reflection regardless of whether the camera was at its start position or not, given that the reflected view vector is in eye space and samples the wrong face, as you described above.
What I should have said is that the cubemap texture lookup treats the cubemap as axis-aligned, so, for example, using (0,0,1) as the lookup vector will always access the centre of the z+ face of the cubemap, (1,0,0) will always access the x+ face, etc.
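To make that concrete, here's a trivial fragment-shader sketch; these constant lookups hit the same face centres no matter how the scene or camera is transformed:

```glsl
uniform samplerCube cubeMap;

void main()
{
    vec4 zPlus = textureCube(cubeMap, vec3(0.0, 0.0, 1.0)); // centre of z+ face
    vec4 xPlus = textureCube(cubeMap, vec3(1.0, 0.0, 0.0)); // centre of x+ face
    gl_FragColor = zPlus;
}
```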

The cubemap doesn't necessarily have to be in world space. For example, if you were generating a cubemap to apply to a specific object, you might transform the cameras used to render the cubemap faces into object space. To do a lookup into such a cubemap from, say, a view-space reflection vector, you'd transform the vector from view space into object space rather than world space.
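A sketch of that case (hypothetical uniforms): undo the camera rotation first, then the model rotation:

```glsl
uniform mat3 invViewRot;   // view -> world rotation (hypothetical)
uniform mat3 invModelRot;  // world -> object rotation (hypothetical)

// Take a view-space direction into the object space the cubemap
// was rendered in.
vec3 viewToObject(vec3 dir)
{
    return invModelRot * (invViewRot * dir);
}
```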

The camera's position is taken into account. Yes, it's always at (0,0,0) in view space, but remember that in view space the camera's motion translates into the motion of everything else in the scene; if you move the camera right->left in world space, you're moving the world left->right in view space. The view-space reflection vector you're using intrinsically incorporates the position of the camera; that doesn't change when you transform it into world space.

As for your final point: look closely at the reflection results when using the untransformed reflection vector. You should notice that the reflection does change, but only slightly (the amount by which it changes depends on the shape of the reflecting object; a sphere is best for playing around with this sort of thing). The reason the reflection stays more or less the same is that the object's position and normals (and hence the reflection vector) are more or less the same relative to the camera.

Hopefully this has explained things a little better than I did previously!
Hello. Having been abroad and only just returned home, I haven't had the chance to reply until now, but everything makes sense to me now. I did check the reflection results and, as you said, there is some very slight change, but it was barely noticeable to me at first.

And yeah... I totally forgot to consider that when you move the camera you are basically moving the world, which really does take the position of the camera into account. Makes much more sense now that you reminded me of it!

Thanks lots for the help, greatly appreciated! Things are definitely clearer now :)

