# OpenGL Cube mapping

## Recommended Posts

Hello. I've been trying to understand an implementation of cube mapping in GLSL, but I have trouble following the reasoning behind one part of it. It's the theory rather than the programming that's the problem, so I thought I could ask about it here.

From what I remember reading a couple of months back in a 3D computer graphics book, cube mapping is simply a means of reflecting the environment on an object to get some nice results at a much lower performance cost than a global illumination method such as ray tracing. Skipping the part where you have to create the actual textures, the concept is as follows:

Calculate the view vector (the vector going from the camera TO the vertex), calculate the surface normal, and from these two compute the reflected view vector, which is used to index into the texture.

I'm currently reading OpenGL SuperBible 5th Edition, and the implementation there is as follows in the vertex shader:

The view vector is calculated by merely transforming the incoming vertex by the modelview matrix (which makes sense, since we're in eye space then and the camera is always at (0.0, 0.0, 0.0)). The normal is transformed by the normal matrix to bring it into eye space, and then the reflected view vector is calculated from these two.

But after this part, the reflected view vector is multiplied by the inverse of the camera rotation matrix to account for the camera's orientation, so that the reflection stays correct when moving the camera around the scene (this is mentioned in the book). If this is not done, you get the same reflection wherever you move the camera, which I verified by removing the part with the inverse matrix.

What I don't have a clear understanding of is why it is the inverse of the camera rotation matrix that is needed here. And why is the reflection wrong when moving the camera around without considering the camera rotation matrix? Is it because, wherever the camera is moved, it is always at (0.0, 0.0, 0.0) in eye space?

##### Share on other sites
The cubemap lookup is done in world space, so you need to apply the inverse of the rotation part of the modelview matrix to get the calculated lookup vector from view space (eye space) into world space.

##### Share on other sites
What I don't get, then, is why you are able to get a proper reflection on the object at all if you only use the reflected view vector in eye space. Yes, it will be view-dependent, but if the indexing of the cubemap is done in world space, then it shouldn't work at all, should it? Or am I missing something here?

##### Share on other sites
I'm not 100% sure what you mean. I've drawn a diagram which might help.

As you can see, the red line is the reflected view vector in eye space going towards z+. If we use this as a cubemap lookup without transforming it we end up accessing the wrong face of the cubemap (the red line on the bottom part of the diagram). If we transform it using the inverse of the eye space transformation we get the pink line, which is correct; it points towards the z+ face of the cubemap.

The result isn't completely perfect as it doesn't take into account the reflecting point's position relative to the cubemap centre. There's some discussion on that issue in [url="http://www.gamedev.net/topic/616553-gpu-gems-image-based-lighting"]this thread[/url].

##### Share on other sites
What you said does make sense. Since the cubemap is in world space and we need everything in the same space, we transform the lookup vector by the inverse view matrix to get back to world space. But here comes a stupid question then... how do you know that the actual cubemap is in world space? This is basically what got me confused about why they did that inverse operation in the shader.

At first I thought the reason you don't get correct reflections when moving the camera around the scene was that you never considered the position of the camera. Regardless of where we move the camera in the scene, its position is always (0.0, 0.0, 0.0) in eye space. Basically, what I mean is that you can place the camera at (3.0, 3.0, 3.0) or at (15.0, 15.0, 15.0) in world space, but it is still at (0.0, 0.0, 0.0) in eye space.

Also, what I meant in my previous reply was this: if I skipped the step of transforming the lookup vector back to world space and sampled the texture with the eye-space lookup vector, I would get a correct reflection on my sphere as long as the camera remained static. But if I moved it around, basically nothing would happen. I tried moving the camera to the other side of the sphere, but it kept showing the same reflection I saw in the starting position. This confused me, since I thought I would get a wrong reflection regardless of whether the camera was at its start position or not, because the reflected view vector is in eye space and should sample the wrong face, as you described above.

##### Share on other sites
What I should have said is that the cubemap texture lookup treats the cubemap as being axis-aligned so, for example, using (0,0,1) as the lookup vector will always access the centre of the z+ face of the cubemap, (1,0,0) will always access the x+ face, etc.

The cubemap doesn't [i]necessarily [/i]have to be in world space. For example, if you were generating a cubemap to apply to a specific object, you might transform the cameras used to render the cubemap faces into object space. To do a lookup into such a cubemap from, say, a view-space reflection vector, you'd transform the vector from view space into object space rather than world space.

The camera's position [i]is [/i]taken into account. Yes it's always at (0,0,0) in view space, but remember that in view space the camera's motion translates into the motion of everything else in the scene; if you move the camera right->left in world space you're moving the world left->right in view space. The view space reflection vector you're using intrinsically incorporates the position of the camera; that doesn't change when you transform it into world space.

As for your final point: look closely at the reflection results when using the untransformed reflection vector. You should notice that the reflection [i]does[/i] change, but only slightly (the amount by which it changes depends on the shape of the reflecting object - a sphere is best for playing around with this sort of thing). The reason the reflection stays more-or-less the same is that the object's position and normals (and hence the reflection vector) are more-or-less the same [i]relative to the camera[/i].

Hopefully this has explained things a little better than I did previously!

##### Share on other sites
Hello. Having been abroad and only just returned home, I haven't had the chance to reply until now, but everything makes sense to me now. I did check the reflection results and, as you said, there is some very slight change, though it was barely noticeable to me at first.

And yeah... I totally forgot to consider that when you move the camera you are basically moving the world, which really does take the position of the camera into account. It makes much more sense now that you've reminded me of it!

Thanks a lot for the help, greatly appreciated! Things are definitely clearer now.
