# OpenGL Cubemap reflections & GLSL vertex shaders [SOLVED!]

## Recommended Posts

Hello, long time reader & first time poster. I've been toying around with simple OpenGL cubemap reflections, trying to learn the API and eventually GLSL. Various tutorials on the net have been more than helpful so far, but I've come across a vertex shader issue I'm unable to solve. Searching for similar threads on this forum turned up related posts, but no answer :/

I have a simple scene with fixed-function reflections that works, and I'm trying to duplicate it using GLSL vertex and fragment shaders. The fragment shader is trivial and I believe that's not where the issue lies; from what I gather I'm using the wrong matrix in my vertex shader, but no matter what I try I can't duplicate the (working) fixed-pipeline behaviour. My scene works like this, with irrelevant code cut:
```c
//clear color & depth buffers (...)
//set up projection & reset matrices

glMatrixMode(GL_PROJECTION);
gluPerspective(45.0, 640.0/480.0, 0.01f, 100.0f);
glMatrixMode(GL_TEXTURE);
glMatrixMode(GL_MODELVIEW);

//draw textured skybox

glMultMatrixf((const GLfloat *)&sceneRotationMatrix);

//draw some quads (...)

//draw model

glTranslatef(cameraPos.x, cameraPos.y, cameraPos.z);
glMultMatrixf((const GLfloat *)&sceneRotationMatrix);
glMultMatrixf((const GLfloat *)&modelRotationMatrix);

glEnable(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, someCubemapTexture);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP); glEnable(GL_TEXTURE_GEN_S);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP); glEnable(GL_TEXTURE_GEN_T);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP); glEnable(GL_TEXTURE_GEN_R);

glMatrixMode(GL_TEXTURE);
glMultMatrixf((const GLfloat *)&sceneRotationMatrixTranspose);

drawModel(someModel);

glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
glDisable(GL_TEXTURE_GEN_R);
glDisable(GL_TEXTURE_CUBE_MAP);

//draw some text, flush etc (...)
```


As you can see, I have two separate rotation matrices: one for the "camera" (everything in the scene gets multiplied by it), and one for the "model" (only the model gets rotated by this, not the skybox). This seems to work, and the reflection on the model takes both scene and model rotation into account: no matter how the model is rotated, the bottom cube face (green in the screenshot) is reflected on the bottom side of the model. The first image has the model in the natural position (as loaded from the 3ds file), the second has it rotated from this position.

Now I've been trying to duplicate this functionality using GLSL. After much trial & error, and reading contradictory tutorials, here is my vertex shader:
```glsl
uniform vec3 camera;

varying vec3 Reflect;

void main()
{
    vec4 cam = vec4(-camera.xyz, 0.0);
    vec3 normal = gl_Normal;
    vec4 vert = gl_Vertex;
    vec4 eyeDir = gl_ModelViewMatrixTranspose * cam;
    Reflect = reflect(vert.xyz - eyeDir.xyz, normal);

    gl_Position = ftransform();
}
```


The fragment shader just does a textureCube lookup using the Reflect vector. This seems to work for the scene rotation, but if the model is rotated separately (using modelRotationMatrix above), the reflections are not updated. As an example, here is a similar scene to the one above, but rendered using the shaders. First with the model in the natural position (it looks just like the fixed-function version): And, after rotating the model:

What happens is that the reflections seem to be calculated without taking the model rotation into account - the "belly" of the model is still green, because in the default position it's supposed to reflect the green side of the cube. I'm guessing I need to multiply one of the vectors in the vertex shader by one of the matrices, but so far trial & error has netted me no results :( . Any and all suggestions are appreciated.

[Edited by - peachu on March 17, 2009 6:13:52 PM]

##### Share on other sites
Not 100% sure if this is right, but I think you want to multiply the normal by gl_NormalMatrix, to take the model's rotation into account: when you rotate, the modelview matrix obviously transforms your vertices, but your normals remain the same.

##### Share on other sites
Unfortunately,

```glsl
vec3 normal = gl_NormalMatrix * gl_Normal;
```

produces even weirder results. The reflections are wrong even when the model isn't rotated; it only looks right if both the scene and model rotation matrices are the identity.

Thinking about it, you're probably right that this is about the normals, though. I tried multiplying them by every matrix I could find in the GLSL reference sheet, but no luck :)

I'll see if I can make the model loader draw normal vectors for every triangle, that might make things easier to diagnose.

edit: I'm dumb, this won't help as the normals would then be vertices, and get transformed either way :v

[Edited by - peachu on March 15, 2009 1:35:31 PM]

##### Share on other sites
Try this:
```glsl
varying vec3 refdir;

void main(void)
{
    vec4 camcen = gl_ModelViewMatrixInverse[3]; //camera center in world space coordinates
    vec4 dir    = gl_Vertex - camcen;           //(assuming) gl_Vertex is given in WSC
    refdir      = reflect(dir.xyz, gl_Normal);
    gl_Position = ftransform();
}
```

If your cubemap is static, this is all you need :)

Edit:
In your original VS:

```glsl
uniform vec3 camera;

varying vec3 Reflect;

void main()
{
    vec4 cam = vec4(-camera.xyz, 0.0);
    vec3 normal = gl_Normal;
    vec4 vert = gl_Vertex;
    vec4 eyeDir = /*gl_ModelViewMatrixTranspose */ cam; //don't have to do that transformation
    Reflect = reflect(vert.xyz - eyeDir.xyz, normal);

    gl_Position = ftransform();
}
```

It should work now (it does for me).

Edit2: Sorry, I was wrong (I'm a noob ;). My VS works fine only when the model is not transformed, i.e. when the only moving object is the camera.

Anyway, if we assume that the cubemap is static, one has to send the object's transformation matrix (and normal matrix) to the vertex shader in order to get the world-space coordinates of its vertices before calculating the reflection vector. This will not be very efficient with big meshes; there must be better approaches.

[Edited by - knighty on March 16, 2009 4:04:57 AM]

##### Share on other sites
Maybe this will help: link.

##### Share on other sites
I solved this, sort of :)

Multiplying the GL_TEXTURE matrix by my model rotation matrix, and then multiplying the reflection vector by gl_TextureMatrix[0], seems to work. In the C code:

```c
glMatrixMode(GL_TEXTURE);
glMultMatrixf((const GLfloat *)&sceneRotationMatrixTranspose);
if (opt2 & 8) //this is my obscure way of checking if we're using shaders
    glMultMatrixf((const GLfloat *)&modelRotationMatrix);
```

and in the vertex shader:

```glsl
vec3 tReflect = reflect(vert.xyz - cam.xyz, normal);
Reflect = vec3(gl_TextureMatrix[0] * vec4(tReflect, 0.0));
//pass Reflect to fragment shader
```

Bizarrely enough, multiplying GL_TEXTURE by the model rotation breaks the fixed-pipeline path: the rotation matrix gets applied twice, so the reflection rotates twice for every rotation of the model. But at least the shader works. Clearly this is not the right way of doing it, and someday I'll probably figure out exactly where this mysterious other multiplication is taking place, but for now I'm satisfied :)

Thanks for your help!

##### Share on other sites
Try this:
```glsl
uniform mat3 CubeMapModelViewMatrixInverse;

varying vec3 Rdir;

void main(void)
{
    gl_Position = ftransform();
    vec3 Vdir   = (gl_ModelViewMatrix * gl_Vertex).xyz; //assuming w = 1
    vec3 Ndir   = gl_NormalMatrix * gl_Normal;
    Rdir        = CubeMapModelViewMatrixInverse * reflect(Vdir, normalize(Ndir));
}
```

Calculations are done in camera space.

With this shader one can also rotate the cube map :) but it requires calculating the inverse of the cube map's modelview matrix.

Typically the view and cube-map transformations are just rotations and translations; since the uniform only needs the rotation part (a mat3), the inverse of CubeMapModelViewMatrix is simply its transpose.
