
OpenGL Cube mapping


Suen    160
Hello. I've been trying to understand an implementation of cube mapping in GLSL, but I have some problems with the reasoning behind part of it. So it's the theory rather than the programming I'm having trouble with, which I thought I could ask about here.

From what I remember reading a couple of months back in a 3D computer graphics book, cube mapping is simply a means of reflecting the environment on an object to get some nice results at a much lower performance cost (as opposed to a global illumination method such as ray tracing). Skipping the part where you have to create the actual textures, the concept is as follows:

Calculate the view vector (the vector going from the camera TO the vertex), calculate the surface normal, and from these two calculate the reflected view vector, which is used to index into the texture.

I'm currently reading OpenGL SuperBible 5th Edition, and the implementation in the vertex shader there is as follows:

The view vector is calculated by merely transforming the incoming vertex with the modelview matrix (which makes sense, since we're in eye space then and the camera is always at (0.0, 0.0, 0.0)). The normal is transformed by the normal matrix to get it into eye space, and then the reflected view vector is calculated from these two.
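
To make it concrete, here's a minimal sketch of that eye-space part (the uniform and attribute names are my own, not necessarily the book's):

[code]
#version 330

// Minimal sketch of the eye-space reflection setup; names are assumed.
uniform mat4 mvMatrix;      // modelview matrix
uniform mat4 pMatrix;       // projection matrix
uniform mat3 normalMatrix;  // transforms normals into eye space

in vec4 vVertex;            // object-space position
in vec3 vNormal;            // object-space normal

out vec3 vReflectDir;       // cubemap lookup vector, passed to the fragment shader

void main()
{
    // Eye-space position: the camera sits at the origin in eye space, so the
    // normalized position is the view direction from the camera to the vertex.
    vec4 eyePos  = mvMatrix * vVertex;
    vec3 viewDir = normalize(eyePos.xyz);

    // Eye-space normal.
    vec3 eyeNormal = normalize(normalMatrix * vNormal);

    // Reflected view vector, still expressed in eye space.
    vReflectDir = reflect(viewDir, eyeNormal);

    gl_Position = pMatrix * eyePos;
}
[/code]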

But after this part the reflected view vector is multiplied by the inverse of the camera rotation matrix to take the camera's orientation into account, so that the reflection stays correct when moving the camera around the scene (which is mentioned in the book). If this is not done you'll get the same reflection wherever you move the camera, which I verified by removing the part with the inverse matrix.
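
In terms of the sketch above, that extra step is just one more matrix multiply on the reflection vector before it leaves the vertex shader (invCameraRotation is a name I've made up for whatever uniform the application supplies):

[code]
// Added to the sketch above; hypothetical uniform name.
uniform mat3 invCameraRotation; // inverse of the camera's rotation matrix

// ...inside main(), replacing the earlier assignment:
vReflectDir = invCameraRotation * reflect(viewDir, eyeNormal);
[/code]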

I don't have a clear understanding of why it is the inverse of the camera rotation matrix that is needed for this. And why is the reflection wrong when moving the camera around without taking the camera rotation matrix into account? Is it because, wherever the camera is moved, it's always at (0.0, 0.0, 0.0) in eye space?

johnchapman    601
The cubemap lookup is done in world space, so you need to apply the inverse of the rotation part of the modelview matrix to get the calculated lookup vector from view space (eye space) into world space.
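
Assuming the camera transform is a rigid transform (rotation plus translation), the inverse of its rotation part is just the transpose, so the transform can be as cheap as something like this (a sketch; viewMatrix is whatever the application calls its camera matrix):

[code]
// Sketch: bring an eye-space lookup vector back into world space.
uniform mat4 viewMatrix; // the camera/view transform; name assumed

vec3 eyeToWorld(vec3 v)
{
    // For a pure rotation the inverse equals the transpose, so no general
    // matrix inversion is needed; mat3() drops the translation part.
    mat3 invViewRot = transpose(mat3(viewMatrix));
    return invViewRot * v;
}
[/code]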

Suen    160
What I don't get then is why you are able to get a proper reflection on the object if you only use the reflected view vector in eye space. Yes, it will be view-dependent then, but if the indexing of the cubemap is done in world space, then using the eye-space vector shouldn't work either, no? Or am I missing something here?

johnchapman    601
I'm not 100% sure what you mean. I've drawn a diagram which might help.

As you can see, the red line is the reflected view vector in eye space going towards z+. If we use this as a cubemap lookup without transforming it we end up accessing the wrong face of the cubemap (the red line on the bottom part of the diagram). If we transform it using the inverse of the eye space transformation we get the pink line, which is correct; it points towards the z+ face of the cubemap.

The result isn't completely perfect as it doesn't take into account the reflecting point's position relative to the cubemap centre. There's some discussion on that issue in [url="http://www.gamedev.net/topic/616553-gpu-gems-image-based-lighting"]this thread[/url].

Suen    160
What you said does make sense. Since the cubemap is in world space and we need everything in the same space, we transform the lookup vector by the inverse view matrix to get back to world space. But here comes a stupid question then... how do you know that the actual cubemap is in world space? This is basically what confused me about why they did that inverse operation in the shader.

At first I thought the reason you don't get correct reflections when moving the camera around the scene was that the position of the camera is never considered. Regardless of where we move the camera in the scene, its position is always (0.0, 0.0, 0.0) in eye space. Basically, what I mean is that you can place the camera at (3.0, 3.0, 3.0) or at (15.0, 15.0, 15.0) in world space, but it would still be at (0.0, 0.0, 0.0) in eye space.

Also, what I meant in my previous reply was that if I skipped transforming the lookup vector back to world space and sampled the texture with the eye-space lookup vector, I would get a correct reflection on my sphere as long as the camera remained static. But if I moved it around, basically nothing would happen. I tried moving the camera to the other side of the sphere, but it kept showing the same reflection I saw at the starting position. This confused me, since I thought I would get a wrong reflection regardless of whether the camera was at its start position or not, due to the reflected view vector being in eye space and sampling the wrong face as you described above.

johnchapman    601
What I should have said is that the cubemap texture lookup treats the cubemap as being axis-aligned so, for example, using (0,0,1) as the lookup vector will always access the centre of the z+ face of the cubemap, (1,0,0) will always access the x+ face, etc.
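
So the fragment-shader side can be as simple as sampling with whatever direction you pass down, for example (a sketch, with assumed names):

[code]
#version 330

uniform samplerCube cubeMap;  // name assumed

in vec3 vReflectDir;          // reflection vector from the vertex shader
out vec4 fragColor;

void main()
{
    // The lookup treats the cube as axis-aligned: e.g. (0,0,1) hits the
    // centre of the +Z face regardless of how the scene is oriented.
    fragColor = texture(cubeMap, vReflectDir);
}
[/code]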

The cubemap doesn't [i]necessarily[/i] have to be in world space. For example, if you were generating a cubemap to apply to a specific object you might transform the cameras used to render the cubemap faces into object space. To do a lookup into such a cubemap from, say, a view-space reflection vector, you'd be transforming the vector from view space into object space and not world space.

The camera's position [i]is [/i]taken into account. Yes it's always at (0,0,0) in view space, but remember that in view space the camera's motion translates into the motion of everything else in the scene; if you move the camera right->left in world space you're moving the world left->right in view space. The view space reflection vector you're using intrinsically incorporates the position of the camera; that doesn't change when you transform it into world space.

As for your final point: look closely at the reflection results when using the untransformed reflection vector. You should notice that the reflection [i]does[/i] change, but only slightly (the amount by which it changes depends on the shape of the reflecting object - a sphere is best for playing around with this sort of thing). The reason the reflection stays more-or-less the same is that the object's position and normals (and hence the reflection vector) are more-or-less the same [i]relative to the camera[/i].

Hopefully this has explained things a little better than I did previously!

Suen    160
Hello. Having been abroad and only just returned home, I haven't had the chance to reply until now, but everything makes sense to me now. I did check the reflection results and, as you said, there is some very slight change, but it was barely noticeable to me at first.

And yeah...I totally forgot to consider that when you move the camera you are basically moving the world, thus really taking the position of the camera into account. Makes much more sense now that you reminded me of it!

Thanks lots for the help, greatly appreciated! Things are definitely clearer now :)

