# OpenGL Request for Explanation of Light Half Vector ...

## Recommended Posts

Hello,

I have been trying to get my head around light half-vectors for a while now, but I still don't understand them. Here is what I found in a StackExchange post:

Quote

http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir2 reports that the half vector in an OpenGL context is 'Eye position - Light position', but then it goes on to say 'luckily OpenGL calculates it for us' [which is now deprecated].

How can it practically be calculated? (A simple example would be greatly appreciated.) [Mainly, it puzzles me what "Eye" is and how it can be derived.]

At the moment I have managed to make the specular calculations work (with a good visual result) with the half vector being equal to Light, where Light is:

```glsl
vec3 Light = normalize(light_position - vec3(out_Vertex));
```

Now, I have no idea why that worked.

[If I at least knew what "Eye" is and how it can be derived practically.]

My two questions are:

• In the above code, is the "out_Vertex" the vPosition that is coming in for the geometry?
• I don't have a light position in my GLSL shader but it seems based on the above example I need one. Am I correct? Or do I need to send a light position to the GLSL fragment shader as well?

Thank you for any input you can provide on this.

##### Share on other sites

Quote

My two questions are:

• In the above code, is the "out_Vertex" the vPosition that is coming in for the geometry?
• I don't have a light position in my GLSL shader but it seems based on the above example I need one. Am I correct? Or do I need to send a light position to the GLSL fragment shader as well?

Thank you for any input you can provide on this.

The "half vector" usually refers to the formula `normalize(lightDir + viewDir);`.

That's all.
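For concreteness, here is a minimal sketch of how that half vector is typically used in a Blinn-Phong specular term (the function and parameter names are illustrative, not from your code):

```glsl
// Sketch: Blinn-Phong specular using the half vector.
// lightDir and viewDir point away from the surface (toward the light and
// the eye respectively) and are assumed to be normalized.
float blinnSpecular(vec3 normal, vec3 lightDir, vec3 viewDir, float shininess)
{
    vec3 halfVec = normalize(lightDir + viewDir);
    return pow(max(dot(normal, halfVec), 0.0), shininess);
}
```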

Now, to get the light direction, you may already have it (e.g. directional lights have a user-specified direction but no position) or you may have to calculate it (e.g. point lights have a position but no direction, thus lightDir = normalize(lightPos - out_Vertex), pointing from the surface toward the light); see the sketch below.
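As a sketch of both cases (the function and parameter names here are assumptions, not variables from the thread):

```glsl
// Directional light: the user-specified direction points from the light,
// so negate it to get the surface-to-light vector.
vec3 directionalLightDir(vec3 light_direction)
{
    return normalize(-light_direction);
}

// Point light: no fixed direction; derive it from the surface point.
vec3 pointLightDir(vec3 light_position, vec3 world_position)
{
    return normalize(light_position - world_position);
}
```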

viewDir is the camera's direction, so you already have that value.

Now, what out_Vertex is depends on which space you do everything in. If the light position is in object space, then out_Vertex should be the input vertex position from the vertex shader. If the light is in world space, then out_Vertex should be the input vertex position multiplied by the world matrix. If it's in view space, it should be the input vertex position multiplied by the world-view matrix.
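For example, a vertex shader that brings the position into world space to match a world-space light might look like this (a sketch; the matrix and attribute names are assumptions):

```glsl
#version 330 core
uniform mat4 world_matrix;      // object -> world
uniform mat4 view_proj_matrix;  // world -> clip
in  vec4 in_Position;           // object-space vertex position
out vec3 world_position;        // world-space position for lighting

void main()
{
    vec4 world_pos = world_matrix * in_Position;
    world_position = world_pos.xyz;
    gl_Position    = view_proj_matrix * world_pos;
}
```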

I suggest that you read the Programming Vertex, Geometry and Pixel Shaders book, which was once hosted on GameDev.net.

It has an entire chapter dedicated to lighting and should contain everything you want and need to know about lighting. If you google enough, there's also a convenient PDF version of it.

##### Share on other sites

Quote

viewDir is the camera's direction, so you already have that value.

I think this is not true. viewDir should be the vector from the lit pixel's position to the camera's position - this is different from the camera's "lookAt" direction, if that's what you meant. I made this mistake too at first.
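In GLSL terms, something like this (a sketch; camera_position and world_position are assumed names):

```glsl
// Vector from the shaded point toward the eye; NOT the camera's
// look-at direction.
vec3 computeViewDir(vec3 camera_position, vec3 world_position)
{
    return normalize(camera_position - world_position);
}
```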

1. In the above code, is the "out_Vertex" the vPosition that is coming in for the geometry?

Yes. Lighting is usually done in world-space, so this is usually the world-space position.

2. I don't have a light position in my GLSL shader but it seems based on the above example I need one. Am I correct? Or do I need to send a light position to the GLSL fragment shader as well?

Yes and no... You shouldn't send the light position into the fragment shader. It is better to compute the lightDir vector in the vertex shader and pass that into the fragment shader. The pipeline will interpolate the lightDir vector linearly across the surface of each triangle before passing it to the fragment shader, so you will have to re-normalize it in the fragment shader. And I think the same goes for all other vectors used in the lighting formula... A sketch of that split follows below.
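Here is a minimal sketch of that vertex/fragment split, assuming a world-space point light and a simple diffuse term (all names are illustrative):

```glsl
// --- vertex shader (sketch) ---
#version 330 core
uniform mat4 world_matrix;      // object -> world
uniform mat4 view_proj_matrix;  // world -> clip
uniform vec3 light_position;    // world-space point light
in  vec4 in_Position;
in  vec3 in_Normal;
out vec3 lightDir;              // left unnormalized; interpolated per fragment
out vec3 normal;

void main()
{
    vec4 world_pos = world_matrix * in_Position;
    lightDir = light_position - world_pos.xyz;  // surface -> light
    normal   = mat3(world_matrix) * in_Normal;  // assumes uniform scaling
    gl_Position = view_proj_matrix * world_pos;
}

// --- fragment shader (sketch) ---
#version 330 core
in  vec3 lightDir;
in  vec3 normal;
out vec4 frag_color;

void main()
{
    // Linear interpolation shortens the vectors, so re-normalize first.
    vec3 L = normalize(lightDir);
    vec3 N = normalize(normal);
    float diffuse = max(dot(N, L), 0.0);
    frag_color = vec4(vec3(diffuse), 1.0);
}
```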


##### Share on other sites

Never did say thank you for the replies!

I will go through these but it seems like this is what I was looking for; thank you.

##### Share on other sites

Quote

viewDir is the camera's direction, so you already have that value.

I think this is not true. viewDir should be the vector from the lit pixel's position to the camera's position - this is different from the camera's "lookAt" direction, if that's what you meant. I made this mistake too at first.

Oops. You're right, my mistake.
