# Parallax Occlusion Mapping in GLSL

Started by famous, Feb 17 2010 02:05 AM

3 replies to this topic

###
#1
Members - Reputation: **101**

Posted 17 February 2010 - 02:05 AM

hi there,
I'm trying to implement my own parallax occlusion mapping shader in GLSL (I still don't understand all of this stuff 100%, so it's a learning-by-doing kind of thing). I found this implementation somewhere on this site:

```glsl
uniform mat4  matView;
uniform mat3  matViewProjection;
uniform float fHeightScale;
uniform float fPerspectiveBias;
uniform vec4  vLightOffset;

varying vec2 TexCoord;
varying vec3 Normal;
varying vec3 Binormal;
varying vec3 Tangent;
varying vec3 EyeVector;
varying vec3 LightVector;
varying vec2 vParallaxXY;

attribute vec3 rm_Binormal;
attribute vec3 rm_Tangent;

void main(void)
{
    // Transform the position
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    // Set the texture coordinate
    TexCoord = gl_MultiTexCoord0.xy;

    // Compute the view vector in view space
    vec3 vView = -(matView * gl_Vertex).xyz;

    // Compute the tangent-space basis vectors in view space
    vec3 vTangent  = mat3(matView) * rm_Tangent;
    vec3 vBinormal = mat3(matView) * rm_Binormal;
    vec3 vNormal   = mat3(matView) * gl_Normal;

    // Normalize the view vector in tangent space
    vec3 vViewTS = mat3(vTangent, vBinormal, vNormal) * normalize(vView);

    // Compute the initial parallax displacement direction
    vec2 vParallaxDirection = normalize(vViewTS.xy);

    // Compute the length of the parallax displacement vector
    float fParallaxLength = -sqrt(1.0 - vViewTS.z * vViewTS.z) / vViewTS.z;

    // Compute the parallax bias parameter
    float fParallaxBias = fPerspectiveBias
                        + (1.0 - fPerspectiveBias) * (2.0 * vViewTS.z - 1.0);

    // Compute the actual reverse parallax displacement vector
    vParallaxXY = -vParallaxDirection * fParallaxLength
                * fParallaxBias * fHeightScale;

    // Compute the light vector (view space)
    vec3 vLight = vView + vLightOffset.xyz;

    // Propagate the view and light vectors (in tangent space)
    EyeVector   = mat3(vTangent, vBinormal, vNormal) * vView;
    LightVector = mat3(vTangent, vBinormal, vNormal) * vLight;
}
```

Now the first thing I wondered about is whether we can replace all occurrences of matView with gl_ModelViewMatrix. Obviously matView should transform vertices from world to view space, but take for example

```glsl
vec3 vView = -(matView * gl_Vertex).xyz;
```

Here gl_Vertex is in object space, so the modelview matrix maps from object to view space, right?

The second thing:

```glsl
float fParallaxLength = -sqrt(1.0 - vViewTS.z * vViewTS.z) / vViewTS.z;
```

Why do we negate the length and divide it by vViewTS.z?

And the third: what is the fPerspectiveBias input? Would be really nice if someone can help me out with this :)

###
#3
Members - Reputation: **149**

Posted 22 February 2010 - 11:53 PM

Quote:

Why do we negate the length?

If Z == 1 the view vector is effectively normal to the plane, and the displacement becomes greater as Z is reduced (so no displacement when you're looking at it from directly above, but as you view from greater angles the value will increase, causing the parallax effect).

Oh wait, didn't read the code... If you mean why we do "-sqrt...", that's to make it go in the right direction when you do the multiply later.

I assume the perspective bias is there to correct the effect, as different FOVs will give wrong results?

###
#4
Members - Reputation: **101**

Posted 25 February 2010 - 01:57 AM

Quote:

Original post by Simon_Roth

If you mean why do we "-sqrt..." thats to make it go in the right direction when you do the multiply later.

Yeah, this absolutely makes sense.

Quote:

Original post by Simon_Roth

If Z == 1 the view vector is effectively normal to the plane, and the displacement becomes greater as Z is reduced (so no displacement when you're looking at it from directly above, but as you view from greater angles the value will increase, causing the parallax effect).

Is this actually the explanation for why we divide by vViewTS.z? It makes sense as well, but imo the length is already given by -sqrt(1.0 - vViewTS.z * vViewTS.z) without the division (Pythagorean theorem), so the full expression has to be something other than the length (at best the length divided by vViewTS.z :D)

Quote:

Original post by Simon_Roth

I assume the perspective bias is there to correct the effect, as different FOVs will give wrong results?

Can you elaborate a bit more on this? I'm pretty new to all this stuff, so my practical experience is limited, and I'm not 100% sure how this bias and scale factor work. In the original paper by Terry Welsh there is an example where a brick wall texture covers a 2 x 2 meter area with a thickness of 0.02 m, so the scale factor would be 0.02 m / 2 m = 0.01, with a bias of -0.05.

But I don't get why we have to divide the surface thickness by 2.