
parallax occlusion mapping in a deferred renderer


Hi,

I'm trying to implement parallax occlusion mapping as in:
[url="http://developer.amd.com/media/gpu_assets/Tatarchuk-ParallaxOcclusionMapping-Sketch-print.pdf"]http://developer.amd...ketch-print.pdf[/url]

However, the texture coordinates produced by the POM calculation seem to be wrong (see the screenshot).
I followed the POM sample from the June 2010 DirectX SDK (in the DX9 section). In addition, I'm doing the whole thing in view space instead of world space, which the sample uses; tangent space is only used to calculate the modified texture coordinates.

Here's the G-buffer-filling vertex shader with the POM calculation:
[CODE]
#version 410

uniform mat4 m4_p, m4_mv;       //projection and modelview matrices
uniform mat3 m3_n;              //normal matrix
uniform vec3 v3_view_pos;       //view space camera position
uniform float height_map_scale; //height map scaling value = 0.1

in vec4 v4_vertex; //vertex attribute
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture;

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;           //tangent to view space matrix
  vec3 vs_normal;
  vec3 vs_view_dir;
  vec2 ts_pom_offset; //tangent space POM offset
} vertex_output;

void main()
{
  vec3 normal = m3_n * v3_normal;   //transform the normal attribute to view space
  vertex_output.vs_normal = normal; //store it unnormalized
  normal = normalize(normal);
  vec3 tangent = normalize(m3_n * v3_tangent);
  vec3 bitangent = cross(normal, tangent);

  vertex_output.tbn = mat3( tangent,
                            bitangent,
                            normal ); //tangent space to view space matrix (needed for storing the normal map)
  mat3 vs_to_ts = mat3( tangent.x, bitangent.x, normal.x,
                        tangent.y, bitangent.y, normal.y,
                        tangent.z, bitangent.z, normal.z ); //view space to tangent space matrix

  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex; //view space position

  //tangent space POM offset calculation
  vertex_output.vs_view_dir = v3_view_pos - vertex_output.position.xyz;
  vec3 ts_view_dir = vs_to_ts * vertex_output.vs_view_dir;

  //initial parallax offset displacement direction
  vec2 pom_direction = normalize(ts_view_dir.xy);

  float view_dir_length = length(ts_view_dir); //determines the furthest amount of displacement
  float pom_length = sqrt(view_dir_length * view_dir_length - ts_view_dir.z * ts_view_dir.z) / ts_view_dir.z;

  //actual reverse parallax displacement vector
  vertex_output.ts_pom_offset = pom_direction * pom_length * height_map_scale;

  gl_Position = m4_p * vertex_output.position;
}
[/CODE]
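
For clarity, this is what the view-space uniforms above are meant to contain (a minimal sketch written in GLSL syntax, not my actual host code):
[CODE]
//a minimal sketch of what the view-space uniforms above are meant to contain
mat3 compute_normal_matrix(mat4 modelview)
{
  //inverse-transpose of the modelview's upper 3x3, so normals stay correct
  //even if the modelview contains non-uniform scaling
  return transpose(inverse(mat3(modelview)));
}

//in view space the camera sits at the origin, so v3_view_pos is simply vec3(0.0)
[/CODE]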

Pixel shader:
[CODE]
#version 410

uniform sampler2D texture0; //albedo texture
uniform sampler2D texture1; //normal map
uniform sampler2D texture2; //height map
uniform float far;
uniform int max_samples;   // = 130
uniform int min_samples;   // = 8
uniform int lod_threshold; // = 4

in cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
  vec3 vs_normal;
  vec3 vs_view_dir;
  vec2 ts_pom_offset;
} pixel_input;

out vec4 v4_albedo;
out vec4 v4_normal;
out vec4 v4_depth;

vec2 encode_normals_spheremap(vec3 n)
{
  vec2 enc = (normalize(n.xy) * sqrt(-n.z * 0.5 + 0.5)) * 0.5 + 0.5;
  return enc;
}

void main()
{
  vec3 vs_normal = normalize(pixel_input.vs_normal); //normalize the vectors after interpolation
  vec3 vs_view_dir = normalize(pixel_input.vs_view_dir);
  vec2 texture_dims = vec2(textureSize(texture0, 0)); //get texture size (512, 512)

  //POM

  //current gradients
  vec2 tex_coords_per_size = pixel_input.v2_texture_coords * texture_dims;

  vec2 dx_size, dy_size, dx, dy;
  vec4 v4_ddx, v4_ddy;

  //the sample uses the HLSL ddx and ddy functions; are dFdx and dFdy the same in GLSL?
  v4_ddx = dFdx( vec4( tex_coords_per_size, pixel_input.v2_texture_coords ) ); //calculate 4 derivatives in one call
  v4_ddy = dFdy( vec4( tex_coords_per_size, pixel_input.v2_texture_coords ) );

  dx_size = v4_ddx.xy;
  dy_size = v4_ddy.xy;
  dx = v4_ddx.zw;
  dy = v4_ddy.zw;

  //mip level, mip level integer portion, fractional amount of blending between levels
  float mip_level, mip_level_int, mip_level_frac, min_tex_coord_delta;
  vec2 tex_coords;

  //find min of change in u and v across a quad --> compute du and dv magnitude across a quad
  tex_coords = dx_size * dx_size + dy_size * dy_size;

  //standard mipmapping
  min_tex_coord_delta = max( tex_coords.x, tex_coords.y );

  //compute current mip level, 0.5 * log2(x) is log2(sqrt(x))
  mip_level = max( 0.5 * log2( min_tex_coord_delta ), 0.0 );

  //start the current sample at the input texture coordinates
  vec2 tex_sample = pixel_input.v2_texture_coords;

  if( mip_level <= float(lod_threshold) )
  {
    //this changes the number of samples per ray depending on the view angle
    int num_steps = int(mix(float(max_samples), float(min_samples), dot( vs_view_dir, vs_normal ) ) );

    float current_height = 0.0;
    float step_size = 1.0 / float(num_steps);
    float prev_height = 1.0;
    float next_height = 0.0;

    int step_index = 0;
    bool condition = true;

    vec2 tex_offset_per_step = step_size * pixel_input.ts_pom_offset;
    vec2 tex_current_offset = pixel_input.v2_texture_coords;
    float current_bound = 1.0;
    float pom_amount = 0.0;

    vec2 pt1 = vec2(0.0, 0.0);
    vec2 pt2 = vec2(0.0, 0.0);

    vec2 tex_offset = vec2(0.0, 0.0);

    while(step_index < num_steps)
    {
      tex_current_offset -= tex_offset_per_step;

      current_height = textureGrad( texture2, tex_current_offset, dx, dy ).x; //sample the height map

      current_bound -= step_size;

      if(current_height > current_bound)
      {
        pt1 = vec2( current_bound, current_height );
        pt2 = vec2( current_bound + step_size, prev_height );

        tex_offset = tex_current_offset - tex_offset_per_step;

        step_index = num_steps + 1;
      }
      else
      {
        step_index++;
      }

      prev_height = current_height;
    }

    float delta1 = pt1.x - pt1.y;
    float delta2 = pt2.x - pt2.y;

    float denominator = delta2 - delta1;

    if(denominator == 0.0) //check for division by zero
    {
      pom_amount = 0.0;
    }
    else
    {
      pom_amount = (pt1.x * delta2 - pt2.x * delta1) / denominator;
    }

    vec2 pom_offset = pixel_input.ts_pom_offset * (1.0 - pom_amount);

    tex_sample = pixel_input.v2_texture_coords - pom_offset;

    if(mip_level > float(lod_threshold) - 1.0) //if we're too far away, only use bump mapping
    {
      mip_level_frac = modf(mip_level, mip_level_int);
      //mix to generate a seamless transition
      tex_sample = mix(tex_sample, pixel_input.v2_texture_coords, mip_level_frac);
    }

    //shadows here
  }

  v4_albedo = texture(texture0, tex_sample); //sample the input albedo and other textures and store them in the g-buffer for lighting later on
  v4_normal.xy = encode_normals_spheremap(pixel_input.tbn * (texture(texture1, tex_sample).xyz * 2.0 - 1.0));
  v4_depth.x = pixel_input.position.z / -far;
}
[/CODE]

After the G-buffer is filled, a simple Blinn-Phong lighting calculation is applied. The result looks distorted rather than like proper POM.
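
For completeness, the lighting pass has to decode the sphere-map encoded normal again. A minimal sketch of a decode that matches encode_normals_spheremap() above (the function name is just my own):
[CODE]
//inverse of encode_normals_spheremap(): reconstructs the view-space normal
vec3 decode_normals_spheremap(vec2 enc)
{
  vec2 fenc = enc * 2.0 - 1.0;           //undo the * 0.5 + 0.5 packing
  float z = 1.0 - 2.0 * dot(fenc, fenc); //the length of fenc encodes n.z
  vec3 n;
  n.z = z;
  n.xy = normalize(fenc) * sqrt(max(1.0 - z * z, 0.0));
  return n;
}
[/CODE]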

EDIT: I forgot to include the screenshot; it's now included.

best regards,
Yours3!f

OK, so I tried to find the cause, starting with the vertex shader. I converted the original sample to view space, but kept it in DX for now.

Original DX sample vertex shader (converted to view space, except for the tangent-space calculations):
[CODE]
VS_OUTPUT RenderSceneVS( float4 inPositionOS  : POSITION,
                         float2 inTexCoord    : TEXCOORD0,
                         float3 vInNormalOS   : NORMAL,
                         float3 vInBinormalOS : BINORMAL,
                         float3 vInTangentOS  : TANGENT )
{
    VS_OUTPUT Out;

    // Transform and output input position
    Out.position = mul( inPositionOS, g_mWorldViewProjection );

    // Propagate texture coordinate through:
    Out.texCoord = inTexCoord * g_fBaseTextureRepeat;

    float4x4 worldview = mul( g_mWorld, g_mView );

    // Transform the normal, tangent and binormal vectors from object space to view space:
    float3 vNormalWS   = mul( vInNormalOS,   (float3x3) worldview );
    float3 vTangentWS  = mul( vInTangentOS,  (float3x3) worldview );
    float3 vBinormalWS = mul( vInBinormalOS, (float3x3) worldview );

    // Propagate the (view space) vertex normal through:
    Out.vNormalWS = vNormalWS;

    vNormalWS   = normalize( vNormalWS );
    vTangentWS  = normalize( vTangentWS );
    vBinormalWS = normalize( vBinormalWS );

    // Compute position in view space:
    float4 vPositionWS = mul( inPositionOS, worldview );

    float4 eye = mul( g_vEye, g_mView );

    // Compute and output the view vector (unnormalized):
    float3 vViewWS = eye - vPositionWS;
    Out.vViewWS = vViewWS;

    // Compute denormalized light vector (in view space):
    float3 vLightWS = mul( g_LightDir, g_mView );

    // Normalize the light and view vectors and transform them to tangent space:
    float3x3 mWorldToTangent = float3x3( vTangentWS, vBinormalWS, vNormalWS );

    // Propagate the view and the light vectors (in tangent space):
    Out.vLightTS = mul( vLightWS, mWorldToTangent );
    Out.vViewTS  = mul( mWorldToTangent, vViewWS );

    // Compute the ray direction for intersecting the height field profile with
    // the current view ray. See the above paper for the derivation of this computation.

    // Compute initial parallax displacement direction:
    float2 vParallaxDirection = normalize( Out.vViewTS.xy );

    // The length of this vector determines the furthest amount of displacement:
    float fLength = length( Out.vViewTS );
    float fParallaxLength = sqrt( fLength * fLength - Out.vViewTS.z * Out.vViewTS.z ) / Out.vViewTS.z;

    // Compute the actual reverse parallax displacement vector:
    Out.vParallaxOffsetTS = vParallaxDirection * fParallaxLength;

    // Need to scale the amount of displacement to account for different height ranges
    // in height maps. This is controlled by an artist-editable parameter:
    Out.vParallaxOffsetTS *= g_fHeightMapScale;

    //Out.vParallaxOffsetTS = vViewWS.xy;
    return Out;
}
[/CODE]

The sample still worked, so using view space shouldn't be the problem. Next I tried to debug the app by displaying different values from the vertex shader. Since the view-space calculations were correct, I went on to check the tangent-space calculations, and I found something strange:
[CODE]
// Normalize the light and view vectors and transform it to the tangent space:
float3x3 mWorldToTangent = float3x3( vTangentWS, vBinormalWS, vNormalWS );

// Propagate the view and the light vectors (in tangent space):
Out.vLightTS = mul( vLightWS, mWorldToTangent );
Out.vViewTS = mul( mWorldToTangent, vViewWS );
[/CODE]

So in this part, a view-space to tangent-space matrix is constructed first (ignore the variable name, i.e. mWorldToTangent), and this matrix is used to transform the light direction and view direction vectors to tangent space. That wouldn't be a problem by itself, but the way the sample does it is rather strange. It calculates the tangent-space light vector with the vector on the left (row-vector style), but in the very next line the order is swapped, and from maths I know that in that order there is no such operation for a row vector. I looked up the "mul" intrinsic on MSDN (http://msdn.microsoft.com/en-us/library/windows/desktop/bb509628%28v=vs.85%29.aspx), and it turns out that when mul is used this way, the vector is treated as a column vector. But if the vector is a column vector, then the matrix has to be treated accordingly as well, otherwise the operation doesn't exist. So this line:
Out.vViewTS = mul( mWorldToTangent, vViewWS );
is equivalent to this, right?
Out.vViewTS = mul( vViewWS, transpose(mWorldToTangent) );
But that's strange: used like that, the matrix is no longer a view-space to tangent-space matrix. So what is it then? Could someone please explain this to me?
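
To make the identity concrete, here is the same relationship written out in GLSL terms (just a sketch to illustrate the row-vector/column-vector equivalence; the function is hypothetical and assumes an orthonormal T/B/N basis):
[CODE]
//illustration only: GLSL's mat3(a, b, c) takes columns, so transpose() gives
//a matrix whose rows are T, B, N, mirroring HLSL's row-constructed float3x3(T, B, N)
vec3 to_tangent_space(vec3 v, vec3 tangent, vec3 bitangent, vec3 normal)
{
  mat3 m = transpose(mat3(tangent, bitangent, normal)); //rows = T, B, N

  //matrix * column vector, i.e. HLSL's mul(M, v);
  //the same result as v * transpose(m), i.e. HLSL's mul(v, transpose(M))
  return m * v; //equals vec3(dot(tangent, v), dot(bitangent, v), dot(normal, v));
                //for an orthonormal basis this is the view-to-tangent transform
}
[/CODE]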

OK, so I noticed that the same sample can also be found among the RenderMonkey samples. I checked it out and found that it's much easier to understand. So I tried to implement the effect in OpenGL in RenderMonkey, and I almost got it right: the texture coordinates now look fine from above, but if I rotate the camera they become distorted.
Here are the files:
https://docs.google.com/open?id=0BzHTUfIQ-XD8M0NOcm40VWRlV2c

Any idea what I'm missing?

So I tried to figure out what the problem could be, and I noticed an interesting option when you right-click on a model (mesh): you can choose whether RenderMonkey interprets the input geometry as being in a left-handed or a right-handed coordinate system. After changing it to right-handed, the disc model looked fine, but when I turned it upside down, the other side became distorted. I tried the same thing with the original DX sample, but the issue didn't occur there (with a left-handed coordinate system). So I went on to try other meshes, since there are various OpenGL examples and they work with them. I found that some meshes work correctly and some don't. So I thought the issue was the input mesh, but when I returned to my app to implement the same technique, I ran into the distortion issue again. But I'm using a Blender-generated cube as the model, so it should already be in right-handed coordinates...
In addition, the DX sample uses world space, but it doesn't seem to transform any attributes to world space, which suggests the models are already given in world space.
But then the algorithm only works for world-space models, or what?
So how could it be generalized to take an object-space input model and transform it to whatever space one likes?

EDIT: Furthermore, if the sample assumes its attributes are already in world space, then why does it multiply the position by a model-view-projection matrix?
And changing the model-view-projection matrix to a view-projection matrix doesn't change anything...

So I finally solved it. As it turned out, the shaders weren't the problem; the assets and the settings were.
I used the two textures from the RenderMonkey sample:
RGB albedo + RGB normals, with the height in the alpha channel.
I created a monkey in Blender, added UVs to it (edit mode, left panel, Unwrap -> Reset), set the normals to smooth, and exported it to OBJ format.
I used 0.04 as the height map scale value, 8 as the minimum number of samples and 128 as the maximum number of samples.
Here are the RenderMonkey projects plus the textures and the monkey:
[url="https://docs.google.com/open?id=0BzHTUfIQ-XD8SWFVemRhdlVxNEE"]https://docs.google....SWFVemRhdlVxNEE[/url]

EDIT: When porting this to my engine, I ran into a lot of weirdness. After a lot of shader modifications I thought: let's go back to basics and check whether the normals and the tangents look right. The normals did, but the tangents didn't, so I went back to the tangent vector calculation. I had been using some algorithm I found somewhere on the internet, but it didn't work. I spent a few hours searching for another algorithm because I was too lazy to come up with one myself. Then it popped into my mind that I had implemented some helper functions when I developed libmymath. So I looked at them and found calculate_tangent_basis(). How silly is that? As an excuse, I developed the maths library almost a year ago :) (and I haven't touched it since last October, because it worked correctly...)
So you can be sure the shaders are right. If you're interested in the tangent calculation, just look at the link in my signature.
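
In case someone else hits the same tangent problem: the usual per-triangle tangent from positions and UVs looks roughly like this (a sketch in GLSL syntax showing the standard derivation, not necessarily what calculate_tangent_basis() in libmymath does). Per vertex you would then average the tangents of the adjacent triangles and orthogonalize against the vertex normal.
[CODE]
//standard per-triangle tangent from positions p0..p2 and texture coordinates uv0..uv2
vec3 triangle_tangent(vec3 p0, vec3 p1, vec3 p2, vec2 uv0, vec2 uv1, vec2 uv2)
{
  vec3 e1 = p1 - p0;
  vec3 e2 = p2 - p0;
  vec2 duv1 = uv1 - uv0;
  vec2 duv2 = uv2 - uv0;

  //solve for the direction in which u increases across the triangle
  float r = 1.0 / (duv1.x * duv2.y - duv1.y * duv2.x);
  return normalize((e1 * duv2.y - e2 * duv1.y) * r);
}
[/CODE]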
