Per-pixel displacement mapping GLSL

6 comments, last by Paxi 9 years, 11 months ago

I'm trying to implement a per-pixel displacement shader in GLSL. I read through several papers and tutorials I found and ended up trying to implement the approach NVIDIA used in their Cascades demo (http://www.slideshare.net/icastano/cascades-demo-secrets), starting at slide 82.

At the moment I am completely stuck on the following problem: when I am far away, the displacement seems to work, but the closer I move to the surface, the more the texture gets bent along the x-axis; in general it looks like there is a slight bend in one direction. I added some screenshots below to illustrate the problem.

EDIT: I added a video http://paxi.at/random/sfg_com.avi

[attachment=21194:1.jpg]

[attachment=21195:2.jpg]

[attachment=21196:3.jpg]

[attachment=21197:4.jpg]

I have already tried lots of things and I am starting to get a bit frustrated as my ideas run out.

I added my full VS and FS code:

VS:


#version 400

layout(location = 0) in vec3 IN_VS_Position;
layout(location = 1) in vec3 IN_VS_Normal;
layout(location = 2) in vec2 IN_VS_Texcoord;
layout(location = 3) in vec3 IN_VS_Tangent;
layout(location = 4) in vec3 IN_VS_BiTangent;

uniform vec3 uLightPos;
uniform vec3 uCameraDirection;
uniform mat4 uViewProjection;
uniform mat4 uModel;
uniform mat4 uView;
uniform mat3 uNormalMatrix;

out vec2 IN_FS_Texcoord;
out vec3 IN_FS_CameraDir_Tangent;
out vec3 IN_FS_LightDir_Tangent;

void main( void )
{

   IN_FS_Texcoord = IN_VS_Texcoord;


   vec4 posObject = uModel * vec4(IN_VS_Position, 1.0);
   vec3 normalObject         = (uModel *  vec4(IN_VS_Normal, 0.0)).xyz;
   vec3 tangentObject        = (uModel *  vec4(IN_VS_Tangent, 0.0)).xyz;
   //vec3 binormalObject       = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz;
   vec3 binormalObject =  normalize(cross(tangentObject, normalObject));

   // uCameraDirection is actually the camera position, it is just badly named
   vec3 fvViewDirection  = normalize( uCameraDirection - posObject.xyz);
   vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz );

   IN_FS_CameraDir_Tangent.x  = dot( tangentObject, fvViewDirection );
   IN_FS_CameraDir_Tangent.y  = dot( binormalObject, fvViewDirection );
   IN_FS_CameraDir_Tangent.z  = dot( normalObject, fvViewDirection );

   IN_FS_LightDir_Tangent.x  = dot( tangentObject, fvLightDirection );
   IN_FS_LightDir_Tangent.y  = dot( binormalObject, fvLightDirection );
   IN_FS_LightDir_Tangent.z  = dot( normalObject, fvLightDirection );

   gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0);
}

The VS builds the TBN matrix from the incoming normal, tangent and binormal in world space, calculates the light and eye directions in world space, and finally transforms the light and eye directions into tangent space.
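
By the way, the three dot products above are just one multiply by the transposed TBN matrix; a quick sketch of the equivalent formulation, using the same variable names:


// equivalent world -> tangent transform with an explicit TBN matrix
mat3 TBN = transpose(mat3(tangentObject, binormalObject, normalObject));
IN_FS_CameraDir_Tangent = TBN * fvViewDirection;
IN_FS_LightDir_Tangent  = TBN * fvLightDirection;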

FS:


#version 400

// uniforms
uniform Light 
{
    vec4 fvDiffuse;
    vec4 fvAmbient;
    vec4 fvSpecular;
};

uniform Material {
    vec4 diffuse;
    vec4 ambient;
    vec4 specular;
    vec4 emissive;
    float fSpecularPower;
    float shininessStrength;
};

uniform sampler2D colorSampler;
uniform sampler2D normalMapSampler;
uniform sampler2D heightMapSampler;

in vec2 IN_FS_Texcoord;
in vec3 IN_FS_CameraDir_Tangent;
in vec3 IN_FS_LightDir_Tangent;

out vec4 color;

vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){

    vec2 NewCoords = coords;
    vec2 dUV = - dir.xy * height * 0.08;
    float SearchHeight = 1.0;
    float prev_hits = 0.0;
    float hit_h = 0.0;

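    // coarse pass: march the ray through the height field in 10 uniform steps;
    // the clamp() trick records, without branching, the search height at which
    // the surface first rises above the ray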
    for(int i=0;i<10;i++){
        SearchHeight -= 0.1;
        NewCoords += dUV;
        float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r;
        float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0);
        hit_h += first_hit * SearchHeight;
        prev_hits += first_hit;
    }
    NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV;

    vec2 Temp = NewCoords;
    SearchHeight = hit_h+0.1;
    float Start = SearchHeight;
    dUV *= 0.2;
    prev_hits = 0.0;
    hit_h = 0.0;
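    // refinement pass: restart one coarse step before the hit and search again with 5 finer steps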
    for(int i=0;i<5;i++){
        SearchHeight -= 0.02;
        NewCoords += dUV;
        float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r;
        float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0);
        hit_h += first_hit * SearchHeight;
        prev_hits += first_hit;    
    }
    NewCoords = Temp + dUV * (Start - hit_h) * 50.0f;

    return NewCoords;
}

void main( void )
{
   vec3  fvLightDirection = normalize( IN_FS_LightDir_Tangent );
   vec3  fvViewDirection  = normalize( IN_FS_CameraDir_Tangent );

   float mipmap = 0;

   vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap);

   //vec2 ddx = dFdx(NewCoord);
   //vec2 ddy = dFdy(NewCoord);

   vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz;
   BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0));

   vec3  fvNormal         = BumpMapNormal;
   float fNDotL           = dot( fvNormal, fvLightDirection ); 

   vec3  fvReflection     = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); 
   float fRDotV           = max( 0.0, dot( fvReflection, fvViewDirection ) );

   vec4  fvBaseColor      = textureLod( colorSampler, NewCoord.xy,mipmap);

   vec4  fvTotalAmbient   = fvAmbient * fvBaseColor; 
   vec4  fvTotalDiffuse   = fvDiffuse * fNDotL * fvBaseColor; 
   vec4  fvTotalSpecular  = fvSpecular * ( pow( fRDotV, fSpecularPower ) );

   color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) );

}

The FS implements the displacement technique in the TraceRay method, always using mipmap level 0. Most of the code is from the NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong in it. At the end it uses the modified UV coordinates to fetch the displaced normal from the normal map and the color from the color map.
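
For comparison, a plain linear-search parallax lookup (just a rough sketch against the samplers above; the function name, step count and scale factor are made up, this is not the NVIDIA code) would look something like this:


// rough sketch: march the view ray from the top of the height field (1.0)
// downwards in uniform steps and stop at the first sample above the ray
vec2 ParallaxLinearSearch(vec2 uv, vec3 viewTS, float scale)
{
    const int steps = 16;
    vec2  dUV   = -viewTS.xy / viewTS.z * scale / float(steps);
    float layer = 1.0;
    for (int i = 0; i < steps; ++i) {
        float h = textureLod(heightMapSampler, uv, 0.0).r;
        if (h >= layer) break;   // surface is above the ray: hit
        uv    += dUV;
        layer -= 1.0 / float(steps);
    }
    return uv;
}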

I am looking forward to some ideas. Thanks in advance!



Looks almost like your camera ray is inverted. Try using:


normalize( posObject.xyz - uCameraDirection );

That always seems to be where I screw up when implementing POM, and occasionally lighting. It's easy to get wrong and even easier to overlook. :)

Thanks for your suggestion.

My camera direction was indeed inverted, but I invert it again in the TraceRay method:


vec2 dUV = - dir.xy * height * 0.08;

(dir is the camera direction in tangent space.)

I tried changing it to


normalize( posObject.xyz - uCameraDirection );

without inverting it in the TraceRay method, but unfortunately I get the same result :(
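
Which makes sense, I guess, since the two sign flips cancel out and both variants end up stepping the ray in the same direction:


// if dir = normalize( uCameraDirection - posObject.xyz )   (surface -> eye):
vec2 dUV = - dir.xy * height * 0.08;

// if dir = normalize( posObject.xyz - uCameraDirection )   (eye -> surface):
vec2 dUV =   dir.xy * height * 0.08;

// either way dUV points along (posObject - uCameraDirection).xy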

Try flipping the binormal and see if it has any effect.

When I flip the binormal, the effect is more visible on the other side of the quad.

In general it seems to me that the UV coordinates are "shifted" a little bit as I get closer, and therefore the texture is no longer applied correctly.

[attachment=21258:5.jpg]

I also added a video http://paxi.at/random/sfg_com.avi

I am not sure, but don't you have to normalize normalObject and tangentObject (the vertex normal and tangent) after the matrix multiplication?
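
I.e. something like this in the VS (just the suggestion spelled out against the code above):


// re-normalize after the model transform, in case uModel ever contains scaling
vec3 normalObject  = normalize( (uModel * vec4(IN_VS_Normal,  0.0)).xyz );
vec3 tangentObject = normalize( (uModel * vec4(IN_VS_Tangent, 0.0)).xyz );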

Unfortunately no change. I had already tried that; the uModel matrix is in fact just an identity matrix at the moment.

Some more suggestions from my side:

Actually I have no idea why it works, but it works when I do NOT normalize the view and light directions in the VS.

So instead of this:


vec3 fvViewDirection  = normalize( uCameraDirection - posObject.xyz);
vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz );

Now I have this:


vec3 fvViewDirection  = uCameraDirection - posObject.xyz;
vec3 fvLightDirection = uLightPos.xyz - posObject.xyz;
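
My best guess as to why this helps: the unnormalized offsets are linear in the vertex position, so they interpolate across the quad to the exact per-pixel vectors, while per-vertex normalization distorts them on a large triangle; the fragment shader normalizes the interpolated vectors per pixel anyway:


// already in the FS above: normalize after interpolation instead of per vertex
vec3  fvLightDirection = normalize( IN_FS_LightDir_Tangent );
vec3  fvViewDirection  = normalize( IN_FS_CameraDir_Tangent );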

Here is a final video:
http://www.paxi.at/random/final.avi

