per pixel displacement mapping

Started by
24 comments, last by jeroenb 18 years, 6 months ago
Hello all,

I tried to implement the displacement mapping technique described by William Donnelly in chapter 8 of GPU Gems 2. I want to implement it in GLSL, so I use RenderMonkey. As the chapter can be downloaded for free at NVIDIA, I hope someone can have a look at what went wrong in my shader...

I don't use the filtering because:
1. I don't know how to access a texture for filtering like it is done in the paper (maybe with texture2DProj()???).
2. I have an ATI X700 and thus no access to derivatives in the fragment shader. I use the software renderer, or sometimes another machine with a GeForce 6600...

The distance map is calculated with the algorithm of Danielsson (1980), as described in the paper and found on the CD, and the slices look fine to me. Am I correct that the order of the distance map slices should put the mostly black one first in the volume texture and the one with the most white areas last? Or maybe it is because of the binormal and tangent vectors I get from RenderMonkey?

This is the code:

Vertex shader:


uniform vec4 vViewPosition;
uniform float BumpDepth; 
uniform vec4 lightPos;

attribute vec3 rm_Tangent; 
attribute vec3 rm_Binormal;

varying vec3 TexCoord;
varying vec3 tanEyeVec;
varying vec3 tanLightVec;

void main( void )
{
    // Project position into screen space
    // and pass through the texture coordinate
    gl_Position = ftransform();
    TexCoord    = vec3(gl_MultiTexCoord0.xy, 1.0);

    // Transform the eye vector into tangent space.
    // Adjust the slope in tangent space based on bump depth.
    vec3 eyeVec = vViewPosition.xyz - gl_Vertex.xyz;
    tanEyeVec.x = dot(rm_Tangent,  eyeVec);
    tanEyeVec.y = dot(rm_Binormal, eyeVec);
    tanEyeVec.z = -1.0 / BumpDepth * dot(gl_Normal, eyeVec);

    // Transform the light vector into tangent space.
    // We will use this later for tangent-space normal mapping.
    vec3 lightVec = lightPos.xyz - gl_Vertex.xyz;
    tanLightVec.x = dot(rm_Tangent,  lightVec);
    tanLightVec.y = dot(rm_Binormal, lightVec);
    tanLightVec.z = dot(gl_Normal,   lightVec);
}
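The three dot products above amount to multiplying by the transposed TBN matrix, with the eye vector's normal component rescaled by -1/BumpDepth. A quick CPU-side sketch in Python (with hypothetical, axis-aligned frame vectors rather than RenderMonkey's per-vertex attributes) of the same math:

```python
def dot3(a, b):
    """Plain 3-component dot product."""
    return sum(x * y for x, y in zip(a, b))

# Hypothetical orthonormal tangent frame (RenderMonkey supplies these
# per vertex as rm_Tangent / rm_Binormal / gl_Normal).
tangent  = (1.0, 0.0, 0.0)
binormal = (0.0, 1.0, 0.0)
normal   = (0.0, 0.0, 1.0)
bump_depth = 0.1

eye_vec = (0.3, -0.2, 0.9)

# Rows of the transposed TBN matrix dotted with the eye vector;
# the z component is scaled by -1/BumpDepth exactly as in the shader.
tan_eye_vec = (
    dot3(tangent,  eye_vec),
    dot3(binormal, eye_vec),
    -1.0 / bump_depth * dot3(normal, eye_vec),
)

assert abs(tan_eye_vec[2] + 9.0) < 1e-9  # 0.9 * (-1 / 0.1)
```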

PixelShader:


#define NUM_ITERATIONS 2

uniform sampler2D colorSampler;
uniform sampler3D heightSampler;
uniform sampler2D normalSampler; 
// this is the depth of 3dTexture divided by the width of the 3dTexture
// 8/256 in my case
uniform float normalizationFactor; 

varying vec3 TexCoord;
varying vec3 tanEyeVec;
varying vec3 tanLightVec;

void main( void )
{
   // Normalize the offset vector in texture space.
   // The normalization factor ensures we are normalized with respect
   // to a distance which is defined in terms of pixels.
   vec3 offset = normalize(tanEyeVec);
   offset *= normalizationFactor;
   vec3 texCoord = TexCoord;
   // March a ray
   
   for (int i = 0; i < NUM_ITERATIONS; i++) {
      float distance2Trace = texture3D(heightSampler, texCoord).r;
      texCoord += distance2Trace * offset;
   }
   
   // Compute derivatives of unperturbed texcoords.
   // This is because the offset texcoords will have discontinuities
   // which lead to incorrect filtering.
   //vec2 dx = dFdx(TexCoord.xy);
   //vec2 dy = dFdy(TexCoord.xy);
   // Do bump-mapped lighting in tangent space.
   // ‘normalTex’ stores tangent-space normals remapped 
   // into the range [0, 1].
   vec3 tanNormal = 2.0 * texture2D(normalSampler, texCoord.xy).xyz - 1.0;
   //vec3 tanNormal = 2.0 * texture2D(normalSampler, texCoord.xy, dx, dy).xyz - 1.0;
   vec3 tanLightVecN = normalize(tanLightVec);
   float diffuse = dot(tanNormal, tanLightVecN);
   // Multiply diffuse lighting by texture color
   //gl_FragColor = diffuse * texture2D(colorSampler, texCoord.xy, dx, dy);
   gl_FragColor = diffuse * texture2D(colorSampler, texCoord.xy);
    
}
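The loop above is the heart of the technique: a distance-map march (sphere tracing), where each texel stores the distance to the nearest surface point, so the ray can safely advance by that amount without skipping detail. A minimal CPU sketch in Python with a synthetic distance field (a flat floor at height 0.3, purely illustrative, standing in for the shader's 3D texture fetch) shows why so few iterations converge:

```python
def distance_to_surface(z, floor_height=0.3):
    """Synthetic distance field: empty space above a flat floor.
    In the shader this is the texture3D fetch from the distance map."""
    return max(z - floor_height, 0.0)

def march(start_z, dir_z, iterations=16):
    """March a ray through the field, stepping by the safe distance."""
    z = start_z
    for _ in range(iterations):
        z += distance_to_surface(z) * dir_z
    return z

# A ray entering at the top of the volume, heading straight down,
# lands on the floor to within floating-point noise.
hit = march(start_z=1.0, dir_z=-1.0)
assert abs(hit - 0.3) < 1e-6
```

For less trivial fields the step sizes shrink rapidly near the surface, which is why Donnelly's shader gets away with a handful of iterations.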

thanks for any help or suggestions :) Bebud
What is the exact problem that you're having? Realize also that on an ATI card you're not going to be able to do more than 5 iterations (dependent texture read limit of 4) and thus it's not going to look particularly good.

Still if you could explain your question/problem more, we might be able to help.
OK...
My problem was that the displacement appeared in the wrong place and was also distorted in the wrong direction. But that's solved now by flipping the heightmap along the vertical axis and negating the normalization parameter!

The results are very nice!

(screenshot: dispmap)

But I have problems with non-planar Objects:

At certain viewing angles the distortion is wrong. Also, on flat planes you can see an edge where no displacement happens.
The mouse pointer points to this edge:



Another question I have is how to use the filtering from the paper in GLSL.
OK, the derivatives can be computed with dFdx/dFdy. But how can I use these vectors to fetch a specific LOD of the texture?
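On the LOD question: the hardware's implicit mip selection is essentially log2 of the pixel footprint measured in texels, so given dFdx/dFdy you can compute the level yourself and feed it to a biased or explicit-LOD lookup (fragment-shader support for explicit LOD varies by extension). A sketch of the standard formula in Python, with made-up derivative values:

```python
import math

def mip_lod(dx, dy, tex_size):
    """Approximate mip level from texture-coordinate derivatives:
    log2 of the larger screen-space footprint, in texels."""
    len_dx = math.hypot(dx[0] * tex_size, dx[1] * tex_size)
    len_dy = math.hypot(dy[0] * tex_size, dy[1] * tex_size)
    return max(math.log2(max(len_dx, len_dy)), 0.0)

# One texel per pixel -> base level; four texels per pixel -> level 2.
assert mip_lod((1/256, 0.0), (0.0, 1/256), 256) == 0.0
assert mip_lod((4/256, 0.0), (0.0, 4/256), 256) == 2.0
```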

Bebud :)
Hmm... the only reason I can think of for this missing displacement is the ray not hitting the heightmap, but I don't know how that could happen...

PS: a little off-topic, but couldn't the silhouette also be handled better by "pressing" the bump map into the surface instead of letting it stick out? That would make it possible for a ray to miss the heightmap entirely, and you could then turn those pixels transparent (either with clip(-1) or by setting their alpha to 0.0). Is this correct?

regards,
m4gnus
"There are 10 types of people in the world... those who understand binary and those who don't."
Quote:Hmm... the only reason I can think of for this missing displacement is the ray not hitting the heightmap, but I don't know how that could happen...


I checked this. I used another starting point in the displacement volume texture:
vec3 texCoord = vec3(TexCoord.xy, 0.5);
instead of:
vec3 texCoord = vec3(TexCoord.xy, 1.0);

Now I don't have this artifact, but others! :(

Maybe someone could tell me if my process for generating the volume texture is correct:
1. Input the heightmap and generate the single slice textures with the algorithm of Danielsson (1980).
2. Feed the single textures into NVIDIA's TextureAtlasTool with the parameters:
-volume -nomipmap
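For sanity-checking step 1, here is a brute-force sketch of what the distance transform should produce: for each voxel, the distance to the nearest point on or under the height field. This is O(n²) and only usable on tiny debug inputs (Danielsson's sweeps compute the same result in roughly linear time), but it also confirms the slice ordering: the bottom slice, inside the surface, is all zeros (black), and the slices get brighter going up.

```python
def distance_map(heightmap, depth):
    """Brute-force 3D distance map for a height field.
    heightmap: square 2D list of heights in [0, 1], indexed [y][x].
    Returns dist[z][y][x], distances measured in voxels."""
    h = len(heightmap)
    # All voxels on or below the height field count as "surface".
    surface = [(x, y, z) for x in range(h) for y in range(h)
               for z in range(depth) if z / depth <= heightmap[y][x]]
    dist = [[[0.0] * h for _ in range(h)] for _ in range(depth)]
    for z in range(depth):
        for y in range(h):
            for x in range(h):
                dist[z][y][x] = min(
                    ((x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2) ** 0.5
                    for sx, sy, sz in surface)
    return dist

# Flat height field at 0.5 with 4 slices: the bottom slice is all zero
# (black) and the top slice is one voxel away from the surface.
d = distance_map([[0.5, 0.5], [0.5, 0.5]], 4)
assert all(v == 0.0 for row in d[0] for v in row)
assert all(v == 1.0 for row in d[3] for v in row)
```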

Now I have a DDS texture, but what is a bit confusing is the .tai file output from TextureAtlasTool:

# stone-dismap.tai
# AtlasCreationTool.exe -nomipmap -volume -o stone-dismap
#
# <filename>	<atlas filename>, <atlas idx>, <atlas type>, <woffset>, <hoffset>, <depth offset>, <width>, <height>
#
# Texture <filename> can be found in texture atlas <atlas filename>, i.e.,
# stone-dismap<idx>.dds of <atlas type> type with texture coordinates boundary given by:
#   A = ( <woffset>, <hoffset> )
#   B = ( <woffset> + <width>, <hoffset> + <height> )
#
# where coordinates (0,0) and (1,1) of the original texture map correspond
# to coordinates A and B, respectively, in the texture atlas.
# If the atlas is a volume texture then <depth offset> is the w-coordinate
# to use to access the appropriate slice in the volume atlas.
heightMap00.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.062500, 1.000000, 1.000000
heightMap01.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.187500, 1.000000, 1.000000
heightMap02.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.312500, 1.000000, 1.000000
heightMap03.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.437500, 1.000000, 1.000000
heightMap04.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.562500, 1.000000, 1.000000
heightMap05.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.687500, 1.000000, 1.000000
heightMap06.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.812500, 1.000000, 1.000000
heightMap07.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.937500, 1.000000, 1.000000



Doesn't the displacement algorithm use 1.0 as the starting offset? But this <depth offset> seems to go from 0 to 1!
On the other hand, when I set texCoord.z = 1.5 it also works, except for some artifacts.
So what are these artifacts? They still occur at some angles! At certain angles it seems as if the displacement is bigger than at others, and I get a swirl effect... I'll try to make some screenshots...
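About the 0-1 range: the <depth offset> values in the .tai file are simply the slice centers, (i + 0.5) / depth, which for 8 slices gives exactly the numbers listed. Sampling at a center reads one slice without interpolation, whereas w = 1.0 lies half a slice past the last center, so what you read there depends on the wrap mode. This might be related to why starting the trace at 0.5 or 1.5 behaves differently from 1.0. A quick check:

```python
# Slice-center w coordinates for an 8-slice volume texture,
# matching the <depth offset> column in the .tai file.
depth = 8
centers = [(i + 0.5) / depth for i in range(depth)]
assert centers == [0.0625, 0.1875, 0.3125, 0.4375,
                   0.5625, 0.6875, 0.8125, 0.9375]
```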

[Edited by - bebud on September 2, 2005 10:13:21 AM]
OK, I forgot to pass the normals to my shaders in RenderMonkey's stream mapping options!!!
Now everything seems to be fine, but I still wonder why I have to start my tracing at 0.5 and not at 1.0!?

The algorithm may still work when you start texcoord.z at 0.5 or 1.5 (it's very good at converging ;) ), but it will probably have some strange artifacts too.
Maybe just write a debugging shader that renders the slice of the distance map with Z=1? If you have your texture wrap mode set wrong (like with repeating or something) then you will get odd results.
I have never used the texture atlas tool, so visualising the distance map would be a very good idea to make sure it came through right.

Will.
"Math is hard" -Barbie
Hey!

The debugging shader at 1.0 gives me the first layer of the volume texture...


This is the (flattened) output from my distance map generator, from inputting a heightmap and telling the atlas tool to produce a volume texture:



I can also produce the reverse order, which I think is the right one:


But as you can see, two slices are completely black! Should I drop them? Is this normal?

just curious, how fast is this thing? I remember when I was reading that GPU Gems chapter I was immediately put off by the 3D texture and the precomputation involved. Have you considered relief mapping? I guess it might not be suitable for you since it requires shader model 3.0, probably due to the number of instructions it needs in the linear ray stepping. Per pixel displacement mapping is really an interesting problem, and it would be great to have a technique that doesn't require a 3D texture or precomputation or linear ray stepping. The ray stepping is a problem, like the GPU Gems 2 guys mentioned, that in between steps you can miss a high frequency ridge. Perhaps by next SIGGRAPH we might have a nice solution.
Quote:Original post by musawirali
just curious, how fast is this thing?

It's plenty fast on modern hardware. As far as I know it can easily do a full screen of pixels at >100 Hz, and since it's a per-pixel method, it automatically scales with distance.

Quote:Original post by musawirali
The ray stepping is a problem, like the GPU Gems 2 guys mentioned, that in between steps you can miss a high frequency ridge.

By my understanding a lot of the point of the distance map is to avoid skipping these high-frequency details. The advantage of the distance functions method as well is that it can do undercuts and all sorts of other interesting geometry. I suspect it could be paired extremely effectively with a deferred renderer as well to get correct silhouettes (with no extra work) and such.

This topic is closed to new replies.
