# Per-pixel displacement mapping


## Recommended Posts

Hello all,

I tried to implement the displacement mapping technique described by William Donnelly in chapter 8 of GPU Gems 2. I want to implement it in GLSL, so I use RenderMonkey. As this chapter can be downloaded for free from NVIDIA, I hope someone can have a look at what went wrong in my shader... I don't use the filtering because:
1. I don't know how to access a texture for filtering the way it is done in the paper (maybe with texture2DProj()?).
2. I have an ATI X700 and thus no access to derivatives in the fragment shader, so I use the software renderer or sometimes another machine with a GeForce 6600...

The distance map is calculated with the algorithm of Danielsson (1980), as described in the paper and found on the CD, and the slices look fine to me. Am I correct with the order of the distance map, i.e. setting the first slice of the volume texture to the mostly black one and the last one to the texture with the most white areas? Or maybe it is because of the binormal and tangent vectors I get from RenderMonkey?

This is the code:

Vertex shader:

uniform vec4 vViewPosition;
uniform float BumpDepth;
uniform vec4 lightPos;

attribute vec3 rm_Tangent;
attribute vec3 rm_Binormal;

varying vec3 TexCoord;
varying vec3 tanEyeVec;
varying vec3 tanLightVec;

void main( void )
{
    // Project position into screen space
    // and pass through the texture coordinate
    gl_Position = ftransform();
    TexCoord    = vec3(gl_MultiTexCoord0.xy, 1.0);

    // Transform the eye vector into tangent space.
    // Adjust the slope in tangent space based on bump depth.
    vec3 eyeVec = vViewPosition.xyz - gl_Vertex.xyz;
    tanEyeVec.x = dot(rm_Tangent, eyeVec);
    tanEyeVec.y = dot(rm_Binormal, eyeVec);
    tanEyeVec.z = -1.0/BumpDepth * dot(gl_Normal, eyeVec);

    // Transform the light vector into tangent space.
    // We will use this later for tangent-space normal mapping.
    vec3 lightVec = lightPos.xyz - gl_Vertex.xyz;
    tanLightVec.x = dot(rm_Tangent, lightVec);
    tanLightVec.y = dot(rm_Binormal, lightVec);
    tanLightVec.z = dot(gl_Normal, lightVec);
}



Fragment shader:

#define NUM_ITERATIONS 2

uniform sampler2D colorSampler;
uniform sampler3D heightSampler;
uniform sampler2D normalSampler;
// this is the depth of 3dTexture divided by the width of the 3dTexture
// 8/256 in my case
uniform float normalizationFactor;

varying vec3 TexCoord;
varying vec3 tanEyeVec;
varying vec3 tanLightVec;

void main( void )
{
    // Normalize the offset vector in texture space.
    // The normalization factor ensures we are normalized with respect
    // to a distance which is defined in terms of pixels.
    vec3 offset = normalize(tanEyeVec);
    offset *= normalizationFactor;
    vec3 texCoord = TexCoord;

    // March a ray through the distance map.
    for (int i = 0; i < NUM_ITERATIONS; i++) {
        float distance2Trace = texture3D(heightSampler, texCoord).r;
        texCoord += distance2Trace * offset;
    }

    // Compute derivatives of the unperturbed texcoords, because the
    // offset texcoords have discontinuities which lead to incorrect
    // filtering. (Disabled: no dFdx/dFdy on this hardware.)
    //vec2 dx = dFdx(TexCoord.xy);
    //vec2 dy = dFdy(TexCoord.xy);

    // Do bump-mapped lighting in tangent space.
    // The normal map stores tangent-space normals remapped
    // into the range [0, 1].
    vec3 tanNormal = 2.0 * texture2D(normalSampler, texCoord.xy).xyz - 1.0;
    //vec3 tanNormal = 2.0 * texture2D(normalSampler, texCoord.xy, dx, dy).xyz - 1.0;
    vec3 tanLightVecN = normalize(tanLightVec);
    float diffuse = dot(tanNormal, tanLightVecN);

    // Multiply diffuse lighting by the texture color.
    //gl_FragColor = diffuse * texture2D(colorSampler, texCoord.xy, dx, dy);
    gl_FragColor = diffuse * texture2D(colorSampler, texCoord.xy);
}
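For intuition, the marching loop above can be sketched on the CPU. This is a minimal illustrative sketch, not the chapter's code: `Vec3`, `sampleDistance`, and the analytic distance field standing in for the 3D texture are all my own assumptions, and the `normalizationFactor` scaling is folded out because the stand-in field already returns distances in texture-space units.

```cpp
#include <cassert>
#include <cmath>

// Minimal CPU sketch of the distance-map ray march in the fragment
// shader above. The texture3D lookup is replaced by an analytic
// distance field (a hypothetical stand-in, not the chapter's code).
struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 v, float s){ return {v.x * s, v.y * s, v.z * s}; }
static float length(Vec3 v)        { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
static Vec3  normalize(Vec3 v)     { return scale(v, 1.0f / length(v)); }

// Stand-in for texture3D(heightSampler, p).r: distance to a flat
// "surface" layer at z = 0.5.
static float sampleDistance(Vec3 p) {
    return std::fabs(p.z - 0.5f);
}

// The shader's loop: step along the eye ray by the stored distance,
// so the ray can never overshoot the surface.
static Vec3 march(Vec3 texCoord, Vec3 tanEyeVec, int iterations) {
    Vec3 offset = normalize(tanEyeVec);  // shader also multiplies by normalizationFactor
    for (int i = 0; i < iterations; ++i) {
        float d = sampleDistance(texCoord);
        texCoord = add(texCoord, scale(offset, d));
    }
    return texCoord;
}
```

A ray started at z = 1.0 converges onto the z = 0.5 layer within a few iterations even at a grazing angle, which is the property the distance map buys over fixed-step linear marching.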


Thanks for any help or suggestions :)

Bebud

##### Share on other sites
What is the exact problem that you're having? Realize also that on an ATI card you're not going to be able to do more than 5 iterations (dependent texture read limit of 4) and thus it's not going to look particularly good.

Still if you could explain your question/problem more, we might be able to help.

##### Share on other sites
ok...
My problem was that the displacement appeared in the wrong place and was also distorted in the wrong direction. But that's solved now by flipping the heightmap along the vertical axis and negating the normalization parameter!

The results are very nice!

But I have problems with non-planar Objects:

At some viewing angles the distortion is wrong. Also, on planes you can see an edge where no displacement happens.
The mouse pointer points to this edge:

Another question I have is how to do the filtering from the paper in GLSL.
OK, the derivatives can be computed with dFdx/dFdy, but how can I use these vectors to load a specific LOD of the image?

Bebud :)

##### Share on other sites
Hmm.. the only reason I can think of for this missing displacement is the ray not hitting the heightmap, but I don't know how that could happen..

PS: A little off-topic, but couldn't the silhouette also be handled better by "pressing" the bumpmap into the surface instead of letting it stick out of the surface? That would make it possible for the ray not to hit the heightmap at all, and you could turn those pixels transparent (either with clip(-1) or setting their alpha to 0.0). Is this correct?

regards,
m4gnus

##### Share on other sites
Quote:
 Hmm.. the only reason I can think of for this missing displacement is the ray not hitting the heightmap, but I don't know how that could happen..

I checked this. I used another starting point in the displacement volume texture:
vec3 texCoord = vec3(TexCoord, 0.5);
vec3 texCoord = vec3(TexCoord, 1.0);

Now I don't have this artifact, but others! :(

Maybe someone could tell me if my process for generating the volume texture is correct:
1. Input the heightmap and generate single slice textures with the algorithm of Danielsson (1980).
2. Input the single textures into NVIDIA's TextureAtlasTool with the params:
-volume -nomipmap

Now I have a DDS texture, but the .tai file output from the TextureAtlasTool is a bit confusing:
# stone-dismap.tai
# AtlasCreationTool.exe -nomipmap -volume -o stone-dismap
#
# <filename>	<atlas filename>, <atlas idx>, <atlas type>, <woffset>, <hoffset>, <depth offset>, <width>, <height>
#
# Texture <filename> can be found in texture atlas <atlas filename>, i.e.,
# stone-dismap<idx>.dds of <atlas type> type with texture coordinates boundary given by:
#   A = ( <woffset>, <hoffset> )
#   B = ( <woffset> + <width>, <hoffset> + <height> )
#
# where coordinates (0,0) and (1,1) of the original texture map correspond
# to coordinates A and B, respectively, in the texture atlas.
# If the atlas is a volume texture then <depth offset> is the w-coordinate
# to use to access the appropriate slice in the volume atlas.
heightMap00.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.062500, 1.000000, 1.000000
heightMap01.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.187500, 1.000000, 1.000000
heightMap02.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.312500, 1.000000, 1.000000
heightMap03.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.437500, 1.000000, 1.000000
heightMap04.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.562500, 1.000000, 1.000000
heightMap05.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.687500, 1.000000, 1.000000
heightMap06.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.812500, 1.000000, 1.000000
heightMap07.png	stone-dismap0.dds, 0, Volume, 0.000000, 0.000000, 0.937500, 1.000000, 1.000000

Doesn't the displacement algorithm use 1.0 as the starting offset? But the <depth offset> here seems to go from 0 to 1!
On the other hand, when I set texCoord.z = 1.5 it also works, except for some artifacts.
So what are these artifacts? They still occur at some angles! At certain angles it seems as if the displacement is bigger than at others, and I get a swirl effect... I'll try to make some screenshots...
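For what it's worth, the <depth offset> values in the .tai file are just the w-coordinates of the slice centers: with a depth of 8, slice i sits at (i + 0.5) / 8, which is why the offsets run from 0.0625 to 0.9375 rather than from 0 to 1. A quick illustrative check (`sliceCenter` is my name for it, not the tool's):

```cpp
#include <cassert>
#include <cmath>

// The <depth offset> for slice i of a depth-D volume atlas is the
// w-coordinate of that slice's center: (i + 0.5) / D.
static float sliceCenter(int i, int depth) {
    return (i + 0.5f) / depth;
}
```

So sampling at texCoord.z = 1.0 lands past the last slice center, and the sampler's wrap/clamp mode decides what you actually read there.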

[Edited by - bebud on September 2, 2005 10:13:21 AM]

##### Share on other sites
OK, I forgot to pass the normals to my shaders in RenderMonkey's stream mapping options!
Now everything seems to be fine, but I still wonder why I have to start my tracing at 0.5 and not 1.0!?

##### Share on other sites
The algorithm may still work when you start texcoord.z at 0.5 or 1.5 (it's very good at converging ;) ), but it will probably have some strange artifacts too.
Maybe just write a debugging shader that renders the slice of the distance map with Z=1? If you have your texture wrap mode set wrong (like with repeating or something) then you will get odd results.
I have never used the texture atlas tool, so visualising the distance map would be a very good idea to make sure it came through right.

Will.

##### Share on other sites
Hey!

The debugging shader at 1.0 gives me the first layer of the volume texture...

This is the (flattened) output from my distance map generator, given a heightmap as input and telling the atlas tool to produce a volume texture:

I can also produce the reverse order, which I think is the right one:

But as you can see, two slices are completely black! Should I drop them? Is this normal?

##### Share on other sites
Just curious, how fast is this thing? I remember when I was reading that GPU Gems chapter I was immediately put off by the 3D texture and the precomputation involved. Have you considered relief mapping? I guess it might not be suitable for you, since it requires Shader Model 3.0, probably due to the number of instructions it needs for the linear ray stepping. Per-pixel displacement mapping is really an interesting problem, and it would be great to have a technique that doesn't require a 3D texture, precomputation, or linear ray stepping. The ray stepping is a problem because, as the GPU Gems 2 guys mentioned, in between steps you can miss a high-frequency ridge. Perhaps by next SIGGRAPH we might have a nice solution.

##### Share on other sites
Quote:
 Original post by musawirali
just curious, how fast is this thing?

It's plenty fast on modern hardware. To my knowledge it can easily do a full screen of pixels at >100 Hz, and since it's a per-pixel method, it automatically scales with distance.

Quote:
 Original post by musawirali
The ray stepping is a problem, like the GPU Gems 2 guys mentioned, that in between steps you can miss a high frequency ridge.

By my understanding, a lot of the point of the distance map is precisely to avoid skipping these high-frequency details. Another advantage of the distance-function method is that it can handle undercuts and all sorts of other interesting geometry. I suspect it could also be paired extremely effectively with a deferred renderer to get correct silhouettes (with no extra work) and such.

##### Share on other sites
Quote:
 just curious, how fast is this thing? I remember when I was reading that GPU Gems chapter I was immediately put off by the 3D texture and the precomputation involved. Have you considered relief mapping?

Like AndyTX mentioned, it's pretty fast on my GeForce 6 machine. On slower machines (SM 2.0) I use two steps, which gives me much nicer results than offset/parallax mapping.

I think it would also be possible to put the 3D texture into a big 2D texture using the texture atlas tools, if you want to avoid 3D textures.

Why don't you like the precomputation? It's just a texture which you have to prepare like any other texture for your game/visualisation. And with the right tools it can be automated, like "click, wait,... done" :)

@AndyTX: do you think my tool creates the right distance map?

##### Share on other sites
Well, precomputation generally limits algorithms in certain cases, i.e. they are not very flexible. For example, if the height field changes dynamically you basically cannot apply this technique. Sure, you can animate the height field by storing this 3D texture for every keyframe, but is that really feasible? :P

##### Share on other sites
Quote:
 Original post by bebud
@AndyTX: do you think my tool creates the right distance map?

I really don't know - I'd have to look at it in more detail. You're going to have to wait for Will to get back to you on this one :)

##### Share on other sites
From the images you posted, I am fairly confident that you are not computing the distance map correctly. Recall that if you have a height map h, the value of the distance map d(x,y,z) is the minimum distance between (x,y,z) and any other point (u, v, h(u,v)). It looks as though the result you are getting is the distance between (x,y,z) and h(x,y). The distance map computation code that I cooked up with Stefanus is available online from NVIDIA at http://download.nvidia.com/developer/GPU_Gems_2/CD/Index.html and I would suggest comparing your results with ours to see the difference.
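To make that definition concrete, here is a brute-force sketch of it (my own illustrative code, not the chapter's; z here is in the same units as the height values, before any normalization by depth). It is O(n²) per texel, so it is only useful for validating a small map against a fast implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Brute-force check of the distance-map definition quoted above:
// d(x,y,z) = min over all (u,v) of the distance between (x,y,z)
// and the surface point (u, v, h(u,v)). Only for validating small
// maps; the chapter ships a fast Danielsson-style implementation.
static float bruteForceDistance(const std::vector<std::vector<float>>& h,
                                int x, int y, float z) {
    const int w  = static_cast<int>(h[0].size());
    const int ht = static_cast<int>(h.size());
    float best = 1e30f;
    for (int v = 0; v < ht; ++v) {
        for (int u = 0; u < w; ++u) {
            float dx = static_cast<float>(x - u);
            float dy = static_cast<float>(y - v);
            float dz = z - h[v][u];
            float d  = std::sqrt(dx*dx + dy*dy + dz*dz);
            if (d < best) best = d;
        }
    }
    return best;
}
```

The key difference from the incorrect result is the minimum over all (u,v): above a flat region the correct value can still be small if a tall feature is nearby horizontally, whereas the per-column distance z - h(x,y) ignores the neighbors entirely.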

As for the black layers, your bottom layer should always be black. This ensures that the ray always terminates, otherwise it could cause some very odd artifacts. But two black layers just means you are wasting data.

Will.

##### Share on other sites
Hey!

Thanks a lot...I'll have a look at it today :)

##### Share on other sites
I am looking at this chapter too, and I am trying to implement it with OpenGL and Cg, but I can't get the displacement working. I use the distance.cpp file that comes with the chapter sources. There is no compiler error, but the displacement does not happen, only the normal mapping. To fill the distance texture I use the glTexImage3D function:

glTexImage3D(GL_TEXTURE_3D, 0, 1, width, height, depth, 0, GL_RED, GL_UNSIGNED_SHORT, final.m_data);

final is an instance of the DistanceMap structure, filled with almost the same code as in the original file:

DistanceMap final(width, height, depth, 1);
for (int z = 0; z < depth; z++) {
   for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++) {
         double value = 0;
         for (int i = 0; i < 3; i++) {
            value += dmap(x, y, z, i)*dmap(x, y, z, i);
         }
         value = sqrt(value)/depth;
         if (value > 1.0) value = 1.0;
         final(x, y, z, 0) = value;
      }
   }
}

The Cg code is exactly the same as the chapter sources. I guess there is something wrong with the glTexImage3D call.

I searched for the Danielsson paper, but I couldn't find it. Does one of you have a reference for it?

##### Share on other sites
It seems that something is wrong with my call to glTexImage3D. Even when I set every color to the maximum I get no displacement. I checked my video card's capabilities, and it should support 3D textures with sizes up to 512 (GeForce 6).

GLuint id;
glGenTextures(1, &id);
glGetError();
glTexImage3D(GL_TEXTURE_3D, 0, 3, width, height, depth, 0, GL_RGB, GL_UNSIGNED_BYTE, final.m_data);
int error = glGetError();
if (error != GL_NO_ERROR) {
   cerr << gluErrorString(error) << endl;
}

final.m_data contains the data that should be loaded into the texture. My fragment shader tells me (by coloring red) that for every texel the content of this map is 0. I don't really know how to proceed from here..

Anyone an idea what could be wrong here?

##### Share on other sites
Quote:
 Original post by jeroenb
[...] glTexImage3D(GL_TEXTURE_3D, 0, 3, width, height, depth, 0, GL_RGB, GL_UNSIGNED_BYTE, final.m_data); [...]
Anyone an idea what could be wrong here?
You aren't binding the texture before you try to upload data to it.

##### Share on other sites
I added the bind call, but it still does not work. I will have another in-depth look at it tomorrow, but if anyone has an idea I am all open to it :)

##### Share on other sites
Update:

I almost got things running. But for some strange reason the bump depth isn't working as in the demo. When I increase the depth, nothing changes at runtime, not even when I hardcode it in the vertex program.

##### Share on other sites
Be aware that one of the big problems with per pixel displacement mapping is that it throws early Z out the window. You might want to do occlusion culling for anything that's using it, because all of the pixels will be computed regardless of depth test.

##### Share on other sites
You only lose early Z rejection if you actually modify the z value in the pixel shader. However most implementations of per-pixel displacement mapping do NOT do this (for the very reason you mentioned). So you don't get proper z buffering but you do get parallax, self-occlusion etc. If you are using shadowmapping and deferred shading you can write out the modified viewspace position (based on the height) and have your shadows react to the bumps almost for free :)

##### Share on other sites
That's a good point about deferred rendering and shadow maps. If anyone has a demo that does this, I would really like to see it.

##### Share on other sites
OK, per-pixel displacement now works, though it is still not view independent. For some reason I can't get it to rotate correctly like the demo does. I tried using the inverse transpose of the modelview matrix to transform the eye position, like the demo code does, but with no success yet.

##### Share on other sites
Quote:
 Original post by jeroenb
OK, per-pixel displacement now works, though it is still not view independent. For some reason I can't get it to rotate correctly like the demo does. I tried using the inverse transpose of the modelview matrix to transform the eye position, like the demo code does, but with no success yet.

And have you tried it without any transformation matrix? I didn't take a deep look at the source of the demo app, but in the book they don't use any transformations...