What is needed for parallax mapping?

I'm attempting to implement parallax mapping with a deferred renderer. I don't want parallax occlusion mapping, steep parallax mapping, or any of that, just plain old parallax mapping, but nearly everything I can find on the web covers one of the more complex approaches. I understand that parallax mapping shifts the texture coordinates based on the viewing angle and height. However, all the sample code uses variables like "eye" and "worldMatrix" without explaining what those actually are. Can someone explain exactly what I need to pass to the shader to get plain old parallax mapping to work, and how that data is used to determine the direction and amount of the offset? I don't even need lighting; that's done in the deferred stage. I just need enough to get the texture coordinates shifted.

Currently what is passed to the fragment shader is: position, normal, tangent, binormal, all in view space (i.e., multiplied by the modelViewMatrix, or its inverse transpose in the case of normals), and the texture coordinate.

(One other quick question: I don't know much about tangent space, so I might be wrong. The normal must be multiplied by the inverse transpose. But since the tangent and binormal are "parallel" (somewhat) to the surface, you would multiply them by the same matrix the position uses, right? Or am I thinking about it wrong?)

EDIT: Hmm, actually, is it generally more useful to keep normals/tangents/binormals in world space instead of view space? Either is easy. Thanks!
Quote:However, all the sample code uses variables like "eye" and "worldMatrix" without explaining what those actually are.


How many shaders have you written? Those are used in just about every shader. Parallax mapping does the shift you're talking about based on the viewer: if you are looking straight down at the texture there is no shift, while viewing it from a grazing angle gives a big shift. You need the eye vector to do this.
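For reference, the classic single-step parallax offset boils down to something like this GLSL sketch (the function name, the parameters, and typical values like scale = 0.04, bias = -0.02 are just illustrative, not the poster's code):

// eyeTangent: direction from the surface point toward the eye, in tangent space.
// Returns the shifted texture coordinates to use for all subsequent samples.
vec2 parallaxShiftUV(sampler2D heightMap, vec2 uv, vec3 eyeTangent, float scale, float bias)
{
    float h = texture2D(heightMap, uv).r * scale + bias;  // remap the [0,1] height sample
    vec3 e = normalize(eyeTangent);
    return uv + h * e.xy;  // "offset limiting" variant; divide by e.z for the unlimited form
}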

Try doing normal mapping first to make sure you have everything set up correctly.

To get into any space you just need the 3 basis vectors describing that space. A vector's coordinates in that space are its dot products against those 3 vectors. So if your model's Norm/Tan/BiNorm are in world space, you just dot any world-space vector against them to get it into tangent space.
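As a minimal GLSL sketch of that dot-product transform (worldToTangent is just an illustrative name, and T, B, N are assumed to be unit length and roughly orthogonal):

// Express a world-space vector in the tangent-space basis defined by
// T, B, N (tangent, binormal, normal, all given in world space).
vec3 worldToTangent(vec3 v, vec3 T, vec3 B, vec3 N)
{
    return vec3(dot(v, T), dot(v, B), dot(v, N));
}

For example, the eye vector parallax mapping needs would be worldToTangent(eyePosWorld - vertexPosWorld, T, B, N).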

The basic idea for forward rendering was to get the light into tangent space and interpolate it per-vertex. Since you are deferred, you need all normals in view space. So you take the normal you get from your normal map (which is in tangent space) and put it through the inverse transform to bring it into view space. I might have a few things off here since I haven't done it deferred.
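Roughly like this sketch, assuming t, b and n are the interpolated view-space tangent, binormal and normal; with an orthonormal basis, the "inverse" ends up being just the matrix whose columns are t, b and n:

// Fetch a tangent-space normal from the normal map and rotate it into view
// space for the G-buffer. mat3(t, b, n) has the basis vectors as its columns.
vec3 sampleViewSpaceNormal(sampler2D normalMap, vec2 uv, vec3 t, vec3 b, vec3 n)
{
    vec3 nTan = texture2D(normalMap, uv).xyz * 2.0 - 1.0;  // unpack [0,1] to [-1,1]
    return normalize(mat3(t, b, n) * nTan);
}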


"eye" is usually the position of the viewer, in whatever space your shading takes place in (possibly world space).
"worldMatrix" is usually a matrix for transforming model-space data into world-space.

The normal/binormal/tangent form a 3d coordinate system. If you transform one of these, you really should be performing the exact same transformation on the other 2 as well!

Usually to transform positions from model to world, you'll just use the "worldMatrix", but when transforming normals/binorm/tangents, you don't want to translate/scale them (because then they wouldn't be unit vectors any more), you just want to rotate them using the 3x3 part of the worldMatrix (assuming there's no scaling involved).
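Here's a sketch of how that looks on the vertex-shader side, using the old GLSL built-ins this thread already uses; "worldMatrix", "eyePosWorld" and the tangent/binormal attributes are things your application would supply (the names are illustrative), and no scaling is assumed:

uniform mat4 worldMatrix;   // model -> world
uniform vec3 eyePosWorld;   // viewer position in world space

attribute vec3 tangent;
attribute vec3 binormal;

varying vec3 eyeDirTangent; // what plain parallax mapping actually needs
varying vec2 uv;

void main()
{
    vec3 posWorld = (worldMatrix * gl_Vertex).xyz;

    // Rotate the basis with the 3x3 part only, so the vectors stay unit length.
    mat3 rot = mat3(worldMatrix);
    vec3 N = normalize(rot * gl_Normal);
    vec3 T = normalize(rot * tangent);
    vec3 B = normalize(rot * binormal);

    // The "eye" vector: from the surface point toward the viewer, expressed in tangent space.
    vec3 toEye = eyePosWorld - posWorld;
    eyeDirTangent = vec3(dot(toEye, T), dot(toEye, B), dot(toEye, N));

    uv = gl_MultiTexCoord0.xy;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}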

Both world-space and view-space are valid for shading... just make sure all the calculations are done in one or the other!
I condensed plain ol' parallax mapping down into this GLSL function. I know it's not an "explanation", but for what it's worth:
// vVertex is the view-space fragment position (the eye-to-surface vector), and
// t, b are the view-space tangent and binormal; all three are varyings declared
// elsewhere in the shader.
vec2 parallaxmap(sampler2D heightmap, vec2 texcoords, float scale, bool flip)
{
    float fDepth = 0.0;
    vec2 vHalfOffset = vec2(0.0, 0.0);

    // Project the eye vector onto the tangent and binormal to get the 2D
    // direction of the shift in texture space.
    vec2 eyevec = vec2(dot(vVertex, t), dot(vVertex, b));

    // Three cheap refinement steps: sample the height map at the current offset,
    // average the sample into fDepth, then push the offset along the eye direction.
    // "flip" selects whether the map stores depth directly (r) or height (1 - r).
    for (int i = 0; i < 3; ++i)
    {
        float h = texture2D(heightmap, texcoords + vHalfOffset).r;
        fDepth = (fDepth + (flip ? h : 1.0 - h)) * 0.5;
        vHalfOffset = normalize(eyevec) * fDepth * scale;
    }
    return vHalfOffset;
}
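And a sketch of how it might be hooked up in the fragment shader (diffuseMap, heightMap, the 0.04 scale and the varying names are placeholders):

uniform sampler2D diffuseMap;
uniform sampler2D heightMap;

varying vec3 vVertex;  // view-space fragment position
varying vec3 t, b;     // view-space tangent and binormal
varying vec2 uv;

// ... parallaxmap() from above goes here ...

void main()
{
    // Shift the coordinates once, then sample every texture with the shifted UVs.
    vec2 uvShifted = uv + parallaxmap(heightMap, uv, 0.04, false);
    gl_FragColor = texture2D(diffuseMap, uvShifted);
}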
-G

[size="1"]And a Unix user said rm -rf *.* and all was null and void...|There's no place like 127.0.0.1|The Application "Programmer" has unexpectedly quit. An error of type A.M. has occurred.
[size="2"]

Thanks everyone! That helped clarify a lot. One question... what exactly is vVertex? Is it the position of that fragment? And when would "flip" be used? Thanks.

EDIT: Oh, also, one more thing. Is there a way to get the new "position" of the fragment if I have a float3 of its original position? It isn't a huge deal, but I also have cube mapping going on, and I think it might be more accurate if I have the "correct" position of the fragment (though I haven't tested it yet).

Quote:One question... what exactly is vVertex? Is it the position of that fragment? And when would "flip" be used? Thanks.

EDIT: Oh, also, one more thing. Is there a way to get the new "position" of the fragment if I have a float3 of its original position? It isn't a huge deal, but I also have cube mapping going on, and I think it might be more accurate if I have the "correct" position of the fragment (though I haven't tested it yet).
vVertex = (gl_ModelViewMatrix * gl_Vertex).xyz; // the view-space vertex position, interpolated to the fragment
Also, look at the source: "flip" swaps the low elevations for the high ones. Cube mapping is also not affected by the final position; cube mapping is only accurate if the reflected surfaces are at infinity, which is a starting assumption of the technique.
-G

[size="1"]And a Unix user said rm -rf *.* and all was null and void...|There's no place like 127.0.0.1|The Application "Programmer" has unexpectedly quit. An error of type A.M. has occurred.
[size="2"]

Thanks for the clarification. The reason I wanted the position was because in view space the position is the vector from the camera to the point on the object. However, now that I think about it, I think the input fragment position is actually correct.
I've just had a go at slotting this into my engine - it's been on my todo list for a while. I have a problem though: I can see it's doing something, but it's not right.

This is just a lighting effect? I'm doing multipass lighting, so the final pixel generation is divorced from the lighting that falls on it, and of course my existing bump-mapping is done in a shader that applies lighting and shadows only - by then it's too late to move the UVs used for the texture sampling.

I've looked at the DX samples and, just from the exe output, I can't work out whether the textures move as well as the lighting changing. I can't deduce much from the source either, as all I can see is a really complicated ray-tracing method which I just don't need.
Just FTR: I had a crap bug, and now that I've fixed it I can clearly see that while you can get away with applying the offset to the lighting only up to a point, if you are texturing you need to apply it to the texture sampling as well.

One in the eye for multipass! :(

