what is the difference between POM and Relief Mapping

8 comments, last by InvalidPointer 13 years, 9 months ago
Hi,

I'm getting a bit confused: I don't understand the difference between parallax occlusion mapping (not just "parallax mapping") and relief mapping.

Can anyone help me?

THANKS
The two are very similar techniques, differing mainly in their fine details. Relief mapping uses a series of filtered depth-texture lookups to search for an intersection iteratively, which introduces high latency and can cause "stair-stepping" artifacts. Parallax occlusion mapping also does a series of depth-texture lookups, but the intersection search is based on a piecewise linear approximation of the surface rather than pulling further samples in the pixel shader.
Thanks!

So parallax occlusion mapping seems better. Is it much slower?
With DirectX 11, GPU tessellation kills parallax occlusion mapping easily; maybe you should look that up.
Quote:Original post by rouncED
With DirectX 11, GPU tessellation kills parallax occlusion mapping easily; maybe you should look that up.


Is that really true? So it's no longer worth learning those bump mapping techniques?
Quote:Original post by NiGoea
So parallax occlusion mapping seems better. Is it much slower?

I don't believe so, no. Actually, it's likely to be faster, because there are fewer dependent texture reads.
Quote:Original post by NiGoea
Quote:Original post by rouncED
With DirectX 11, GPU tessellation kills parallax occlusion mapping easily; maybe you should look that up.
Is that really true? So it's no longer worth learning those bump mapping techniques?
Not necessarily. They're both useful tools: one does displacement mapping at the fragment level, the other at the vertex level. It's always good to have more tools at your disposal as a programmer.
While there have been some explanations, I don't think there's anything really meaningful here yet, so I'll try to clarify. The core difference lies in how sampling is performed, not so much in piecewise linear approximations. (Sneftel: The two approaches you describe are in fact the same thing ;) The heightfield bitmap means the simulated height function is piecewise, and bilinear filtering takes care of the whole linear approximation clause)

In essence, POM does some fancy vector math beforehand and calculates the maximum texture displacement based on view vector and heightfield scale, then divides that vector up into n smaller segments, where n is the number of desired iterations. From there, you basically just take samples starting from the original eye/surface intersection and start doing comparisons.
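Sketched in Python rather than shader code (all names here are illustrative, not from any particular engine; `height(u, v)` is assumed to return a depth in [0, 1], with 0 at the surface, and `view_ts` is the normalized tangent-space view vector with z > 0), the POM search looks roughly like this:

```python
def pom_intersect(height, uv, view_ts, scale, n=16):
    """Linear search plus one piecewise-linear intersection step."""
    # Total texture-space offset over the full depth range, split into n steps.
    du = -view_ts[0] / view_ts[2] * scale / n
    dv = -view_ts[1] / view_ts[2] * scale / n
    dd = 1.0 / n                       # ray depth advanced per step
    u, v, d = uv[0], uv[1], 0.0
    prev_u, prev_v, prev_d, prev_h = u, v, d, height(u, v)
    for _ in range(n):
        u, v, d = u + du, v + dv, d + dd
        h = height(u, v)
        if d >= h:                     # ray dipped below the heightfield
            # Intersect the ray segment with the line between the last two
            # height samples -- the piecewise linear approximation.
            t = (prev_h - prev_d) / ((d - prev_d) - (h - prev_h))
            return (prev_u + t * du, prev_v + t * dv)
        prev_u, prev_v, prev_d, prev_h = u, v, d, h
    return (u, v)                      # no hit; clamp to the deepest sample
```

Note that the final intersection is computed analytically from the last two samples, so no extra texture fetches are needed once the ray goes below the surface.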

Relief mapping, however, just keeps adding the normalized texture-space eye vector and comparing the Z values until an intersection is found, then optionally does one or more refinement passes once more precise bounds on the intersection region are computed. (Of the refinement schemes, the best I'm aware of is still probably interval mapping, though I haven't done particularly exhaustive research on the subject for at least two years; also note that I don't include 'replacement' approaches like cone step mapping.)
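A hedged Python sketch of that loop, with a binary search as the refinement pass (names are illustrative; `height(u, v)` is assumed to return a depth in [0, 1], with 0 at the surface, and `view_ts` is the normalized tangent-space eye vector with z > 0):

```python
def relief_intersect(height, uv, view_ts, scale, n_linear=16, n_binary=6):
    """Linear march followed by a binary-search refinement pass."""
    du = -view_ts[0] / view_ts[2] * scale / n_linear
    dv = -view_ts[1] / view_ts[2] * scale / n_linear
    dd = 1.0 / n_linear
    u, v, d = uv[0], uv[1], 0.0
    # Linear search: march until the ray depth reaches the heightfield.
    for _ in range(n_linear):
        if d >= height(u, v):
            break
        u, v, d = u + du, v + dv, d + dd
    # Refinement: binary-search the last interval, halving the step and
    # moving back when below the surface, forward when still above it.
    for _ in range(n_binary):
        du, dv, dd = du * 0.5, dv * 0.5, dd * 0.5
        if d >= height(u, v):
            u, v, d = u - du, v - dv, d - dd
        else:
            u, v, d = u + du, v + dv, d + dd
    return (u, v)
```

Each sample here is a dependent texture read in a real shader, which is where the latency comes from; the refinement loop is the part that POM, as originally described, leaves out.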

So what's the difference? Not a whole lot, actually. Mostly it's a question of semantics, though technically speaking parallax occlusion mapping makes no accommodation for refinement passes. You can probably interchange the two freely with no real problem.

EDIT: BONUS ROUND!
AFAIK, tessellation is indeed a superior replacement for virtual displacement mapping on most supporting hardware configurations/scenes. While having skillions of microtriangles everywhere pretty much ruins pixel-quad efficiency, you're chopping off scads of dependent texture reads and register usage, so the shader itself is vastly cheaper. You can probably mitigate the quad-efficiency cost somewhat by being clever with your tessellation factors and limiting heavy detail to silhouettes.

Plus, with tessellation, you get free silhouette correction/fragment depth anyways :)

EDIT 2: Sneftel has pointed out that my original interpretation of piecewise linear approximation above was incorrect; we are, in fact, describing the same thing here.

[Edited by - InvalidPointer on July 16, 2010 12:39:07 PM]
clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.
Quote:Original post by InvalidPointer
Sneftel: The two approaches you describe are in fact the same thing ;) The heightfield bitmap means the simulated height function is piecewise, and bilinear filtering takes care of the whole linear approximation clause)
That's the difference. Relying on bilinear filtering means you're interpolating between texels, so the t-positions of the piecewise function can show up anywhere. With POM, the samples are taken at regular intervals (including whatever filtering you want), and the result is a new polyline with known and well-behaved t-positions.
Quote:Original post by Sneftel
Quote:Original post by InvalidPointer
Sneftel: The two approaches you describe are in fact the same thing ;) The heightfield bitmap means the simulated height function is piecewise, and bilinear filtering takes care of the whole linear approximation clause)
That's the difference. Relying on bilinear filtering means you're interpolating between texels, so the t-positions of the piecewise function can show up anywhere. With POM, the samples are taken at regular intervals (including whatever filtering you want), and the result is a new polyline with known and well-behaved t-positions.

Fair enough. I guess that's what I get for not yet having a really solid background in signal processing! Edited original post.

