Modern parallax techniques

Could someone kick me in the right parallax-mapping direction?

So far I've implemented old cheap offset parallax mapping and "Steep Parallax Mapping". The "steep" variant checks whether a pixel is occluded by firing a short ray towards the camera, then checking for about 12 iterations whether the ray intersects the surface heightmap.
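
For reference, the kind of loop I mean looks roughly like this (a minimal GLSL sketch, not my exact code; heightMap stores depth in the red channel, viewTS is the tangent-space vector toward the eye, and the final interpolation between the last two samples is the usual way to hide the layering):

// Minimal steep-parallax march (sketch, not my exact code). Conventions:
//  - heightMap.r stores depth into the surface: 0 = top, 1 = deepest
//  - viewTS is the normalized tangent-space vector from the surface toward the eye
//  - in a real shader, use textureGrad/textureLod inside the loop to avoid
//    mip-selection artifacts
vec2 steepParallax(sampler2D heightMap, vec2 uv, vec3 viewTS,
                   float heightScale, int numLayers)
{
    float layerDepth = 1.0 / float(numLayers);
    // UV shift per layer, projected along the view direction.
    vec2 deltaUV = (viewTS.xy / viewTS.z) * heightScale * layerDepth;

    vec2  currentUV    = uv;
    float currentDepth = 0.0;
    float sampledDepth = texture(heightMap, currentUV).r;

    // Step into the surface until the ray drops below the height field.
    for (int i = 0; i < numLayers && currentDepth < sampledDepth; ++i)
    {
        currentUV    -= deltaUV;
        currentDepth += layerDepth;
        sampledDepth  = texture(heightMap, currentUV).r;
    }

    // Interpolate between the last two samples to hide the layered look.
    vec2  prevUV    = currentUV + deltaUV;
    float afterHit  = currentDepth - sampledDepth;                                // overshoot below the surface
    float beforeHit = texture(heightMap, prevUV).r - (currentDepth - layerDepth); // gap above the surface one step earlier
    float t = clamp(beforeHit / (beforeHit + afterHit), 0.0, 1.0);
    return mix(prevUV, currentUV, t);
}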

The problem with the "steep parallax" effect is that I get a sliced, layered result, as if 12 thin sandwiches were stacked. This picture describes the problem:
http://apocaq3.xtr3m.net/images/parallax/SteepParallax-14iters-CloseView.jpg

Maybe there is a simple solution for this, or maybe my heightmap isn't right (too big height differences?).
Either way, I was wondering what the game-industry standard is these days. I plan to use the parallax effect for walls and such, but also for debris on the ground (bricks and crap), so a good level of detail is important. The only problem is that I just don't know what to choose: Steep Parallax Mapping, Cone Stepping, Relief Mapping, Parallax Occlusion Mapping... I don't know which method looks good & has decent performance.



Ultimately, it should be something like this:

That's Ambient Occlusion Mapping, right? A few questions then:
- The bricks in that video seem to have (texture?) detail on the sides as well... How/where did they draw that? Or is it maybe a procedural shader instead of a texture doing the job here?
- The sides are "clipped" away, instead of having a flat horizon. Is that just a matter of killing pixels in the fragment shader once they fall outside?
- What's so different about this compared to Steep Parallax Mapping? I mean, it also works by raytracing, right?
- Is Ambient Occlusion Mapping as shown here a good idea (performance, practicality of implementation, etc.)?
- Last but not least, does anyone know a good paper or demo for that technique?

Thanks,
Rick
I can't comment on POM, but QDM is a modern way to handle things. There are comparisons to POM in the paper I linked. Much better algorithm overall.
QDM, yeah, why not add another name to the already huge list of parallax techniques :P
Cool paper, and not even too difficult to understand. And most of all, it looks good. Oh, POM = Parallax Occlusion Mapping, right?


I didn't read the gritty implementation details yet, but I'm a bit concerned about the performance. The paper says it will run well on the upcoming generation of hardware... but on the current one? Today's hardware shouldn't be a limitation when developing something for tomorrow. But since POM has already been used for some time with good results and runs on current hardware, I wonder about the QDM performance/quality ratio. It looks a bit better, but if it's also a ton slower... I don't want to torture my laptop!

Ok, back to the paper to read further :)


QDM is already a few years old; modern hardware should have no problem with it (current gen = Xbox 360/PS3; modern PC GPUs are beyond 'current gen'). Basically it utilizes mipmapping to find the right collision quickly and precisely. An issue with older hardware is that you need integer arithmetic (SM >= 4.0) and access to the mipmap levels (tex2Dlod). The benefit is that it delivers very high quality with very good performance. POM needs a lot more iterations to deliver similar quality, especially when you want surfaces with a high depth perception.
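
Roughly, the traversal looks something like this (a simplified GLSL sketch from memory, not the paper's exact code; it assumes the mip chain of depthTex is built with a min filter, so every texel stores the shallowest depth of its region):

// Simplified QDM-style traversal (a sketch, not the paper's exact algorithm).
// Conventions:
//  - depthTex.r = depth into the surface (0 = top), with a custom mip chain
//    where every texel stores the MINIMUM depth (shallowest feature) of its region
//  - viewTS = normalized tangent-space vector from the surface toward the eye
//  - maxLevel = index of the coarsest mip level
vec2 qdmTrace(sampler2D depthTex, vec2 uv, vec3 viewTS,
              float heightScale, int maxLevel)
{
    vec3 start = vec3(uv, 0.0);
    // Ray parameterized by depth t: position(t) = start + dir * t.
    // (Assumes dir.xy is non-zero; guard against division by zero in real code.)
    vec3 dir = vec3(-viewTS.xy / viewTS.z * heightScale, 1.0);

    vec3 p     = start;
    int  level = maxLevel;

    for (int i = 0; i < 64 && level >= 0; ++i)
    {
        // Shallowest surface point anywhere inside the current cell.
        float minDepth = textureLod(depthTex, p.xy, float(level)).r;

        if (minDepth > p.z)
        {
            // The whole cell is empty down to minDepth, so the ray can advance
            // that far, but not past the cell border, since the neighbouring
            // cell may contain a shallower feature.
            vec3  candidate = start + dir * minDepth;
            float cells     = exp2(float(maxLevel - level));  // cells per axis at this level
            vec2  cellOld   = floor(p.xy * cells);
            vec2  cellNew   = floor(candidate.xy * cells);

            if (cellOld != cellNew)
            {
                // Clamp the advance to whichever cell border the ray crosses first.
                // (A real implementation nudges past the border by a small epsilon.)
                vec2  border = (cellOld + step(vec2(0.0), dir.xy)) / cells;
                vec2  tCross = (border - start.xy) / dir.xy;  // depth at each border crossing
                float t      = min(minDepth, min(tCross.x, tCross.y));
                candidate    = start + dir * t;
                level        = min(level + 2, maxLevel);      // back up in the hierarchy
            }
            p = candidate;
        }
        level--; // otherwise refine toward the finest mip
    }
    return p.xy; // parallax-corrected texture coordinate
}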
I see. Best might be to arm ourselves with both POM and QDM.
QDM for surfaces that really need the extra quality, POM with relatively few iterations for somewhat lower-quality stuff. And for the simple stuff, maybe just plain old cheap parallax mapping.

My laptop comes from the SM3 generation, so it will probably cry (I didn't know ints were a problem, btw). Well, too bad for the laptop then. Let's start coding.


Another question arises, btw. Parallax implementations often use self-shadowing. As usual with simple demos, they use a single, known light source. But what about multiple lights & a deferred rendering approach? So far, I use parallax only to offset the albedo / specular / normal maps. That data is then stored in buffers for deferred lighting later on. In other words, the lighting phase has nothing to do with parallax mapping so far...
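
Concretely, only the G-buffer pass is involved; something like this sketch (all names are placeholders, and plain offset mapping stands in for whichever technique ends up being used):

// G-buffer pass sketch: parallax only changes which texels are fetched; the
// deferred lighting pass never touches the height map. Plain offset mapping
// stands in for whichever technique (steep/POM/QDM) is actually used.
#version 330 core

in vec2 vUV;
in vec3 vViewTS;   // tangent-space vector from the surface toward the eye
in mat3 vTBN;      // tangent-to-view-space basis for the normal

uniform sampler2D albedoMap;
uniform sampler2D normalMap;
uniform sampler2D specularMap;
uniform sampler2D heightMap;
uniform float heightScale;

layout(location = 0) out vec4 gAlbedo;
layout(location = 1) out vec4 gNormal;
layout(location = 2) out vec4 gSpecular;

// Swap in steepParallax()/qdmTrace() here for better quality.
vec2 parallaxOffset(vec2 uv, vec3 viewTS)
{
    float depth = texture(heightMap, uv).r;
    return uv - (viewTS.xy / viewTS.z) * depth * heightScale;
}

void main()
{
    vec2 uv = parallaxOffset(vUV, normalize(vViewTS));

    vec3 n = normalize(vTBN * (texture(normalMap, uv).xyz * 2.0 - 1.0));

    gAlbedo   = texture(albedoMap, uv);
    gNormal   = vec4(n * 0.5 + 0.5, 0.0);
    gSpecular = texture(specularMap, uv);
}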

I could do self-shadowing in the pre-stage already by using only one primary (dominant) light source. But it's often difficult to decide which light should be the primary one in indoor situations...
Ok. I started with POM first, with the help of Jason Zink's article here on GameDev. Now that I understand POM a bit, it's easier to try QDM. Still, I'd like to solve a few issues first:

1.- Self-shadowing: is there a nice way to integrate this into a deferred rendering pipeline?
Right now I'm just firing a secondary ray towards one light to check self-shadowing. This happens before the actual lighting is done in the deferred pipeline. However, in my indoor scenes there are many lights. I could try to pick the dominant one, but maybe there is another way around it?

2.- Steep edges are lacking texture data & still look "sliced" (using ~150 samples).
I guess there is not much to do about it, or am I wrong? Rounded stones have a smooth falloff, but very large height differences such as a brick wall suffer from slicing, & there are only 1 or 2 pixels of normal/albedo/specular data for them. How the heck did they make the bricks look that sharp in this movie? Nor do I see any slicing, although that might be masked by the video quality...
http://www.youtube.c...feature=related

3.- For the finishing touch, silhouette clipping.
Are there multiple tricks for this? So far I've seen an nVidia demo. I believe it actually adds fins with a geometry shader and masks those. It sounds difficult to me, but I didn't examine the code yet. Anyway, any recommendations for this?
- edit -
Clipping wasn't too difficult. At least, not on a simple quad with known texture coordinates. Just check if the resulting texture coordinates are outside the bounds of the quad corner texture coordinates and clip the pixels. However... it may get tricky on more complex objects such as a sphere.
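
For the quad case it's basically just this (a small sketch; uvMin/uvMax would be the quad's corner texture coordinates, often 0..1):

// Silhouette clipping on a simple quad: if the parallax-corrected UV ends up
// outside the quad's texture-coordinate range, the ray walked off the edge of
// the surface, so discard the pixel instead of showing a stretched border.
bool outsideQuad(vec2 uv, vec2 uvMin, vec2 uvMax)
{
    return any(lessThan(uv, uvMin)) || any(greaterThan(uv, uvMax));
}

// In the fragment shader, right after computing the parallax-corrected uv:
//     if (outsideQuad(uv, uvMin, uvMax))
//         discard;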


Thanks for helping guys!

1.- Self-shadowing: is there a nice way to integrate this into a deferred rendering pipeline?
Right now I'm just firing a secondary ray towards one light to check self-shadowing. This happens before the actual lighting is done in the deferred pipeline. However, in my indoor scenes there are many lights. I could try to pick the dominant one, but maybe there is another way around it?


I haven't tried this yet, but the way I was thinking of doing it was with screen-space self-shadowing, like Crytek described in one of their presentations. When you're creating your G-buffer and distorting the albedo and normal maps with one of the above-mentioned algorithms, I think you can offset the depth at that point too. This means other screen-space techniques like SSAO/SSDO and SSGI should work as well, although there will probably be a loss in quality. I haven't tried it, so I have no idea how good the results will be, but it might be worth a shot.
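
Something along these lines is what I had in mind (a completely untested sketch; hitDepth would be the normalized depth of the parallax hit, and the G-buffer is assumed to store distance from the camera):

// Untested sketch: push the stored G-buffer depth back by the parallax
// displacement so screen-space passes (shadows, SSAO/SSDO, SSGI) "see" the
// displaced surface instead of the flat one.
// hitDepth  = normalized depth of the parallax hit (0..1 of the height range)
// nWS, vWS  = world-space surface normal and normalized vector toward the camera
// viewDist  = distance from the camera to the un-displaced surface point
//             (if the G-buffer stores camera-space z instead, project accordingly)
float displacedDistance(float hitDepth, float heightScale,
                        vec3 nWS, vec3 vWS, float viewDist)
{
    float displacement = hitDepth * heightScale;                 // world-space units
    float alongView    = displacement / max(dot(nWS, vWS), 0.2); // distance along the view ray
    return viewDist + alongView;                                 // deeper into the scene
}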

I think I saw the technique in the lighting presentation on this page: http://www.crytek.com/cryengine/presentations
They are great presentations and worth a look anyway if you haven't seen them already.
Hope that helps!



The big problem I see with this is that you're basically going to ruin any kind of early-Z optimization if you apply this approach to a significant fraction of the surfaces in your game. Considering that virtual displacement mapping approaches make for somewhat expensive shaders, and that G-buffer generation requires a several-fold increase in write bandwidth, you're kind of creating a perfect 'lack of scalability' storm. It could also carry over into the lighting pass, depending on how the hardware handles this sort of situation; on some GPUs, all early-Z is disabled until you actually clear the buffer.
The "ambient occlusion" effect is actually just soft-shadows generated by sampling the height-map in steps away from the light direction (in tangent space). There's an ATI paper about it here that has most of the answers to your questions.

As far as getting it into a deferred pipeline: if you store all the heightmaps and the corresponding displacement scale in the G-Buffer, it should be rather easy to do the displacement and shadows as a post-effect.

The silhouette edges in the video come from simply discarding pixels which lie outside of the UV coordinate bounds. Of course, that only works for things without tiling. The better way would be to detect where an edge of the object lies and clip there. This thread has more info on the clipping idea.
(PS: I'm actually the author of that video. Thanks for posting it!)
