spek

Parallax / Displacement mapping in 2017 ?


Hi there,

Just wondering, what is the common way to achieve "enhanced bumpmapping" these days?

In the past I used POM (Parallax Occlusion Mapping), which calculates a per-pixel offset by ray-marching through a heightfield. It worked pretty well, but it was relatively expensive (at least, it felt that way in 2012) and of course it's not REAL geometry. Maybe you devs are fooling me, but old brick walls in modern games really look as if the bricks truly stick out, even when looking at the corners.
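For concreteness, the per-pixel ray-march described above boils down to something like this CPU-side sketch (the function, parameter names, step count, and plain linear search are illustrative assumptions, not anyone's actual shader code):

```python
# Minimal CPU sketch of POM's core loop: march down through heightfield
# "layers" along the tangent-space view ray; the first layer that dips
# below the sampled height gives the parallax-shifted UV.

def parallax_occlusion_offset(height_at, uv, view_ts, scale=0.05, steps=32):
    """height_at(u, v) -> height in [0, 1]; view_ts is the tangent-space
    view direction (z pointing away from the surface, toward the eye)."""
    vx, vy, vz = view_ts
    # Total UV shift if we marched all the way down to height 0.
    max_offset = (-vx / vz * scale, -vy / vz * scale)
    step = 1.0 / steps
    layer = 1.0
    u, v = uv
    for _ in range(steps):
        if layer <= height_at(u, v):   # ray dipped below the surface
            return (u, v)              # (a real shader would interpolate here)
        layer -= step
        u += max_offset[0] * step
        v += max_offset[1] * step
    return (u, v)
```

A grazing view direction makes `max_offset` large, which is exactly why the step count (and therefore the cost) has to grow to avoid stair-stepping artifacts.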

 

Now I have Tessellation, which truly offsets the geometry. In the beginning I thought it was awesome, but when stepping away the sub-division quickly reduces, and looking at it now I'm actually thinking about using POM again. The problem is the jagged edges on diagonal silhouettes. Sure, they can be reduced by throwing in even more sub-division, but I guess that gets truly expensive, and that for an effect that often isn't noticed that much anyway. But maybe it's the norm these days, dunno, that's why I'm asking...

 

Both methods had 2 further issues in my case:

* Edges / Corners. With POM you generally shift inwards, which looks kind of weird at edges where a wall meets another surface. With Tessellation you can go both ways: moving vertices outwards creates holes at edges, while moving them inwards works better, but then objects standing on top have their "feet" sunken into the floor, as if it were grass. It breaks the coolness mercilessly. My "solution" is just to minimize the offset at borders (right now I can vertex-paint the offset strength/direction). But the corners are exactly where displacement should shine! We want broken edges, bricks sticking out!

 

* Self-Shadowing (lack of). Offset is nice, but without self-shadowing it still looks flat and awkward. POM demos showed how to do it, but always with a single fixed light source. In a deferred pipeline with many light sources (and also ambient), I wouldn't know how to achieve that "afterwards", when rendering the light volumes using the G-Buffers in which the offset has already taken place.

I guessed that for Tessellation it would come more naturally: while filling the depth buffer, you can take these offsets into account. However, when rendering shadow (depth) maps from the light's perspective, you would have to tessellate as well, otherwise the depth comparison is incorrect. I haven't tried it yet, but doesn't this make things even worse performance-wise? Or should I just trust 2017's GPU power?
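For what it's worth, the vertex-painted fade mentioned under the first bullet reduces to a one-liner at displacement time; a minimal sketch with made-up names and an assumed displacement scale:

```python
def displaced_position(pos, normal, height, edge_weight, scale=0.08):
    """pos / normal: 3-tuples; height in [0, 1] from the displacement map;
    edge_weight: the vertex-painted factor, 1.0 in the interior of a
    surface and fading to 0.0 at borders. Fading avoids holes and sunken
    "feet" at edges, at the cost of flattening exactly where displacement
    would be most visible (as lamented above)."""
    d = height * scale * edge_weight
    return tuple(p + n * d for p, n in zip(pos, normal))
```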

 

Or maybe... I'm old-fashioned and you guys use different tricks now? Or maybe the truth is that parallax effects aren't used that much, and it's still about smartly placed props and artists adding some geometry love manually?

 

 

 


There is a solution for edges and parallax occlusion mapping, which is called silhouette mapping, but it's still fairly hacky: http://developer.amd.com/wordpress/media/2012/10/Dachsbacher-Tatarchuk-Prism_Parallax_Occlusion_Mapping_with_Accurate_Silhouette_Generation(SI3D07).pdf

And there's no real solution to self-shadowing. I was thinking about some sort of precalculated horizon map, but there's no real way to pre-filter that correctly: you can't mip it, and it doesn't work with anisotropic filtering.

Regardless, tessellation (or rather, displacement-map-based tessellation) is definitely the future. But not the present, because there are filtering problems, and tessellation is slow on the base consoles unless you only tessellate confirmed-visible triangles (entirely possible in a few different ways, but they'd generally have to be integrated from a ways off). The filtering problem (as you said, the sub-division tends to disappear quite quickly) is theoretically solvable with things like LEADR mapping: http://blog.selfshadow.com/publications/s2014-shading-course/dupuy/s2014_pbs_leadr_slides.pdf

But the problem is that the tessellation stage in modern GPUs is fairly fixed-function, meaning you'll have to sit there and pull your hair out a lot if you want a performant solution. All of this is why you don't really see tessellation or POM in shipping games: POM causes filtering, decal, and shadowing problems, and tessellation causes filtering and performance problems. I expect with the Pro/Scorpio release we'll see more tessellation, at least as an option for them and high-end PCs, as they've got better tessellation performance. And it's definitely what I'd stick with: no hacky solutions for silhouettes or shadowing or whatever, just nice, straightforward geo that can be treated like any other geo. Bonus: LEADR mapping results. I'd really love to start seeing this in shipping titles at some point; playing the great-looking Deus Ex: Mankind Divided, only to look out over a large body of water and see terrible scaling/filtering results, is just :wacko:

https://www.youtube.com/watch?v=LA39CnvEysI

Edited by Frenetic Pony


Oh yes, forgot to mention, but indeed, putting decals on POM walls is like trying to paint a Bob Ross on a running toddler. It works, but... weird. On a tessellated surface it should be less of a problem, at least when doing deferred decals. I agree that in the end, tessellation works more naturally with pretty much any effect that follows up on your surfaces.

Thanks for the silhouette paper. I remember it being quite old actually, but I never tried it. So, on a cloudy Sunday...

 

You state that current games generally don't use POM or tessellation that much, which is understandable for all the reasons given above. But... how do they do parallax effects right now, then? Broken stuff, brick walls, and old plank floors certainly appear "3D" in a lot of games I've played in the last 5 years or so. Is that just hand-made geometry offsets, or am I easily fooled?

Or let me ask it differently: I have a feeling my normal-mapped surfaces look flatter than those in modern games. Unless I put lights at extreme angles to make the "bump" visible, there isn't much shadowing going on, and with the lack of displacement/parallax the end result is... flattish. I'm guessing it's really just smarter, old-fashioned artwork in the end, but maybe I missed something and games are spicing up their normal-mapping effects somehow...

For brick walls, wooden plank floors, etc., I'd simply use more highly tessellated models. It looks better, and I guess it's faster too if used wisely.
(E.g. RE7 has lots of real polygon bricks.)
For indoors it really shouldn't be a problem if you have some occlusion culling. But maybe you need more LODs for the meshes, so more artist work :(

Displacement and parallax are better suited to natural stuff like rocks and terrain.

To me HW tessellation is a disappointment:
* It can't do Catmull-Clark subdivision efficiently
* We could instead use a compute shader to tessellate once and reuse the result across multiple frames
* It came many years too late
 

Or let me ask it differently: I have a feeling my normal-mapped surfaces look flatter than those in modern games. Unless I put lights at extreme angles to make the "bump" visible, there isn't much shadowing going on, and with the lack of displacement/parallax the end result is... flattish. I'm guessing it's really just smarter, old-fashioned artwork in the end, but maybe I missed something and games are spicing up their normal-mapping effects somehow...


Hmmm, that term 'flattish' is interesting.
I've been puzzling for decades over why some games look flattish, their polys somehow 'thin'.
I can't figure it out. Maybe it's too high a texture resolution on too low-poly a mesh.
I don't think it's flat lighting (actually I can't name a recent example; maybe Dragon Age: Inquisition had the issue a little bit, but it was more common in older games).

Not sure if we're talking about the same thing, but you could post some screenshots where you think displacement would be an improvement.
Personally I have the impression your gfx look very good (at least in the tiny blog screenshots :)), and your scenery is not well suited to displacement.
(I'm also one of those guys who never agrees to add high-frequency detail 'everywhere', so I'm generally against those techniques.) Edited by JoeJ


If a game like RE7 (and yes, exactly one of the titles I had in mind while writing this) indeed simply uses more man-made bricks and planks, then so be it. I just wanted to make sure I wasn't missing something smart here :)

A bit off-topic, but do programs like Blender or the like have tools to (auto-)generate this, respecting the UV coordinates of the (flat) wall surface behind it? Doing all that by hand... I could probably build a real pyramid faster. The biggest issue is that you can't easily change it afterwards: every texture change or even a UV shift would require a rebuild of that surface. So it would be something the artist does in a very final phase, when the scene is definite.

 

And maybe that's one of those things going on with "flattish"... I learned that every scene I make sucks big time until the final tweaks have been made, meaning a near-perfect texture composition, correct UVs, details added (varying from big furniture to tiny wires and decals), and an appealing light set-up. When looking closely, things may still suck, but the complete picture is OK.

 

But I still find it very hard to make a clean scene look good, that is, one without tons of rubbish, wall cracks, grease, and broken lights to mask the imperfections. Making an empty corridor with a boring light setup look realistic is darn hard; not every scene is a natural beauty that can do without make-up. Yet I have a feeling some engines actually manage to create a nice-looking scene even with minimal effort. But as said, maybe it's just because all the factors together are more complete/correct.

 

I figured PBR would be helpful, so materials should look natural in any case. That requires correct textures as well, so once in a while I download some PBR-ready textures, like the ones here:

https://www.poliigon.com/texture/48

For comparison, the white planks and bricks from that website have been applied in the attached screenshot (normal mapping on; tessellation toggled on in the second shot). When loading them into my game, it doesn't look that spectacular. Not bad, but a bit bland. Of course, the previews on that website use ultra quality, and the attached scene itself is simply empty and boring.

The third shot shows a different, lower quality texture. Same techniques, but instead of a 2K texture, this was less than 1K I believe.

 

Now in this shot, with the lamp right above the bricks, the normal mapping is pretty obvious. But the majority is lit indirectly. In general, things look more interesting (not necessarily more realistic) with high contrast and light coming from one dominant direction. But that's pretty much the opposite of multi-bounce G.I., which spreads light all over the place... Thinking about it, tweaking light contrast is probably another key to success here...

What do you store for your baked lighting?
Maybe with better directional information the red wall in shadow would look as good as the lit part.

I remember we talked about subsurface stuff a while back (ice blocks).
You could bake this with directional information too, using a second lightmap for the back side.
(In case I didn't mention it back then: I think I only got the idea later.)

Of course you can also bake glossy reflections with this.

MJP has a demo and blog posts comparing SH, SG, etc.


I have to agree the tessellated bricks look pretty good, but if directional lightmaps have a similar overall cost, I expect the win to be much higher.
May be worth some testing time...



The screenshots are great in general, maybe not perfect modeling details but really beautiful lighting - beyond most AAA to me.
Edit: I think your AO is too dark. Edited by JoeJ


Well, thank you :) I must say the results aren't consistent, though. Like I said, in most cases it takes pain and sweat to get even simple scenes to look right.

 

>> Baked lighting

Right now it's lightmap(s). And instead of baking something like 3 lightmaps for 3 different directions, I only store 1 color and 1 "dominant direction" vector. That might not be very good either, now that I think of it: if light comes from multiple directions, that "dominant vector" tends to just point forward.
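To illustrate that failure mode: assuming the "dominant direction" is baked as an intensity-weighted average (my guess at a typical bake, not necessarily the exact code here), two equally bright lights from opposite sides collapse to a vector straight along the normal, and two fully opposing lights cancel to nothing:

```python
import math

def dominant_direction(lights):
    """lights: list of (direction, intensity) pairs; direction is a unit
    3-tuple from the surface toward the light. Returns the normalized
    intensity-weighted average, or None when the sum cancels out."""
    sx = sum(d[0] * w for d, w in lights)
    sy = sum(d[1] * w for d, w in lights)
    sz = sum(d[2] * w for d, w in lights)
    length = math.sqrt(sx * sx + sy * sy + sz * sz)
    if length < 1e-6:
        return None   # opposing lights cancel: no direction survives
    return (sx / length, sy / length, sz / length)
```

E.g. equal lights at (0.6, 0, 0.8) and (-0.6, 0, 0.8) average to (0, 0, 1): all sideways information is gone, so the normal map stops responding.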

Actually there are 4 maps, as I also store sky influence & direction, plus the influence factors of 3 other lights (so you can change their colors in realtime, making semi-dynamic realtime G.I.... well, a little bit).

 

>> SSS

I can't remember exactly... you mean storing thickness or curvature in the lightmap? I implemented that, but at a per-vertex level. Most concrete walls here aren't very good candidates for SSS, hehe. And organic objects often need quite a lot of vertices to make it work.

 

But I was thinking about ditching lightmaps anyway. Every time I try them, there is trouble: UV mapping issues, leaking edges, not useful for particles & dynamic objects, terribly low resolution in some places, and so on. In the past I used probes (with light from 6 directions, like a cubemap), which also had their share of problems, but felt like an easier and more all-round solution for my needs. Maybe I should just do that. I saw something interesting here:

http://advances.realtimerendering.com/s2015/SIGGRAPH_2015_Remedy_Notes.pdf

(It talks about using partially pre-computed probes, but in a smarter/more compact way than I did in the past.)

With SSS I mean e.g. a room with walls of ice (that was your example; SSS may be the wrong term). Directional lightmaps could store the blurry scene behind the ice.

Personally I've been working on realtime lightmaps for years, and now that I'm almost done I can't wait to get to all those issues you mention.
I already spent some months on auto-segmentation and UV mapping; pretty hard stuff, especially for me because I have additional segmentation constraints.
Recently I saw that Simplygon has awesome tools for this purpose, in case you have some bucks left :)
Otherwise, maybe this helps to save work (from that beautiful puzzle game): https://github.com/Thekla/thekla_atlas


So you go from voxel tracing to lightmaps to a probe grid... Seems I'll never get to play Tower22 :(

Just joking, who cares for games when we can do graphics dev :P


And there's no real solution to self-shadowing. I was thinking about some sort of precalculated horizon map, but there's no real way to pre-filter that correctly: you can't mip it, and it doesn't work with anisotropic filtering.

 

There is actually a way to do horizon mapping well, and I describe it in Chapter 5 of Game Engine Gems 3:

https://www.amazon.com/dp/1498755658/?tag=terathon-20


 

And there's no real solution to self-shadowing. I was thinking about some sort of precalculated horizon map, but there's no real way to pre-filter that correctly: you can't mip it, and it doesn't work with anisotropic filtering.

 

There is actually a way to do horizon mapping well, and I describe it in Chapter 5 of Game Engine Gems 3:

https://www.amazon.com/dp/1498755658/?tag=terathon-20

 

 

Huh... cheap self-shadowing for POM could make it more appealing, at least in the near term. It is cheap, right?


>> JoeJ - Lightmaps

I'm the type of guy who can implement stuff to 90%. Pretty far, but the last 10% is what perfects techniques like this: better generation tools, fixing seam errors, better UV space usage, leak reduction, better performance/compression, more accurate directional information, ... Altogether the results are pretty OK, but various little yet nasty errors break the illusion. And as described above, my method for storing directional information isn't good, especially at places where the dominant vector is handed over from one light to another (or from one light to none).

 

>> voxel tracing to lighmaps to probe grid

Hehe. True, I spent quite some time on getting G.I. going. A long time ago I also tried realtime-updated lightmaps, a bit like Enlighten does, with pre-computed relations between lightmap patches (the first Tower22 G.I. system, actually). But as said, getting the last 10%... I think in my case probes produced the best results, so far. But moreover, the simple conclusion is that no method is perfect, and it takes a lot of sweat :|

I've stepped away from the ambitious desire for true "realtime G.I.". It just has to look good... but then again, I do have quite a lot of situations where lights can be switched on/off locally, as well as a day/night cycle, so a 100% static bake is not an option either.

 

>> blurry scene behind the ice

That's pretty smart, got to remember this. I wasn't planning to throw the lightmaps away, though I might move the G.I. part back to probes.

The good news is that I actually spent last year on T22 GAMEPLAY, no graphics (or well, just a little bit). So instead of trying to make things look good, I tried to keep the player from falling through floors, did a bit of A.I. behavior trees & scripting, level design, and so on. And now the game (demo) is playable. But now we're back at the point where we need to make things look good again, which means I'll need to find some motivated artists, which is almost impossible :( Generating good screenshots sometimes helps to lure them, though... So we're back on graphics, yes.

 

>> Frenetic Pony - lighting needs spec information

Actually, there are specular probes (cubemaps). I'm splatting them as deferred cubes or spheres, as described here:

https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/

But I'm interested in trying the method described in the other paper I posted earlier, where each (G.I.) probe refers to the most suitable specular probe. The problem with the deferred approach is that I need to define the volume of each probe manually (radius / depth / height / ...) to make them fit in the relatively tight spaces I have. That means work for the artist, and also I sometimes overlap into the neighboring room, or forget some spots, meaning they don't receive any specular (well, they do via screen-space RLR if available).
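For readers who haven't seen Lagarde's post: the heart of the parallax correction is small. You intersect the reflection ray with the probe's hand-authored volume and look the cubemap up along the direction from the probe center to the hit point. A sketch assuming an axis-aligned box volume with the shaded point inside it (names are my own):

```python
def parallax_corrected_dir(pos, refl, box_min, box_max, probe_center):
    """Intersect the ray pos + t * refl with the probe's AABB (from the
    inside) and return the cubemap lookup direction: hit point minus
    probe center. All arguments are 3-tuples; refl need not be unit."""
    t_hit = float('inf')
    for i in range(3):                    # nearest slab exit per axis
        if refl[i] > 0.0:
            t = (box_max[i] - pos[i]) / refl[i]
        elif refl[i] < 0.0:
            t = (box_min[i] - pos[i]) / refl[i]
        else:
            continue                      # ray parallel to this slab pair
        t_hit = min(t_hit, t)
    hit = tuple(p + r * t_hit for p, r in zip(pos, refl))
    return tuple(h - c for h, c in zip(hit, probe_center))
```

The authoring pain described above comes from choosing `box_min` / `box_max` per room so the proxy volume roughly matches the real walls.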

 

>> Eric Lengyel - Horizon Mapping

From the few bits I understand, a (pre-computed) horizon map contains the angle towards the "horizon". Not sure what that is exactly, but is that enough information to test whether a pixel is occluded for any given light? In my situation there can be relatively many (small, local) lights, and a deferred pipeline is used (thus splatting light volumes onto the screen, reading G-Buffers to fetch normals). Typically the parallax offsetting took place before that, while filling the G-Buffers.

 

 

Thank you all!


Huh... cheap self-shadowing for POM could make it more appealing, at least in the near term. It is cheap, right?

 

Very cheap. Only about two dozen scalar instructions in the pixel shader.


>> Eric Lengyel - Horizon Mapping

From the few bits I understand, a (pre-computed) horizon map contains the angle towards the "horizon". Not sure what that is exactly, but is that enough information to test whether a pixel is occluded for any given light?

 

Yes, that's exactly what it does. (And the horizon map contains the sine of the angle to the horizon, which is the highest feature in the bump map in a local neighborhood around each texel.)
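So per light the test is essentially one comparison, which explains the low cost. A rough sketch under my own assumptions (the chapter's actual storage and filtering certainly differ, e.g. in how the azimuthal directions are packed and interpolated):

```python
import math

def horizon_shadow(horizon_sins, light_dir_ts):
    """horizon_sins: sines of the horizon angle sampled in N azimuthal
    directions around the texel (precomputed from the height map).
    light_dir_ts: unit tangent-space direction toward the light, z up.
    The texel is shadowed when the light sits below the horizon, i.e.
    when sin(light elevation) = light_z is under the stored sine."""
    n = len(horizon_sins)
    azimuth = math.atan2(light_dir_ts[1], light_dir_ts[0])
    idx = int(round(azimuth / (2.0 * math.pi) * n)) % n
    return light_dir_ts[2] < horizon_sins[idx]
```

Whether this slots into a deferred pipeline still depends on having the tangent frame and horizon samples available in the lighting pass, which is the open question raised above.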

