bootstrap

real 3D-geometry vs fake (normal/relief/parallax/POM=parallax-occlusion-mapping/etc)


This post was originally a reply to a beginner-forum question asking whether 3D geometry or texture/normal/parallax mapping consumes more graphics-engine horsepower. I post it here because I suspect that question is not only a beginner question, but worthy of serious thought and discussion by serious experts. Time for some brainstorming (or teeth-kicking)!

Specifically, I refer to a thought process I went through when planning, designing, and understanding the consequences of POM (parallax occlusion mapping) or any other advanced parallax-mapping technique. I will describe this thought process below, but the bottom line is this: it looks to me like it is FASTER to create massively detailed geometry than to "fake it" with techniques like POM/relief-mapping/etc - but nobody seems to have noticed, probably because the move from the oldie/moldie, slow, little-to-no-video-RAM days to today's super-fast, lots-of-video-RAM, super-cool-shaders days was a gradual, continuous evolution (albeit in a fast-moving industry).

First, consider the POM technique. To make the thought process concrete, imagine your game applies POM to display the surface of a gigantic spacecraft - or better yet, one of the Death Stars from the Star Wars movies (or a Borg cube). As players' TIE fighters fly near the Death Star, POM is a natural approach for displaying the wealth of 3D relief detail on its outside surface - as opposed to creating actual 3D geometry for the entire moon-sized gizmo! To stress the point slightly, let's say you are flying over the surface of the Death Star and looking out the front window (or either side window) of your fighter craft. What you see is a huge expanse of Death-Star surface stretching into the distance. Now, to put this into POM terms, the camera-view ray is the line from your eyes (in the fighter craft) to each point on the Death Star that will be displayed as a pixel on the screen.
Well, as you can easily see, most (if not all) of these camera-view rays intersect the nominal top surface of the Death Star at a low angle --- say a 20-degree angle at the nearby points you can see, and a 1- to 5-degree angle at the farther-away points. Now we enter the world of the pixel/fragment shader that displays every Death-Star pixel with its POM shader code. The POM shader begins where the camera-view ray intersects the nominal surface of the Death Star, which must be at the level of the tip of the highest tower sticking up from the surface. The lowest point is the bottom of the deepest groove/passageway (like the trench Luke flew down to blow up the Death Star). So the POM code takes the low-angle (nearly horizontal) camera-view ray, checks it against the Death-Star height map, and finds the ray far above the surface at that location. It therefore adds zero to two pixels to the x and y coordinates (in tangent space) and subtracts something like 1 to 3 from the altitude of the camera-view ray, depending on how shallowly or steeply the ray approaches the surface. This iteration repeats - often DOZENS to HUNDREDS of times when the camera-view-ray angle is this low (versus only 1~2 times looking straight down, which will virtually never happen in this scenario, except when you are banking steeply during a sharp turn and looking out your side window). At each iteration the code reads the height map (from "texture-map" video RAM) to find the height of the Death-Star surface at that location. In summary, the POM shader reads video RAM dozens to hundreds of times before the camera-view ray finally hits something on the surface. The best case, which will be rare, is when the camera-view ray runs into the top of one of the tallest towers projecting up from the surface.
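The marching loop just described is easy to sketch outside any shader. Here is a toy CPU version in Python; the flat height field, step length, and view angles are invented for illustration, not taken from any real POM implementation:

```python
import math

def pom_march(height_at, view_angle_deg, step_len=1.0, max_steps=1000):
    """March a view ray down from the top of the height volume (h = 1.0)
    until it dips below the surface; each step costs one height-map read."""
    angle = math.radians(view_angle_deg)
    dx = math.cos(angle) * step_len    # horizontal advance per step
    dh = math.sin(angle) * step_len    # vertical drop per step
    x, h = 0.0, 1.0
    for steps in range(1, max_steps + 1):
        x += dx                        # move along the surface...
        h -= dh                        # ...and down toward it
        if h <= height_at(x):          # the "height-map read"
            return steps
    return max_steps

flat_low = lambda x: 0.1               # mostly-flat, low-lying surface

grazing = pom_march(flat_low, view_angle_deg=1)    # flying over the surface
top_down = pom_march(flat_low, view_angle_deg=80)  # looking nearly straight down
```

With these made-up numbers the grazing ray needs around fifty height-map reads before it hits the surface, while the near-vertical ray needs only one or two - exactly the cost asymmetry described above.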
The worst case, which will be less rare, is when the camera-view ray passes down the length of one of those grooves/passageways, or where large sections of the surface are flattish and low-lying with occasional upward projections. (Which means POM while flying over Monument Valley would be absolutely horrific!)

Okay, now consider a Death Star created with super-fine-detail 3D geometry. In fact, let's assume every polygon is only 2~4 pixels across (where 2 pixels across means you have one vertex for every single screen pixel!). So what happens in your game engine now - with this oldest and moldiest of techniques == utterly vanilla, straightforward 3D geometry? Well, to begin with, we probably start out with only half as many vertices as POM has pixels to trace. But we instantly lose this entire advantage, because we have never-displayed vertices on the backside of every geometric feature on the Death Star, as well as vertices on the vertical (and even overhanging) portions of every feature. So we must assume we gain nothing here --- we have at least as many vertices in the geometry as the normal/height maps have pixels (and maybe even a factor of 2+ more!). Ouch. But let us continue a little further and not give up before we fully consider. How much work must the 3D engine do per displayed pixel? The POM scheme had to read dozens of normal/height-map entries per displayed pixel. So far, our scheme has to process 2~4 vertices to account for backsides and all those extra features on the sides of the real 3D geometry. Well, first of all, we know 3D engines weed out backfacing vertices and z-buffer-culled fragments early in the pipeline - they never reach the pixel shader. This is important, but not a huge win --- yet. The huge win we are waiting for is *obvious*, in retrospect.
For every displayed (and backfacing and z-buffer-culled) pixel, the 3D engine reads only 2~5 vertices, and fully processes only 1~3 vertices (average == 2) through the pixel shader. So while POM performs dozens of cache-missing video-RAM references per displayed pixel, oldie-moldie-ancient-technology 3D geometry reads only TWO vertex structures per displayed pixel, and these are ALREADY PREFETCHED AND IN CACHE because they are sequential fetches from the VBO/vertex buffer. And our oldie-and-moldie technique doesn't even NEED texture maps (for color) or normal maps (for normals and heights), because all of this is part of every fetched vertex! So which is faster? The answer seems very clear: the oldie-and-moldie 3D-geometry method is!!! How can this be? And why has nobody noticed? When you inspect the gradual progression of developed techniques, the answer seems pretty evident. When these fancy/tricky "fakes" were being developed, video cards had something like 64KB to 256KB of video RAM, not the 512MB to 1024MB of today --- a difference of *** many thousand times ***. So the tradeoffs were very different. But how about now, with fresh analysis? Well, the answer does not appear obvious, does it? Or if it does, the obvious seems to favor the oldie-moldie, simplistic 3D-geometry approach. Or does it? This is where everyone jumps in and kicks my teeth in! :-) Before you do so --- which I know you will do with glee and joy --- consider a couple of side issues of this analysis. Namely: the sides of every [near-vertical] feature in POM cannot have any visible detail - or at best, extremely low-res vertical streaks. Why? Because the entire side of every vertical feature (a skyscraper, for instance) is represented by zero (or very few) pixels. There is no way to place windows and bricks and other features on the sides of buildings, Monument-Valley rock towers, or any other supposedly real physical object in POM/parallax mapping! But guess what?
Every pixel in the full-resolution 3D-geometry approach can be displayed in full resolution. Want windows, archways, anything else on the sides of your world? No problem - we've already taken the efficiency hit for all this full-bore detail, so take advantage of it! Furthermore, just try to fly through an open (or closed) window or tunnel in any POM/parallax pseudo-fake geometry! Ha ha ha! No way, Jose --- even if you could display it, which we saw above that you can't! In real 3D-geometry world: "no problem"! And 3D-geometry world can support overhangs, holes, tunnels, anything whatsoever --- none of which POM/parallax can support. And guess what? We still have unlimited opportunities to *selectively* cheat and fake things in 3D-geometry land --- wherever doing so does not destroy the reality of the world. For example, if we do have expanses of moderately flattish walls/streets/etc., we can perfectly well omit vertices in those areas and simply place a conventional texture map and normal map over each such expanse. Given the many places we can save in these ways, where do we ever actually come out ***ahead*** with POM/parallax? The point of this post is to ask all of you experts out there to answer the original question given the context I have just provided. Admittedly I have taken the question further than the original post (into POM/parallax land), but similar questions exist even in polygon-vs-texture land. Today, given every detail of the state of modern 3D engines and shaders, are we getting closer to the point where we are better off reverting more and more to simple, streamlined, straightforward, hardware-accelerated conventional 3D-geometry worlds? I am inclined to think so, and my limited tests so far tend to agree.

I agree. I haven't done many extensive tests, just a few. In these tests a well-tessellated model with simple per-pixel lighting rendered several times faster than relief mapping. And I guess the gap widens further with silhouette-correction techniques, self-shadow calculations and such.

Advantages of POM and alike:
* it adapts its detail level easily through simple mipmapping
* texture accesses allow repeating surface detail
* easy to implement: everything happens inside a pixel shader (in contrast to real geometry where more sophisticated scene structures might be necessary)

Disadvantages:
* slow
* rich of artifacts

You surely want to decide for yourself but I won't employ any technique beyond Parallax Mapping in a game. It's not worth all the GPU power.

Bye, Thomas

I think you totally missed the point.

Every technique has its own proper usage and application.

I agree that what you described is clearly a very bad place to use POM, and I don't think anyone is really trying to use POM in such a scenario.

However, if you were trying to simulate a rough brick wall or the surface of a tree, those would be very nice uses of POM ... unless, of course, you were making a game where you are an ant running up and down the tree. Then you would be better off sticking with real geometry.

Remember, for each technique: use when appropriate.


But otherwise, you raise a valid point.

Here are my two cents on the replies so far. The original post admits the presented example was chosen to "stress the point" slightly, so POM comes out looking really horrible in many ways.

However, is it actually clear that POM and POM-like techniques are ever substantially better on fully modern, up-to-date 3D graphics hardware? Because if it is difficult to find cases where POM is clearly, significantly "better" (faster/cheaper/whatever), then the point of the original post becomes: "due to advances in the speed, memory, and shaders of next-generation 3D graphics cards, 3D geometry is rarely significantly slower or inferior to the so-called faked-geometry techniques".

So, is it truly clear that POM is worth doing on today's best graphics cards - which will be mainstream in 1~2 years and old hat in 3? Here are the cases where POM seems like it *might be* better (to some degree).

Where almost all of the geometry is near the highest level - as in many cobblestone walls or sidewalks. In these cases, the camera-view ray will usually intersect the geometry within two or three iterations - even at the slow pace of 2~3 pixels per step. In a case like this, you still benefit from POM over plain normal mapping, because the deep grooves between cobblestones exhibit substantial parallax that normal mapping cannot represent.

I had a couple of other situations in mind, but now I realize full geometry works better in those cases. Still, the above case does look to me like a possible win for POM --- as long as none of its inherent defects matter. Which means the camera must not fly down into (or nearly into) those grooves between the cobblestones, because that becomes more like the originally described case where 3D geometry wins. And the camera should never get down close to the cobblestones, because then the "streaky" character of the near-vertical grooves between the cobblestones becomes an obvious artifact of the fakery.

So, I guess we must carefully examine this case that seems to be a win for POM given the context and limitations I mention above. If we cannot prove this truly is a win for POM (or relief/parallax/similar-mapping), then the original post seems pretty much entirely valid.

The fact is, quite surprisingly, the old-fashioned way *does* look a lot better than one might expect, I think, and probably for exactly the reasons he suggested.

In thinking about his idea, I realized it seems entirely practical to build a very convenient LOD scheme on the geometry approach, using what is pretty much the conventional efficient approach today with OpenGL IBO/VBO pairs (or DX index-buffer/vertex-buffer pairs). What you do is this: you keep all the vertices in one VBO/vertex buffer, plus one IBO/index buffer for every LOD you want to support. Each IBO references just those vertices in the single VBO that it needs to construct the object at the desired resolution. Presto: LOD with only one set of vertices. This is surely not original, but hey, it works. So, where are we?
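The one-VBO-plus-per-LOD-IBO idea can be shown with plain data, no GL calls needed. In this Python sketch, the 1D "ridge" of nine vertices and the hand-picked index lists are hypothetical stand-ins for real mesh data:

```python
# One shared vertex pool (the "VBO") plus one index list per LOD (the "IBOs").
vertices = [(x, (x % 2) * 0.5) for x in range(9)]   # (x, height) pairs

index_buffers = {
    0: list(range(9)),        # LOD 0: every vertex, full detail
    1: [0, 2, 4, 6, 8],       # LOD 1: every other vertex
    2: [0, 4, 8],             # LOD 2: coarse silhouette only
}

def draw(lod):
    """Resolve one LOD's indices against the shared vertex pool, the way
    an indexed draw call resolves a bound index buffer against a VBO."""
    return [vertices[i] for i in index_buffers[lod]]

full = draw(0)     # 9 vertices: the bumpy ridge
coarse = draw(2)   # 3 vertices: flattened silhouette
```

Note how every LOD references the same vertex storage: switching LODs only switches which index buffer is bound, so no vertex data is duplicated.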

The original post basically comes down to "raytracing is slower than scanline rendering". Which is, uh, true.

The thing to keep in mind about properly applied parallax mapping of any sort is that casting more than a few rays should be very much the exception. It's meant to be used where surface complexity is extremely low. There's a ton of these situations: Cobblestones are mentioned constantly, but there's also scales, wooden planks, awnings, speaker grilles, telephone dials... I could come up with good examples all day. So flying through a forest of greebles isn't an appropriate use? Fine. But other things are.

Yes, maybe, probably. But unless somebody does a serious and careful analysis, we are just repeating the conventional wisdom that is being reasonably questioned here. Any honest person *must* admit it is easier to repeat the conventional wisdom and call it "obviously" correct than to carefully and methodically *prove* it - or make an actual case for it. I do not think ANY of us has done that yet, though the cobblestones, scales, and similar cases are clearly the best place to start. So far we are just hand-waving and repeating conventional wisdom. Admittedly, that is oh-so-much easier, but are we really certain that cobblestones and scales are *substantially* worse to render as 3D geometry? I ask it this way because we are ALL certain that 3D geometry is the best way to display an awful lot of things. If we displayed everything that way, we could put all our effort into optimizing and polishing that one path --- and, guess what, far less switching back and forth between so many different shading techniques. Which is much better from the proven "fewer batches is better" point of view.

Quote:
Original post by technohermit
Admittedly, that is oh-so-much easier, but are we really certain that cobblestones and scales are *substantially* worse to render as 3D geometry?


It depends on a lot of things. You're looking for a clear-cut yes-or-no answer where none exists. In some situations it might be better to just throw millions of polygons at the problem; in others it might be better to resort to pixel-shader effects like parallax mapping.

Keep in mind, though, that video cards often suck at geometry compared to pushing lots of pixels. For example, the 7800 series of cards has, what, 6 or 8 vertex-shader units compared to 24 pixel-shader units? Heck, even though the 8800 balances the vertex and pixel workload, it can still "only" do half a billion triangles a second (because triangle setup is limited to one triangle per clock), whereas it would be able to compute the ray-tracing requirements of a complex parallax-mapping shader over a hundred times faster.
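The "half a billion" figure is easy to sanity-check. Assuming a core clock of roughly 575 MHz (my assumed 8800 GTX number, not from the post) and one triangle of setup per clock:

```python
core_clock_hz = 575e6       # assumed 8800 GTX core clock, ~575 MHz
tris_per_clock = 1          # triangle setup limited to one triangle per clock
setup_limit = core_clock_hz * tris_per_clock
# roughly 0.575 billion triangles per second -- "half a billion", as stated
```

Under those assumptions the setup-limited peak lands right around the half-billion mark, regardless of how many shader units the card has.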

So, are we certain that 'bumpy' things are more expensive to render as geometry? No. Will you find, in most reasonable situations, that having a shader take care of it is faster? Yeah, probably. Will that always be the case? No: find a case, try both ways, and decide from that which is faster.

Part of the "problem" is the process of creating and storing 3D geometry vs. a heightmap for the parallax shader. Textures are very convenient in comparison to 3D models. If the entire world were to be modeled in sub-inch density polygon mesh it'd obviously take a lot more time to create than just using tiled textures on less dense geometry.

So far no game that I'm aware of has implemented a system that turns arbitrary polygons plus tiled heightmaps into a mesh of micro-polygons, which could actually work...
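A toy version of that idea is straightforward to sketch: tessellate a flat quad into a grid of micro-vertices and displace each one by a tiled height map. The 4x4 height tile and grid size below are arbitrary illustration values, not anything from a shipped engine:

```python
# A tiny repeating height tile standing in for a tiled heightmap texture.
tile = [[0.0, 0.1, 0.2, 0.1],
        [0.1, 0.3, 0.4, 0.2],
        [0.2, 0.4, 0.3, 0.1],
        [0.1, 0.2, 0.1, 0.0]]

def tessellate(width, depth):
    """Tessellate a width x depth quad into micro-vertices, displacing each
    vertex upward by the tiled height map (the tile repeats every 4 units)."""
    return [(x, tile[z % 4][x % 4], z)   # (x, height, z) per micro-vertex
            for z in range(depth + 1)
            for x in range(width + 1)]

verts = tessellate(8, 8)   # a 9 x 9 grid of displaced micro-vertices
```

The same tile covers any amount of geometry, so the authoring convenience of tiled textures carries over even though the output is real displaced polygons.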

Some games use "lego pieces", i.e. the world is made up of a large number of 3D models that fit together at the edges, but this tends to make the game world look artificial (e.g. made of cubical tiles). The dungeons in Oblivion are a good example of this.

Modern parallax-mapping methods, which ray-trace through a 3D texture, are quite slow and give us only an illusion of volume. And they are always limited to some distance, which depends on the strength of the effect. On the other hand, POM always gives us the best detail.
Briefly: it's not a problem. Some combination of geometry and parallax must be used anyway :)

Quote:
Original post by Fingers_
Part of the "problem" is the process of creating and storing 3D geometry vs. a heightmap for the parallax shader. Textures are very convenient in comparison to 3D models. If the entire world were to be modeled in sub-inch density polygon mesh it'd obviously take a lot more time to create than just using tiled textures on less dense geometry.

One would think so, but it really does not take that much longer; most artists actually build an ultra-high-res version first (for normal maps) and from that create a low-res one. This low-res version takes about the same time to make whether it is 15,000 or 500 polygons.

Ultimately it is all a question of fillrate; most graphics applications are, and should be, fillrate limited, and as long as that is the case you can draw as many polygons as you want.
Relief mapping and techniques like it eat up a large chunk of that fillrate.
The irony is that the fillrate they would consume can instead let you replace most of that detail with real geometry without becoming transform limited.
So anything beyond the simplest parallax-mapping method is really just a waste of time (for now, that is).

Quote:
Original post by lc_overlord
Quote:
Original post by Fingers_
Part of the "problem" is the process of creating and storing 3D geometry vs. a heightmap for the parallax shader. Textures are very convenient in comparison to 3D models. If the entire world were to be modeled in sub-inch density polygon mesh it'd obviously take a lot more time to create than just using tiled textures on less dense geometry.

One would think so, but it really does not take that much longer; most artists actually build an ultra-high-res version first (for normal maps) and from that create a low-res one. This low-res version takes about the same time to make whether it is 15,000 or 500 polygons.


Yeah, especially with "sculpting"-type modeling becoming increasingly popular (ZBrush, Mudbox, the newest versions of Blender and XSI as well, though I'm not 100% sure about the last one), high-resolution meshes have become *very* easy to make. Heck, even *I* can make 'em, and I'm no (visual) artist by a long shot.

Yes, but the same is true of non-sculpting software like LightWave; it's pretty easy for a professional artist to create millions of polygons by hand in just a day or two.
And when it comes to the low-res approximation mesh, the more polygons you use the easier it is to get a good fit, so it takes less time to make than a lower-res mesh would.

It's not as easy to constantly work with & re-use high detail meshes. Most of these techniques are general purpose -- you can use the textures & shaders on arbitrary geometry and it will 'just work'. Workflow is becoming just as important as performance.

To give you an example: If I'm building a level and I want a bumpy floor, it's far, far easier to just apply a texture to the floor compared to manually placing multiple high resolution tiling meshes. The bigger the mesh the less flexible it becomes. A specialised floor mesh for one room won't necessarily work for another, so you often have to break it down into multiple 'snappable' tiles to keep the re-usability high, anyway (which makes it more fiddly & less unique). It can be the difference between manipulating one polygon versus scores of high poly meshes. If the room changes (say the gameplay requires tweaking) then the mesh(es) have to change with it and that's more work & hassle. If your bumpy floor is generated using a technique like POM, it'll likely be more flexible to work with.

IMO you use what works for a given situation. Sometimes there's no replacement for the real thing, whereas other times it's far more convenient to use other techniques.

Quote:
Original post by Defrag
To give you an example: If I'm building a level and I want a bumpy floor, it's far, far easier to just apply a texture to the floor compared to manually placing multiple high resolution tiling meshes.


Hmm, I don't think the discussion of how easy it is to create a texture vs. a highly detailed poly mesh is too relevant. If you want to tessellate some triangles based on a texture map, there's plenty of free software to do it.

Regarding the poster's original question, "is it better to use actual vertex data vs image-space techniques", of course the answer is "it depends" (as others have already mentioned).

I was going to write up some comments here, but I think the DX 10.1 slides from GDC 2007 http://msdn2.microsoft.com/en-us/xna/aa937787.aspx explain what I was going to say even better. The talk about on-demand tessellation and subdivision, done on either the GPU or the CPU, is really the direction I think graphics is heading (computations split between CPU and GPU, now that CPUs are becoming more parallel and GPUs more general).
