Baking high-poly to low-poly


Hi guys.

I'm about to implement some LOD improvements for art assets and I need a few pointers. Artists will produce LOD0 (high-poly) assets (typically architecture, houses, etc.) and I need to produce LOD1 (low-poly). The low-poly mesh will also be created by the artists, but without materials (and without textures!), only with unwrap texcoords. The task is to "project", "capture" or "bake" the appearance of the high-poly mesh onto the low-poly mesh. I'm not talking about normal maps or similar, but everything (diffuse, glossiness, metalness, emissivity, etc.; the exact set isn't important). High-poly models feature complex shaders, possibly multi-pass, whereas the low-poly will feature only a simplified material/shader. The key is to bake it so it looks close enough at a distance (think houses 300 m away, with some desaturation due to fog, etc.).

I know that graphics packages (ZBrush, Maya, anything?) can do this somehow, but I need it automated as part of the production pipeline. I can think of a naive z-buffer-based projection algorithm and a ray-casting-based algorithm; both should produce the same results and be reasonably slow :)
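To make the ray-casting idea concrete, here's roughly what I have in mind; a hand-wavy Python sketch, where texel_position_and_normal() and shade_highpoly() are hypothetical stand-ins for the engine side of things, not real code:

```python
import numpy as np

# Sketch of the ray-cast bake: for every texel of the low-poly unwrap,
# find the surface point/normal it maps to, cast a ray along the normal
# (both ways) against the high-poly mesh, and copy the shaded result of
# the nearest hit. texel_position_and_normal() and shade_highpoly() are
# hypothetical stand-ins for engine code.

def ray_triangle(o, d, v0, v1, v2, eps=1e-7):
    """Moller-Trumbore ray/triangle test; returns distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None                          # ray parallel to triangle
    inv = 1.0 / det
    tv = o - v0
    u = tv.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(tv, e1)
    v = d.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def bake_texel(x, y, high_tris, max_dist=1.0):
    pos, nrm = texel_position_and_normal(x, y)   # low-poly surface point
    best, best_t = None, max_dist
    for tri in high_tris:
        for sign in (1.0, -1.0):                 # search outward and inward
            t = ray_triangle(pos, nrm * sign, *tri)
            if t is not None and t < best_t:
                best, best_t = (tri, sign), t
    # Evaluate the high-poly material at the hit (or leave the texel empty)
    return shade_highpoly(best) if best else None
```

The z-buffer variant would replace the per-texel triangle loop with rasterised depth passes, but per texel it should land on the same answer.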

Recap:

Have: High-poly complex textured house + colorless low-poly mesh

Want: "Snapshot" of that house mapped on low-poly approximation.

Has anyone implemented anything like this? Any papers perhaps? :)

Generally speaking, if you are trying to put this in the production pipeline, there is still no reason to do it yourself. All the professional-level packages can be called from the command line and told to do the processing for you. Maya is probably among the best for this since you can link against its libraries and avoid the front-end GUI entirely, where most others will pop up the GUI just to execute the passed-in script commands. Either way, they tend to be quite usable.
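For example, Maya ships with mayapy, a GUI-less standalone interpreter, so the pipeline step can be as simple as a subprocess call. A sketch, where bake_lod.py and all of its arguments are a hypothetical script of your own, not something Maya provides:

```python
import subprocess

# Pipeline-side sketch: run a (hypothetical) Maya Python API script through
# mayapy, Maya's GUI-less standalone interpreter. Paths and flags here are
# illustrative only; bake_lod.py is your own transfer/bake script.
subprocess.run(
    ["mayapy", "bake_lod.py",
     "--high", "house_lod0.mb",
     "--low",  "house_lod1.mb",
     "--out",  "house_lod1_diffuse.tga"],
    check=True,  # fail the build step if the bake fails
)
```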

There are some reasons to use the packages directly beyond just avoiding doing the work yourself. First, if your artists need to double-check the results, they can do so with the package's own tools, and the package should produce the same output. Second: bugs, bugs and more bugs. Implementing this yourself is not easy; very interesting, yes, but the potential for bugs is high. Third, I'd rather work on the game than on such things, so avoiding the time sink is important.

Now, with all that said, you should start with simple Google searches for multi-resolution meshes, triangle-error-based reduction, and other related terms; that gives you the lower-poly mesh. Finding the remapped details is generally done via a raytracing system, last I looked at this. I don't remember what the overall two-step process is usually called; I do remember one of the first papers was posted on Microsoft Research several years back, but I couldn't find it in a quick search.

Hope this gives you at least a starting point.

Is there any reason why imposters won’t work?

L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

As AllEight said above, you may be able to do this using your art package *and* also call it from an automated pipeline.

So artists are giving you LOD0 and LOD1, both UV'ed? If so, and if this UV is identical across both meshes, and is unique (no two triangles overlapping in texture-space), then you can quite easily do a few baking/transfer tasks yourself.
Simply render your high-res mesh with a vertex shader that outputs uv*2-1 as the position, instead of outputting the position. Then in the pixel shader, output whatever function you want to bake, such as the diffuse colour.
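If it helps, here's the same trick as an offline CPU sketch rather than a GPU pass: rasterising each triangle directly in UV space is exactly what the uv*2-1 vertex shader achieves. shade() below is a hypothetical stand-in for whatever "pixel shader" you want to bake:

```python
import numpy as np

def cross2(a, b):                               # z of the 2D cross product
    return a[0] * b[1] - a[1] * b[0]

# CPU analogue of the uv*2-1 trick: rasterise each triangle of the mesh in
# its own UV space and evaluate the "pixel shader" per texel. shade() is a
# hypothetical stand-in for the function being baked (e.g. diffuse colour).
def bake(tris, shade, size=512):
    img = np.zeros((size, size, 3), np.float32)
    hit = np.zeros((size, size), bool)          # texels that got a sample
    for uv0, uv1, uv2, attrs in tris:           # attrs = data to interpolate
        area = cross2(uv1 - uv0, uv2 - uv0)     # signed area * 2
        if abs(area) < 1e-12:
            continue                            # degenerate in UV space
        lo = np.floor(np.minimum(np.minimum(uv0, uv1), uv2) * size).astype(int)
        hi = np.ceil(np.maximum(np.maximum(uv0, uv1), uv2) * size).astype(int)
        for y in range(max(lo[1], 0), min(hi[1], size)):
            for x in range(max(lo[0], 0), min(hi[0], size)):
                p = (np.array([x, y]) + 0.5) / size
                # Barycentric weights of the texel centre
                w1 = cross2(p - uv0, uv2 - uv0) / area
                w2 = cross2(uv1 - uv0, p - uv0) / area
                w0 = 1.0 - w1 - w2
                if min(w0, w1, w2) < 0.0:
                    continue                    # centre outside the triangle
                img[y, x] = shade(w0, w1, w2, attrs)
                hit[y, x] = True
    return img, hit
```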
Baking normals will be more complex if the tangents are different between both meshes - you'd have to bake object/world-space normals from the high mesh, then render the low mesh with a PS that reads the WS normals and converts to tangent-space.
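The conversion itself is just three dot products against the low-poly tangent frame; something like:

```python
import numpy as np

# Re-express a baked world-space normal in the low-poly mesh's tangent
# space. t, b, n are that mesh's (unit-length) tangent, bitangent and
# normal at the texel being processed.
def world_to_tangent(n_ws, t, b, n):
    return np.array([n_ws.dot(t), n_ws.dot(b), n_ws.dot(n)])
```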
After you've done any of these bakes, you need to post-process the results with a "dilation" filter, which leaves pixels that were written to as they are, but fills in non-drawn-to pixels with neighboring values. Basically, it's like a blur filter, but only affecting pixels that didn't receive a value during baking. This step is especially important if you use bilinear filtering or mipmapping.
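A sketch of one such dilation pass (run it repeatedly, enough times to cover your deepest mip level); hit is the mask of texels written during the bake, matching the bake sketch above:

```python
import numpy as np

# One dilation pass: written texels are kept as-is; each unwritten texel is
# filled with the average of its written 8-neighbours. Iterate until every
# texel is covered (or for a fixed number of passes).
def dilate(img, hit):
    out, filled = img.copy(), hit.copy()
    h, w = hit.shape
    for y in range(h):
        for x in range(w):
            if hit[y, x]:
                continue                        # already has baked data
            acc, count = np.zeros(3), 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and hit[yy, xx]:
                        acc += img[yy, xx]
                        count += 1
            if count:
                out[y, x] = acc / count
                filled[y, x] = True
    return out, filled
```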

I've done this before, but only because the bakes depended on real-time parameters, like decals, user-chosen material colours, wrinkling normals, etc...

Recap:

Have: High-poly complex textured house + colorless low-poly mesh

Want: "Snapshot" of that house mapped on low-poly approximation.

Many packages (including Blender, so it doesn't have to be expensive) can bake any set of shaders you want. But don't you want the baked result to match what you are rendering in real time in your engine, with its shaders? Also, when you bake normal maps and specularity, you need to match camera and object position and orientation, scene lighting, etc. Both of these suggest you want to render the bakes in your engine, with the objects in place inside the scene.

It makes some sense to have this kind of static 3D "imposter", but you can't really bake maps like normals and specularity into diffuse maps and expect good results if your scene is dynamic relative to them. It's also hard to say whether this is a task best done by the programmer, the level designer or the prop modeler.

As a not-really-programmer, I can only recommend the standard (boring :)) approaches: either render 2D sprite "imposters" if you want anything dynamic about them (a programming task), or build the LOD1 models with a diffuse map and use only a diffuse shader with baked or real-time lighting on them (a modeling task).

Even if you render the 2D textures in real time and wrap them around your model via UVs, the result is going to look pretty much the same as a 2D sprite for far-away objects. You might not need to update the render as often as your position changes relative to the object, but on the other hand you're paying for more polygons.

So artists are giving you LOD0 and LOD1, both UV'ed? If so, and if this UV is identical across both meshes, and is unique (no two triangles overlapping in texture-space), then you can quite easily do a few baking/transfer tasks yourself.

What you describe is very elaborate and probably something pcmaster is looking for. It might not pose a serious problem, but I don't think the modelers have matched the UVs between the LODs, because of the notable difference in topology. As soon as you leave some bevels, extrusions or crevices out, the unwrapping, and the effective use of UV map space, is going to change drastically. They have likely just remodeled the low-poly versions, much like a collision mesh, done a basic unwrap, and left it at that, unless something else was specifically agreed on.

Thanks for your answers. An imposter-like approach will be the best thing: render the high-poly geometry with the engine from six directions and planar-map it onto the low-poly geo.
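For the planar-mapping half, picking which of the six captures each low-poly triangle samples from can be as simple as taking the dominant axis of its face normal; a sketch (the six engine renders themselves are a separate step):

```python
import numpy as np

# Pick which of the six axis-aligned captures a low-poly triangle should
# sample from, using the dominant component of its face normal.
def best_capture(v0, v1, v2):
    n = np.cross(v1 - v0, v2 - v0)          # face normal (unnormalised)
    axis = int(np.argmax(np.abs(n)))        # 0 = X, 1 = Y, 2 = Z
    sign = '+' if n[axis] >= 0.0 else '-'
    return sign + 'XYZ'[axis]               # e.g. '+Y' -> the top-down view
```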

We won't have any normal maps on LOD2 buildings. Unfortunately, our situation is that the modellers DO NOT have matching, and definitely not non-overlapping, UVs anywhere, and they won't have them. No re-modelling is going to happen, and there's no budget either :) I'm not getting any more or any better data. Cool, isn't it? :)

We simply cannot use any package, because of the million tweaks we add to the models later in our editor, just as Hodgman says; my task is exactly as stated and I see you understood it well. Those of you who work for bigger studios know that sometimes there's no way of using the obvious solution and you have to program it yourself, right? :)

I didn't expect such long and elaborate answers in such a short time, so thanks everyone. This forum has been awesome during all these years! :)


We simply cannot use any package, because of the million tweaks we add to the models later in our editor
You can always export the data in a format that the package understands, have it do some more tweaking, and export back into a format that your engine/tools understand ;)


Unfortunately, our situation is that the modellers DO NOT have matching, and definitely not non-overlapping, UVs anywhere, and they won't have them. No re-modelling is going to happen, and there's no budget either. I'm not getting any more or any better data.
That's unfortunate. When I did this, I was lucky enough to have an awesome tech-artist who solved a lot of it for me. He made a script in Maya that would generate the low-poly mesh from the high-poly, then transfer the UVs and tangent frames from high to low automagically. Since Maya did that for us, I'm not experienced in writing that part... From there it was simple to do my bakes, as I outlined above.

What kind of UVs will the low-poly meshes have? If they're overlapping, then you can't really use them for baking... unless, for example, the artists have unwrapped them symmetrically and you assume that both halves of the model would bake the exact same data anyway.

www.cs.unc.edu/~olano/papers/SuccessiveMapping.IJCGA.pdf

"In this paper we present a new simplification algorithm, which computes a piece-wise linear mapping between the original surface and the simplified surface"

Some images from an old implementation I did:

[Two comparison images]

Left: High poly

Middle: Low poly

Right: High poly geometry mapped onto low-poly mesh

Any property can be transferred from the high poly version.
