fire67

How does material layering work ?


Hi there,

 

I am quite a beginner and I would like to know more about the material layering shown in The Order: 1886 and now supported by Allegorithmic's tools.

 

From what I understand, this is essentially a baking process with some uber shaders involved; there is no real combination of multiple shaders and materials at runtime.

It seems that the artists can choose from a material library (rock, metal, fabric, plastic, etc.), apply those materials in layers using masks, and then the final material is created through a baking process that merges the textures.

 

But maybe I am wrong, so I would be very happy to understand how it works, because it seems so awesome! :)

Thanks a lot !


I came here to post the same question. I thought that in The Order: 1886 they composited various materials in Mari and then baked them out to one texture set for regular use in the engine, but after reading the Allegorithmic blog post it seems that they are at least exporting separate materials and blending between them at runtime? Surely that is a more expensive method? From the example it would appear they are using 4 diffuse maps, 4 normal maps, 4 PBR masks, etc., plus 1 material mask. That's a lot more textures.

 

If anyone could shed some light, that would be awesome :)


I don't think they are blending at runtime, because they explicitly mention the compilation of so many materials as a con. Also, on page 74 they show the pixel shader that is used to produce the "compiled" texture, and that's where they blend multiple input textures into one for each attribute (diffuse, normal, etc.). The tools they provide do the compilation step at runtime for reloading/preview; that seems to be the "secret".


I don't think they are blending at runtime, because they explicitly mention the compilation of so many materials as a con. Also, on page 74 they show the pixel shader that is used to produce the "compiled" texture, and that's where they blend multiple input textures into one for each attribute (diffuse, normal, etc.). The tools they provide do the compilation step at runtime for reloading/preview; that seems to be the "secret".

 

Which is what I thought, but the Allegorithmic article is misleading, as it gives a 4-way blend Unity shader as an example usage:

 
PRACTICAL EXAMPLE: 4-WAY BLENDING IN UNITY 5

In this example, we have a Unity shader that's designed to blend four materials together using three different masks packed into an RGB map. That's a lot of inputs and parameters to set up manually.
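As a rough illustration of what that kind of shader does (a per-pixel sketch in Python rather than actual Unity shader code; the lerp order is my assumption, not taken from the article), each mask channel lerps the running result toward the next material:

```python
def blend4(mats, mask_rgb):
    """Blend four material samples using three masks packed into RGB.

    mats: four per-pixel values for one attribute (e.g. an albedo channel).
    mask_rgb: (r, g, b) blend weights, each in [0, 1].
    Each channel lerps the running result toward the next material,
    mirroring a lerp chain in a pixel shader.
    """
    r, g, b = mask_rgb
    result = mats[0]
    result = result + (mats[1] - result) * r  # lerp(result, mats[1], r)
    result = result + (mats[2] - result) * g  # lerp(result, mats[2], g)
    result = result + (mats[3] - result) * b  # lerp(result, mats[3], b)
    return result
```

In an actual shader you would run this chain once per attribute (diffuse, normal, roughness, etc.), which is exactly why the input count balloons: 4 maps per attribute plus the packed mask.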

 

 


UE4 has a documentation page that talks about how they do layering.

Basically: they render the mesh once per layer, and then the results are blended.

 

HTH

I don't think that is the case here, though. I don't think that's the process being used in either link in the first post.


Ha, I also wanted to ask the same question.  We are all so curious!

 

It seems more like material masking than material layering. I was curious about why you would choose this method, though. From the Allegorithmic post, they made it sound like it would only apply to AAA open-world games and not mobile, and I have no idea why. Are there a ton of dependent texture lookups happening?

 

Isn't this technique just the same thing as terrain splatting?

 

Also, does anybody have any info on this comment from the Allegorithmic post... I'm not sure what exactly is meant by falloff here:

 

"When using blended shaders like this on console, there is usually a texture input per material that drives how the material blends with other materials along with some controls for threshold and falloff."
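One plausible reading of that sentence (an assumption on my part, not something the article spells out): each material carries a blend-control texture, typically a height map, and the threshold/falloff controls remap the sampled value into a blend weight. A minimal sketch:

```python
def blend_weight(height, threshold, falloff):
    """Remap a sampled blend-control value into a layer weight.

    height: value sampled from the material's blend-control texture.
    threshold: where the transition starts; falloff: how wide the
    transition band is. A tiny falloff gives a hard edge between
    materials; a large falloff gives a soft, gradual blend.
    """
    # Shader equivalent: saturate((height - threshold) / falloff)
    return min(max((height - threshold) / max(falloff, 1e-6), 0.0), 1.0)
```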


Yeah, so there are obviously two types of material layering/masking being discussed:

 

Pre-blending: can use unlimited layers and has no cost at runtime, but the blend mask must be the same size as, or smaller than, the textures.

On-demand blending: the number of layers is limited by performance (lots of texture lookups in your shader!), but the blend mask can be completely dynamic, or larger than your textures (e.g. on a terrain where your blend mask spans 1 km, but your textures tile every 1 m).

 

There are also hybrids, which perform pre-blending at runtime into a texture atlas, or on-demand blending that blends between multiple pre-blended layers.
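The decoupled-resolution point in the on-demand case can be sketched like this (a toy Python model with tiny 2D lists standing in for textures; the tiling factors are made up):

```python
def sample_tiled(tex, u, v, tiles):
    """Sample a tiny 'texture' (2D list of floats) with wrap-around tiling."""
    h, w = len(tex), len(tex[0])
    x = int(u * tiles * w) % w
    y = int(v * tiles * h) % h
    return tex[y][x]

def on_demand_blend(mask, tex_a, tex_b, u, v, detail_tiles):
    """On-demand blending: the mask is sampled once across the whole
    surface (e.g. a 1 km terrain) while the detail textures tile many
    times (e.g. every 1 m), so their resolutions are fully decoupled."""
    w = sample_tiled(mask, u, v, 1)            # mask covers the surface once
    a = sample_tiled(tex_a, u, v, detail_tiles)
    b = sample_tiled(tex_b, u, v, detail_tiles)
    return a + (b - a) * w                     # lerp(a, b, w)
```

With pre-blending you could not do this: the baked result is a single texture, so its resolution bounds both the mask detail and the surface detail.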


Thanks for your answer @MJP.

 

I'm not really familiar with the Allegorithmic tools, but I can certainly explain how the material compositing works for The Order. Our compositing process is primarily offline: we have a custom asset processing system that produces runtime assets, and one of the processors is responsible for generating the final composite material textures. The tools expose a compositing stack that's similar to layers in Photoshop: the artists pick a material for each layer in the stack, and each layer is blended with the layer below it. Each layer specifies a material asset ID, a blend mask, and several other parameters that can be used to customize how exactly the layers are composited (for instance, using multiply blending for albedo maps). The compositing itself is done in a pixel shader, but again this is all an offline process. At runtime we just end up with a set of maps containing the result of blending together all of the materials in the stack, so it's ready to be sampled and used for shading. This is nice for runtime performance, since you already did all of the heavy lifting during the build process.
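A minimal sketch of that kind of offline compositing stack (Python standing in for the offline pixel-shader pass; the layer fields and blend modes are illustrative, not The Order's actual asset format):

```python
def composite_stack(layers):
    """Fold a Photoshop-style layer stack bottom-up into one final
    per-pixel value, as the offline compositing pass would, so that
    runtime only ever samples the pre-blended result."""
    base = layers[0]["value"]
    for layer in layers[1:]:
        v, m = layer["value"], layer["mask"]
        if layer.get("mode") == "multiply":    # e.g. multiply blending for albedo
            blended = base * v
        else:                                  # default: normal (lerp) blending
            blended = v
        base = base + (blended - base) * m     # lerp by the layer's blend mask
    return base

# Hypothetical stack for one albedo channel at one pixel:
stack = [
    {"value": 0.8},                                   # base: light rock
    {"value": 0.5, "mask": 0.5},                      # half-blended metal layer
    {"value": 0.5, "mask": 1.0, "mode": "multiply"},  # grime multiply pass
]
```

The build process would run this for every pixel of every attribute map, and the game only ships the folded output.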

 

This seems to be a really nice workflow for artists, as they have a material library which they can customize and blend to obtain advanced materials on complicated objects. It also seems to be the best option performance-wise.

 

The downside of offline compositing is that you're ultimately limited by the final output resolution of your composite texture, so that has to be chosen carefully. To help mitigate that problem we also support up to 4 levels of runtime layer blending, which is mostly used by static geometry to add some variation to tiled textures. So for instance you might have a wall with a brick texture tiled over it 10 times horizontally, which would obviously look tiled if you only had that layer. With runtime blending you can add some moss or some exposed mortar to break up the pattern without having to offline composite a texture that's 10x the size. 

 

So you are using some kind of uber shader that accepts multiple albedos, normals, etc., each with its associated tiling and offsets, with a masking texture for each layer?

 

With UE4 all of the layers are composited at runtime. So the pixel shader iterates through all layers, determines the blend amount, and if necessary samples textures from that layer so that it can blend the parameters with the previous layer. If you do it this way you avoid needing complex build processes to generate your maps, and you also can decouple the texture resolution of your layers. But on the other hand, it may get expensive to blend lots of layers.

 

You might also have multiple draw calls for those layers, which are not present in the above techniques, right? That has some performance cost; can it be neglected?


This seems to be a really nice workflow for artists, as they have a material library which they can customize and blend to obtain advanced materials on complicated objects. It also seems to be the best option performance-wise.

 
Yes, I would say that it has worked out very well for us. It helps divide the responsibility appropriately among the content team: a lot of environment artists can just pull from common material libraries and composite them together in order to create unique level assets. At the same time, our texture/shader artists can author the most low-level material templates, and whenever they make changes those changes are automatically propagated to the final runtime materials.
 

So you are using some kind of uber shader that accepts multiple albedos, normals, etc., each with its associated tiling and offsets, with a masking texture for each layer?


Yup. We have an ubershader that has a for loop over all of the material layers, but we generate a unique shader for every material with certain constants and additional code compiled in. The number of layers ends up being a hard-coded constant at compile time, and so we unroll the loop that samples the textures for each layer and blends the resulting parameters.
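A toy sketch of that specialization step (the HLSL snippet it emits is illustrative; the real system presumably injects constants into a full ubershader source rather than generating it from scratch, and `SampleLayer`/`BlendLayer` are made-up helper names):

```python
def specialize_layer_shader(num_layers):
    """Emulate generating a unique shader per material: the layer count
    is baked in as a compile-time constant and the sample-and-blend loop
    is emitted fully unrolled, one line per layer."""
    body = "\n".join(
        f"    result = BlendLayer(result, SampleLayer({i}, uv));"
        for i in range(1, num_layers)
    )
    return (
        f"#define NUM_LAYERS {num_layers}\n"
        "float4 CompositeLayers(float2 uv)\n"
        "{\n"
        "    float4 result = SampleLayer(0, uv);\n"
        f"{body}\n"
        "    return result;\n"
        "}\n"
    )
```

Because the count is a constant, the shader compiler sees straight-line code with no dynamic branching, and materials with fewer layers pay only for the layers they use.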
 

You might also have multiple draw calls for those layers, which are not present in the above techniques, right? That has some performance cost; can it be neglected?


I don't think you would ever want to have multiple draw calls for runtime layer blending. It would likely be quite a bit more expensive than doing it all in a loop in the pixel shader.


 

You might also have multiple draw calls for those layers, which are not present in the above techniques, right? That has some performance cost; can it be neglected?

 

You could, but why would you?  From their documentation/presentations, I can see how one might misconstrue what Epic is actually doing with material layering, but they are most definitely not redrawing the same mesh over and over with different materials to later composite into a final, layered material at runtime.  That would be insane.

 

If you can define what a layer consists of, you can then create functionality to generate it from appropriate inputs.  Then you can create functionality to blend two layers given a mask.  Once you've done that, it isn't a big jump to extend that to n-layers, based purely on performance constraints.  Create an ubershader that does this, build a system that can create permutations of various layer types (if that's necessary) and number of blends, and now you have runtime layer blending.  One material with one shader creating an n-layered visual.

 

This is how our system works, too, and it's very effective, but we limit it to 4 layers currently to be perf-conservative.


Of course, the fewer draw calls you have, the happier you are :D but I had some difficulty understanding how this could work at runtime without generating massive numbers of draw calls.

