How does material layering work?


Hi there,

I am quite a beginner and I would like to know more about the material layering shown for The Order: 1886 and now applied in the Allegorithmic tools.

From what I understand, this is more of a baking process with some uber shaders involved; there is no real material combination with multiple shaders and materials at runtime.

It seems that the artists can choose from a material library (rock, metal, fabric, plastic, etc.) and apply them in layers using masks, and then the final material is created through a baking process that merges the textures.

But maybe I am wrong, so I would be very happy to understand the way it works because it seems so awesome! :)

Thanks a lot!


I came here just to post the same question. I thought that in The Order: 1886 they composited the various materials in Mari and then baked them out to one texture set for regular use in the engine, but after reading the Allegorithmic blog post it would seem that they are at least exporting separate materials and blending between them at runtime? Surely this is a more expensive method? From the example it would appear they are using 4 diffuse maps, 4 normal maps, 4 PBR masks, etc. and 1 material mask. That's a lot more textures.

If anyone could shed some light, that would be awesome :)

I don't think they are blending at runtime, because they explicitly mention that the compilation of so many materials is a con. Also, on page 74, they show the pixel shader that is used to produce the "compiled" texture, and that's where they blend multiple input textures into one for each attribute (diffuse, normal, etc.). The tools they provide do the compilation step at runtime for reloading/preview; that seems to be the "secret".


Which is what I thought, but the Allegorithmic article is misleading, as it gives a 4-way blend Unity shader as an example usage:

PRACTICAL EXAMPLE: 4-WAY BLENDING IN UNITY 5

In this example, we have a Unity shader that's designed to blend four materials together using three different masks packed into an RGB map. That's a lot of inputs and parameters to set up manually.
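For reference, the per-pixel math such a shader performs is roughly the following (a minimal C-style sketch of the idea, not Allegorithmic's actual shader; all names are made up):

```cpp
// Minimal sketch of 4-way blending driven by an RGB mask: start from the base
// material and lerp towards each of the three other materials using one mask
// channel per layer. The same weights are reused for normals, roughness, etc.
struct float3 { float r, g, b; };

static float3 lerp3(float3 a, float3 b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// base, mat1..mat3: albedo samples of the four materials at this pixel.
// mask: the packed RGB blend mask sampled at the same UV.
static float3 Blend4Way(float3 base, float3 mat1, float3 mat2, float3 mat3, float3 mask)
{
    float3 result = base;
    result = lerp3(result, mat1, mask.r); // R channel blends in material 2
    result = lerp3(result, mat2, mask.g); // G channel blends in material 3
    result = lerp3(result, mat3, mask.b); // B channel blends in material 4
    return result;
}
```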

UE4 has a documentation page here that talks about how they do layering.

Basically: They render the mesh once per layer and then the results are blended.

HTH

Never say Never, Because Never comes too soon. - ryan20fun

Disclaimer: Each post of mine is intended as an attempt at helping and/or bringing some meaningful insight to the topic at hand. Due to my nature, my good intentions will not always be plainly visible. I apologise in advance and assure you I mean no harm and do not intend to insult anyone.

UE4 has a documentation page here that talks about how they do layering.

Basically: They render the mesh once per layer and then the results are blended.

HTH

I don't think that is the case here, though. I don't think that's the process being used in either link in the first post.

Ha, I also wanted to ask the same question. We are all so curious!

It seems more like material masking than material layering. I was curious about why you would choose this method though. From the Allegorithmic post, they made it sound like it would only apply to AAA open world and not mobile. I have no idea why. Are there a ton of dependent texture lookups happening?

Isn't this technique just the same thing as terrain splatting?

Also, does anybody have any info on this comment from the Allegorithmic post... I'm not sure what exactly is meant by falloff here:

"When using blended shaders like this on console, there is usually a texture input per material that drives how the material blends with other materials along with some controls for threshold and falloff."

Yeah, so there are obviously two types of material layering/masking being discussed:

Pre-blending: Can use unlimited layers, no cost at runtime, but the blend mask must be the same size as or smaller than the textures.

On-demand blending: Number of layers limited by performance (lots of texture lookups in your shader!), blend mask can be completely dynamic, or larger than your textures (e.g. on a terrain where your blend mask is 1km in size, but your textures are 1m in size).

There's also hybrids, which perform pre-blending at runtime into a texture atlas... or on-demand blending that blends between multiple pre-blended layers.
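To make the on-demand case concrete, here is a small sketch (my own illustration, not from any of the linked articles) of why the blend mask and the detail textures can live at completely different scales: the mask and the detail layers use independently computed UVs.

```cpp
// On a 1 km terrain with 1 m detail textures, the blend mask is sampled once
// across the whole terrain while each detail texture tiles roughly 1000 times.
// Because the two UV sets are computed independently, their resolutions are
// fully decoupled.
struct float2 { float u, v; };

static float2 maskUV(float2 worldPos, float terrainSize)   // e.g. 1000 m
{
    return { worldPos.u / terrainSize, worldPos.v / terrainSize };  // covers 0..1 once
}

static float2 detailUV(float2 worldPos, float detailSize)   // e.g. 1 m
{
    return { worldPos.u / detailSize, worldPos.v / detailSize };    // wraps many times
}
```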

I'm not really familiar with the Allegorithmic tools, but I can certainly explain how the material compositing works for The Order. Our compositing process is primarily offline: we have a custom asset processing system that produces runtime assets, and one of the processors is responsible for generating the final composite material textures. The tools expose a compositing stack that's similar to layers in Photoshop: the artists pick a material for each layer in the stack, and each layer is blended with the layer below it. Each layer specifies a material asset ID, a blend mask, and several other parameters that can be used to customize how exactly the layers are composited (for instance, using multiply blending for albedo maps). The compositing itself is done in a pixel shader, but again this is all an offline process. At runtime we just end up with a set of maps containing the result of blending together all of the materials in the stack, so it's ready to be sampled and used for shading. This is nice for runtime performance, since you already did all of the heavy lifting during the build process.
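For illustration, the offline part of such a pipeline boils down to something like the loop below. This is a CPU-side sketch under the assumption that all layers share the output resolution and use a simple "normal" blend mode; in The Order the equivalent work is done in a pixel shader by the asset processor, and the real layer parameters are richer than a single mask.

```cpp
// Offline compositing sketch for one attribute (albedo); the same pass would be
// run for normals, roughness, etc. Types and names are illustrative only.
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };
struct Image { int width = 0, height = 0; std::vector<Color> texels; };
struct Layer { Image albedo; Image mask; };      // mask stored in the red channel

Image CompositeAlbedo(const std::vector<Layer>& stack)
{
    Image out = stack.front().albedo;            // bottom layer starts the result
    for (std::size_t i = 1; i < stack.size(); ++i)
    {
        const Layer& layer = stack[i];
        for (int p = 0; p < out.width * out.height; ++p)
        {
            float w = layer.mask.texels[p].r;    // how much this layer covers the pixel
            Color& dst = out.texels[p];
            const Color& src = layer.albedo.texels[p];
            dst = { dst.r + (src.r - dst.r) * w, // simple lerp blend; multiply etc.
                    dst.g + (src.g - dst.g) * w, // would be handled here as well
                    dst.b + (src.b - dst.b) * w };
        }
    }
    return out;                                  // baked map, sampled as-is at runtime
}
```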

The downside of offline compositing is that you're ultimately limited by the final output resolution of your composite texture, so that has to be chosen carefully. To help mitigate that problem we also support up to 4 levels of runtime layer blending, which is mostly used by static geometry to add some variation to tiled textures. So for instance you might have a wall with a brick texture tiled over it 10 times horizontally, which would obviously look tiled if you only had that layer. With runtime blending you can add some moss or some exposed mortar to break up the pattern without having to offline composite a texture that's 10x the size.

With UE4 all of the layers are composited at runtime. So the pixel shader iterates through all layers, determines the blend amount, and if necessary samples textures from that layer so that it can blend the parameters with the previous layer. If you do it this way you avoid needing complex build processes to generate your maps, and you also can decouple the texture resolution of your layers. But on the other hand, it may get expensive to blend lots of layers.
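As a rough sketch of that runtime approach (written as plain CPU code for clarity; this is my illustration of the idea, not UE4's actual material code), the per-pixel work looks like this:

```cpp
// Runtime layered shading sketch: walk the layers, skip ones the mask fully
// hides, and blend each remaining layer over the accumulated result. Cost grows
// with the number of layers that actually need their textures sampled.
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

struct LayerSample
{
    float blendWeight;  // from this layer's mask / vertex paint at the current pixel
    Color albedo;       // would be a texture sample taken only when needed in a shader
};

Color ShadeLayers(const std::vector<LayerSample>& layers)
{
    Color result = layers.front().albedo;             // base layer
    for (std::size_t i = 1; i < layers.size(); ++i)
    {
        float w = layers[i].blendWeight;
        if (w <= 0.0f)
            continue;                                  // fully masked: skip this layer
        const Color& c = layers[i].albedo;
        result = { result.r + (c.r - result.r) * w,
                   result.g + (c.g - result.g) * w,
                   result.b + (c.b - result.b) * w };
    }
    return result;  // the same loop would also blend normals, roughness, etc.
}
```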

Thanks for your answer @MJP.

I'm not really familiar with the Allegorithmic tools, but I can certainly explain how the material compositing works for The Order. Our compositing process is primarily offline: we have a custom asset processing system that produces runtime assets, and one of the processors is responsible for generating the final composite material textures. The tools expose a compositing stack that's similar to layers in Photoshop: the artists pick a material for each layer in the stack, and each layer is blended with the layer below it. Each layer specifies a material asset ID, a blend mask, and several other parameters that can be used to customize how exactly the layers are composited (for instance, using multiply blending for albedo maps). The compositing itself is done in a pixel shader, but again this is all an offline process. At runtime we just end up with a set of maps containing the result of blending together all of the materials in the stack, so it's ready to be sampled and used for shading. This is nice for runtime performance, since you already did all of the heavy lifting during the build process.

This seems to be a really nice workflow for artists, as they have some kind of material library which they can customize and blend to obtain advanced materials on complicated objects. It also seems to be the best option performance-wise.

The downside of offline compositing is that you're ultimately limited by the final output resolution of your composite texture, so that has to be chosen carefully. To help mitigate that problem we also support up to 4 levels of runtime layer blending, which is mostly used by static geometry to add some variation to tiled textures. So for instance you might have a wall with a brick texture tiled over it 10 times horizontally, which would obviously look tiled if you only had that layer. With runtime blending you can add some moss or some exposed mortar to break up the pattern without having to offline composite a texture that's 10x the size.

So you are using some kind of uber shader that accepts multiple albedos, normals, etc., each with its associated tiling and offsets, plus a masking texture for each layer?

With UE4 all of the layers are composited at runtime. So the pixel shader iterates through all layers, determines the blend amount, and if necessary samples textures from that layer so that it can blend the parameters with the previous layer. If you do it this way you avoid needing complex build processes to generate your maps, and you also can decouple the texture resolution of your layers. But on the other hand, it may get expensive to blend lots of layers.

You might also have multiple draw calls from those layers, which are not present in the above techniques, right? This has some performance cost; can it be neglected?

