How to write / set up shaders that support wrinkle maps

3 comments, last by Gian-Reto 8 years, 2 months ago

So I am not sure if anyone on GD can help, seeing as no one seems to have posted a successful attempt at using wrinkle maps in Unreal Engine 4 yet.

It seems it was possible in UDK...

Now, I am a total noob when it comes to all things rigging, or how to use wrinkle maps ANYWHERE, not just in UE4.

So I am NOT really asking how to set up the Unreal Engine shader (though if somebody has a link for that, great)... I am interested in how this is generally done in shaders that support wrinkle maps, so that I have a better understanding of how it COULD work in UE4, and of how I need to prepare my model for it.

How are wrinkle maps generally triggered within the game engine?

1) Are there techniques that allow doing it procedurally, i.e. measuring edge length / polygon compression and triggering the map blending on a vertex-by-vertex basis?

2) Do I need to export any kind of special data alongside my animations and use that in the shader to trigger it?

3) Is there any way to "animate vertex colors" and export that alongside the animations, and use the vertex colors to control the map blending?

I am not sure yet that the added fidelity is really worth the hassle and performance cost in my case. I am just trying to get a better understanding of what is needed for creating shaders that use wrinkle maps in general.

EDIT:

Searching for more general advice on wrinkle maps, I started to find more helpful pages. I haven't found one yet that explains the concepts in general, though; it seems to be mostly very specific solutions for specific tools:

Valve's Source engine: https://developer.valvesoftware.com/wiki/Wrinkle_maps


I've not used them in UE4, but on a custom engine.
We mapped them to particular bones in the animation -- e.g. some of the spine bones would have their twist measured (yaw difference from the bind pose), and this was sent to the shader as a blend factor from -1 to +1. The shader then used that variable to blend between the neutral torso normal map, and a "wrinkled left" and "wrinkled right" version.
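In pixel shader terms, the blend looked something like this (a minimal sketch from memory, not our actual code; all texture and parameter names are illustrative):

```hlsl
Texture2D gNormalNeutral;       // bind-pose normal map
Texture2D gNormalWrinkleLeft;   // "wrinkled left" variant
Texture2D gNormalWrinkleRight;  // "wrinkled right" variant
SamplerState gLinearSampler;

float gWrinkleBlend;            // per-bone twist factor: -1 = fully left, 0 = neutral, +1 = fully right

float3 SampleWrinkledNormal(float2 uv)
{
    // Unpack [0,1] texture values into [-1,1] tangent-space normals.
    float3 neutral = gNormalNeutral.Sample(gLinearSampler, uv).xyz * 2 - 1;
    float3 left    = gNormalWrinkleLeft.Sample(gLinearSampler, uv).xyz * 2 - 1;
    float3 right   = gNormalWrinkleRight.Sample(gLinearSampler, uv).xyz * 2 - 1;

    // Pick the wrinkled variant by the sign of the twist factor,
    // then fade it in by the factor's magnitude.
    float3 wrinkled = (gWrinkleBlend < 0) ? left : right;
    return normalize(lerp(neutral, wrinkled, abs(gWrinkleBlend)));
}
```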

Quote:
Is there any way to "animate vertex colors" and export that alongside the animations, and use the vertex colors to control the map blending?

Sure, this would be the same as doing "morph targets", but instead of morphing a position attribute, you're morphing a colour attribute.
Alternatively, you could attach these colours to the bones themselves, as each vertex already blends multiple bones together using multiple weights. As well as each vertex grabbing the bone matrices and blending them, it could grab the bone "colour" (wrinkle alpha/bend factors).
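A rough sketch of that second variant, assuming a standard 4-bone skinning setup (names are illustrative):

```hlsl
float4x4 gViewProj;
float4x4 gBoneMatrices[64];
float    gBoneWrinkle[64];   // per-bone wrinkle/bend factor, updated by the animation each frame

struct VSInput
{
    float3 position : POSITION;
    uint4  boneIds  : BLENDINDICES;
    float4 weights  : BLENDWEIGHT;
};

struct VSOutput
{
    float4 position : SV_Position;
    float  wrinkle  : TEXCOORD0;  // interpolated per pixel, drives the normal map blend
};

VSOutput VSMain(VSInput v)
{
    float4 skinnedPos = 0;
    float  wrinkle    = 0;

    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        // The same weights blend both the bone matrices and the
        // per-bone wrinkle factors.
        skinnedPos += v.weights[i] * mul(gBoneMatrices[v.boneIds[i]], float4(v.position, 1));
        wrinkle    += v.weights[i] * gBoneWrinkle[v.boneIds[i]];
    }

    VSOutput o;
    o.position = mul(gViewProj, skinnedPos);
    o.wrinkle  = wrinkle;
    return o;
}
```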


Quote:
I've not used them in UE4, but on a custom engine.
We mapped them to particular bones in the animation -- e.g. some of the spine bones would have their twist measured (yaw difference from the bind pose), and this was sent to the shader as a blend factor from -1 to +1. The shader then used that variable to blend between the neutral torso normal map, and a "wrinkled left" and "wrinkled right" version.

If I understand this correctly, you created a separate normal map texture for each area that should get wrinkles added, and drove that by these blend factors?

I read some questions by other people who had trouble getting enough texture samplers for the different normal maps needed. I guess this was the reason why, as this approach sounds rather heavy on textures. Did you have just a few areas that needed wrinkle maps applied? Did you run into problems with the texture sampler count?


Quote:
Sure, this would be the same as doing "morph targets", but instead of morphing a position attribute, you're morphing a colour attribute.
Alternatively, you could attach these colours to the bones themselves, as each vertex already blends multiple bones together using multiple weights. As well as each vertex grabbing the bone matrices and blending them, it could grab the bone "colour" (wrinkle alpha/bend factors).

If I can get something like this to work without making the animations much heavier on the CPU and memory, I guess that would be the preferable solution. If I understand the second alternative correctly, that would just mean getting the influence weights of the different bones on the vertex and calculating the amount of the wrinkle map to apply procedurally?

If I really go forward with the wrinkle maps, I would prefer a solution with just a single wrinkle normal map, or maybe two (a stretch map and a compression map), and some shader code that only mixes in these additional maps at places where the compression/stretching actually happens... or would I just trade fewer texture samplers for a much larger CPU/GPU impact from the in-shader calculations?
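As a rough sketch of what I have in mind (one stretch map and one compression map driven by a signed factor, e.g. the per-vertex value from the skinning idea above; all names are placeholders):

```hlsl
Texture2D gNormalBase;
Texture2D gNormalStretch;     // wrinkles for stretched cloth
Texture2D gNormalCompress;    // wrinkles for compressed cloth
SamplerState gLinearSampler;

float3 BlendStretchCompress(float2 uv, float factor)  // factor in [-1, +1]
{
    float3 baseN   = gNormalBase.Sample(gLinearSampler, uv).xyz * 2 - 1;
    float3 stretch = gNormalStretch.Sample(gLinearSampler, uv).xyz * 2 - 1;
    float3 comp    = gNormalCompress.Sample(gLinearSampler, uv).xyz * 2 - 1;

    // A negative factor fades in the stretch map, a positive one the
    // compression map; only three samplers and a few ALU instructions.
    float3 target = (factor < 0) ? stretch : comp;
    return normalize(lerp(baseN, target, saturate(abs(factor))));
}
```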

Anyway, thanks a lot for the answers.


Quote:
I read some questions by other people who had trouble getting enough texture samplers for the different normal maps needed. I guess this was the reason why, as this approach sounds rather heavy on textures. Did you have just a few areas that needed wrinkle maps applied? Did you run into problems with the texture sampler count?

This was back in the DX9 era, where sampler count was even more of a problem, so we had a few hacks around this. One was to atlas the wrinkle maps, so that the UV space for the wrinkles was different from that of the base normal map. In the DX10+ era, we'd probably have used texture arrays instead. We packed some extra information into the UV channels to indicate which body part each part of the mesh belonged to - pretty hacky, but it didn't add to the size of the mesh data, which was nice.

Another solution is to subdivide the mesh into multiple draw calls.
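For the texture array route, the sampling side could look something like this (a rough sketch; the array layout and names are illustrative, with the body-part index read from the spare UV channel mentioned above):

```hlsl
Texture2DArray gWrinkleMaps;   // one array slice per body part's wrinkle map
SamplerState   gLinearSampler;

// bodyPart was written into a spare UV channel at export time, so it
// arrives in the pixel shader as an interpolated (but constant per
// region) value.
float3 SampleWrinkleNormal(float2 uv, float bodyPart)
{
    // The third coordinate selects the array slice.
    return gWrinkleMaps.Sample(gLinearSampler, float3(uv, bodyPart)).xyz * 2 - 1;
}
```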

After some more Internet research and some thinking, I guess the best way forward would be to go with Hodgman's proposal... i.e. using individual bone rotations as a trigger.

As background: I am planning to create a character for a pseudo-isometric use case, so the character is not visible up close (hence I am not sure this is worth the hassle in my case; I just would like to give it a try). That is why I will certainly not use this for facial animation, just to give the clothing around the bigger joints a more believable behaviour.

I came to the conclusion that I would have around 8 different areas that need the blends: knees, elbows, shoulders/armpits, hips, and maybe the torso (though with short trousers or shirts it might still be only 8 areas)... With the black and white masks packed into the channels of RGBA textures, that would be only 2 extra texture samplers needed, plus one for the additional normal map to blend with. Even with more joints that should get normal map blending, that should still fit into the max sampler count.
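In shader terms, the plan could look roughly like this, with the 8 region masks packed into the RGBA channels of two textures and the per-joint blend factors fed in by the engine (all names are placeholders):

```hlsl
Texture2D gNormalBase;
Texture2D gNormalWrinkle;    // the single shared wrinkle normal map
Texture2D gMask0;            // R/G/B/A = e.g. knees L/R, elbows L/R
Texture2D gMask1;            // R/G/B/A = e.g. shoulders L/R, hips L/R
SamplerState gLinearSampler;

float4 gJointBlend0;         // blend factors for the 4 regions in gMask0
float4 gJointBlend1;         // blend factors for the 4 regions in gMask1

float3 SampleMaskedWrinkles(float2 uv)
{
    float3 baseN    = gNormalBase.Sample(gLinearSampler, uv).xyz * 2 - 1;
    float3 wrinkleN = gNormalWrinkle.Sample(gLinearSampler, uv).xyz * 2 - 1;

    // Sum of (mask * joint factor) over all 8 regions; as long as the
    // masks don't overlap, the sum stays in [0, 1].
    float blend = dot(gMask0.Sample(gLinearSampler, uv), gJointBlend0)
                + dot(gMask1.Sample(gLinearSampler, uv), gJointBlend1);

    return normalize(lerp(baseN, wrinkleN, saturate(blend)));
}
```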

Creating the additional normal map shouldn't be a problem, as it's quite easy to add normals to a model in 3D Coat post-baking, and the masks should also not take long, given that accuracy is not so important when only clothing wrinkles are affected.

Does this sound like a good plan?

The only thing I don't know yet is how to get access to the bone rotation values in a UE4 shader... do I need additional Blueprint or C++ code for that, and drive the shader from there? Somewhere I read that morph targets can drive materials; is this also true for skeletal animations?
