UDK/UnrealEngine material implementation

0 comments, last by InvalidPointer 12 years ago
I'd like to work out a generic model for the UDK material system. A brute-force assumption would be that material scripts are executed linearly. In many cases I should think that's pretty much what happens, since most features included in the framework are pretty straightforward (multiplication, blending, etc.). However, certain functionality seems a bit too expensive (at least at face value) to be done at runtime. For instance, extracting individual channels from a texture and rotating/scaling them independently per frame, as done in this series. True, the color channels can be split and cached, but this doesn't say much about how any sort of optimizations are done by the engine. My primary concern is that ramping up the number of such operations would, IMO, pretty quickly become prohibitive.

As for the functionality presented above, the best I can think of is either doing the transform in realtime per-pixel (a rotated look-up) or doing it in a number of per-vertex passes via FBOs. The former isn't really a likely option for a number of reasons. However, assuming a good-sized texture (e.g. 1024x1024), a single texture as presented in the tutorial would end up pretty darn costly as well if it needs to be drawn 5-6 times over per instance. Does the engine simply do a lot of caching, or is Unreal Engine just really good at optimizing arbitrary scripts?

To recap: does anyone have a clue as to how this stuff is done under the hood in UE?
I'm not sure where you're getting your information from, as swizzles/masking are dirt-cheap to the point of being free. You can also just dink around with some 2D matrix multiplication for your texture coordinates, so the final cost for something like what you describe there is maybe 2-3 ALU operations per texture coordinate matrix multiply and the standard texture read per, well, texture read. For three channels that's like 6-9 ALU and 3 reads, which is hardly expensive in this day and age.
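To make the cost concrete: the per-channel rotation from the tutorial is just a 2x2 matrix multiply applied to the texture coordinate before the read. Here's a minimal sketch of that arithmetic in Python (function and parameter names are hypothetical, purely to illustrate what the shader computes per fragment):

```python
import math

def rotate_uv(u, v, angle, cx=0.5, cy=0.5):
    """Rotate a texture coordinate around a pivot (default: texture centre).

    This is the same 2x2 matrix multiply a pixel shader would perform per
    fragment -- a couple of ALU instructions, not a texture-data rewrite.
    """
    c, s = math.cos(angle), math.sin(angle)
    du, dv = u - cx, v - cy
    return (cx + c * du - s * dv, cy + s * du + c * dv)

# Each colour channel can use its own angle: three rotated coordinates,
# three texture reads, and the swizzle to pick each channel is free.
uv_r = rotate_uv(0.75, 0.5, math.pi / 2)
uv_g = rotate_uv(0.75, 0.5, math.pi)
```

Rotating 1024x1024 texels on the CPU would indeed be prohibitive; rotating one coordinate per fragment on the GPU is negligible.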

Also, you assume Unreal is dinking around with the actual texture data when it actually isn't.

In terms of mechanics, the material compiler pretty literally concatenates HLSL strings then feeds that into fxc/cgc to get some runnable bytecode. There's a little bit of extra script stuff for parameter exposure, but you seem to be really overthinking this. It isn't magic ;)
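A toy illustration of that mechanism, assuming nothing about the real compiler beyond "each graph node emits an HLSL fragment and the compiler concatenates them" (the node names, templates, and helper below are all made up for the sketch):

```python
# Toy material 'compiler': each material-graph node maps to an HLSL snippet,
# and compiling is little more than formatting and concatenating strings.
# The resulting source would then be handed to fxc/cgc for real bytecode.
NODE_TEMPLATES = {
    "texture_sample": "float4 {out} = tex2D({tex}, uv);",
    "multiply":       "float4 {out} = {a} * {b};",
}

def compile_material(nodes):
    """Concatenate per-node HLSL lines into a single pixel-shader body."""
    body = "\n    ".join(
        NODE_TEMPLATES[kind].format(**args) for kind, args in nodes
    )
    return (
        "float4 main(float2 uv : TEXCOORD0) : COLOR\n"
        "{\n    " + body + "\n    return result;\n}"
    )

graph = [
    ("texture_sample", {"out": "base", "tex": "DiffuseMap"}),
    ("multiply",       {"out": "result", "a": "base", "b": "TintColor"}),
]
hlsl = compile_material(graph)
```

No runtime interpretation of the graph, no texture-data shuffling: the "script" is baked down to ordinary shader code once, up front.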

This topic is closed to new replies.
