cozzie

Article On Texture (Surface) Formats?


Hi,
Does anyone know of a good article on the different types/formats of textures that can be used?
This sounds quite general, but I mean topics like these:

- file formats, i.e. TGA vs DDS (pros and cons)
- compression types (DXT1 to DXT5, the BC formats, etc.)
(I understand there's an overlap here between textures and surface formats)
- what best to use for different types of textures
-- diffuse
-- specular
-- data / height maps
-- etc.

I'd find this information very useful because I'm currently designing my new D3D11 engine and asset pipeline.

^that :)

I'm lazy, so I use DDS as my runtime texture file format on Windows at the moment. It's pretty well designed for fast loading, but it can be beaten by custom formats if you want to spend the time.
On console we use custom formats that require zero deserialization (they just need to be loaded/memcpy'ed to the right location).
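For context on why DDS loads fast: the file is just a fixed binary header followed by the raw (possibly block-compressed) pixel data, so a loader can read a few fields and hand the payload straight to the GPU. A minimal, illustrative header reader in Python (`parse_dds_header` is a hypothetical name; real loaders also handle the DX10 extension header):

```python
import struct

def parse_dds_header(data):
    # DDS layout: 4-byte magic "DDS ", then a 124-byte header.
    # Offsets below follow the documented DDS_HEADER structure.
    magic, = struct.unpack_from("<4s", data, 0)
    if magic != b"DDS ":
        raise ValueError("not a DDS file")
    size, flags, height, width = struct.unpack_from("<4I", data, 4)
    if size != 124:
        raise ValueError("unexpected DDS header size")
    mip_count, = struct.unpack_from("<I", data, 28)   # dwMipMapCount
    fourcc, = struct.unpack_from("<4s", data, 84)     # pixel format FourCC
    return {"width": width, "height": height,
            "mips": mip_count, "fourcc": fourcc}
```

After this, the remainder of the file is the texture payload, which is why loading is essentially a read plus a copy.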

Our asset loading system allows loading from compressed "packs" of files, so our DDS files are optionally 'zipped up' with something like LZMA. That's common across the whole engine file loading system though, not specific to textures.
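The optional pack-compression step can be sketched with Python's standard-library `lzma` module (the actual pack/archive format is engine-specific; `pack_asset`/`unpack_asset` are illustrative names):

```python
import lzma

def pack_asset(raw_bytes, preset=6):
    # Compress one asset blob (e.g. a whole DDS file) for the pack archive.
    # preset trades compression ratio against build-time cost.
    return lzma.compress(raw_bytes, preset=preset)

def unpack_asset(packed_bytes):
    # Decompress back to the exact original bytes at load time.
    return lzma.decompress(packed_bytes)
```

Because the compression wraps the whole file, it layers cleanly on top of any texture format, which is why it lives in the generic file-loading system rather than the texture code.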

Your artists generally won't want to work with DDS though, so our asset build system (there's a compile button for art, just like for code) pre-converts from TGA/PNG/etc. to DDS as part of the compilation process. This tool also automatically chooses BC formats and mixes channels together based on metadata/instructions in the shader code.
e.g. A shader may tell the material editor that it has two input textures - RGB colour and monochrome translucency - but also tell the asset compiler that it wants these packed together into a single BC3 texture. The tool will get the two PNGs specified in the artist's XML material file, build the packed/compressed DDS, and generate a binary material file that references the DDS.
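The channel-mixing step described above can be sketched as follows. This is a simplified stand-in (pure Python, illustrative names) for packing an RGB colour texture and a monochrome translucency texture into one RGBA buffer, which is then what a BC3 compressor would consume:

```python
# Hypothetical sketch: merge two source textures into one RGBA pixel
# buffer before BC3 (DXT5) compression. BC3 stores RGB and a separate
# alpha block, so the monochrome map rides in the alpha channel.

def pack_rgba(colour_px, translucency_px):
    """colour_px: list of (r, g, b) tuples; translucency_px: list of ints."""
    if len(colour_px) != len(translucency_px):
        raise ValueError("source textures must have matching dimensions")
    return [(r, g, b, a)
            for (r, g, b), a in zip(colour_px, translucency_px)]
```

BC3 is a sensible target for this pairing because its alpha block is compressed independently of the colour block, so the translucency data doesn't degrade the colour endpoints.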

If your normals are in a cone (don't go all the way to the edge), you can also use "partial derivative normal maps", where you store x/z and y/z, and reconstruct in the shader with normalize(float3(x_z, y_z, 1)).
One advantage of this representation is that you get great results by simply adding together several normal maps (e.g. detail mapping) *before* you decode back into a 3D normal. The alternative of decoding each map to a 3D normal, adding, and renormalising loses a lot of detail / flattens everything.
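The decode-and-blend steps above can be sketched like this (Python for clarity; in HLSL the decode is just normalize(float3(x_z, y_z, 1))):

```python
import math

def decode_pd(x_z, y_z):
    # Reconstruct the unit normal from a partial-derivative texel.
    # (x_z, y_z, 1) is parallel to n / n.z, so normalising recovers n
    # (valid as long as the stored normals have z > 0, i.e. stay in a cone).
    inv_len = 1.0 / math.sqrt(x_z * x_z + y_z * y_z + 1.0)
    return (x_z * inv_len, y_z * inv_len, inv_len)

def blend_pd(base, detail):
    # Detail blending: sum in derivative space, decode once at the end.
    return decode_pd(base[0] + detail[0], base[1] + detail[1])
```

Summing derivatives behaves like stacking surface slopes, which is why it preserves detail where averaging decoded normals would flatten it.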


We use Reoriented Normal Mapping for combining normal maps, but derivative normal maps are a nice alternative if you want to save cycles.

Thanks guys. I think I'll just fetch .xy from the normal map and reconstruct z in the shader. There was also a good explanation of how to do this in the presentation posted above (by MJP).
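For reference, that reconstruction is one line: since a unit normal satisfies x² + y² + z² = 1, a two-channel map (e.g. BC5) only needs to store x and y. A minimal sketch, assuming front-facing normals (z ≥ 0):

```python
import math

def reconstruct_normal(x, y):
    # x, y in [-1, 1] from a two-channel normal map; z is recovered
    # from the unit-length constraint. max() guards against rounding
    # pushing x*x + y*y slightly above 1 after quantisation.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```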

