D3DFORMAT surface formats

There are plenty of formats in the D3DFORMAT enumeration, with different numbers of bits per color channel. How do I decide which one to use in each particular case, e.g. for textures? Are there any common formats usually used for render surfaces and textures? Thank you
In DirectX you can tell the D3DX texture-loading functions to stick with the file's own format by passing D3DFMT_UNKNOWN. Obviously 32-bit (A8R8G8B8) costs twice the memory and bandwidth of 16-bit (R5G6B5), but if you want an alpha channel (to see through parts of the texture, etc.) you'll need a format that supports one, such as the 32-bit format above.
Take a look at D3DXCreateTextureFromFileEx in the MSDN library for more info on loading textures and choosing formats.
Of course, in most cases you'll want to let the user choose between 32- and 16-bit depths, as it is a major performance factor.
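
Something along these lines, as a rough sketch (the function name, the filename parameter and the 16/32-bit switch are just placeholders I've picked, and it assumes a non-Unicode build):

    #include <d3d9.h>
    #include <d3dx9.h>

    // Rough sketch: load a texture, either forcing a format based on whether
    // alpha is needed, or keeping the file's own format via D3DFMT_UNKNOWN.
    HRESULT LoadTexture(IDirect3DDevice9* pDevice, const char* file,
                        bool wantAlpha, bool use16Bit,
                        IDirect3DTexture9** ppTexture)
    {
        D3DFORMAT fmt;
        if (wantAlpha)
            fmt = use16Bit ? D3DFMT_A4R4G4B4 : D3DFMT_A8R8G8B8;
        else
            fmt = use16Bit ? D3DFMT_R5G6B5   : D3DFMT_X8R8G8B8;
        // ...or pass D3DFMT_UNKNOWN below to stay close to the file's format.

        return D3DXCreateTextureFromFileEx(
            pDevice, file,
            D3DX_DEFAULT, D3DX_DEFAULT,   // width/height taken from the file
            D3DX_DEFAULT,                 // full mipmap chain
            0,                            // usage: plain texture
            fmt,
            D3DPOOL_MANAGED,
            D3DX_DEFAULT, D3DX_DEFAULT,   // filter / mip filter
            0,                            // no color key
            NULL, NULL,
            ppTexture);
    }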

Hope that helps.

Hugh Osborne
www.indirectx.co.nr



www.hughosborne.co.uk | hugh.osborne@googlemail.com
1) Decide on the target hardware for your application; the wider the range of graphics cards you want to support, the smaller the set of surface formats you can rely on. To get an idea of how much support there is for various formats, take a look at the "Graphics Cards Capabilities" chart in the latest DirectX SDK.

2) The number of formats available also depends on how you use the texture; plain 2D textures have more formats available than render-target textures, for example.
IDirect3D9::CheckDeviceFormat() and related APIs can be used to determine at runtime whether a format is available on your target hardware for a specific purpose (there's a quick sketch of such a check at the end of this list).

3) Next consider the amount of video memory available on the graphics cards you want to support as well as the amount of video memory your application requires at peak. If your requirements always fit within the available amount, then go for whatever format is closest to the original source image: so if you have a 32-bit TGA or PNG file as your source, choose something like A8R8G8B8; if you have an HDR image (for example), you might even want to go to a higher bit depth or a floating-point format.

4) If your source images have any special characteristics, those can indicate the type of format to use too: a monochrome lightmap for example should probably be loaded as an L8 or L16 format; a normal map should use one of the normal map formats (where available, and where memory isn't limited).

5) If the use of that texture has any special characteristics, then that can suggest a format too (e.g. a greyscale texture only ever read through the alpha channel might suit an A8).

6) If your target hardware has limited memory, then you'll want to use DXT compressed formats for a lot of stuff; read up on each in the docs to see the tradeoffs. Generally, lower-frequency images in particular look fine as DXT; normal maps don't look so great as DXT (at least when stored in RGB=XYZ form; there is however a DXT5 trick for storing compressed normals).

7) It's largely possible to develop an automatic heuristic for choosing a texture format, particularly when you have information about the source image and how the texture will be used. Even if you don't do it programmatically, you can build up your own rulebook using steps like the above quite easily (a rough sketch of such a rulebook follows below).
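
To illustrate point 2, a minimal runtime check could look something like this (a rough sketch; the X8R8G8B8 adapter format and the example usages are assumptions on my part, not requirements):

    #include <d3d9.h>

    // Rough sketch: ask the runtime whether a format is usable for a given
    // purpose on the default adapter (display format assumed to be X8R8G8B8).
    bool IsFormatSupported(IDirect3D9* pD3D, D3DFORMAT fmt, DWORD usage)
    {
        return SUCCEEDED(pD3D->CheckDeviceFormat(
            D3DADAPTER_DEFAULT,
            D3DDEVTYPE_HAL,
            D3DFMT_X8R8G8B8,   // current adapter/display format
            usage,             // 0 = plain texture, or D3DUSAGE_RENDERTARGET, etc.
            D3DRTYPE_TEXTURE,
            fmt));
    }

    // e.g. IsFormatSupported(pD3D, D3DFMT_A8R8G8B8, 0)
    //      IsFormatSupported(pD3D, D3DFMT_A16B16G16R16F, D3DUSAGE_RENDERTARGET)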
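
And for point 7, a rough sketch of what such a rulebook might look like in code (the TextureUse enum and the rules themselves are just my assumptions, a starting point rather than definitive choices):

    #include <d3d9.h>
    #include <d3dx9.h>

    enum TextureUse { USE_DIFFUSE, USE_LIGHTMAP, USE_NORMALMAP, USE_ALPHA_MASK };

    // Rough sketch of a format-picking rulebook, based on source image info
    // (from D3DXGetImageInfoFromFile) and how the texture will be used.
    D3DFORMAT ChooseFormat(const D3DXIMAGE_INFO& info, TextureUse use,
                           bool lowMemory)
    {
        switch (use)
        {
        case USE_LIGHTMAP:   return D3DFMT_L8;        // monochrome light data
        case USE_ALPHA_MASK: return D3DFMT_A8;        // only the alpha channel is read
        case USE_NORMALMAP:  return D3DFMT_A8R8G8B8;  // DXT artefacts hurt normals
                                                      // (or D3DFMT_Q8W8V8U8 where supported)
        case USE_DIFFUSE:
        default:
        {
            // Did the source file carry an alpha channel?
            bool hasAlpha = (info.Format == D3DFMT_A8R8G8B8 ||
                             info.Format == D3DFMT_A1R5G5B5 ||
                             info.Format == D3DFMT_DXT3 ||
                             info.Format == D3DFMT_DXT5);
            if (lowMemory)
                return hasAlpha ? D3DFMT_DXT5 : D3DFMT_DXT1;
            return hasAlpha ? D3DFMT_A8R8G8B8 : D3DFMT_X8R8G8B8;
        }
        }
    }

    // e.g. D3DXIMAGE_INFO info;
    //      D3DXGetImageInfoFromFile("wall_diffuse.tga", &info);   // placeholder file
    //      D3DFORMAT fmt = ChooseFormat(info, USE_DIFFUSE, true);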

Simon O'Connor | Technical Director (Newcastle) Lockwood Publishing | LinkedIn | Personal site

