Thoughts/opinions/complaints appreciated as usual...
D3D #23: Texture creating/loading enumeration and checking
Textures are a basic resource across all versions of Direct3D and can be used as both inputs and outputs for the pipeline. Despite being a fundamental part of the API they are subject to a number of constraints that an application developer needs to be aware of. For a more general discussion of enumeration and capabilities please refer to D3D #5: Hardware capabilities and enumeration.
This particular FAQ entry focuses on textures, stored as IDirect3DTexture9 objects, but the information applies equally to the volume (IDirect3DVolumeTexture9) and cube (IDirect3DCubeTexture9) forms.
There are two common methods for creating a texture:
- Creating an empty texture and either rendering data to it or filling it procedurally using "Lock" operations.
- Using the D3DX functions to create a texture based on existing data (e.g. an image file stored on disk).
Creating regular textures is done via the IDirect3DDevice9::CreateTexture() function, whereas the Texturing Functions reference page contains a list of the D3DX functions.
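As a hedged sketch, a plain CreateTexture() call might look like the following; the helper name and the choice of a 256x256 managed-pool ARGB texture are illustrative assumptions, not part of the FAQ:

```cpp
#include <d3d9.h>

// Illustrative helper (name and parameter choices are assumptions):
// create an empty 256x256 texture with a full mip-chain in the managed pool.
HRESULT CreateBlankTexture( IDirect3DDevice9* pDevice,
                            IDirect3DTexture9** ppTexture )
{
    return pDevice->CreateTexture(
        256, 256,          // Width and Height, in pixels
        0,                 // Levels: 0 requests a complete mip-chain
        0,                 // Usage: none - a plain texture
        D3DFMT_A8R8G8B8,   // Format: 32-bit ARGB
        D3DPOOL_MANAGED,   // Pool: runtime shadows the resource in system memory
        ppTexture,
        NULL );            // pSharedHandle: must be NULL in D3D9
}
```

The resulting texture can then be filled via LockRect() on each level, or rendered to if an appropriate usage is specified.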
The dimensions, measured in pixels, are of particular importance. The D3DCAPS9 structure (retrieved via IDirect3DDevice9::GetDeviceCaps()) reveals three maximum values: MaxTextureWidth, MaxTextureHeight and MaxVolumeExtent. You can create texture resources with dimensions between 1 and the appropriate maximum value. For most D3D9 hardware this will be either 2048 or 4096; it is rare to find hardware that supports dimensions greater than 4096.
For optimal performance, power-of-two dimensions (64, 128, 256, 512...) should be used. Pay particular attention to the D3DCAPS9::TextureCaps flags (see the D3DCAPS9 documentation page for precise details): they indicate whether non-power-of-two dimensions are permitted and, if they are, whether any restrictions apply. You should still check these flags, but according to the "Graphics Card Capabilities" spreadsheet no modern hardware restricts dimensions.
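A minimal sketch of reading these caps before choosing dimensions; the helper name is an assumption, but the fields and flags are the real D3DCAPS9 members:

```cpp
#include <d3d9.h>

// Illustrative helper: query the caps that constrain texture dimensions.
void InspectTextureCaps( IDirect3DDevice9* pDevice )
{
    D3DCAPS9 caps;
    pDevice->GetDeviceCaps( &caps );

    DWORD maxWidth  = caps.MaxTextureWidth;   // commonly 2048 or 4096
    DWORD maxHeight = caps.MaxTextureHeight;

    // Power-of-two restrictions:
    BOOL pow2Only    = ( caps.TextureCaps & D3DPTEXTURECAPS_POW2 ) != 0;
    BOOL conditional = ( caps.TextureCaps & D3DPTEXTURECAPS_NONPOW2CONDITIONAL ) != 0;
    // pow2Only set, conditional clear -> power-of-two dimensions only
    // pow2Only set, conditional set   -> non-power-of-two allowed, with
    //                                    restrictions (e.g. no mip-maps,
    //                                    clamp addressing only)
    // pow2Only clear                  -> no dimension restriction

    // Some (typically older) hardware also requires width == height:
    BOOL squareOnly = ( caps.TextureCaps & D3DPTEXTURECAPS_SQUAREONLY ) != 0;
}
```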
The D3DFORMAT enumeration contains a huge number of different resource formats, but most hardware only allows you to use a subset of these. Specifically, not all texture formats can be used for all types of usage. The IDirect3D9::CheckDeviceFormat() function is used to determine which formats are available for which resource types and usage patterns. Depending on the intended usage it is often possible to create a number of "fallback" choices using this function such that a texture will always be created even if it is not necessarily the best/desired choice:
// Parameters other than the format being tested are omitted for clarity:
if( SUCCEEDED( CheckDeviceFormat( D3DFMT_A8R8G8B8 ) ) )
{
    // First choice: we can use A8R8G8B8
}
else if( SUCCEEDED( CheckDeviceFormat( D3DFMT_A8B8G8R8 ) ) )
{
    // Second choice: we can use A8B8G8R8
}
else if( SUCCEEDED( CheckDeviceFormat( D3DFMT_A2R10G10B10 ) ) )
{
    // Third choice: we can use A2R10G10B10
}
else
{
    // We don't support *any* of the three formats we just
    // tested. Either continue trying different ones or
    // return an error at this point...
}
The above method is particularly important when targeting multiple types of hardware - not all configurations allow the same texture formats, so providing a fallback can be important.
Another important part to check is the intended usage of the texture resource. This is done by passing one (or more) of the D3DUSAGE or D3DUSAGE_QUERY enumerations into the Usage parameter of CheckDeviceFormat(). A common usage that you must check is when creating a render target (a texture that can be used as both an input and output for the pipeline, useful in many types of effect) - D3DUSAGE_RENDERTARGET should be used here.
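A sketch of such a check, wrapped in a hypothetical helper; 'adapterFormat' should be the current display-mode format (e.g. D3DFMT_X8R8G8B8):

```cpp
#include <d3d9.h>

// Illustrative helper: test whether 'targetFormat' supports
// render-target usage on the default (HAL) adapter.
BOOL CanCreateRenderTarget( IDirect3D9* pD3D,
                            D3DFORMAT adapterFormat,
                            D3DFORMAT targetFormat )
{
    return SUCCEEDED( pD3D->CheckDeviceFormat(
        D3DADAPTER_DEFAULT,
        D3DDEVTYPE_HAL,
        adapterFormat,
        D3DUSAGE_RENDERTARGET,   // the intended usage being queried
        D3DRTYPE_TEXTURE,
        targetFormat ) );
}
```

Only if this succeeds is it safe to call CreateTexture() with D3DUSAGE_RENDERTARGET for that format.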
When doing any High Dynamic Range Rendering (HDRR) you will find that there are 16-bit (half) precision and 32-bit (single) precision floating-point formats available. Two basic features available with almost every other format are often lacking with these texture formats - filtering and blending. Calling CheckDeviceFormat() with D3DUSAGE_QUERY_FILTER and/or D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING determines whether filtering and frame-buffer blending are supported. On D3D9-generation hardware it is common for the half-precision formats to support filtering/blending while the single-precision formats do not - it can therefore be better to choose a half-precision format instead of single-precision.
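These queries can be sketched as follows; the helper name is an assumption, and 'adapterFormat' is again the current display-mode format:

```cpp
#include <d3d9.h>

// Illustrative helper: check filtering and post-pixel-shader blending
// support for a floating-point format.
void QueryHDRFormatSupport( IDirect3D9* pD3D, D3DFORMAT adapterFormat )
{
    const D3DFORMAT hdrFormat = D3DFMT_A16B16G16R16F;  // 16-bit (half) per channel

    BOOL canFilter = SUCCEEDED( pD3D->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, adapterFormat,
        D3DUSAGE_QUERY_FILTER, D3DRTYPE_TEXTURE, hdrFormat ) );

    BOOL canBlend = SUCCEEDED( pD3D->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, adapterFormat,
        D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING, D3DRTYPE_TEXTURE, hdrFormat ) );

    // Repeating the checks with D3DFMT_A32B32G32R32F will often show the
    // single-precision format failing one or both queries on D3D9 hardware.
}
```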
The core Direct3D runtime has no functionality for creating textures from images stored on disk or in memory, but D3DX has comprehensive support via its Texturing Functions. Familiarity with the D3DX functions is useful and can save you a lot of time! It is worth noting that the "FromFileInMemory" functions can be very useful when paired with a virtual file system; if, for example, you store your files in a compressed/encrypted archive you can load them into memory as appropriate and still have D3DX load/create a texture.
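As a sketch of the archive scenario, the in-memory variant takes a raw buffer instead of a file name; the helper name and buffer parameters are assumptions:

```cpp
#include <d3d9.h>
#include <d3dx9.h>

// Illustrative helper: create a texture from an image file already held
// in memory, e.g. after extracting it from a compressed/encrypted archive.
HRESULT LoadTextureFromArchiveData( IDirect3DDevice9* pDevice,
                                    const void* pData, UINT dataSize,
                                    IDirect3DTexture9** ppTexture )
{
    return D3DXCreateTextureFromFileInMemory(
        pDevice,
        pData,      // the image file's bytes, exactly as read from disk
        dataSize,   // size of that buffer in bytes
        ppTexture );
}
```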
A common source of confusion with the D3DX functions is that they can change the parameters passed in - and if defaults are requested they might not give the expected results. In particular, if a source image has non-power-of-two dimensions the D3DX functions will round the image up (with filtering) to the nearest power-of-two dimension. In some cases this can introduce blurry results or unexpected values when manipulating the resource. Specifying D3DX_DEFAULT_NONPOW2 for the height and/or width will avoid this behaviour, but be prepared for the call to fail if the device does not allow non-power-of-two dimensions (although, as previously remarked, this is unlikely).
D3DX only supports the image formats listed in the D3DXIMAGE_FILEFORMAT enumeration. This covers the majority of uses, but it is worth noting that not all formats are supported - GIF being one that some people still insist on using. In some cases D3DX will have to convert the incoming data into a pixel format supported by Direct3D - specifying D3DFMT_UNKNOWN will allow D3DX to choose the most appropriate (and fail if none can be determined). Specifying a particular format does not guarantee the returned texture is of that format - if a conversion or support is unavailable an alternative will be selected. Using the D3DFMT_FROM_FILE flag can be useful, but if conversion or support is unavailable the call will fail.
As mentioned in the previous paragraphs, D3DX may well choose different parameters where appropriate - this causes problems for people who assume that data will be loaded in a known format (although remember that assumptions about device capabilities are never sensible!). It can be useful to call D3DXGetImageInfoFromFile() and/or D3DXCheckTextureRequirements() before attempting to load a texture. Several of the D3DX functions can also return a D3DXIMAGE_INFO structure describing the source image; comparing this against the texture that was actually created (e.g. via IDirect3DTexture9::GetLevelDesc()) makes any differences easy to spot.
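Putting that together as a hedged sketch; the helper name and file name are assumptions:

```cpp
#include <d3d9.h>
#include <d3dx9.h>

// Illustrative helper: inspect the source image first, then compare it
// against what D3DX actually created.
HRESULT LoadAndVerifyTexture( IDirect3DDevice9* pDevice, const char* fileName )
{
    D3DXIMAGE_INFO info;
    HRESULT hr = D3DXGetImageInfoFromFile( fileName, &info );
    if( FAILED( hr ) )
        return hr;
    // info.Width, info.Height and info.Format describe the on-disk image

    IDirect3DTexture9* pTexture = NULL;
    hr = D3DXCreateTextureFromFile( pDevice, fileName, &pTexture );
    if( FAILED( hr ) )
        return hr;

    D3DSURFACE_DESC desc;
    pTexture->GetLevelDesc( 0, &desc );
    // Any difference between desc.Width/Height/Format and the values in
    // 'info' shows that D3DX changed the parameters - e.g. rounding up to
    // a power-of-two dimension or converting the pixel format.

    pTexture->Release();
    return S_OK;
}
```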