Loading high resolution texture?

14 comments, last by amarhys 18 years, 1 month ago

The D3DX DDS loader is powerful, especially if you have the mipmaps precalculated in the file (or if no mipmaps are needed and you don't ask the function to create them).
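As an illustration (a minimal sketch, not from the thread; the device pointer and file name are placeholders), the extended D3DX call can be told to take the size, format and mip chain exactly as stored in the DDS file, so no filtering or conversion happens at load time:

// Sketch only: load a DDS with its precalculated mipmaps, without conversion.
// pDevice is assumed to be a valid IDirect3DDevice9*; the file name is made up.
IDirect3DTexture9* pTexture = NULL;
HRESULT hr = D3DXCreateTextureFromFileEx(
    pDevice,
    TEXT("diffuse.dds"),
    D3DX_FROM_FILE,      // width as stored in the file
    D3DX_FROM_FILE,      // height as stored in the file
    D3DX_FROM_FILE,      // use the mip levels already stored in the file
    0,                   // no special usage
    D3DFMT_FROM_FILE,    // keep the file's pixel format (e.g. DXT1), no conversion
    D3DPOOL_MANAGED,
    D3DX_FILTER_NONE,    // no resampling of the top level
    D3DX_FILTER_NONE,    // no mipmap generation
    0,                   // no color key
    NULL, NULL,
    &pTexture);
if (FAILED(hr))
{
    // handle the error (e.g. the device does not support the file's format as-is)
}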

So, first try the D3DX loader, and then, if you feel that it is the bottleneck, you can write your own.

It is possible to create DDS textures from memory too, so you can have a background streamer thread load the textures that will be needed soon.
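A minimal sketch of that in-memory path (the buffer variables are assumptions; the streaming thread itself is left out), assuming the DDS file has already been read into a memory buffer:

// Sketch only: create a texture from a DDS image that a background streamer
// thread has already read into memory. fileData/fileSize are assumed to hold
// the complete DDS file; pDevice is a valid IDirect3DDevice9*.
IDirect3DTexture9* pTexture = NULL;
HRESULT hr = D3DXCreateTextureFromFileInMemory(
    pDevice,
    fileData,   // pointer to the DDS file image in memory
    fileSize,   // size of that image in bytes
    &pTexture);
if (FAILED(hr))
{
    // handle the error
}

Note that with Direct3D 9 the texture creation itself should be done on the thread that owns the device (or the device has to be created with D3DCREATE_MULTITHREADED); the background thread would typically only do the file I/O.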
Quote:Original post by kusma
Yes, but that's mostly with texture coordinate wrapping and such, not really a problem when mapping a cube.


Missing mipmaps are a much bigger problem. Perhaps less so in this particular use case, but in general I would not recommend using textures without mipmaps.
Using non-power-of-2 textures is still a bad idea. A lot of cards - particularly the ones that first started being able to do NP2 textures - just create a power-of-2 texture internally and tell you it's an NP2 one. The best way to check is to lock the texture and look at the pitch. If you create a 600x600x32 texture and the pitch is around 4096 instead of around 2400, it's likely that the driver created a power-of-2 texture internally.
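A rough sketch of that check (the device pointer is an assumption; a tightly packed 600x600 X8R8G8B8 texture has a pitch of 600 * 4 = 2400 bytes):

// Sketch only: create a 600x600 32-bit texture and inspect the pitch of level 0.
// pDevice is assumed to be a valid IDirect3DDevice9*.
IDirect3DTexture9* pTexture = NULL;
if (SUCCEEDED(pDevice->CreateTexture(600, 600, 1, 0, D3DFMT_X8R8G8B8,
                                     D3DPOOL_MANAGED, &pTexture, NULL)))
{
    D3DLOCKED_RECT lr;
    if (SUCCEEDED(pTexture->LockRect(0, &lr, NULL, D3DLOCK_READONLY)))
    {
        // Pitch around 2400: the surface really is 600 texels wide.
        // Pitch around 4096: the driver most likely rounded up to 1024 (power of 2).
        bool probablyPadded = (lr.Pitch >= 4096);
        pTexture->UnlockRect(0);
    }
    pTexture->Release();
}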

Anyway, for backwards compatibility, powers of 2 are good.

Demirug: Ah, I didn't know that, cool. What card is that exactly? The X1800? (I know very little of ATi cards, despite having one myself [smile])
Quote:Original post by Evil Steve
Demirug: Ah, I didn't know that, cool. What card is that exactly? The X1800? (I know very little of ATi cards, despite having one myself [smile])


With new cards coming out every six months or even faster, it is not that easy to stay up to date. I only know this because it is part of my job as a technical writer for a German print magazine.

But to come back to your question: all cards that are based on R5xx and RV5xx GPUs support it. Normally these cards are named Radeon X1xxx, where xxx stands for a 3-digit number like 300, 600, 800 or 900.


Thanks again to all of you for your answers.

When I load a 1536x1536 texture, D3DX (or my video card driver, I don't know, I am not very familiar with D3D) internally creates a 2048x2048 texture (I got this size from the width and height parameters of the associated surface descriptor). Then, to map a face of a cube, I have to use 0.0f and 0.75f as the min and max texture coordinates.
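For reference, a small sketch of how that can be queried and turned into texture coordinates (variable names are placeholders):

// Sketch only: read back the size the driver actually allocated and derive the
// maximum texture coordinates for a 1536x1536 image stored in it.
D3DSURFACE_DESC desc;
pTexture->GetLevelDesc(0, &desc);           // desc.Width/Height may be 2048, not 1536

float maxU = 1536.0f / (float)desc.Width;   // 1536 / 2048 = 0.75
float maxV = 1536.0f / (float)desc.Height;
// Map the cube face with u in [0.0f, maxU] and v in [0.0f, maxV].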

I am going to try the D3DX DDS loader.

Amarhys

Hi again,

I did not notice any significant improvement when loading a DDS file instead of a JPG file. It still takes about 3 seconds to load a 4096x4096 texture. Maybe I did not do it correctly:

1. I created a DDS texture (DXT1 format) from the original JPEG texture in Paint Shop Pro, using the plugin provided with the D3DX SDK.

2. I used the same function as for JPEG (i.e. CreateTextureFromFile, roughly as in the sketch below), specifying the DDS file instead of the JPEG file (to simplify the test, I used individual files rather than files embedded in a global database)

... and there was no improvement in loading time.
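For reference, step 2 roughly corresponds to a call like this (a sketch with placeholder names, not the actual code):

// Sketch only: the simple D3DX helper used in step 2. pDevice is assumed to be
// a valid IDirect3DDevice9*; the file name is made up.
IDirect3DTexture9* pTexture = NULL;
HRESULT hr = D3DXCreateTextureFromFile(pDevice, TEXT("planet_4096.dds"), &pTexture);
// Note: by default this helper builds a complete mipmap chain. If the DDS was
// saved without mipmaps, generating them for a DXT1 texture (decompress, filter,
// recompress every level) can easily dominate the load time, which would explain
// seeing little or no speed-up over the JPEG.

The extended call shown earlier in the thread (D3DX_FROM_FILE / D3DX_FILTER_NONE) avoids that work when the mip chain is already stored in the DDS file.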

Maybe I forgot something ... the D3DX SDK documentation is a little bit unclear for newbies ... I saw some chapters describing the availability of hardware decompression for textures (video cards can directly use DDS compressed textures). How does it work? Is it transparent from a software point of view, or should I create a D3D device with some specific options to make this feature available?
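(A minimal sketch, for reference, of how this can be queried; pD3D, the adapter and the display format are assumptions:)

// Sketch only: no special device creation options are involved; support for
// compressed textures is a per-format capability that can be queried like this.
// pD3D is assumed to be a valid IDirect3D9*.
HRESULT hr = pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT,
                                     D3DDEVTYPE_HAL,
                                     D3DFMT_X8R8G8B8,   // current display format
                                     0,                 // no special usage
                                     D3DRTYPE_TEXTURE,
                                     D3DFMT_DXT1);
if (SUCCEEDED(hr))
{
    // The GPU samples DXT1 natively; the compressed data is uploaded as-is.
}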

Thank you in advance for your answers.

Regards
Amarhys

This topic is closed to new replies.
