Loading high-resolution textures?

amarhys    122
Hi, I am working on a Myst-like game (the name of the game is Amarhys; I have already posted topics about it on GameDev.net) and I have just finished implementing a 360° panoramic interface (I use a cube with the player's point of view at its center). The six view textures are rendered in 3DS Max (each view is 1536x1536, the minimum resolution for an acceptable look). I concatenated all these views into a single 4096x4096 texture that I use to map the six faces, playing with texture coordinates.

However, during the game, when I click to change from one view to another, it takes about 3 seconds to display the new environment. These 3 seconds include:
- Loading the texture from the HDD into memory (I cannot load the file directly into a texture with D3DXCreateTextureFromFileEx because the JPEG is embedded in a global data file). The JPEG texture is about 3 MB.
- Creating the texture from the loaded JPEG with D3DXCreateCubeTextureFromFileInMemoryEx. The texture is loaded into video memory (D3DPOOL_MANAGED), with only one mipmap level and no filtering (D3DX_FILTER_NONE for both Filter and MipFilter).
- Mapping the texture onto the cube (the vertex buffer is already created in video memory and is not updated; only the texture is changed with D3DDevice->SetTexture).

I still have to run some additional tests, but I guess 80% of the time is spent in the texture-creation phase (after all, 3 MB is a small amount of data to load from disk, and once the texture is in video memory I did not notice any slowdown when the player rotates the panoramic view).

So my question is: how do I optimize the texture-creation phase?
- Maybe I should not create a new texture each time, but instead load the JPEG into an existing one with D3DXLoadSurfaceFromFileInMemory?
- Maybe I should load the texture into system memory (and transfer it to video memory later)? Note: the texture is about 50 MB when uncompressed.

Also, if I want to prefetch textures (when I reach a view, I could load the panoramic textures of all neighbouring views in advance), I have to load them into system memory (video memory is too small).

Thank you in advance for your answers (and for spending the time to read this topic).

Regards,
Amarhys

Evil Steve    2017
There are a few things here that could be slow:
  • Loading from the HDD
  • Decompressing the JPEG
  • Creating the texture surface
  • Uploading it to video memory

If I were you, I'd store the texture as a system-memory texture, then upload it to video memory when needed. If you're using the managed pool, D3D will take care of moving the texture in and out of video memory, and you can give it hints (IDirect3DResource9::PreLoad() to pull a resource into video memory ahead of time, and IDirect3DDevice9::EvictManagedResources() to flush).

    Calin    419
As Evil Steve already pointed out: don't load them from the hard drive at run time. Load everything at the beginning of the game and then display your textures as needed:

if (view 1 is active)
{
    draw the texture for view 1
}
else if (view 2 is active)
{
    draw the texture for view 2
}

and so on.

    [Edited by - Calin on March 15, 2006 9:52:11 AM]

    Scoob Droolins    258
Watch out: there are plenty of video cards out there that cannot handle a 4096x4096 texture. Assuming you have more of these big textures than can fit into memory at once, why not use DXT compression in a DDS file (DXT1, since there is no alpha)? For maximum loading speed, you can write your own DDS file loader. It's not too difficult to create a DDS loader that is much faster than the D3DX version, since you can bypass all the filtering and re-formatting work (you can blast a DXT-formatted DDS file into memory with a single memcpy). Also, consider using a cube map (again in DDS format) with each face 1024x1024.

    Evil Steve    2017
    Quote:
    Original post by Scoob Droolins
Watch out: there are plenty of video cards out there that cannot handle a 4096x4096 texture.
In fact, all ATi cards only support up to 2048x2048 textures. At least, all the ATi cards listed in the DirectX caps spreadsheet (in the DX SDK, up to the X850) only do 2048x2048.

    amarhys    122
    Thanks to all of you for your answers.

The problem is that whatever solution I use, 1024x1024 is not enough resolution to texture each face of the cube. I found that 1536x1536 was the minimum "acceptable" size, which would mean six textures of 2048x2048 in memory; that's why I created the single 4096x4096 "optimized" texture, which represents less data (in bytes).

Maybe the way I implemented the cube texturing is wrong, and with some additional texture filtering I could get away with 1024x1024 textures? I used ANISOTROPIC for the MAG, MIN and MIP filters. I am a newbie in Direct3D and don't know any other method to improve texture-mapping quality. Maybe an anti-aliasing effect? How would I implement that in Direct3D?

I work in microelectronics, so I know very well why texture sizes are limited to powers of 2, but sometimes it's irritating [smile]

    Amarhys

    kusma    170
All modern GPUs support non-power-of-two textures, so there's really not much reason to use power-of-two sizes...

    Demirug    884
    Quote:
    Original post by Evil Steve
    Quote:
    Original post by Scoob Droolins
Watch out: there are plenty of video cards out there that cannot handle a 4096x4096 texture.
In fact, all ATi cards only support up to 2048x2048 textures. At least, all the ATi cards listed in the DirectX caps spreadsheet (in the DX SDK, up to the X850) only do 2048x2048.


The latest generation (the one that supports Shader Model 3) supports 4096x4096, as this is part of the Shader Model 3 definition.

    Demirug    884
    Quote:
    Original post by kusma
    all modern gpus support non power of two sized textures. so there's really not much reason to use power of two sizes...


Non-power-of-two texture support is still limited on a wide range of GPUs.

    kusma    170
    Quote:
    Original post by Demirug
Non-power-of-two texture support is still limited on a wide range of GPUs.


Yes, but that's mostly about texture-coordinate wrapping and such, which is not really a problem when mapping a cube.

    Demus79    362

The D3DX DDS loader is quite fast, especially if you have the mipmaps precalculated in the file (or no mipmaps are needed and you don't ask the function to create them).

So, first try the D3DX loader, and then, if you feel it is the bottleneck, you can write your own.

It is possible to create DDS textures from memory too, so you could have a background streamer thread load the textures that will be needed soon.

    Demirug    884
    Quote:
    Original post by kusma
Yes, but that's mostly about texture-coordinate wrapping and such, which is not really a problem when mapping a cube.


The lack of mipmaps is a much bigger problem. Maybe less so in this particular case, but in general I would not recommend using textures without mipmaps.

    Evil Steve    2017
Using non-power-of-2 textures is still a bad idea. A lot of cards, particularly the ones that first supported NP2 textures, just create a power-of-2 texture internally and tell you it's an NP2 one. The best way to check is to lock the texture and look at the pitch: if you create a 600x600 32-bit texture and the pitch is around 4096 instead of around 2400, it's likely the driver created a power-of-2 texture internally.

Anyway, for backwards compatibility, powers of 2 are good.

    Demirug: Ah, I didn't know that, cool. What card is that exactly? The X1800? (I know very little of ATi cards, despite having one myself [smile])

    Demirug    884
    Quote:
    Original post by Evil Steve
    Demirug: Ah, I didn't know that, cool. What card is that exactly? The X1800? (I know very little of ATi cards, despite having one myself [smile])


With new cards every six months, or even faster, it is not easy to stay up to date. I only know this because it is part of my job as a technical writer for a German print magazine.

But to come back to your question: all cards based on the R5xx and RV5xx GPUs support it. Normally these cards are named Radeon X1xxx, where xxx stands for a three-digit number like 300, 600, 800 or 900.


    amarhys    122
    Thanks again to all of you for your answers.

When I load a 1536x1536 texture, D3DX (or my video card driver, I don't know which; I am not very familiar with D3D) internally creates a 2048x2048 texture (I got this size from the width and height in the associated surface descriptor). So, to map a face of the cube, I have to use 0.0f and 0.75f as the min and max texture coordinates.

    I am going to try D3DX DDS loader.

    Amarhys

    amarhys    122
    Hi again,

I did not notice any significant improvement loading a DDS file instead of a JPEG; it still takes about 3 seconds to load a 4096x4096 texture. Maybe I did not do it correctly:

1. I created a DDS texture (DXT1 type) from the original JPEG in Paint Shop Pro, using the plugin provided with the DirectX SDK.

2. I used the same function as for the JPEG (i.e. CreateTextureFromFile), specifying the DDS file instead of the JPEG file (to simplify the test, I used individual files rather than files embedded in a global database).

... and no improvement in loading time.

Maybe I forgot something. The DirectX SDK documentation is a little unclear for newbies. I saw some chapters describing hardware decompression support for textures (video cards can use DXT-compressed textures directly). How does that work? Is it transparent from a software point of view, or should I create the Direct3D device with some specific options to make this feature available?

    Thank you in advance for your answers.

    Regards
    Amarhys

