[DirectX9] How to deal with many textures

Started by fanste · 3 comments, last by fanste 12 years, 7 months ago
Hey,

I'm new to DirectX & Co. and I have some questions about textures in DirectX9.


  1. Where does DirectX keep textures that were loaded with D3DXCreateTexture[FromFileInMemory]? Will they stay permanently in VRAM, or will DirectX keep them in normal RAM until they are used?
  2. If they are kept in VRAM, is it recommended to load them dynamically? What would be the best way to do that?
    1. Use D3DXCreateTexture() and write the pixel data from RAM onto the surface (higher RAM consumption because of the raw pixel data?), or
    2. keep a copy of the whole image in RAM (just the binary loaded from disk) and create the texture with D3DXCreateTextureFromFileInMemory() as soon as it is used?

  3. Other ways to do it?


I use FreeImage to load the textures so I can rescale them, etc., if the hardware is too old to handle the image size. So I would actually use 2.1, because I don't know how to get the complete image (with header, etc.) in order to use *FromFileInMemory.

Background: it is a small test project, a program that displays images with blend effects when the image changes. The question now is how to handle the textures, as they may be up to 2048px wide, which is quite a lot of data...

Sorry for my english, but I hope you understand, what I want ;)

fanste
Where does DirectX keep textures that were loaded with D3DXCreateTexture[FromFileInMemory]? Will they stay permanently in VRAM, or will DirectX keep them in normal RAM until they are used?
It depends on a combination of the Pool and Usage parameters.
The bottom line is that you should read the documentation and mix-and-match; the information is sometimes a bit scattered. D3DPOOL_DEFAULT is typically video memory, D3DPOOL_SYSTEMMEM is RAM, and D3DPOOL_MANAGED is a mix of both: the runtime will try to cache the contents for you. D3DUSAGE_DYNAMIC will move things closer to the CPU; for D3DPOOL_DEFAULT, that will be "AGP memory". If I recall correctly, it is invalid for SYSTEMMEM and maybe MANAGED.
When is the data uploaded? I don't remember D3D's exact semantics. Based on my observations, my driver seems to upload them right away, but take that with a grain of salt.
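As a sketch of how those pool/usage combinations look in code (the file name and device variable are placeholders, not from the thread):

```cpp
#include <d3dx9.h>

// Sketch only: typical Pool/Usage choices when creating a D3D9 texture.
// Assumes an existing IDirect3DDevice9* named device; "slide.png" is a
// placeholder path.
IDirect3DTexture9* tex = NULL;
HRESULT hr = D3DXCreateTextureFromFileEx(
    device, "slide.png",
    D3DX_DEFAULT, D3DX_DEFAULT,   // width/height taken from the file
    D3DX_DEFAULT,                 // full mip chain
    0,                            // Usage: 0 for a static (immutable) texture
    D3DFMT_A8R8G8B8,
    D3DPOOL_MANAGED,              // runtime keeps a system-memory backing copy
    D3DX_DEFAULT, D3DX_DEFAULT,   // load filter / mip filter
    0, NULL, NULL, &tex);

// A texture you intend to rewrite frequently would instead use
//   Usage = D3DUSAGE_DYNAMIC, Pool = D3DPOOL_DEFAULT
// (dynamic usage is not valid for the managed pool).
```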

If they are kept in VRAM, is it recommended to load them dynamically?
I'm not sure what you mean by "dynamically" here, but DYNAMIC means "I am going to upload this multiple times". Textures that are guaranteed to be immutable, such as diffuse textures, should NOT be dynamic. I can't say more, as I don't use D3DX for that.

I use FreeImage to load the textures so I can rescale them, etc., if the hardware is too old to handle the image size. So I would actually use 2.1, because I don't know how to get the complete image (with header, etc.) in order to use *FromFileInMemory.
That's just bad, you should work on that.

The question now is how to handle the textures, as they may be up to 2048px wide, which is quite a lot of data...
The nice thing is that you don't have to fear the size: you just handle them the same way as small textures (as long as hardware limits are not exceeded). What's the problem, really?
Of course, loading a 2048^2 texture is going to take a while... you might want to look into compressed texture formats, but you might also just wait; for a simple application that's not really a problem. Most people won't provide you DXT-compressed images in the first place.
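If you do go the compressed route, D3DX can convert an ordinary image to a DXT format at load time. A sketch, assuming an existing device pointer and a placeholder file name:

```cpp
#include <d3dx9.h>

// Ask D3DX to convert the source image to DXT1 while loading.
// DXT1 stores 4 bits per pixel, so a 2048x2048 top level shrinks from
// 16 MB (A8R8G8B8, 4 bytes/pixel) to 2 MB, plus the mip levels.
IDirect3DTexture9* tex = NULL;
D3DXCreateTextureFromFileEx(
    device, "slide.jpg",
    D3DX_DEFAULT, D3DX_DEFAULT, D3DX_DEFAULT,
    0, D3DFMT_DXT1, D3DPOOL_MANAGED,
    D3DX_DEFAULT, D3DX_DEFAULT, 0, NULL, NULL, &tex);
```

Note that DXT is lossy, so for a photo viewer you would want to check the quality on a few sample images first.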

Previously "Krohm"

Thanks for your reply.

The problem I have is that there may be 20 or more images, each up to 2048px. If I load & create all the textures at application startup, will DX move the created textures in and out of video memory itself (let's say with POOL_MANAGED), or do I have to do it myself? On older video cards, the image data may be too much to fit into video RAM all at once.

By "dynamically" I meant that I load the image data into RAM, create a texture from it when it is required, and release it after usage. When it is used again, we only have to create the texture, without loading the data from disk.

@FreeImage: Any tips on where I could find the needed information? I searched for hours and only found solutions working with a locked surface...
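For reference, FreeImage can re-encode a bitmap into an in-memory "file" via its FIMEMORY streams, which produces exactly the kind of buffer (header included) that *FromFileInMemory expects. A sketch with error handling omitted; `dib` and `device` are assumed to exist already:

```cpp
#include <d3dx9.h>
#include <FreeImage.h>

// Assumes: FIBITMAP* dib        (already loaded/rescaled with FreeImage)
//          IDirect3DDevice9* device
// Re-encode the bitmap as a PNG file in memory...
FIMEMORY* mem = FreeImage_OpenMemory();
FreeImage_SaveToMemory(FIF_PNG, dib, mem, 0);

// ...get a pointer to the encoded bytes...
BYTE* data = NULL;
DWORD size = 0;
FreeImage_AcquireMemory(mem, &data, &size);

// ...and hand the complete image (header and all) to D3DX.
IDirect3DTexture9* tex = NULL;
D3DXCreateTextureFromFileInMemory(device, data, size, &tex);

FreeImage_CloseMemory(mem);  // safe once D3DX has copied the data
```

The round trip through PNG costs some encode time, so option 2.1 (filling a locked surface directly) avoids that overhead at the price of keeping raw pixels around.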
Using the managed pool, D3D will move textures in and out as required. There's IDirect3DTexture9::PreLoad, which is only valid for the managed pool and tells D3D "I'm going to be using this managed resource shortly, so put it into video RAM now". Conversely, IDirect3DDevice9::EvictManagedResources will brutally shove all managed pool resources (including textures) back out to CPU memory. There's also texture prioritization, but I've personally never seen any mileage from it (other folks' experience may differ).

Between these you should be able to get some simple management of which textures go into video RAM (and when), but I'd personally recommend that you hold off on any of this until you know for certain that you have a problem you need to deal with. D3D will do its own texture management for you (and you can use D3DCREATE_DISABLE_DRIVER_MANAGEMENT at device creation time to select whether the runtime does it or the vendor-provided driver does it, if appropriate), and you may well find that it's more than adequate for your requirements without any extra work.
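Those two calls in context, as a minimal sketch; `textures`, `nextSlide`, and `device` are assumed names, not from the thread:

```cpp
#include <vector>
#include <d3dx9.h>

// Assumes: std::vector<IDirect3DTexture9*> textures, all in D3DPOOL_MANAGED,
//          IDirect3DDevice9* device, and nextSlide = index of the slide
//          that will be drawn next.
void HintNextSlide(std::vector<IDirect3DTexture9*>& textures,
                   IDirect3DDevice9* device, size_t nextSlide)
{
    // Ask D3D to bring the upcoming texture into video RAM before it is
    // actually needed, so the blend transition doesn't hitch.
    textures[nextSlide]->PreLoad();

    // If VRAM is tight (e.g. on an old card after many large slides),
    // push all managed resources back to system memory and let D3D
    // re-upload them on demand.
    device->EvictManagedResources();
}
```

In a real slideshow you would call PreLoad when a transition is about to start, and reserve EvictManagedResources for genuine low-memory situations rather than every frame.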

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

Hey,

I just played around a bit with your information and it looks quite good now, thanks.

But now I'm facing a new problem that I don't understand:
The textures are bigger than they need to be, so they get rescaled when they are rendered. But they look really bad: a bit blocky, with distorted colors. I tried different filters (point, linear, anisotropic), but they don't change anything.
TheDXC->GetDXDevice()->SetTexture(p_stage, m_gpuImage);
TheDXC->GetDXDevice()->SetSamplerState(p_stage, D3DSAMP_MAXANISOTROPY, DXCaps::GetInstance()->GetMaxAnisotopy());
TheDXC->GetDXDevice()->SetSamplerState(p_stage, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
TheDXC->GetDXDevice()->SetSamplerState(p_stage, D3DSAMP_MAGFILTER, D3DTEXF_ANISOTROPIC);
TheDXC->GetDXDevice()->SetSamplerState(p_stage, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);

I still create the texture with D3DXCreateTexture (A8R8G8B8, MipLevels = 1, Usage = 0, Pool = Managed) and fill it directly after a Lock(). If I rescale the image with FreeImage and create the texture the same way, it works perfectly; no artifacts or anything similar. Any idea what I could have done wrong?
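For what it's worth, MipLevels = 1 gives the texture only its top level, so D3DSAMP_MIPFILTER has nothing to work with and heavy minification falls back to sampling the full-size level, which looks blocky. One way to get a proper mip chain with this creation path (a sketch under those assumptions; `width`, `height`, and `device` are placeholders):

```cpp
#include <d3dx9.h>

// Assumes: IDirect3DDevice9* device, and width/height of the loaded image.
// Create the texture with a full mip chain (MipLevels = 0) instead of 1.
IDirect3DTexture9* tex = NULL;
D3DXCreateTexture(device, width, height,
                  0 /* 0 = full mip chain */, 0 /* Usage */,
                  D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &tex);

// ...LockRect() level 0, copy the pixels in, UnlockRect() as before...

// Then let D3DX generate the smaller levels by filtering down from level 0.
D3DXFilterTexture(tex, NULL, 0 /* source level */, D3DX_FILTER_LINEAR);
```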

That's the way DX gets initialized:

D3DPRESENT_PARAMETERS t_pp;
memset(&t_pp, 0, sizeof(D3DPRESENT_PARAMETERS));

t_pp.Windowed = TRUE;
t_pp.SwapEffect = D3DSWAPEFFECT_DISCARD;

t_dxContext->m_direct3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, TheApp->GetWindowHandle(),
D3DCREATE_SOFTWARE_VERTEXPROCESSING, &t_pp, &t_dxContext->m_direct3DDevice);


D3DXMATRIX t_ortho2D;
D3DXMATRIX t_identity;

D3DXMatrixOrthoLH(&t_ortho2D, TheApp->GetWindowWidth(), TheApp->GetWindowHeight(), 0.0f, 1.0f);
D3DXMatrixIdentity(&t_identity);

t_dxContext->m_direct3DDevice->SetTransform(D3DTS_PROJECTION, &t_ortho2D);
t_dxContext->m_direct3DDevice->SetTransform(D3DTS_WORLD, &t_identity);
t_dxContext->m_direct3DDevice->SetTransform(D3DTS_VIEW, &t_identity);
t_dxContext->m_direct3DDevice->SetRenderState(D3DRS_LIGHTING, FALSE);

