Texture memory speed

I have an application that uses enough large textures on my video card (NVidia 8800) to run out of video memory. I don't need all of them resident at once, but it helps to have as many of my textures as possible cached on the card, and reducing texture size or using fewer textures is not an option.

I have read that video cards typically use their fastest texture memory first. My tests support this: the textures allocated last are noticeably slow (the app uses many shading passes), while if the first textures are disposed of and replaced, the replacements continue to perform fast. I never want the application to slow down because of where a texture happened to land.

Are there any reasonable strategies for ensuring that my textures are always in the fastest memory? I know that's a device implementation detail that isn't exposed through D3D, but I wondered if I could query the device for its available memory and stop allocating more at a certain point. Thanks!
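In case it helps frame the question: the closest call I've found is IDirect3DDevice9::GetAvailableTextureMem(). The sketch below shows the kind of cutoff I have in mind; the 64 MB reserve is just a placeholder, and the returned value is only a rough, rounded estimate that also counts non-local (AGP/shared) memory.

```cpp
// Rough sketch: stop creating new textures once the driver's estimate of
// free texture memory drops below a safety margin. The estimate is rounded
// and includes non-local memory, so this is a heuristic, not a guarantee
// of staying in local VRAM.
#include <d3d9.h>

const UINT kReserveBytes = 64u * 1024u * 1024u; // placeholder safety margin

bool CanAllocateMoreTextures(IDirect3DDevice9* device)
{
    return device->GetAvailableTextureMem() > kReserveBytes;
}
```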
Assuming you're using the managed pool for all your resources, D3D should take care of it all for you. The managed pool will commit textures to video memory (i.e. the default pool) when possible, and when it runs out of memory there, it'll start swapping textures out to system memory on a least recently used basis.

There are various hints you can give the D3D resource manager by calling IDirect3DResource9::PreLoad() and IDirect3DResource9::SetPriority(), which influence how it swaps resources in and out of video memory.
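A minimal sketch of how those two calls are typically used together; the priority value here is arbitrary, since only the relative ordering between resources matters. As far as I know, both calls only have an effect on managed-pool resources.

```cpp
// Hint to the D3D resource manager that this texture should stay resident:
// higher-priority managed resources are evicted later, and PreLoad() asks
// D3D to page the resource into video memory before it is first used.
#include <d3d9.h>

void KeepTextureResident(IDirect3DTexture9* texture)
{
    texture->SetPriority(1);  // default priority is 0; higher = evicted later
    texture->PreLoad();       // promote into video memory now, if possible
}
```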

The "faster type of memory" you're referring to is probably local video memory as supposed to AGP memory, which is an area of system RAM mapped by the driver. It's unlikely that you'll have any direct control over what goes where.
I'm always using the default pool for my textures because of system memory limitations. When I used the managed pool before, the app memory usage would be too high. I'll see if there are any video card settings I can access that may help. Thanks for your help.
Quote: Original post by slack
I'm always using the default pool for my textures because of system memory limitations. When I used the managed pool before, the app memory usage would be too high. I'll see if there are any video card settings I can access that may help. Thanks for your help.
Don't worry about memory usage. Really. The managed pool is there for a reason, and should always be used unless you have a very good reason not to (E.g. you're using dynamic resources). When you use the managed pool, D3D creates two copies of the texture, one in video memory (Default pool), and one in system memory. When D3D needs to swap textures in and out of video memory, it'll copy the system memory texture to the default pool. This also has the nice side effect that you don't need to worry about releasing and recreating all your resources when you reset your device - Because there's a copy in system memory, D3D can restore the default pool version from the system memory copy.
System memory is cheap these days, and paging makes your memory effectively limitless (Well, up to the 2GB address limit). There was a thread in the lounge recently, which may be of interest.

In short - use the managed pool, that's what it's for.
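For reference, here's a minimal sketch of creating a texture in the managed pool, assuming you already have your IDirect3DDevice9 pointer; the format and mip settings are just placeholders.

```cpp
// With D3DPOOL_MANAGED, D3D keeps a system-memory copy of the texture,
// handles eviction to and from video memory itself, and survives a device
// reset without the app having to recreate the resource.
#include <d3d9.h>

IDirect3DTexture9* CreateManagedTexture(IDirect3DDevice9* device,
                                        UINT width, UINT height)
{
    IDirect3DTexture9* texture = NULL;
    HRESULT hr = device->CreateTexture(
        width, height,
        1,                  // mip levels
        0,                  // usage (D3DUSAGE_DYNAMIC requires the default pool)
        D3DFMT_A8R8G8B8,
        D3DPOOL_MANAGED,    // let the D3D resource manager handle swapping
        &texture,
        NULL);
    return SUCCEEDED(hr) ? texture : NULL;
}
```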
Steve's covered it all nicely, but just to throw something else in... If you're using your 8800 on Vista, you can get at the underlying DXGI interfaces, which give a pretty complete picture of the (V)RAM breakdown. I wouldn't rely on the numbers being exact, but as a rough guide they appeared accurate in my tests...
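Roughly what I mean is the adapter description DXGI exposes; here's a sketch assuming the 8800 is the first enumerated adapter. These figures are memory budgets rather than a live free-bytes counter, so again, treat them as a guide only.

```cpp
// Query the adapter's memory breakdown through DXGI (Vista and later).
#include <dxgi.h>
#include <stdio.h>
#pragma comment(lib, "dxgi.lib")

void PrintAdapterMemory()
{
    IDXGIFactory* factory = NULL;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return;

    IDXGIAdapter* adapter = NULL;
    if (SUCCEEDED(factory->EnumAdapters(0, &adapter)))
    {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        // DedicatedVideoMemory  - local VRAM on the card
        // DedicatedSystemMemory - system RAM reserved for the adapter
        // SharedSystemMemory    - system RAM the adapter can borrow
        printf("VRAM: %Iu MB, shared: %Iu MB\n",
               desc.DedicatedVideoMemory >> 20,
               desc.SharedSystemMemory >> 20);
        adapter->Release();
    }
    factory->Release();
}
```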

hth
Jack

<hr align="left" width="25%" />
Jack Hoxley <small>[</small><small> Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]</small>

Thanks guys.

The 2 GB barrier is a serious concern for me. I have caching mechanisms in place to regulate how much data the app can hold onto, and I also need large and somewhat variable amounts of memory available for processing tasks within the app. On top of that, the textures are dynamic. Sorry I didn't make these issues more obvious!

I will take a look at the DXGI interfaces. I also considered setting the AGP aperture size to zero as a test (on another video card/system) to prevent system memory from being used, but the 8800 is PCI Express, and from what I gather the amount of shared memory is determined automatically.
