Device.AvailableTextureMemory has lost its mind?

OK, I have an ATI Mobility Radeon which is advertised as having 256 MB of RAM, but in advertising-speak that means "128 MB video memory, 128 MB shared system memory." Anyway, I was considering offering 16-bit textures as an option, but first I wanted to see how much video memory I'm actually using. So I create my device, call AvailableTextureMemory, allocate my vertices and textures, then call AvailableTextureMemory again. The results?

Before allocation: 887 MB
After allocation: 887 MB

What is going on? I'm running Vista; does that have something to do with the extraordinarily high number? How do I explain the fact that after I've allocated all my textures (which I calculated should run about 25 MB) it still shows the same free memory?
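A minimal sketch of that kind of measurement using the native D3D9 call behind Device.AvailableTextureMemory (assuming an already-created device; error handling is omitted and the 1024x1024 test texture is purely illustrative):

```cpp
// Sketch (native D3D9): GetAvailableTextureMem is the call behind
// Device.AvailableTextureMemory. Assumes `device` was already created.
#include <d3d9.h>
#include <stdio.h>

void ReportTextureMemory(IDirect3DDevice9* device)
{
    // Returns an estimate in bytes, rounded to the nearest MB.
    UINT before = device->GetAvailableTextureMem();

    IDirect3DTexture9* texture = NULL;
    // A 1024x1024 X8R8G8B8 texture with a full mip chain (~5.3 MB).
    device->CreateTexture(1024, 1024, 0, 0, D3DFMT_X8R8G8B8,
                          D3DPOOL_MANAGED, &texture, NULL);

    UINT after = device->GetAvailableTextureMem();
    printf("before: %u MB, after: %u MB\n",
           before / (1024 * 1024), after / (1024 * 1024));

    if (texture) texture->Release();
}
```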
WDDM drivers on Vista use virtualized device memory, which means an application can actually use more memory than the physical amount available on the device (since the runtime can page it out).
Thank you for your answer. I guess that explains it. But is there another way I can query the free graphics memory, or at least get a good idea how much memory my app is *actually* using, and not just the calculated minimum?
Firstly, until you render using those textures, D3D probably won't move them from the managed pool into actual video RAM. Try checking the results after rendering a few frames to see if it changes.

You can get a good approximation by adding up the sizes of all textures, vertex buffers and index buffers you create. Don't forget to include the front / back / z buffers.
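A rough bookkeeping sketch for that approach (assuming uncompressed formats; bytesPerPixel is 4 for X8R8G8B8/A8R8G8B8 and 2 for 16-bit formats, and a full mip chain adds roughly a third on top):

```cpp
// Rough estimate of a texture's footprint, including an optional full mip chain.
#include <stddef.h>

size_t EstimateTextureBytes(size_t width, size_t height,
                            size_t bytesPerPixel, int fullMipChain)
{
    size_t total = 0;
    for (;;) {
        total += width * height * bytesPerPixel;
        if (!fullMipChain || (width == 1 && height == 1))
            break;
        if (width  > 1) width  /= 2;
        if (height > 1) height /= 2;
    }
    return total;
}

// Vertex / index buffers are just element count * stride. The front, back and
// depth buffers are each roughly screenWidth * screenHeight * 4 bytes.
```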

If you have a PC with an NVIDIA card handy, then installing and using PerfHUD will show you how much you're using too.

By the way, if you want to make your textures smaller I'd suggest using DXT1 / DXT5 compression. The compression is somewhat lossy, but you can rarely tell the difference. DXT1 is 1/8th the size of an X8R8G8B8 texture. DXT5 is only 1/4 the size of A8R8G8B8, but supports an alpha channel.
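For the size arithmetic: DXT formats store each 4x4 block of pixels in 8 bytes (DXT1) or 16 bytes (DXT3/DXT5), which is where the 1/8 and 1/4 ratios come from. A quick sketch:

```cpp
// Block-compressed texture size: width and height round up to whole 4x4 blocks.
// bytesPerBlock is 8 for DXT1 and 16 for DXT3/DXT5.
#include <stddef.h>

size_t EstimateDxtBytes(size_t width, size_t height, size_t bytesPerBlock)
{
    size_t blocksX = (width  + 3) / 4;
    size_t blocksY = (height + 3) / 4;
    return blocksX * blocksY * bytesPerBlock;
}

// Example: a 1024x1024 texture is 4 MB in X8R8G8B8, 512 KB in DXT1, 1 MB in DXT5.
```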
Quote: Original post by Adam_42
By the way, if you want to make your textures smaller I'd suggest using DXT1 / DXT5 compression. The compression is somewhat lossy, but you can rarely tell the difference. DXT1 is 1/8th the size of an X8R8G8B8 texture. DXT5 is only 1/4 the size of A8R8G8B8, but supports an alpha channel.

Don't use those as-is for normal maps, though. I dunno if you're actually going to be using them, but it's worth mentioning. Look into 'DXT5 normal map compression.'
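One common form of that trick (a sketch, assuming an A8R8G8B8 source laid out as 0xAARRGGBB): move the normal's X component into the alpha channel and keep Y in green, the two channels DXT5 preserves best, then reconstruct Z in the pixel shader as sqrt(1 - x*x - y*y).

```cpp
// Sketch: pre-swizzle a normal map for DXT5 "normal map" compression (DXT5nm).
// X goes into alpha, Y stays in green; the shader rebuilds Z = sqrt(1 - x*x - y*y).
// Pixels assumed to be 0xAARRGGBB (A8R8G8B8).
#include <stddef.h>

void SwizzleForDxt5nm(unsigned int* pixels, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        unsigned int p = pixels[i];
        unsigned int x = (p >> 16) & 0xFF;  // red   = normal X
        unsigned int y = (p >> 8)  & 0xFF;  // green = normal Y
        // X into alpha, Y kept in green; red/blue are unused by the decoder.
        pixels[i] = (x << 24) | (0xFFu << 16) | (y << 8);
    }
}
```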


