George109

Computing reserved memory in the graphic card.


I'm building an engine with DirectX and C++. My resource manager needs to know the overall size of its resources, but I can't find a way to compute the size of the resources currently in the manager. The resources are Direct3D objects such as IDirect3DTexture9, ID3DXMesh, ID3DXEffect, etc. sizeof() doesn't help; it always returns the size of the pointer, not the size of the object. Is there any other way to compute the size of these objects, or to compute the memory currently reserved in graphics card memory?

There are a few bits and pieces you can put together to work out how much memory is used, but nothing accurate. For optimization purposes you're *not* supposed to know exactly how much memory is used, as that gives the GPU a bit more freedom to do things the way it wants [grin]

For textures you can grab the format and the height/width using IDirect3DTexture9::GetLevelDesc() and then compute the raw size for each mip level.
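As a rough sketch, that per-mip computation might look like the following. The function name and the bytes-per-pixel parameter are my own; this simple model only covers uncompressed formats (block-compressed DXT formats are sized per 4x4 block and would need different handling):

```cpp
#include <cassert>
#include <cstddef>

// Estimate the raw pixel-data size of a full mip chain.
// width/height are the top-level dimensions from GetLevelDesc(level 0);
// bytesPerPixel depends on the format (e.g. 4 for D3DFMT_A8R8G8B8).
std::size_t EstimateTextureBytes(std::size_t width, std::size_t height,
                                 unsigned mipLevels, std::size_t bytesPerPixel)
{
    std::size_t total = 0;
    for (unsigned level = 0; level < mipLevels; ++level) {
        total += width * height * bytesPerPixel;
        // Each mip level halves both dimensions, clamped at 1.
        if (width  > 1) width  /= 2;
        if (height > 1) height /= 2;
    }
    return total;
}
```

For a 256x256 A8R8G8B8 texture with a full 9-level mip chain this gives 349,524 bytes of raw pixel data; the driver may pad or align the actual allocation, so treat it as a lower bound.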

The same applies to vertex and index buffers: IDirect3DVertexBuffer9::GetDesc() and IDirect3DIndexBuffer9::GetDesc().
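Buffers are the easy case, since both D3DVERTEXBUFFER_DESC and D3DINDEXBUFFER_DESC carry a Size member in bytes. A resource manager can simply accumulate those as resources are registered; the tiny tracker below is a hypothetical sketch of that bookkeeping (the class name and interface are my own, not part of D3D):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical bookkeeping for a resource manager: accumulate the byte
// sizes reported at creation time (desc.Size for buffers, or a per-mip
// estimate for textures), and subtract them again on release.
class ResourceBudget {
public:
    void OnCreate(std::size_t bytes)  { m_totalBytes += bytes; }
    void OnRelease(std::size_t bytes) { m_totalBytes -= bytes; }
    std::size_t TotalBytes() const    { return m_totalBytes; }
private:
    std::size_t m_totalBytes = 0;
};
```

In real code you would feed OnCreate() the desc.Size value retrieved via GetDesc() right after creating each buffer.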

The short answer is that you'll have to do the calculations yourself - there isn't a utility function that'll do it for you [smile]

hth
Jack

Actually, there is a query that you can use to get some detailed information about what is happening on the GPU. If you create a D3DQUERYTYPE_RESOURCEMANAGER query, you can retrieve the D3DRESOURCESTATS structure. It has the following members:

BOOL  bThrashing;
DWORD ApproxBytesDownloaded;
DWORD NumEvicts;
DWORD NumVidCreates;
DWORD LastPri;
DWORD NumUsed;
DWORD NumUsedInVidMem;
DWORD WorkingSet;
DWORD WorkingSetBytes;
DWORD TotalManaged;
DWORD TotalBytes;


There is some useful info in there, and some stuff that most of us will probably never use (or know what it means [oh]). However, you can only use this query with the debug runtimes.
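To give a feel for how you might consume those numbers, here's a sketch. The struct below is a plain mirror of a few D3DRESOURCESTATS fields so the logic can be shown without creating a device, and the "pressure" heuristic itself is entirely made up, not anything from the SDK:

```cpp
#include <cassert>

// Mirrors a subset of D3DRESOURCESTATS for illustration; the real data
// comes back from a D3DQUERYTYPE_RESOURCEMANAGER query (debug runtime only).
struct ResourceStats {
    bool          bThrashing;       // manager had to evict within one frame
    unsigned long NumEvicts;        // resources evicted this frame
    unsigned long WorkingSetBytes;  // bytes of managed resources in video memory
    unsigned long TotalBytes;       // bytes of all managed resources
};

// Made-up heuristic: flag memory pressure if the manager reports thrashing,
// or if evictions happened while most of the managed set is already resident.
bool UnderMemoryPressure(const ResourceStats& s)
{
    if (s.bThrashing)
        return true;
    return s.NumEvicts > 0 && s.WorkingSetBytes * 2 > s.TotalBytes;
}
```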

There's the AvailableTextureMemory property of the Device in MDX if that helps. It only gives an estimate of course.

Quote:
Original post by DrGUI
There's the AvailableTextureMemory property of the Device in MDX if that helps. It only gives an estimate of course.


The same estimate is also available in C++ using IDirect3DDevice9::GetAvailableTextureMem.


To the OP:

As the others have mentioned, and as the documentation suggests, you should only use that result for rough decisions.



If you have your own resource manager, you have to ask yourself who "owns" the memory for each resource. The memory for an IDirect3DTexture9 interface and its underlying pixel data comes from Direct3D's own memory pool and the display driver's memory pool, rather than from your normal process heap. So really, all your resource manager should care about is tracking the interface pointer and the creation/release of the resource, rather than the memory used internally by that resource.


If you're talking about low level memory management (i.e. uploading system memory copies to video memory on demand), there are lots of things to be aware of: Memory alignment; driver allocation overhead; fragmentation; driver reformatting; specific driver bugs; AGP texturing; UMA motherboards - all reasons why you can't always rely on a precise figure for remaining video memory.

The kind of decisions you can make based on the results of things like GetAvailableTextureMem() are of the form "this hardware has at least 32MB but less than 64MB, so I'll use my medium-detail textures". You should never make decisions like "there are 768 bytes available, so I can fit a 16x16 R8G8B8 texture into that".
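That kind of coarse bucketing might look like the sketch below; the thresholds and the TextureDetail levels are illustrative choices, not values from any spec:

```cpp
#include <cassert>

enum class TextureDetail { Low, Medium, High };

// Map GetAvailableTextureMem()'s rough estimate onto coarse buckets.
// The figure is only an approximation, so only order-of-magnitude
// decisions like this are safe to base on it.
TextureDetail ChooseDetail(unsigned long availableBytes)
{
    const unsigned long MB = 1024UL * 1024UL;
    if (availableBytes >= 64 * MB) return TextureDetail::High;
    if (availableBytes >= 32 * MB) return TextureDetail::Medium;
    return TextureDetail::Low;
}
```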

The Direct3D resource manager does a very good job and takes the various unusual cases into account (a UMA motherboard, for example, might report only a few kilobytes of video memory and show the rest as system or AGP memory).


Speaking of the Direct3D resource manager, a couple of things you should always do if you want to keep it happy:

1) Allocate *ALL* your non-MANAGED resources *BEFORE* your MANAGED ones; this gives the resource manager memory heuristics a much better chance.

2) If you need to allocate any non-MANAGED resources after the MANAGED resources have been created, call IDirect3DDevice9::EvictManagedResources before creating the non-MANAGED resource.
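One way a resource manager can enforce that ordering is to queue creation requests and sort them before touching the device; the request record below is hypothetical, with a plain enum standing in for the D3DPOOL_DEFAULT vs D3DPOOL_MANAGED distinction:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical deferred-creation record; Pool mimics the distinction
// between D3DPOOL_DEFAULT (non-managed) and D3DPOOL_MANAGED.
struct CreateRequest {
    enum class Pool { Default, Managed };
    Pool pool;
    int  id;  // stand-in for the actual creation parameters
};

// Stable-partition so all non-MANAGED resources are created first, giving
// the D3D resource manager's heuristics a better picture of the remaining
// video memory. Stable, so the relative order within each group
// (e.g. most-used-first) is preserved.
void SortForCreation(std::vector<CreateRequest>& requests)
{
    std::stable_partition(requests.begin(), requests.end(),
        [](const CreateRequest& r) {
            return r.pool == CreateRequest::Pool::Default;
        });
}
```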


Finally, IDirect3DResource9::PreLoad and IDirect3DResource9::SetPriority are often overlooked, but they can affect the behaviour of D3D's resource manager in favourable ways if you know a lot about the nature of the managed resources used by your app (for example, you can preload a texture if you know that draw call n+2 is going to use it, or set the priority of your HUD texture so that it rarely gets evicted).

Cool - I didn't know point 2. I learnt something [smile]

For point 1, I seem to remember ATI, NVIDIA or GPU Gems having a list of the order you should initialize in. I'll have to find that list again, won't I... I think it was render targets first?

Quote:
Original post by DrGUI
For point 1, I seem to remember ATI, NVidia or GPU Gems having a list for the order you should initialize in. I'll have to find that list again won't I...I think it was render targets first?
You allocate them in most-used-first order. The render target should be created first, when the device is created, along with the depth-stencil buffer if it exists. You then want to allocate your main VBs, IBs and textures.
I'm not 100% sure of the reason, but I believe it's so the graphics driver has more control over where to place the resources while it still has free space to move things around in.

Quote:
Original post by Evil Steve
Quote:
Original post by DrGUI
For point 1, I seem to remember ATI, NVidia or GPU Gems having a list for the order you should initialize in. I'll have to find that list again won't I...I think it was render targets first?
You allocate them in most-used-first order. The render target should be created first, when the device is created, along with the depth-stencil buffer if it exists. You then want to allocate your main VBs, IBs and textures.
I'm not 100% sure for the reason, but I believe it's so the graphics driver has more control over where to place the resources if it has more free space to throw things around in.

I would guess that with most-used-first order you're letting the GPU/driver put resources in the most optimal places in memory. The first resource created goes in the most optimal place, then the next resource in the second most optimal place... and so on, until the last resource just ends up anywhere [grin]

I guess if you did it in an arbitrary order it'd have to start shuffling pages around in order to get most-used resources into the best memory positions.

Jack
