Archived

This topic is now archived and is closed to further replies.

Dov Sherman

D3DPOOL_MANAGED


I'm loading a large amount of texture data with D3DPOOL_MANAGED in a program. Because of the way this texture data is used by the program, it can be separated into groups of textures, with only one such group being used for rendering a given frame. How do I tell DirectX to move the unused textures to system memory until the next time they are needed, so that I don't run out of video memory?

That's the whole point of managed resources: D3D will move them out when necessary to make room for more recently used ones (the exact scheme is detailed in the docs). You should never need to do that yourself; it works fine in 99% of cases. If you do need to manually reorganise what's in VRAM, you've done something to confuse the manager (such as allocating some non-managed resources after managed ones).

You can, however, use IDirect3DDevice8::ResourceManagerDiscardBytes to ask the resource manager to free some video memory (though beware that freeing small blocks potentially causes fragmentation, since you're interfering with the management scheme; it's better to evict EVERYTHING).

You can also use IDirect3DResource8::SetPriority to change the likelihood of a resource being evicted from VRAM.

Finally, you can let the manager know which resources you intend to use soon with IDirect3DResource8::PreLoad.

Textures are a type of resource, so those member functions are available for them.
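As a sketch (assuming the standard D3D8 headers; this fragment is untested and only illustrates the signatures), the three resource-manager hooks mentioned above look like this on a managed texture:

```cpp
#include <d3d8.h>

// Sketch only: the three resource-manager hooks described above.
// Assumes `device` and `texture` were created elsewhere, with the
// texture in D3DPOOL_MANAGED.
void HintResourceManager(IDirect3DDevice8* device,
                         IDirect3DTexture8* texture)
{
    // Ask the manager to evict managed resources from VRAM.
    // Passing 0 discards everything, which avoids the fragmentation
    // that freeing small blocks can cause.
    device->ResourceManagerDiscardBytes(0);

    // Raise this texture's priority so it is less likely to be
    // evicted than resources left at the default priority of 0.
    texture->SetPriority(1);

    // Tell the manager we intend to use the texture soon, so it can
    // upload it to VRAM ahead of the first SetTexture() call.
    texture->PreLoad();
}
```

In practice you would call ResourceManagerDiscardBytes and PreLoad at different times (evict when switching texture groups, preload just before the new group is drawn); they are combined here only to show how the calls look.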




--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com

[edited by - S1CA on January 23, 2003 8:25:26 AM]

If it's supposed to do it automatically, I must not be doing something right, because it's not moving them. I'm running out of memory and crashing.

What I've done is create a windowed MDI application using swap chains for each window (based on these instructions). Each window contains a unique document with its own resources, so I only need to access the resources for a given view when that view is active.

How does Direct3D know which resources are needed and which can be moved? Should I be using a separate IDirect3DDevice8 for each view?

Guest Anonymous Poster
Could the statement in that article, "The frame buffer and depth buffer are sized to the full screen resolution...", cause most of your VRAM to be used up right away if you're at a high resolution and 32bpp? Since you're running windowed, that might be the case. Then what if you have a "big" texture, such that it wouldn't fit into the VRAM that's left? Have you run with the DX debug runtime to see what it says, with "Break on D3D Error" set?

The multiple swap chains shouldn't be a problem, AFAIK.

A driver reporting the wrong amount of video memory when swap chains were used could cause nastiness like that, as could fragmentation of video memory for whatever reason.

However, there is a KEY rule when using any managed resources: **ALL** managed resources should ALWAYS be created AFTER non-managed resources. So AVOID the following at all costs:

- CreateDevice()
[which allocates some memory for the default swap chain]

- CreateTexture(...MANAGED...)

- CreateAdditionalSwapChain()
[which allocates extra video memory that the resource manager isn't aware of]

CreateTexture(), SetTexture() and similar calls after this point may fail with out-of-memory errors. ALL non-managed resources should be allocated BEFORE managed resources, or the resource manager is working from out-of-date information about how much free VRAM there is. Even non-managed CreateVertexBuffer() calls, which may use true VRAM, could come under this category (though most drivers seem to prefer AGP memory for vertex data).
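A toy illustration of why this ordering matters (hypothetical sizes in MB, nothing to do with any real driver): the manager effectively takes its free-VRAM figure before managed allocations start, so a later non-managed allocation it cannot see leaves it overestimating what is free:

```cpp
#include <cassert>

// Toy model: the resource manager's belief about free VRAM versus
// reality, using made-up sizes in MB.
struct Vram {
    int realFree;     // what the driver actually has free
    int managerFree;  // what the resource manager believes is free
};

// Bad ordering: a non-managed allocation (e.g. an extra swap chain)
// happens AFTER the manager recorded its free-VRAM figure.
Vram badOrdering() {
    Vram v{64, 0};
    v.managerFree = v.realFree;  // manager records 64 MB free here
    v.realFree -= 16;            // swap chain takes 16 MB it can't see
    return v;                    // belief (64) now exceeds reality (48)
}

// Good ordering: all non-managed allocations come first, so the
// recorded figure already accounts for them.
Vram goodOrdering() {
    Vram v{64, 0};
    v.realFree -= 16;            // swap chain allocated first
    v.managerFree = v.realFree;  // figure recorded afterwards: 48 MB
    return v;
}
```

With the bad ordering the manager will happily try to upload 64 MB of textures into 48 MB of actual space, which is exactly the out-of-memory failure described above.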


quote:
Original post by Dov Sherman
How does Direct3D know which resources are needed and which can be moved?

Each resource has, amongst other things, a "last time I was used" timestamp (which is updated when you SetTexture, SetRenderTarget, etc.) stored with it, AND a priority (set with the call mentioned in my first reply).

If you call, say, SetTexture(), the resource manager checks whether that texture is in VRAM. If it is, it uses it; if not, it checks the amount of free video memory available (if this has changed since the resource manager was initialised [see above], the whole system breaks down).

If [it thinks] there is enough VRAM, it asks the driver to upload the texture.

If there isn't enough VRAM, it uses [simplified] an LRU scheme to find the resource with the oldest timestamp (i.e. which hasn't been used for a long time) and evicts it from video memory to make room; it continues until enough memory is available for the texture being set. If two textures have the same timestamp, the priority (and other stuff) is used to tie-break.

Once there's enough free memory, the texture gets uploaded.

That's vastly oversimplified; the resource manager also has the preload stuff, an MRU scheme when there is serious overcommit, etc.

It usually takes a lot to make it fall over!
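The timestamp-plus-priority scheme described above can be sketched as a toy model. This is only an illustration of LRU eviction with a priority tie-break, not the real resource manager (the type and method names here are invented for the example):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of the eviction policy described above: each resource has
// a last-used timestamp and a priority; when VRAM is short, the
// least-recently-used resource is evicted first, with priority as the
// tie-break on equal timestamps.
struct Resource {
    std::string name;
    unsigned size;      // bytes of "VRAM" it occupies
    unsigned lastUsed;  // tick of the last SetTexture()-style call
    unsigned priority;  // higher = keep longer (cf. SetPriority)
    bool inVram = false;
};

class ToyManager {
public:
    explicit ToyManager(unsigned vramBytes) : free_(vramBytes) {}

    // "SetTexture": touch the resource, uploading (and evicting older
    // residents) as needed. Assumes r.size fits in total VRAM.
    void use(Resource& r, unsigned tick) {
        r.lastUsed = tick;
        if (r.inVram) return;
        while (free_ < r.size) evictOne();
        r.inVram = true;
        free_ -= r.size;
        resident_.push_back(&r);
    }

    bool resident(const Resource& r) const { return r.inVram; }

private:
    void evictOne() {
        // Pick the LRU resident; on equal timestamps, evict the one
        // with the lower priority.
        auto victim = resident_.begin();
        for (auto it = resident_.begin(); it != resident_.end(); ++it) {
            if ((*it)->lastUsed < (*victim)->lastUsed ||
                ((*it)->lastUsed == (*victim)->lastUsed &&
                 (*it)->priority < (*victim)->priority))
                victim = it;
        }
        (*victim)->inVram = false;
        free_ += (*victim)->size;
        resident_.erase(victim);
    }

    unsigned free_;
    std::vector<Resource*> resident_;
};
```

With 100 bytes of "VRAM" and two 60-byte textures, touching the second forces the first out, and touching the first again evicts the second in turn: the oldest timestamp always loses.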


--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com

quote:
Original post by S1CA
However there is a KEY rule when using any managed resources: **ALL** managed resources should ALWAYS be created AFTER non managed resources. So AVOID the following at all costs:

- CreateAdditionalSwapChain()
[which allocates extra video memory that the resource manager isn't aware of]


In that case, this is most likely the problem. Following this article on Rendering to Multiple Windows, I have it recreate the swap chain with that command each time the window changes size. In fact, this is the function call which is returning the out-of-memory error. If I don't recreate the swap chain, the existing swap chain becomes stretched and distorted to fit the new size. I'll have to experiment with it a little more to see if I can keep the existing swap chain all the time.

Still, I need to create a swap chain for each window, and it is inevitable that managed resources will be created after a swap chain is created if more than one document is opened at a time, especially since the swap chain, as part of the view rather than the document, is the last part of the document/view to be created.

Aha! I found a way to make managed resources happy with swapchains. Each time I call CreateAdditionalSwapChain(), I call this first:

pDirect3DDevice->ResourceManagerDiscardBytes(0);

That way, the swapchain winds up at the bottom of the resource stack where it belongs. It slows down some of the later memory access but, generally, not where it matters to my program.

