What is managed DirectX?


Do you know what the Microsoft .NET framework is? If not, check out this page.

Basically, .NET exists to make the programmer's job easier: memory is garbage-collected for us (as in Java), and the Windows API is exposed in a much more productive way. The .NET framework can be used with C#, Visual Basic .NET, and Managed C++.

Managed DirectX is the addition that allows DirectX to be used with those languages. However, Managed DirectX is not a completely separate version. It is simply a wrapper around unmanaged DirectX.

Again, this is a very, very basic explanation of it. If you really want to dig deeper, check out some tutorials and samples that use it, and read the SDK documentation.

Yes, those things are managed as well. When the graphics device is up for garbage collection, any resources associated with it will also be garbage collected. However, most resources also implement IDisposable so you have more exact control over disposal, for example if you need to dispose of texture or vertex buffer resources while keeping the main device (quite common indeed).

Quote:
just a brief question - does Managed DX handle vertex/index buffer management, including garbage collection of static buffers?

By "management" do you mean lifetime management? Does a VertexBuffer object release its reference to the unmanaged vertex buffer when it's garbage collected and finalized?

Well, that's an excellent question. The answer is technically yes, but you should act as if it were no.

VertexBuffer, IndexBuffer, Texture, and all other managed objects that wrap resources on the unmanaged side implement the IDisposable interface. Listen to me very closely: any class that implements IDisposable is telling you that it's very important for you to call the Dispose method when you're done with the object, so that it can clean up in a timely and predictable fashion.

Yes, if the class is designed properly, it will clean itself up in its finalizer, even if you don't call Dispose. But the problem is that you can't predict when the finalizer will be called or even WHETHER it will be called at all.

Under the best circumstances, the finalizer will be called shortly after the garbage collector has determined that the object is garbage. But you don't know when that's going to happen, unless you force a garbage collection of all object generations (which is generally a bad idea anyway).

Even then, objects that have finalizers are not immediately collected. They're put in a queue, and a separate thread calls their finalizers one after the other. That means that if some other object's finalizer goes into an infinite loop or otherwise fails to execute successfully and in a timely fashion, the finalizer you care about may never be called AT ALL.

So it's just not a good idea to rely on finalizers, whether with MD3D resources or any other object that implements IDisposable. You absolutely want to call the Dispose method when you're done with the object.
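In practice that means wrapping each resource in a using block (or calling Dispose in a finally). A minimal sketch against the Managed DirectX 1.1 types (Microsoft.DirectX.Direct3D); the `device` and `vertexCount` variables are assumed to exist already:

```csharp
using Microsoft.DirectX.Direct3D;

// Create a vertex buffer, use it, and dispose of it deterministically.
// The using statement calls Dispose even if an exception is thrown,
// so cleanup never depends on the finalizer thread.
using (VertexBuffer vb = new VertexBuffer(
           typeof(CustomVertex.PositionColored),
           vertexCount, device, Usage.WriteOnly,
           CustomVertex.PositionColored.Format, Pool.Managed))
{
    // ... Lock, fill, Unlock, SetStreamSource, draw ...
}   // vb.Dispose() runs here, releasing the unmanaged buffer immediately
```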

See Jeff Richter's articles in MSDN magazine for more details:
http://msdn.microsoft.com/msdnmag/issues/1100/GCI/
http://msdn.microsoft.com/msdnmag/issues/1200/gci2/

Thanks, Donovan, for that detailed response. Can I assume that 'disposable' resources can be garbage collected on the fly? Here's the more acute problem that I'm hoping managed DX solves: in multiplayer games, for example, players can join and leave at will. When they leave, I must neatly dispose of their resources (vbuf/ibuf). In non-managed DX, this generally means the resources must be allocated as dynamic. If allocated as static, their place in a shared, large buffer pool becomes unusable, eventually leading to total starvation of video RAM (after many joins/leaves).

It would be great to have these sparsely populated buffers coalesced, creating contiguous chunks at the end of the buffer (and totally justifying the existence of Lock/Unlock as the safe way to get access ptrs). Unfortunately, reusing these newly available areas in a static buffer is a bad idea (multiple static locks bad!). So 'disposable' resources seem consigned to dynamic buffers. New problem: massive data copying to dynamic buffers using the discard/nooverwrite method. Even a heavily optimized memcpy dominates the CPU.

So, does managed DX offer a better way of handling this scenario?

Quote:
Original post by Scoob Droolins
Can I assume that 'disposable' resources can be garbage collected on the fly?

They're not garbage collected per se but the object typically calls GC.SuppressFinalize, meaning "Don't bother calling my finalizer -- I've already cleaned up." It will be collected the next time there's a GC on its generation.
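The standard pattern looks roughly like this. This is a simplified generic sketch, not the actual MDX source, and `unmanagedHandle`/`ReleaseHandle` are hypothetical stand-ins for whatever native D3D resource the wrapper holds:

```csharp
using System;

public class ManagedResource : IDisposable
{
    private IntPtr unmanagedHandle;   // hypothetical native D3D resource
    private bool disposed;

    public void Dispose()
    {
        if (disposed) return;
        ReleaseHandle();              // deterministic cleanup, right now
        disposed = true;
        GC.SuppressFinalize(this);    // finalizer no longer needed
    }

    ~ManagedResource()                // safety net only; timing unpredictable
    {
        if (!disposed) ReleaseHandle();
    }

    private void ReleaseHandle() { /* release the native resource */ }
}
```

After Dispose runs, the object is ordinary garbage with no finalizer pending, so it gets swept up cheaply on the next collection of its generation.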

Quote:
Here's the more acute problem that I'm hoping managed DX solves: in multiplayer games, for example, players can join and leave at will. When they leave, I must neatly dispose of their resources (vbuf/ibuf). In non-managed DX, this generally means the resources must be allocated as dynamic. If allocated as static, their place in a shared, large buffer pool becomes unusable, eventually leading to total starvation of video RAM (after many joins/leaves).

I think there may be some confusion here about dynamic vs. static resources? Dynamic just means that the contents of the resource will be changed frequently by the CPU, whereas static resources should not be touched or at least not touched very often by the CPU.

It's about what kind of memory the resources should live in -- AGP or local video memory. It's not about the lifetime of the resources or how they're packed into memory.
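The distinction is declared up front via usage flags, which hint to the driver where to place the buffer. A sketch using the MDX VertexBuffer constructor (`device` and `vertexCount` assumed):

```csharp
using Microsoft.DirectX.Direct3D;

// Static: the CPU writes it once (or rarely); the driver will likely
// place it in local video memory for fastest GPU reads.
VertexBuffer staticVb = new VertexBuffer(
    typeof(CustomVertex.PositionNormal), vertexCount, device,
    Usage.WriteOnly,
    CustomVertex.PositionNormal.Format, Pool.Managed);

// Dynamic: the CPU rewrites it frequently; typically placed in AGP
// memory. Dynamic buffers must live in Pool.Default, not Pool.Managed.
VertexBuffer dynamicVb = new VertexBuffer(
    typeof(CustomVertex.PositionNormal), vertexCount, device,
    Usage.Dynamic | Usage.WriteOnly,
    CustomVertex.PositionNormal.Format, Pool.Default);
```

Neither flag says anything about how long the buffer lives or how sub-ranges inside it are packed; that bookkeeping is entirely yours.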

Quote:
It would be great to have these sparsely populated buffers coalesced, creating contiguous chunks at the end of the buffer (and totally justifying the existence of Lock/Unlock as the safe way to get access ptrs). Unfortunately, reusing these newly available areas in a static buffer is a bad idea (multiple static locks bad!). So 'disposable' resources seem consigned to dynamic buffers. New problem: massive data copying to dynamic buffers using the discard/nooverwrite method. Even a heavily optimized memcpy dominates the CPU.

So, does managed DX offer a better way of handling this scenario?

I'm not sure I'm following you. Are you talking about packing multiple meshes into a single vertex buffer? Because once you allocate the vertex buffer, you're on your own as far as D3D is concerned about how you use that memory. You could quite conceivably implement a packing heap in the buffer, and that seems to be what you're describing. But if the buffer is static (i.e., it most likely lives in local video memory), then the packing operation is going to be very slow, because there's no way to take advantage of the hardware's ability to move memory around in this situation; it has to be read across the bus and written back out.

And now it occurs to me that maybe this is why you're talking about dynamic versus static here, because with a dynamic VB (in AGP mem) the packing operation will still be slow but not nearly as slow.

I can see a couple of strategies:

1. Punt it to the driver. Allocate a separate VB/IB for each object that might have to be purged later. Let the driver handle fragmentation. In this case the driver can conceivably use the blitting hardware to pack the heap. But you're at the driver's mercy. A really stupid driver might not pack its heap at all -- it'll just fail if it can't find a free block. I doubt that a modern driver would be this inept, but it's possible.

If these buffers are in the default pool then this is going to interfere with D3D's managed pool. It's a bad idea to create new default pool resources after any managed pool resources have been created.

So if you make these resources managed, D3D will be the one handling the heap fragmentation. And... well, I'm just not sure enough about how the managed pool is implemented to say whether it's going to do an adequate job. It might very well do just fine, I don't know.

2. Implement your own heap inside a large static VB & IB. Now at least you're in control but as I said you can't take advantage of the hardware for moving memory around. It would make a lot of sense here to have backing copies of the meshes in system memory so that you can rewrite the buffer without having to read from it. (Note that you're sort of duplicating what D3D does with the managed pool.) If you can be confident that the buffers will never have to grow, then this can be viable. But if they ever have to grow, you're going to interfere with the D3D managed pool, as I mentioned.
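Strategy 2 amounts to writing a small sub-allocator over the buffer's vertex range. A hypothetical sketch of the bookkeeping (the names are mine, not an MDX API); offsets and counts are in vertices:

```csharp
using System.Collections.Generic;

// First-fit free-list allocator for ranges inside one large vertex buffer.
class VbHeap
{
    class Block { public int Offset, Count; public bool Free; }
    private readonly List<Block> blocks;

    public VbHeap(int totalVertices)
    {
        blocks = new List<Block> {
            new Block { Offset = 0, Count = totalVertices, Free = true } };
    }

    // Returns the starting vertex of a reserved range, or -1 if nothing fits.
    public int Allocate(int count)
    {
        for (int i = 0; i < blocks.Count; i++)
        {
            Block b = blocks[i];
            if (!b.Free || b.Count < count) continue;
            if (b.Count > count)          // split off the unused remainder
                blocks.Insert(i + 1, new Block {
                    Offset = b.Offset + count,
                    Count = b.Count - count, Free = true });
            b.Count = count;
            b.Free = false;
            return b.Offset;
        }
        return -1;                        // caller must compact or grow
    }

    public void Release(int offset)
    {
        int i = blocks.FindIndex(b => b.Offset == offset);
        blocks[i].Free = true;
        // Coalesce with free neighbours to fight fragmentation.
        if (i + 1 < blocks.Count && blocks[i + 1].Free)
        {
            blocks[i].Count += blocks[i + 1].Count;
            blocks.RemoveAt(i + 1);
        }
        if (i > 0 && blocks[i - 1].Free)
        {
            blocks[i - 1].Count += blocks[i].Count;
            blocks.RemoveAt(i);
        }
    }
}
```

The system-memory backing copies mirror this map, so a compaction pass can rewrite the VB front-to-back through Lock/Unlock without ever reading it back across the bus.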

Keeping the buffers in AGP memory (D3DUSAGE_DYNAMIC) will help the packing operation but it can hurt overall rendering performance.

And I can think of a couple of other variations. I just don't think there's a clear answer to this question. It just depends on too many factors:

* How often you need to create new meshes
* How big they are
* Whether you're using managed pool resources
* How the driver manages its heap
* How the runtime handles the managed pool

But anyway...
Quote:
So, does managed DX offer a better way of handling this scenario?

No. [smile]

Managed code relieves you of the responsibility of managing your normal app memory, but the managed heap and GC mechanism are completely separate from D3D. Your app objects will live in the managed heap and get the benefits (and detriments) of garbage collection, but your vertex buffers and textures etc. will all still be handled by the unmanaged D3D runtime and driver, neither of which knows anything about the managed world.

Quote:

Managed code relieves you of the responsibility of managing your normal app memory, but the managed heap and GC mechanism are completely separate from D3D. Your app objects will live in the managed heap and get the benefits (and detriments) of garbage collection, but your vertex buffers and textures etc. will all still be handled by the unmanaged D3D runtime and driver, neither of which knows anything about the managed world.


I might be a bit off-subject, but does that mean that if my device is lost, any texture related to it won't get released and I will have a leak?

