ErnstH

Preventing releasing buffers in the pipeline queue


Most of my meshes are created before rendering starts and deleted afterwards.

 

Some have to be recreated every frame.

 

And some have to be recreated multiple times during a frame.

 

The last category flickers: meshes are invisible at random moments and sometimes even take on the appearance of other meshes!

 

When I say "mesh" I mean a class encapsulating a vertex buffer and an index buffer, both using an ID3D11Buffer. When the mesh is deleted, those buffers are released.
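
Roughly, the class looks like this (a simplified sketch; MeshParameters and the member names stand in for my actual code):

[source]
// Simplified sketch of the mesh class: a vertex buffer and an index
// buffer, created in the constructor and released in the destructor.
class Mesh {
public:
    explicit Mesh(const MeshParameters& params){
        // params carries the device plus the buffer descriptions;
        // create both ID3D11Buffers (error handling omitted).
        params.Device->CreateBuffer(&params.VertexDesc, &params.VertexData, &mVB);
        params.Device->CreateBuffer(&params.IndexDesc, &params.IndexData, &mIB);
    }
    ~Mesh(){
        if (mIB) mIB->Release();
        if (mVB) mVB->Release();
    }
    void Render();  // binds the buffers and issues the draw call
private:
    ID3D11Buffer* mVB = nullptr;
    ID3D11Buffer* mIB = nullptr;
};
[/source]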

 

The loop looks like this:

 

[source]
for (int i = 0; i < 10; i++){
    delete mesh;                    // releases the vertex and index buffers
    mesh = new Mesh(MyParameters);  // creates fresh ID3D11Buffers
    mesh->Render();                 // queues the draw call on the context
}
[/source]

 

The flickering can be fixed by adding a Sleep before deleting the mesh:

 

[source]
for (int i = 0; i < 10; i++){
    Sleep(10);                      // waiting before the delete stops the flickering
    delete mesh;
    mesh = new Mesh(MyParameters);
    mesh->Render();
}
[/source]

 

So I have to wait for something before I can delete my mesh. But what?

 

My guess is that I have to wait for the mesh to be rendered:

 

[source]
for (int i = 0; i < 10; i++){
    WaitForGPU2Finish();            // block until the GPU has drained its queue
    delete mesh;
    mesh = new Mesh(MyParameters);
    mesh->Render();
}
[/source]

 

This is my WaitForGPU2Finish function:

 

[source]
void MDirectX_Device::WaitForGPU2Finish() const{
    // An event query signals TRUE once the GPU has executed every
    // command issued before End() was called on it.
    D3D11_QUERY_DESC d;
    ZeroMemory(&d, sizeof(d));
    d.Query = D3D11_QUERY_EVENT;

    ID3D11Query* Q = nullptr;
    HRESULT hr = mDevice->CreateQuery(&d, &Q);
    if (FAILED(hr)) return;

    // Insert the event at the current end of the command queue.
    mContext->End(Q);

    // Spin until the GPU reaches the event. With flags = 0, GetData also
    // flushes the command buffer, so this loop cannot deadlock.
    BOOL data = FALSE;
    while (true){
        hr = mContext->GetData(Q, &data, sizeof(data), 0);
        if (hr == S_OK && data) break;
    }
    Q->Release();
}
[/source]

 

This also works, but since Sleep works equally well, something entirely different could be going on.

 

I worry there's something fundamental I do not understand about DirectX. I create and delete buffers all the time and wonder if I should be more careful.

 

WaitForGPU2Finish is very cryptic, and even if it does what I hope it does, it is not very efficient because it waits for everything. I only want to wait until a specific buffer has been consumed by the pipeline.

 

Does anyone know what's going on here?

[quote]My guess is that I have to wait for the mesh to be rendered:[/quote]

 

That's correct. CPU and GPU work takes place asynchronously. In particular, the device (creation/deletion of resources) can be used in a multi-threaded fashion (it is thread-safe), but the context (rendering with resources) cannot. You're deleting resources before the GPU has actually used them.

 

A better solution to the problem is to determine why you create/delete buffers during frame rendering. If you can describe the problem you're solving by creating/deleting resources that way, perhaps a better (safer, more efficient) method can be suggested.

The driver and runtime are supposed to handle resource lifetime in cases like these. In theory you should be able to create and delete as much as you want, and the driver is supposed to make sure that the associated resources stay "alive" long enough for the GPU to consume them. However, in practice what you're doing is extremely abnormal, and you're probably running afoul of driver bugs. In general, the further you go off the beaten path, the more likely it is that you're going to find bugs (this is true of almost all software).

I would strongly suggest that you re-architect things so that you're not constantly creating and destroying D3D resources. At the very least, it's going to give you very poor performance. If you really need the contents to change from frame to frame, you should look into using dynamic buffers and filling them with new data as needed.
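
For illustration, creating such a dynamic buffer might look like this (a sketch only; the 4 MB capacity and the variable names are placeholders, with mDevice borrowed from your earlier snippet):

[source]
// One long-lived dynamic vertex buffer, refilled via Map() each frame
// instead of being destroyed and recreated.
D3D11_BUFFER_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.ByteWidth = 4 * 1024 * 1024;              // placeholder capacity
desc.Usage = D3D11_USAGE_DYNAMIC;              // GPU reads, CPU writes
desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;  // enables Map() for writing

ID3D11Buffer* dynamicVB = nullptr;
HRESULT hr = mDevice->CreateBuffer(&desc, nullptr, &dynamicVB);
[/source]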

I'm generating meshes during rendering for a lot of effects:
-lightning flashes
-extended silhouette edges for light beams and stencil shadows
-marching-cubes particle blobs
-voxel models (optimized for view and transparency bounds)

 

I guess some of them could be generated with geometry shaders, but others are too complex or need too much data.

 

And I run out of video memory if I keep every instance, so I have to delete some during rendering. The question is: when?

Thank you for your replies. I will look into using dynamic buffers.

 

I hope I no longer have to use the WaitForGPU2Finish() function once I'm updating the buffer via Map.

All the effects you describe can be done using dynamic vertex buffers. Just create one dynamic vertex buffer and fill it without overwriting the data written earlier; when you arrive at the end, map it with discard so you can restart filling the buffer from the beginning. You can do this as many times per frame as you need.
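
A sketch of that fill logic (dynamicVB, writeOffset, bufferBytes, vertices and vertexBytes are placeholder names, not code from this thread):

[source]
D3D11_MAPPED_SUBRESOURCE mapped;
if (writeOffset + vertexBytes > bufferBytes){
    // Out of space: DISCARD hands back a fresh block of memory while the
    // GPU keeps reading the old contents, so we restart at the beginning.
    mContext->Map(dynamicVB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    writeOffset = 0;
} else {
    // Append behind the data written earlier. NO_OVERWRITE promises the
    // driver we won't touch any region the GPU may still be reading.
    mContext->Map(dynamicVB, 0, D3D11_MAP_WRITE_NO_OVERWRITE, 0, &mapped);
}
memcpy(static_cast<BYTE*>(mapped.pData) + writeOffset, vertices, vertexBytes);
mContext->Unmap(dynamicVB, 0);
writeOffset += vertexBytes;
[/source]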

 

Cheers!

Interesting technique. I found more info about it on this page:

http://msdn.microsoft.com/en-us/library/windows/desktop/dn508285(v=vs.85).aspx

 

However, you can't dynamically change the size of the buffer.

 

What I would really like to have is a render-and-forget function. I can't create one myself without using the WaitForGPU2Finish() function.

 

It would have been super easy if the pipeline used refcounting, releasing the buffer after use.


[quote]It would have been super easy if the pipeline used refcounting, releasing the buffer after use.[/quote]

 

You don't want the pipeline making decisions like that. You can, however, use a ComPtr (or another smart pointer) tied to the scope of an object's lifetime. When the ComPtr goes out of scope, the object is released.
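
For instance (a sketch; MeshBuffers is a hypothetical wrapper, not the poster's class):

[source]
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Both buffers are released automatically when the owning object is
// destroyed; no manual Release() calls anywhere.
struct MeshBuffers {
    ComPtr<ID3D11Buffer> vertexBuffer;
    ComPtr<ID3D11Buffer> indexBuffer;
};

HRESULT CreateVertexBuffer(ID3D11Device* device,
                           const D3D11_BUFFER_DESC& desc,
                           const D3D11_SUBRESOURCE_DATA& data,
                           MeshBuffers& out){
    // GetAddressOf() exposes the raw ID3D11Buffer** the API expects.
    return device->CreateBuffer(&desc, &data, out.vertexBuffer.GetAddressOf());
}
[/source]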

[quote]Interesting technique. I found more info about it on this page:
http://msdn.microsoft.com/en-us/library/windows/desktop/dn508285(v=vs.85).aspx

However, you can't dynamically change the size of the buffer.[/quote]

 

The thing is... you don't need to dynamically change the size of the buffer with this technique (at this stage I'm guessing that you're porting from older OpenGL, where this technique wasn't possible and where glBufferData would change the size of a buffer).
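
(If so, the old GL habit looked roughly like this; glBufferData orphans the previous storage and may allocate a different size on every call, which is why resizing felt natural there:)

[source]
// Old-style GL streaming: each call allocates fresh storage, possibly with
// a new size; the driver keeps the orphaned storage alive for pending draws.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, newSizeBytes, vertices, GL_STREAM_DRAW);
[/source]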

 

What you do is create a large-ish buffer, about 4 MB or so, then append the current vertices to it (i.e. map with no-overwrite). Adjust the parameters of the draw call to use the portion of the buffer you've just written. Keep appending until you run out of space, then map with discard and start again from the beginning of the buffer.

 

That way there's no creating or releasing of objects at runtime, no dynamic resizing of buffers, only a single buffer to manage, and no waiting for the GPU to finish (discard makes the driver handle that automatically); everything just runs well.
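
To make the "adjust the draw call" step concrete (a sketch continuing the placeholder names from the fill-logic sketch above; Vertex is an assumed vertex struct):

[source]
// Bind the whole buffer once, then point each draw call at the range just
// written via its start-vertex parameter.
UINT stride = sizeof(Vertex);  // assumed vertex type
UINT offset = 0;
mContext->IASetVertexBuffers(0, 1, &dynamicVB, &stride, &offset);
mContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

// batchStart is the value writeOffset had before this batch was copied in.
mContext->Draw(vertexBytes / stride, batchStart / stride);
[/source]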

 

This, by the way, isn't voodoo - the no-overwrite/discard pattern is something that's been known and used since the D3D8 days, if not earlier.  So you can safely use it with the knowledge that drivers are optimized around this usage pattern and that it will give you good performance with dynamic vertices.
