DeviceContext->map() using system and GPU memory?


7 replies to this topic

#1 Vexal   Members   -  Reputation: 416


Posted 30 November 2012 - 05:50 PM

Why does the memcpy below increase the amount of memory my program appears to take up in Task Manager? Am I wrong in assuming that Map() sends the data to GPU video memory and does not keep it stored in main RAM?

D3D11_MAPPED_SUBRESOURCE ms;
deviceContext->Map(bufferData->vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms); // subresource 0, no wait flags
memcpy(ms.pData, vertices, sizeof(VertexPositionColor) * bufferData->vertexCount); // copy vertex data into the mapped region
deviceContext->Unmap(bufferData->vertexBuffer, 0);
delete[] vertices; // CPU-side copy no longer needed

After the end of these lines, the amount of memory the program uses is greater than before. If it does in fact keep the vertices stored in main RAM and video RAM, what is the reason for this?


#2 mhagain   Crossbones+   -  Reputation: 8278


Posted 30 November 2012 - 06:29 PM

Task Manager is just not a reliable indicator of memory used.



#3 xoofx   Members   -  Reputation: 888


Posted 30 November 2012 - 06:54 PM

After the end of these lines, the amount of memory the program uses is greater than before. If it does in fact keep the vertices stored in main RAM and video RAM, what is the reason for this?

This is expected. MAP_WRITE_DISCARD requires the driver, behind the scenes, to invalidate the previous buffer, and it has no option but to allocate a new buffer on the CPU side before later sending the data to a new buffer on the GPU (and releasing the previous GPU buffer). So at the point you call Map with MAP_WRITE_DISCARD, it is not writing directly to GPU memory; it defers the write and stores the data in a temporary buffer in the meantime.
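
For what it's worth, here is a purely conceptual C++ sketch of that "renaming" behaviour. The DiscardPool class and the fence values are invented for illustration only; a real driver's book-keeping is far more involved, but the idea is the same: each DISCARD hands back a block the GPU is no longer reading, and new blocks are allocated when none are free.

#include <cstddef>
#include <cstdint>
#include <vector>

struct RenamedBlock
{
    std::vector<uint8_t> storage;     // CPU-visible memory handed to the app
    uint64_t             retireFence; // GPU fence value after which it can be reused
};

class DiscardPool
{
public:
    // Conceptually what Map(..., D3D11_MAP_WRITE_DISCARD, ...) does: return a
    // block that the GPU is definitely finished with.
    void* MapDiscard(size_t bytes, uint64_t completedFence)
    {
        for (RenamedBlock& b : blocks_)
        {
            if (b.retireFence <= completedFence && b.storage.size() >= bytes)
            {
                b.retireFence = UINT64_MAX; // mark as in use
                return b.storage.data();
            }
        }
        // Nothing free: allocate another block -- this is the growth the
        // OP observes in Task Manager.
        blocks_.push_back({ std::vector<uint8_t>(bytes), UINT64_MAX });
        return blocks_.back().storage.data();
    }

    // Called once the GPU work that reads the block has been submitted;
    // the block becomes reusable when the GPU passes fenceWhenDone.
    void Retire(void* p, uint64_t fenceWhenDone)
    {
        for (RenamedBlock& b : blocks_)
            if (b.storage.data() == p)
                b.retireFence = fenceWhenDone;
    }

private:
    std::vector<RenamedBlock> blocks_;
};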

#4 MJP   Moderators   -  Reputation: 11770


Posted 30 November 2012 - 07:22 PM

Strictly speaking, you can't make any assumptions about where resource memory is located or where it might get copied to. The implementation (driver) is free to manage these things however it wants, provided that the end result matches the behavior described by the API.

In practice, drivers for dedicated GPUs are likely to store DYNAMIC resources in CPU-writable memory that's also accessible by the GPU. This memory could be system memory, in which case the GPU has to read it across the PCI-e bus, or it could be GPU memory, in which case the CPU has to write across the PCI-e bus. Either way there's not likely to be any copying going on; however, the driver does have to transparently manage a fixed pool of memory, which means it may be making all kinds of book-keeping allocations behind the scenes.
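
For reference, a minimal sketch of creating the kind of DYNAMIC, CPU-writable vertex buffer described above. The helper name and the exact VertexPositionColor layout are assumptions, not the OP's actual code:

#include <d3d11.h>

// Layout assumed for illustration; it just needs to match the OP's vertex type.
struct VertexPositionColor
{
    float position[3];
    float color[4];
};

// 'device' is assumed to be the application's ID3D11Device.
HRESULT CreateDynamicVertexBuffer(ID3D11Device* device, UINT vertexCount,
                                  ID3D11Buffer** outBuffer)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = static_cast<UINT>(sizeof(VertexPositionColor)) * vertexCount;
    desc.Usage          = D3D11_USAGE_DYNAMIC;       // GPU reads, CPU writes via Map
    desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;    // required for D3D11_MAP_WRITE_DISCARD

    // No initial data: the buffer is filled later with Map/Unmap.
    return device->CreateBuffer(&desc, nullptr, outBuffer);
}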

Edited by MJP, 01 December 2012 - 02:00 AM.


#5 xoofx   Members   -  Reputation: 888


Posted 30 November 2012 - 08:23 PM

@MJP, indeed, unlike D3D9, D3D11 doesn't make any strict assumptions about the location of the memory, but a MAP_WRITE_DISCARD still needs to allocate a new buffer, even if it is in shared memory, and the previous memory of the buffer will only be released at a later "flush", once it is no longer being used by the GPU. So in practice, when using MAP_WRITE_DISCARD, you are likely to at *least* double the memory usage (exactly like double buffering).

Edited by xoofx, 30 November 2012 - 08:24 PM.


#6 MJP   Moderators   -  Reputation: 11770


Posted 01 December 2012 - 01:54 PM

@MJP, indeed, unlike D3D9, D3D11 doesn't make any strict assumptions about the location of the memory, but a MAP_WRITE_DISCARD still needs to allocate a new buffer, even if it is in shared memory, and the previous memory of the buffer will only be released at a later "flush", once it is no longer being used by the GPU. So in practice, when using MAP_WRITE_DISCARD, you are likely to at *least* double the memory usage (exactly like double buffering).


The driver only needs to "allocate" more memory if previously allocated memory is still in use. Either way it's going to be pulling from a driver-managed memory pool that's not going to be tracked by something like Task Manager, so it's not particularly relevant to the OP's concerns.

#7 Vexal   Members   -  Reputation: 416


Posted 01 December 2012 - 02:36 PM

Thanks for the replies. If I know for sure that the CPU will not need to read or write the memory in the buffer, and I create it with CPU_ACCESS_READ instead of write and D3D11_USAGE_IMMUTABLE instead of D3D11_USAGE_DYNAMIC, and supply the data at creation time instead of with Map(), will it no longer need the extra memory?

#8 MJP   Moderators   -  Reputation: 11770


Posted 01 December 2012 - 03:46 PM

If IMMUTABLE suits your needs, then you should definitely use it. Doing this will allow the driver to properly optimize for this use case by placing the resource in the appropriate memory location (this usually means it will be placed directly in high-speed GPU memory). Just be aware that you can't use CPU_ACCESS_READ with IMMUTABLE resources; that's only valid for STAGING resources.
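
For example, a minimal sketch of creating an IMMUTABLE vertex buffer with its data supplied at creation time. The helper name and the VertexPositionColor layout are assumptions for illustration:

#include <d3d11.h>

struct VertexPositionColor
{
    float position[3];
    float color[4];
};

// 'device', 'vertices' and 'vertexCount' are assumed to come from the application.
HRESULT CreateImmutableVertexBuffer(ID3D11Device* device,
                                    const VertexPositionColor* vertices,
                                    UINT vertexCount,
                                    ID3D11Buffer** outBuffer)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = static_cast<UINT>(sizeof(VertexPositionColor)) * vertexCount;
    desc.Usage          = D3D11_USAGE_IMMUTABLE;  // contents are fixed at creation
    desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    desc.CPUAccessFlags = 0;                      // no CPU access at all

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = vertices;                      // initial data is mandatory for IMMUTABLE

    return device->CreateBuffer(&desc, &init, outBuffer);
}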



