

BornToCode

Posted 01 June 2013 - 09:28 AM

Are you saying that the issue is that you can't map the buffer while it is bound to the pipeline? If so, why don't you just un-bind it by setting that buffer slot to null?

 

Vertex and index buffers most certainly can be mapped and directly used, so I'm not really sure what the issue is that you are facing...
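(For reference, un-binding a slot like that is just a matter of passing a null buffer pointer; a minimal sketch, assuming a valid ID3D11DeviceContext* named context:)

    // Passing a null buffer for slot 0 un-binds whatever vertex buffer was bound there.
    ID3D11Buffer* nullBuffer = nullptr;
    UINT stride = 0;
    UINT offset = 0;
    context->IASetVertexBuffers(0, 1, &nullBuffer, &stride, &offset);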

I am not talking about binding it to the pipeline. I mean at creation time, when setting the bind flags that say which stages the buffer can be bound to. Those bind flags can only be set if the buffer will not be read from the CPU, and CPU read access requires staging usage, so that prevents me from creating any type of resource view using the buffer.
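To make the restriction concrete, here is a minimal sketch of the two kinds of buffer description involved (assuming a valid ID3D11Device* named device; the size and the particular bind flag are just placeholders):

    // A DEFAULT-usage buffer can carry bind flags (so it can be bound to the
    // pipeline and have views created for it), but it cannot request CPU read access.
    D3D11_BUFFER_DESC gpuDesc = {};
    gpuDesc.ByteWidth      = 1024;
    gpuDesc.Usage          = D3D11_USAGE_DEFAULT;
    gpuDesc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    gpuDesc.CPUAccessFlags = 0;                          // CPU reads are not allowed here

    ID3D11Buffer* gpuBuffer = nullptr;
    HRESULT hr = device->CreateBuffer(&gpuDesc, nullptr, &gpuBuffer);

    // A STAGING-usage buffer can request CPU read access, but it must have no bind
    // flags, so it cannot be bound to the pipeline or used to create resource views.
    D3D11_BUFFER_DESC stagingDesc = {};
    stagingDesc.ByteWidth      = 1024;
    stagingDesc.Usage          = D3D11_USAGE_STAGING;
    stagingDesc.BindFlags      = 0;                       // any bind flag here makes CreateBuffer fail
    stagingDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

    ID3D11Buffer* stagingBuffer = nullptr;
    hr = device->CreateBuffer(&stagingDesc, nullptr, &stagingBuffer);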

 

The old D3D9 way of dealing with resources was completely broken in terms of how GPUs actually work. When you put a resource in GPU memory it's no longer accessible to the CPU, since it's in a completely different memory pool that's not accessible to userspace code. In order to support the old (broken) D3D9 semantics, drivers had to do crazy things behind the scenes, and the D3D runtime usually had to keep a separate copy of the resource contents in CPU memory. Starting with D3D10 they cleaned all of this up in order to better reflect the way GPUs work, and also to force programs onto the "fast path" by default by not giving them traps to fall into that would cause performance degradation or excessive memory allocation by the runtime or driver. Part of this is that you can no longer just grab GPU resources on the CPU; you have to explicitly specify up front what behavior you want from a resource.

That said, why would you ever need to read back vertex buffer data? If you've provided the data, then you surely already have access to that data and you can keep it around for later. You wouldn't be wasting any memory compared to the old D3D9 behavior or using a staging buffer, and it would be more efficient to boot.

I know that CPU virtual memory cannot see GPU virtual memory, but that does not matter in this case. Currently the driver just copies the GPU memory over to CPU memory whenever you do a Map, then copies it back to GPU memory whenever you do an Unmap. So those flags are bogus; all they do is give the driver a hint about which memory pool to put the resource in. Even with this new system, the driver side has not changed in how buffers are handled today, because the CPU still cannot see GPU virtual memory. The only reason those restrictions exist is that Microsoft introduced them in the spec. The reason I know how the driver works is that I worked on drivers last year. So the separate copy in memory in D3D9 that you mentioned is a false statement.

 

Now to answer your other question about why I want to access it: let's say I have a buffer X that has been updated by the pipeline and written out through stream output. How do I read that output back on the CPU, if I want to, when the CPU cannot read the resource?
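For what it's worth, the usual way to get data like that back on the CPU in D3D11 is to copy the GPU buffer into a staging buffer and map the staging buffer instead; a minimal sketch, assuming a valid ID3D11DeviceContext* named context, the stream-output buffer soBuffer, and a staging buffer stagingBuffer of the same size created with D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ:

    // GPU-side copy from the stream-output target into the CPU-readable staging resource.
    context->CopyResource(stagingBuffer, soBuffer);

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    HRESULT hr = context->Map(stagingBuffer, 0, D3D11_MAP_READ, 0, &mapped);
    if (SUCCEEDED(hr))
    {
        // mapped.pData now points at a CPU-visible copy of the stream-output data.
        const float* data = static_cast<const float*>(mapped.pData);
        // ... read the data ...
        context->Unmap(stagingBuffer, 0);
    }

Note that mapping the staging resource immediately after the copy will stall the CPU until the GPU has finished writing it, so in practice you usually wait a frame or two before reading it back.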

