
Hodgman

Posted 01 January 2013 - 02:39 AM

"AGP memory" is a fairly old term (from when GPU's generally used an AGP-port, instead of a PCIe port).
It's basically refers to regular "main memory" that the OS has allowed the GPU to access over the AGP/PCI channel. This type of memory means you can quickly update it from the CPU (as it's just regular RAM), but the GPU can also access it as if it were GPU-RAM (aka video RAM), albiet it will be a bit slower than reading from local GPU-RAM, depending on the AGP/PCI bus speeds.

 

The way D3D/GL are designed, you can never actually know where your buffers are stored. They might be in main memory, in "AGP memory" (AKA GPU-accessible main memory), or in GPU memory. All you can do is give the API/driver the appropriate hints (e.g. DYNAMIC, READ_ONLY, etc.) and hope that the driver allocates your memory in the appropriate place. Also, on PC, there's really no reliable way to measure the amount of available RAM in any of these places, or to tell exactly how much RAM your buffers are using.
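For example, in D3D11 those hints look something like the sketch below (my rough example, not from the original discussion; the helper name is made up and it assumes you already have a valid ID3D11Device). The usage/CPU-access flags are the only say you get in where the allocation ends up.

#include <d3d11.h>

// Create a dynamic vertex buffer: the Usage/CPUAccessFlags fields are the
// "hints" -- the driver decides where the memory actually lives.
ID3D11Buffer* CreateDynamicVertexBuffer(ID3D11Device* device,
                                        const void* initialData, UINT byteSize)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = byteSize;
    desc.Usage          = D3D11_USAGE_DYNAMIC;       // hint: CPU writes often, GPU reads
    desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;    // we intend to Map() it from the CPU

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = initialData;                      // initial contents

    ID3D11Buffer* buffer = nullptr;
    if (FAILED(device->CreateBuffer(&desc, &init, &buffer)))
        return nullptr;
    // Whether this ends up in VRAM, GPU-visible system RAM, etc. is up to the
    // driver and is not observable through the API.
    return buffer;
}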

 


For the next part, keep in mind that the GPU/CPU are not synchronized. All D3D/GL commands to the GPU are sent through a pipe, which typically has a lot of latency (e.g. 10-100ms). Whenever you ask the GPU to do something, it will do it some time in the future.

When you map a resource that's being used by the GPU -- e.g. 1. put data in buffer, 2. draw from buffer, 3. put new data in same buffer, 4. draw from buffer -- the driver has two choices once the CPU gets up to #3:

1) It introduces a sync point. The CPU stops and waits for the first "draw" (#2) to be complete, and then maps the buffer. This could stall the CPU for dozens of milliseconds.

2) It creates another internal allocation for that buffer, which the CPU writes to during #3. From your code, you still just have the one handle to the buffer, but internally, there can be any number of versions of the buffer "in the pipe", on their way to being consumed by the GPU. Some time after the GPU executes #2, the allocation from #1 will be freed/recycled automatically by the driver.

 

Specifying a "DISCARD" hint will help the driver choose option 2.
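To make that concrete, here's a minimal D3D11 sketch of updating a dynamic buffer with the discard hint (again my example, with a made-up helper name, assuming the dynamic buffer from above and a valid ID3D11DeviceContext). In D3D11 the hint is spelled D3D11_MAP_WRITE_DISCARD; D3D9 has D3DLOCK_DISCARD, and GL has buffer orphaning / GL_MAP_INVALIDATE_BUFFER_BIT.

#include <cstring>
#include <d3d11.h>

// Overwrite the whole buffer. WRITE_DISCARD tells the driver "I don't care
// about the old contents", so it's free to hand back a fresh internal
// allocation (option 2) instead of stalling until the GPU is done (option 1).
void UpdateWholeBuffer(ID3D11DeviceContext* context, ID3D11Buffer* buffer,
                       const void* newData, size_t byteSize)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, newData, byteSize); // write the new contents
        context->Unmap(buffer, 0);
    }
    // Mapping without a discard/no-overwrite hint (where the API allows it)
    // can force the driver to wait for the GPU to finish with the old data.
}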

