Quote:Original post by Etnu
Any command can force the command queue to flush, once it's full.
Yeah, but that shouldn't normally happen; if it does, you're not using the API correctly. If you're overflowing the queue, you should increase your batch sizes and cut down the number of individual commands.
Quote:Original post by Etnu
Microsoft's own documentation specifically points to the swap as being one of the most intensive tasks that can be done.
Can you give a link? This is the first time I've heard of this being a problem.
Quote:Original post by Etnu
The setup you described guarantees nothing, unless you wrote the driver yourself, and write your code to be exactly sure of when the optimal time to do work will be.
Not according to this presentation. There's a lot more information scattered on NVidia and ATI sites. I'll try to find more links tonight or tomorrow.
Quote:Original post by Etnu
You can most certainly gain from a rendering thread in a seperate loop, as it's the only way to be 100% sure that the thread is not wasting clock cycles.
Sorry, I still don't see the benefit. Which clock cycles is the thread not wasting: CPU or GPU? If your GPU pipeline stalls, it doesn't really matter whether a separate thread issues the commands: you still have to wait for the queued commands to be rendered before you can continue filling the pipeline. Can you clarify the benefit of a separate thread?
Quote:Original post by Etnu
Read the SDK documentation if you don't believe me on this one. It's clearly outlined that there is no way to be sure of when the card is busy and when it's not better than I could possibly explain.
There is no way to guarantee the GPU isn't waiting for the CPU. However, "guarantee" is a really strong word. You can be reasonably sure the GPU isn't sitting idle if you spend enough time profiling and ironing out the bottlenecks.
[Edited by - CoffeeMug on August 4, 2004 8:22:17 AM]