I don’t know how else I can explain what I have already explained before.
If so, how does this not create the issue where I write to the buffer, send it to the GPU, and then start writing to the buffer again before the GPU is done with it?
You aren’t writing to the buffer while it is in use by the GPU. You are writing to the other buffer while that one is in use by the GPU. Hence the term: double buffer.
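To make that concrete, here is a minimal sketch of the swap. Note that `VertexBuffer`, `FillVertices`, and `DrawFrom` are placeholders standing in for whatever your graphics API provides, not real calls:

```cpp
// Minimal double-buffering sketch. VertexBuffer, FillVertices, and DrawFrom
// are hypothetical placeholders, not a real API.
struct VertexBuffer;                   // stand-in for your API's buffer type
void FillVertices(VertexBuffer* buf);  // CPU fills buf with this frame's data
void DrawFrom(VertexBuffer* buf);      // GPU draws from buf asynchronously

struct DoubleBuffer {
    VertexBuffer* buffers[2];  // two identical dynamic buffers
    int writeIndex = 0;        // which buffer the CPU fills this frame
};

void RenderFrame(DoubleBuffer& db) {
    VertexBuffer* cpuSide = db.buffers[db.writeIndex];      // CPU writes here...
    VertexBuffer* gpuSide = db.buffers[db.writeIndex ^ 1];  // ...while the GPU reads this one

    FillVertices(cpuSide);  // fill the free buffer
    DrawFrom(gpuSide);      // GPU consumes the buffer filled last frame

    db.writeIndex ^= 1;     // swap roles for the next frame
}
```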
If you have further questions about what double buffering is and how it works, there are plenty of resources on Google (or rather, on the Internet; Google just helps you find them).
I should explain myself a little better. I understand that double buffering is supposed to mitigate GPU stalls, but here is what I'm trying to get at: since the GPU and CPU run asynchronously, how well does double buffering actually guarantee that the GPU won't stall?
Let's just say my CPU is running so far ahead that the GPU just cannot keep up, even with double buffering.
The CPU will keep going, and I assume the GPU will start reading 'invalid' data at some point, right?
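To sketch what I mean (with `GpuStillReading`, `FillVertices`, and `Submit` as placeholder names, since I don't know what the real mechanism is):

```cpp
// Hypothetical sketch of the race in question. GpuStillReading(),
// FillVertices(), and Submit() are placeholders, not a real API.
struct VertexBuffer;
bool GpuStillReading(VertexBuffer* buf);  // has the GPU finished with buf?
void FillVertices(VertexBuffer* buf);     // CPU writes this frame's vertices
void Submit(VertexBuffer* buf);           // queue an asynchronous draw from buf

void RenderLoop(VertexBuffer* buffers[2]) {
    for (unsigned frame = 0; ; ++frame) {
        VertexBuffer* buf = buffers[frame % 2];

        // If the GPU has fallen more than two frames behind, buf is still
        // being read. Without a wait here, the CPU would scribble over data
        // the GPU is consuming, so someone has to stall:
        while (GpuStillReading(buf)) { /* spin, or block on a fence */ }

        FillVertices(buf);
        Submit(buf);
    }
}
```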
So I'm assuming the major benefits over the DISCARD/OVERWRITE method are that the chance of the above happening is so small, and that you already know the exact size you need (the buffers will always have enough room to hold everything).
Although that may be the case, isn't the DISCARD/OVERWRITE method safer, or does that safety not come at a great price?
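For reference, the DISCARD pattern being discussed looks roughly like this in D3D11 (the variable names are assumptions, and the thread may equally apply to the older D3D9 Lock flags):

```cpp
#include <d3d11.h>
#include <cstring>

// DISCARD pattern sketch (D3D11). Assumes vertexBuffer was created with
// D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE.
void UploadVertices(ID3D11DeviceContext* context, ID3D11Buffer* vertexBuffer,
                    const void* vertices, size_t vertexBytes)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    // WRITE_DISCARD hands the CPU a fresh region of memory, so the GPU can
    // keep reading the old contents while the CPU fills the new ones;
    // neither side waits on the other.
    if (SUCCEEDED(context->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, vertices, vertexBytes);
        context->Unmap(vertexBuffer, 0);
    }
}
```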