SteveDeFacto

Vsync without double buffer?


I'm using a frame buffer in my project and for this reason I don't believe a double buffer is necessary anymore. I tried using glFinish before my draw call but it didn't stop tearing at all. Is there any way I can do vsync without a double buffer?

AFAIK, GL doesn't allow you to replace the default back-buffer with your own frame buffer objects...

So you render to your FBO, then copy it to the back-buffer, and then the system copies it from there to the front buffer. I don't think there's a way around this on PC.
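For illustration, that per-frame flow might look roughly like this. This is only a sketch, assuming a GL 3.0+ context on Windows with GLEW as the extension loader; `fbo`, `width`, `height`, and `hdc` are placeholder names, not anything from this thread:

```c
/* Sketch of the flow described above: render into an FBO, blit it to
 * the default back buffer, then present with a vsync'd swap.
 * Assumes a GL 3.0+ context on Windows with GLEW already initialized. */
#include <windows.h>
#include <GL/glew.h>

void present_frame(GLuint fbo, int width, int height, HDC hdc)
{
    /* 1. Render the scene into the off-screen FBO. */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, width, height);
    /* ... draw calls ... */

    /* 2. Copy the FBO's color buffer to the default back buffer. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

    /* 3. Swap; with a swap interval of 1 this waits for the vblank. */
    SwapBuffers(hdc);
}
```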

[quote name='Hodgman' timestamp='1313382809' post='4849253']
AFAIK, GL doesn't allow you to replace the default back-buffer with your own frame buffer objects...

So you render to your FBO, then copy it to the back-buffer, and then the system copies it from there to the front buffer. I don't think there's a way around this on PC.
[/quote]

I can turn it off without a problem by just creating my context without a double buffer.

[quote name='SteveDeFacto' timestamp='1313379780' post='4849241']
I tried using glFinish before my draw call but it didn't stop tearing at all.[/quote]

How would it? It's not about being "done", it's about quickly swapping buffers at exactly the right time in sync with your screen's refresh rate. glFinish has nothing to do with that, and I doubt there is anything but the actual buffer swapping that does.

[quote name='mhagain' timestamp='1313423519' post='4849428']
Does double-buffering actually cause you a problem? And if so, does it cause you more problems than you're currently getting by trying to avoid it?
[/quote]

It hurts performance by a tiny bit and I need all I can get.

[quote name='SteveDeFacto' timestamp='1313442307' post='4849567']
[quote name='mhagain' timestamp='1313423519' post='4849428']
Does double-buffering actually cause you a problem? And if so, does it cause you more problems than you're currently getting by trying to avoid it?
[/quote]

It hurts performance by a tiny bit and I need all I can get.
[/quote]

Uhm, you're sure it isn't vsync that is hurting your performance rather than double buffering?

[quote name='Syranide' timestamp='1313485938' post='4849765']
[quote name='SteveDeFacto' timestamp='1313442307' post='4849567']
[quote name='mhagain' timestamp='1313423519' post='4849428']
Does double-buffering actually cause you a problem? And if so, does it cause you more problems than you're currently getting by trying to avoid it?
[/quote]

It hurts performance by a tiny bit and I need all I can get.
[/quote]

Uhm, you're sure it isn't vsync that is hurting your performance rather than double buffering?
[/quote]

I only assume double buffering hurts performance since it's a completely useless step...

[quote name='SteveDeFacto' timestamp='1313492698' post='4849795']
[quote name='Syranide' timestamp='1313485938' post='4849765']
[quote name='SteveDeFacto' timestamp='1313442307' post='4849567']
[quote name='mhagain' timestamp='1313423519' post='4849428']
Does double-buffering actually cause you a problem? And if so, does it cause you more problems than you're currently getting by trying to avoid it?
[/quote]

It hurts performance by a tiny bit and I need all I can get.
[/quote]

Uhm, you're sure it isn't vsync that is hurting your performance rather than double buffering?
[/quote]

I only assume double buffering hurts performance since it's a completely useless step...
[/quote]

Don't assume; benchmark and base your conclusion on actual measured facts instead.

You're getting seriously into the realm of micro-optimizing here. GPUs have been swapping buffers for over 15 years now; you can bet anything you like that this is one seriously optimized process (depending on various factors, it could be as simple as exchanging two pointers). It's also highly likely that you have much higher bottlenecks in your program than use of double-buffering; it would be a far more productive use of your time to tackle those instead.

[quote name='SteveDeFacto' timestamp='1313492698' post='4849795']
I only assume double buffering hurts performance since it's a completely useless step...
[/quote]

It's not useless at all. You can't just have the GPU render onto the front buffer, since that buffer is being used for scan-out to the display. The point of the backbuffer is to give you a surface that the GPU can access while the front buffer is being displayed. Then you swap, and keep going.

[quote name='MJP' timestamp='1313529598' post='4850048']
[quote name='SteveDeFacto' timestamp='1313492698' post='4849795']
I only assume double buffering hurts performance since it's a completely useless step...
[/quote]

It's not useless at all. You can't just have the GPU render onto the front buffer, since that buffer is being used for scan-out to the display. The point of the backbuffer is to give you a surface that the GPU can access while the front buffer is being displayed. Then you swap, and keep going.
[/quote]

Sigh... I already have my own back buffer which I manage...

[quote name='SteveDeFacto' timestamp='1313535988' post='4850075']
[quote name='MJP' timestamp='1313529598' post='4850048']
[quote name='SteveDeFacto' timestamp='1313492698' post='4849795']
I only assume double buffering hurts performance since it's a completely useless step...
[/quote]

It's not useless at all. You can't just have the GPU render onto the front buffer, since that buffer is being used for scan-out to the display. The point of the backbuffer is to give you a surface that the GPU can access while the front buffer is being displayed. Then you swap, and keep going.
[/quote]

Sigh... I already have my own back buffer which I manage...
[/quote]

No you don't. The driver manages the backbuffer, not you.

[quote name='MJP' timestamp='1313540274' post='4850090']
[quote name='SteveDeFacto' timestamp='1313535988' post='4850075']
[quote name='MJP' timestamp='1313529598' post='4850048']
[quote name='SteveDeFacto' timestamp='1313492698' post='4849795']
I only assume double buffering hurts performance since it's a completely useless step...
[/quote]

It's not useless at all. You can't just have the GPU render onto the front buffer, since that buffer is being used for scan-out to the display. The point of the backbuffer is to give you a surface that the GPU can access while the front buffer is being displayed. Then you swap, and keep going.
[/quote]

Sigh... I already have my own back buffer which I manage...
[/quote]

No you don't. The driver manages the backbuffer, not you.
[/quote]

I have a texture which I render to with a frame buffer. My scene is completely drawn before I copy it to the back buffer. Factually this texture acts in every way shape or form as a back buffer. The back buffer on my window is simply an unnecessary extra step I have to go through to get it to the front buffer...

[quote name='SteveDeFacto' timestamp='1313544473' post='4850102']
I have a texture which I render to with a frame buffer. My scene is completely drawn before I copy it to the back buffer. Factually this texture acts in every way shape or form as a back buffer. The back buffer on my window is simply an unnecessary extra step I have to go through to get it to the front buffer...
[/quote]

Based on your other thread your frame buffer is using a different format. So if there was any way to do what you want, you'd probably get this:

Front Buffer: Format X
Frame Buffer: Format Y
Screen: Reading Buffer, expecting Format X
You: Swap pointers to Front and Frame Buffer
Screen: Getting Format Y -> displaying garbage

Your worries about performance are also pointless, because buffer swapping is likely to just swap pointers (and as such has basically no overhead); there is absolutely no difference between copying your stuff to the front or back buffer (except that one of these will cause ugly tearing). Not to mention that unless your frame rendering takes exactly one refresh interval, you will have to wait for the display to be ready anyway, so you are literally not going to lose a nanosecond. On top of that, drivers are allowed to render a few frames ahead, so unlike any manual attempt they might not even block when swapping and can therefore be potentially _faster_ than whatever you're trying to do (unless your frame rate is constantly below the refresh rate, in which case you don't have any performance issues anyway).

[quote name='SteveDeFacto' timestamp='1313544473' post='4850102']
I have a texture which I render to with a frame buffer. My scene is completely drawn before I copy it to the back buffer. Factually this texture acts in every way shape or form as a back buffer. The back buffer on my window is simply an unnecessary extra step I have to go through to get it to the front buffer...
[/quote]

A back buffer is not a normal render target. Display flipping/swapping requires synchronization with both the GPU and the scan-out, in order to make sure the GPU is done writing to the back buffer before the flip occurs and also that the flip occurs on a vertical or horizontal retrace. A back buffer also typically requires special alignment and formats to work with the scan-out hardware. This is all completely hardware- and display-subsystem-specific, which is why it's handled by the driver and not by you.

Either way Trienco is right: having a backbuffer should never cause you an extra copy. Just make it the output for the last step of rendering or post-processing, and you're done.
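As a rough sketch of that "last step" idea: bind the default framebuffer as the output of the final post-processing pass, so there's no separate copy. Here `scene_tex`, `post_program`, and `draw_fullscreen_quad()` are hypothetical placeholder helpers, not real API calls; assumes a GL 2.0+ context on Windows with GLEW loaded:

```c
/* Sketch: make the default back buffer the render target of the final
 * post-processing pass, so the last pass IS the copy to the back buffer.
 * scene_tex, post_program, and draw_fullscreen_quad() are hypothetical. */
#include <windows.h>
#include <GL/glew.h>

extern void draw_fullscreen_quad(void);  /* hypothetical helper */

void final_pass(GLuint scene_tex, GLuint post_program, HDC hdc)
{
    glBindFramebuffer(GL_FRAMEBUFFER, 0);    /* target the back buffer      */
    glUseProgram(post_program);              /* last post-process shader    */
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, scene_tex); /* the FBO's color texture     */
    draw_fullscreen_quad();                  /* composite into back buffer  */
    SwapBuffers(hdc);                        /* present, in sync with vblank */
}
```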

[quote name='Trienco' timestamp='1313555382' post='4850145']
[quote name='SteveDeFacto' timestamp='1313544473' post='4850102']
I have a texture which I render to with a frame buffer. My scene is completely drawn before I copy it to the back buffer. Factually this texture acts in every way shape or form as a back buffer. The back buffer on my window is simply an unnecessary extra step I have to go through to get it to the front buffer...
[/quote]

Based on your other thread your frame buffer is using a different format. So if there was any way to do what you want, you'd probably get this:

Front Buffer: Format X
Frame Buffer: Format Y
Screen: Reading Buffer, expecting Format X
You: Swap pointers to Front and Frame Buffer
Screen: Getting Format Y -> displaying garbage

Your worries about performance are also pointless, because buffer swapping is likely to just swap pointers (and as such has basically no overhead); there is absolutely no difference between copying your stuff to the front or back buffer (except that one of these will cause ugly tearing). Not to mention that unless your frame rendering takes exactly one refresh interval, you will have to wait for the display to be ready anyway, so you are literally not going to lose a nanosecond. On top of that, drivers are allowed to render a few frames ahead, so unlike any manual attempt they might not even block when swapping and can therefore be potentially _faster_ than whatever you're trying to do (unless your frame rate is constantly below the refresh rate, in which case you don't have any performance issues anyway).
[/quote]

When I asked this question I had already rendered my scene directly to the front buffer, and it works fine other than the lack of vsync. The format of the texture does not matter since I render it to the front buffer with a quad. Also, I think you are right about it just swapping the pointers for the front and back buffers. In that case there is really no loss in speed; however, it's still wasted video memory, but I'm not too concerned about that.

[quote name='Katie' timestamp='1313570910' post='4850191']
"It hurts performance by a tiny bit and I need all I can get."

Why -- what is it that you're doing that needs that sort of performance?
[/quote]

Soft shadow mapping, bloom, and procedural planets to name a few. The blurring shader I'm using right now for bloom and motion blur takes an astounding amount of GPU power... I'm probably going to need to rethink the way it works if I'm going to have enough speed left for the other things I want to implement...

[url="http://www.opengl.org/wiki/Swap_Interval"]http://www.opengl.or...i/Swap_Interval[/url]

The way to get vsync is by using double buffering. The names of the functions used to enable/disable vsync should even clue you in a little: wglSwapInterval, glXSwapInterval. They're obviously relating to something being swapped - I wonder what that could be?
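On Windows, enabling vsync through the swap interval might look roughly like this. This is a sketch assuming the WGL_EXT_swap_control extension is supported by the driver; the function pointer must be fetched at runtime, after a GL context has been created and made current:

```c
/* Sketch: enabling vsync on Windows via the WGL_EXT_swap_control
 * extension.  Must be called after a GL context is current; the
 * extension may be absent on some drivers, hence the NULL check. */
#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void enable_vsync(void)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1); /* 1 = wait for one vblank per SwapBuffers */
}
```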

Sorry, but:

[list][*]You're obsessing over something that isn't actually a problem.[*]You asked a question, you didn't like the answers you got, and you're refusing to accept them.[*]You're seeing a backbuffer as "waste" when it's not actually so - it's being used to allow you to have vsync.[*]You've made up your mind that you want to do something a certain way and you're determined to do it that way, even when it's been pointed out that it's not a good idea.[/list]
Like I said earlier on, it's a much more productive use of your time and energy to tackle some of the other - [i]much bigger[/i] - bottlenecks in your program.

Furthermore, you may think that you don't need a backbuffer right now, but I'd lay odds that's going to change. What if you want to draw a GUI overlay with no effects, for example?

[quote name='mhagain' timestamp='1313578495' post='4850236']
[url="http://www.opengl.org/wiki/Swap_Interval"]http://www.opengl.or...i/Swap_Interval[/url]

The way to get vsync is by using double buffering. The names of the functions used to enable/disable vsync should even clue you in a little: wglSwapInterval, glXSwapInterval. They're obviously relating to something being swapped - I wonder what that could be?

Sorry, but:

[list][*]You're obsessing over something that isn't actually a problem.[*]You asked a question, you didn't like the answers you got, and you're refusing to accept them.[*]You're seeing a backbuffer as "waste" when it's not actually so - it's being used to allow you to have vsync.[*]You've made up your mind that you want to do something a certain way and you're determined to do it that way, even when it's been pointed out that it's not a good idea.[/list]
Like I said earlier on, it's a much more productive use of your time and energy to tackle some of the other - [i]much bigger[/i] - bottlenecks in your program.

Furthermore, you may think that you don't need a backbuffer right now, but I'd lay odds that's going to change. What if you want to draw a GUI overlay with no effects, for example?
[/quote]

Already drawing a GUI overlay without effects... Anyway, I completely agree the loss of vsync is not worth the small amount of video memory freed. You act as if I'm going to not use a double buffer, but I've been using one and don't plan to stop unless there is a way to vsync without it...

