Framebuffer, pbuffer, etc.? Which can have a non-power-of-2 resolution?

Started by soconne
7 comments, last by soconne 16 years, 6 months ago
I'm making my own GUI system right now, and currently all of it is rendered to the screen before any actual 3D rendering takes place; the relevant OpenGL state is pushed and popped around the 3D rendering. But I would like to render the entire UI into a separate buffer, distinct from the regular framebuffer but with the same resolution as the window. Then each frame I would simply perform a fast copy of the buffer's entire contents to the screen, while still being able to apply blending, alpha testing and other operations to it. Can I do this with framebuffer objects or pbuffers? Can they have arbitrary resolutions, such as 1083x847, and is there an easy way to copy their contents to the screen? And not just copy, but perhaps also blend the contents onto what's already there.
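Roughly what I'm picturing, just as a sketch (it assumes GL_EXT_framebuffer_object plus NPOT texture support, and GLEW for the entry points; winW/winH, DrawGUI(), uiTex and uiFbo are placeholder names, not code from my actual system):

```c
/* Sketch only: draw the GUI once into an off-screen NPOT color texture via
 * GL_EXT_framebuffer_object, then blend that texture over the 3D scene every
 * frame.  Assumes GLEW is initialized and NPOT textures are supported. */
#include <GL/glew.h>

extern void DrawGUI(void);      /* placeholder for the existing GUI drawing code */

static GLuint uiTex = 0, uiFbo = 0;

void CreateUITarget(int winW, int winH)          /* e.g. 1083 x 847 */
{
    glGenTextures(1, &uiTex);
    glBindTexture(GL_TEXTURE_2D, uiTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, winW, winH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffersEXT(1, &uiFbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, uiFbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, uiTex, 0);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}

void RefreshUI(int winW, int winH)               /* only when a control changes */
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, uiFbo);
    glViewport(0, 0, winW, winH);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);        /* transparent background */
    glClear(GL_COLOR_BUFFER_BIT);
    DrawGUI();
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}

void PasteUI(void)                               /* every frame, after the 3D pass */
{
    /* Assumes identity modelview/projection and depth test disabled here. */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, uiTex);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBegin(GL_QUADS);                           /* screen-aligned quad in NDC */
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
}
```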
Author: Freeworld3D (http://www.freeworld3d.org)
Just out of interest: why don't you just render your UI after you've rendered the 3D world? No need for pbuffers, framebuffers, render-to-texture, etc. Blending is performed as usual, alpha tests don't change, etc etc blah blah blah.

In fact, unless you wanted to do some special kind of post-processing on the UI alone I don't see why you wouldn't render it last.
Quote: Original post by soconne
Can I do this with framebuffer objects or pbuffers? Can they have arbitrary resolutions, such as 1083x847, and is there an easy way to copy their contents to the screen? And not just copy, but perhaps also blend the contents onto what's already there.

You can render to a pbuffer of any size, including non-power-of-two dimensions (NPOTD), on most hardware (I would say all, but surely some vendor has dropped it).
Copying it back is a bit more involved, especially if you want to apply other operations such as blending - you need texturing for that.

Unluckily, binding an NPOTD texture is a separate problem from pbuffers or FBOs...
Some hardware will allow NPOTD only for render targets but not for textures - most old hardware works that way. Usually you have to fall back to rectangle textures, which are not exactly nice (unnormalized coordinates, no mipmaps or repeat).

On recent hardware you can render at whatever dimension you want (within reason!) and happily bind the result to any texture unit, but there are two capabilities to check (see the sketch after this list):
1- NPOT rendering is always there.
2- NPOT texture binding is not always there (but RECT textures may save your day).
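A minimal sketch of those two checks, assuming an extension loader such as GLEW is already initialized:

```c
/* Sketch of the two capability checks, assuming GLEW is initialized. */
#include <GL/glew.h>
#include <stdio.h>

void CheckNPOTSupport(void)
{
    if (GLEW_ARB_texture_non_power_of_two) {
        printf("Full NPOT textures: the render target binds like any other texture.\n");
    } else if (GLEW_ARB_texture_rectangle || GLEW_EXT_texture_rectangle) {
        printf("Fall back to GL_TEXTURE_RECTANGLE_ARB "
               "(unnormalized coordinates, no mipmaps or repeat).\n");
    } else {
        printf("Round the buffer up to the next power of two "
               "and use only the needed corner.\n");
    }
}
```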

Also, please stay away from pbuffers... they NEVER worked well in practice.

EDIT: after a bit of thinking, I should make clear that if you need the same resolution as the frame and can work with the same color format, then it's extremely likely a simple framebuffer copy will perform at a similar level.
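Something along these lines, purely as a sketch (it assumes GL_EXT_framebuffer_blit is available and that a hypothetical uiFbo holds a color buffer exactly matching the window's size and format; note that a blit is a straight copy, not a blend, so blending still needs the textured-quad path):

```c
/* Sketch of a plain copy from an FBO to the window, assuming
 * GL_EXT_framebuffer_blit and matching size/format. */
#include <GL/glew.h>

void CopyUIToScreen(GLuint uiFbo, int winW, int winH)
{
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, uiFbo);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);    /* default (window) framebuffer */
    glBlitFramebufferEXT(0, 0, winW, winH,                /* source rectangle */
                         0, 0, winW, winH,                /* destination rectangle */
                         GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
```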

[Edited by - Krohm on October 1, 2007 6:59:54 AM]

Previously "Krohm"

Quote: Original post by MENTAL
Just out of interest: why don't you just render your UI after you've rendered the 3D world? No need for pbuffers, framebuffers, render-to-texture, etc. Blending is performed as usual, alpha tests don't change, etc etc blah blah blah.

In fact, unless you wanted to do some special kind of post-processing on the UI alone I don't see why you wouldn't render it last.


The reason I do not do it this way is that I want to avoid rendering the UI each frame. I'd rather render it to a backbuffer and redisplay that saved copy each frame and ONLY update it if a control is refreshed.
Author: Freeworld3D (http://www.freeworld3d.org)
You'll probably find that rendering the UI at the end is actually better (in terms of optimisation) than messing around with pbuffers and FBOs. At the end of the day your UI is only going to have a couple of hundred polys at most, and that's nothing for a modern gfx card.
Absolutely true. If you're doing this for performance, you are wasting your time (furthermore slick GUIs are updated each frame anyway).

Previously "Krohm"

Well, my entire GUI system is drawn using points, lines and quads, so there are quite a few state changes in there, as well as many calls to glScissor, etc. This is why I wanted to render the entire thing into some sort of buffer and simply 'paste' it onto the screen, only redrawing the contents when needed.
Author: Freeworld3D (http://www.freeworld3d.org)
Fix that problem rather than work around it. I think you'll find that it's worth it in the long run. Forcing the GUI to render in a strange out-of-order, cached way will be more initial investment for less long-term gain, because you will be pigeonholing your system fairly early in its life.
Quote: Original post by jpetrie
Fix that problem rather than work around it. I think you'll find that it's worth it in the long run. Forcing the GUI to render in a strange out-of-order, cached way will be more initial investment for less long-term gain, because you will be pigeonholing your system fairly early in its life.


Actually, I've already completed the system the way I described. I have a special control called GLPanel that, when rendered, saves all of the main OpenGL state needed by the UI and then lets me render anything I need (3D scene, etc.). When the GLPanel's OnPaint finishes, all previous state is restored and the GUI proceeds to render any remaining controls.

The way I optimize the GUI rendering is that each frame the backbuffer is copied to the frontbuffer, and nothing is ever redrawn unless it needs to be. If the GLPanel needs to be refreshed each frame, I simply draw a screen-aligned quad that fills the panel's viewport at z = far plane, set up the appropriate glViewport calls, and I'm done (roughly as in the sketch below).
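Roughly what that GLPanel refresh looks like, as a sketch (panelX/panelY/panelW/panelH are placeholder names for the control's rectangle in window coordinates; depth, blend and other state are assumed to be whatever the panel has configured):

```c
/* Hypothetical sketch of the GLPanel refresh described above. */
void RefreshGLPanel(int panelX, int panelY, int panelW, int panelH)
{
    glViewport(panelX, panelY, panelW, panelH);
    glScissor(panelX, panelY, panelW, panelH);
    glEnable(GL_SCISSOR_TEST);

    /* Screen-aligned quad covering the panel, pushed to the far plane:
     * with identity matrices, clip-space z = 1 maps to the far plane. */
    glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

    glBegin(GL_QUADS);
    glVertex3f(-1.0f, -1.0f, 1.0f);
    glVertex3f( 1.0f, -1.0f, 1.0f);
    glVertex3f( 1.0f,  1.0f, 1.0f);
    glVertex3f(-1.0f,  1.0f, 1.0f);
    glEnd();

    glMatrixMode(GL_MODELVIEW);  glPopMatrix();
    glMatrixMode(GL_PROJECTION); glPopMatrix();
    glDisable(GL_SCISSOR_TEST);

    /* ...then render the 3D scene for the panel as usual. */
}
```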

It all works rather nicely at the moment, but the one thing it does not allow me to do is an overlay UI, such as when you start up Counter-Strike: Source or when you're playing WoW. For that, each control is being re-rendered every frame, which is what I'd like to avoid.
Author: Freeworld3D (http://www.freeworld3d.org)
