Kwizatz

OpenGL Best way to get 2D overlays or images on screen (for a GUI).


I am finally putting some effort towards my open source GUI library, which is meant to be generic but has OpenGL 3.x as the main focus for now since that's what my engine uses.

 

I decided to clear out a lot of the cruft it had accumulated, dropping OpenGL 1.5 support and focusing on the OpenGL 3.2+ core profile.

 

In order to keep the library generic enough that it could eventually be used with D3D, SDL, or some other graphics API, I have separated it into modules: a core, which handles all graphics-API (GAPI) independent operations, and a renderer, which handles the API-specific ones.

 

I keep an image buffer that represents the screen or the drawable area of the window hosting the GAPI, and the base renderer class draws lines, rects and so on to this buffer.

 

In the specific GAPI renderer, OpenGL in this case, I keep a texture object. Each frame (for now; later I will do it only when necessary), I issue a glTexSubImage2D call with the contents of the system-memory image buffer, then render a quad (a triangle strip, really) textured with it over the whole screen.
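For reference, the fullscreen pass described above can be sketched like this; the vertex layout and names are illustrative, not from the library:

```cpp
#include <cassert>

// One vertex of the fullscreen overlay quad: clip-space position plus
// texture coordinate. No projection matrix is needed in clip space.
struct OverlayVertex {
    float x, y;  // clip-space position
    float u, v;  // texture coordinate
};

// Four vertices in triangle-strip order (bottom-left, bottom-right,
// top-left, top-right) covering the whole viewport. V is flipped because
// GL's texture origin is the bottom-left corner, while a software image
// buffer is usually stored top-down.
static const OverlayVertex kFullscreenQuad[4] = {
    { -1.0f, -1.0f, 0.0f, 1.0f },
    {  1.0f, -1.0f, 1.0f, 1.0f },
    { -1.0f,  1.0f, 0.0f, 0.0f },
    {  1.0f,  1.0f, 1.0f, 0.0f },
};

// After uploading the buffer with glTexSubImage2D, the draw itself is just:
//   glBindTexture(GL_TEXTURE_2D, overlayTexture);
//   glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```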

 

So, I was wondering if this is the way to go, or if I should be looking at some of the new fancy stuff in OpenGL 3.x+. It would be nice to keep the image data entirely on the GPU, but I don't think there is a simple way to access that memory to, for example, change only the 15 pixels' worth of a text caret, is there?
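One middle ground, without keeping the data on the GPU, is to track a dirty rectangle on the CPU side and upload only that region each frame. A minimal sketch, where the `DirtyRect` type and `PixelAt` helper are hypothetical names, not anything from the library:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical dirty-region tracker: widgets mark the pixels they touch,
// and at upload time only the union rectangle is sent to the texture.
struct DirtyRect {
    int x0 = 0, y0 = 0, x1 = 0, y1 = 0;  // half-open: [x0,x1) x [y0,y1)
    bool empty = true;

    void Mark(int x, int y, int w, int h) {
        if (empty) {
            x0 = x; y0 = y; x1 = x + w; y1 = y + h;
            empty = false;
            return;
        }
        x0 = std::min(x0, x);     y0 = std::min(y0, y);
        x1 = std::max(x1, x + w); y1 = std::max(y1, y + h);
    }
};

// The upload then covers only the dirty region instead of the whole screen.
// GL_UNPACK_ROW_LENGTH tells GL how wide the full client buffer is, so it
// can step over the untouched pixels of each row:
//   glPixelStorei(GL_UNPACK_ROW_LENGTH, screenWidth);
//   glTexSubImage2D(GL_TEXTURE_2D, 0, d.x0, d.y0,
//                   d.x1 - d.x0, d.y1 - d.y0,
//                   GL_RGBA, GL_UNSIGNED_BYTE, PixelAt(d.x0, d.y0));
```

A caret update would then mark only its few pixels dirty, and the transfer shrinks accordingly.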

 

I considered keeping multiple textures (one per window/widget), but I don't know how optimal that would be; I guess as long as the window dimensions don't change much, it may be an improvement.

 


Thanks, I had read that page before but couldn't figure out exactly how it could help. Looking at it again, it seems that PBOs would replace the calls to glTexSubImage; am I correct?

 

I'll look further into it.


OK, I see: I can create a PBO, copy my client-side pixel buffer data into it, and then call glTexSubImage on it, which returns immediately, firing the transfer from PBO to texture memory asynchronously. As the article says, this may not be much of an improvement depending on what I do after the call to glTexSubImage, since the plain path likely already copies the client memory to do the upload.
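For reference, the upload path that article describes looks roughly like this. This is an untested sketch that needs a live GL 3.x context; the buffer names and sizes are assumptions:

```cpp
// Sketch of a PBO upload (identifiers are illustrative). With a pixel
// unpack buffer bound, glTexSubImage2D reads from the PBO rather than
// client memory, so it can return before the GPU-side copy finishes.
GLuint pbo;  // created once at startup with glGenBuffers(1, &pbo)
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
// Orphan the old storage so we don't stall on a buffer the GPU may still
// be reading from.
glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, nullptr, GL_STREAM_DRAW);
void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(dst, clientPixels, bufferSize);   // CPU -> PBO copy
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
// With a PBO bound, the last argument is a byte offset into the PBO,
// not a client-memory pointer.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(0));
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);  // restore client-memory uploads
```

Note that this still performs the CPU-to-PBO memcpy, which is why, as discussed, it only pays off if useful work happens between the upload and the draw.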


As a general rule, a PBO is really only useful if you don't need to do anything until a frame or more after the transfer.  Otherwise you're incurring the transfer overhead twice - once from system memory to the PBO and once from the PBO to the texture - and both must complete before you can draw anything.  You'd also have a hard time porting it to D3D as the concept of PBOs doesn't even exist there (they're not needed as the specific problems they solve don't exist in the same way).


Yes, I was thinking about that: maybe I could make all changes to the client buffer and fire the transfer at the end of a frame, doing a deferred render of the overlay on the next frame. In other words: fill the buffer and start the transfer, do all other rendering operations, and then render the overlay. The overlay would always be a frame behind, though.

 

Anyway, it seems that the way I am doing it is the way to go. I previously tried using OpenGL primitives (for example, GL_LINES to draw lines, GL_QUADS to draw rectangles, etc.), but the results were never consistent between different graphics cards. I also tried glRasterPos and glDrawPixels, but I read somewhere that doing that was far from optimal...


glDrawPixels can be reasonably optimal; I wouldn't rule it out.  If you've a simple enough use case (i.e. no alpha blending, no scaling) and if you match the format and type parameters to what your framebuffer uses natively (commonly GL_BGRA and GL_UNSIGNED_INT_8_8_8_8_REV), and if you remember to disable all texturing/lights/etc before issuing the call, it can do quite a fast transfer, and is probably the simplest way to get the job done.
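To illustrate the format matching mentioned above: with GL_BGRA and GL_UNSIGNED_INT_8_8_8_8_REV, each pixel is a single 32-bit value laid out as 0xAARRGGBB, which on little-endian machines is the byte sequence B, G, R, A. A small helper (the name is mine, not from any library) to build such a pixel:

```cpp
#include <cassert>
#include <cstdint>

// Packs one pixel for format GL_BGRA, type GL_UNSIGNED_INT_8_8_8_8_REV:
// alpha in the top byte, then red, green, blue (0xAARRGGBB).
inline uint32_t PackBGRA8888Rev(uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    return (uint32_t(a) << 24) | (uint32_t(r) << 16) |
           (uint32_t(g) << 8)  |  uint32_t(b);
}

// A buffer of such values can then be handed straight to glDrawPixels:
//   glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
```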


Well, I do support alpha blending; in fact, the library has a software implementation that blends into the client buffer. I can't recall whether this was one of the reasons I dropped glDrawPixels.
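A software source-over blend of that kind, per channel on 8-bit values, typically looks something like this. This is a generic sketch, not the library's actual code:

```cpp
#include <cassert>
#include <cstdint>

// Per-channel "source over destination" blend with non-premultiplied
// alpha: the software equivalent of
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
// The +127 rounds the fixed-point division by 255.
inline uint8_t BlendChannel(uint8_t src, uint8_t dst, uint8_t srcAlpha) {
    return uint8_t((src * srcAlpha + dst * (255 - srcAlpha) + 127) / 255);
}
```

A full pixel blend applies this to R, G, and B; how destination alpha is handled depends on whether the overlay itself is later composited with alpha.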

 

I do think one of the main reasons for the drop is that with glDrawPixels I have to make the call every single frame, even if no widget changes are recorded. With a texture, if nothing has changed, no call to glTexSubImage is needed, and I can just render the overlay quad with the same texture as the previous frame.


If you're looking for alternative approaches, my OpenGL GUI lib is based solely on immediate-mode OpenGL calls (OpenGL 2.x); even every single character is rendered this way. Though this is highly unoptimized, I can render 1000 GUI elements per frame without any problem. Considering the optimization potential (using VBOs to store larger collections of elements, e.g. windows or panels), there's a lot of headroom here.

 

IMHO the greatest benefit of all this is the use of shaders.


Immediate OpenGL calls? What exactly do you mean? Do you mean, as I said before, rendering lines with GL_LINES and rects with GL_QUADS, or DrawArrays/DrawElements?

 

I did that once, but it was not consistent: depending on the card (NVIDIA/ATI/Intel), a line would end a pixel short or a pixel too long, an outline rect would not exactly match a filled rect, etc. Even with the 0.375f pixel offset trick, I wouldn't get pixel-perfect matches and would have to compensate one way or another.
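Much of that inconsistency comes from vertices landing exactly on pixel boundaries, where rasterization rules differ between vendors. With an orthographic projection over the pixel grid, addressing pixel *centers* (x + 0.5) tends to make the result deterministic; the old 0.375 offset served a similar purpose. A sketch of the mapping for the x axis (the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Maps a pixel column to clip space for an orthographic projection that
// covers the framebuffer (0..width). The +0.5 addresses the pixel center
// instead of its left edge, so the vertex never sits exactly on a pixel
// boundary where rasterization rules are vendor-dependent.
inline float PixelCenterToClipX(int x, int width) {
    return (float(x) + 0.5f) * 2.0f / float(width) - 1.0f;
}
```

The y axis is analogous, usually with a sign flip since window coordinates grow downward.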

 

I am not seeing any issues with my approach right now, and I am not really looking for alternatives, but it's the kind of thing you don't see discussed much. I've seen all kinds of shaders, for example, but not one specific to GUI rendering, so I was just wondering if there was some de facto way to do it that I didn't know about.

