Making a GUI system with C++ OpenGL

Original post by ElPeque2:

All this could sound extremely noob, you have been warned :P. I'd like it to be OO, and of course have it represented in a tree structure (a window has more windows within it, and those windows can also have more windows, buttons, and all the other widgets). I was thinking it would be nice to give widgets the ability to draw themselves to their parent widget's surface and not to the screenbuffer, so that:

- if I decide to hide/move the parent, then all of its children will hide/move.
- I only have to render them once, until the next time they change or move relative to their parent.
- all children globally share the z position of the parent (but still have an order among that parent's children). In other words, the depth of the widgets respects the tree structure.

So, seeing what OpenGL has to offer (and asking to be given advice about everything :P), I can draw raster graphics (glDrawPixels and such) or go the 3D way (textured quads, and I would have a nice way to handle the relative widget positions, as using OpenGL's matrices feels tree-ish enough already :P).

The problem is (besides me being a noob) that I don't know how (or if it is even possible or REASONABLE) to use that to draw to some image data in each widget and not directly to the screenbuffer. Can anybody shed some light on this for me? Better all-around ideas? I did a GUI system that worked very well in Director MX 2004, drawing with "copypixels", but using OpenGL seems to me like quite a different paradigm than just doing blits to the screen.

Thanks for reading me :).
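To make the tree idea a bit more concrete, here is a minimal C++ sketch of the structure I have in mind (Widget, Rect, drawSelf and the rest are placeholder names, not an existing API):

#include <vector>

struct Rect { int x, y, w, h; };              // position relative to the parent

class Widget {
public:
    virtual ~Widget() {}

    void addChild(Widget* child) { children.push_back(child); }

    // Draw this widget and then its children, offset by this widget's
    // position. (originX, originY) is the parent's absolute corner.
    void drawTree(int originX, int originY) const {
        if (!visible) return;                 // hiding the parent hides the subtree
        int absX = originX + bounds.x;
        int absY = originY + bounds.y;
        drawSelf(absX, absY);                 // widget-specific drawing
        for (size_t i = 0; i < children.size(); ++i)
            children[i]->drawTree(absX, absY);   // children sit on top, in order
    }

protected:
    virtual void drawSelf(int absX, int absY) const {}  // overridden per widget type

    Rect bounds = {0, 0, 0, 0};
    bool visible = true;
    std::vector<Widget*> children;
};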

I'd advise against using OpenGL's matrix stack (push/pop) for two reasons:

1. When I used it in the past it wasn't brilliantly fast.

2. You only need translations in 2 dimensions and full matrix stuff seems overkill.

The aggravation will come with clipping -- when you draw components in X or Win32, the rectangle is a window and clips its children at its edges. You won't get that clipping for free if your rectangle is just a rectangle; you'll have to do it yourself, and that gets boring rapidly. You could simply ignore this and not do the clipping, but clipping becomes important for scrolling lists and similar controls.
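For what it's worth, the by-hand clipping mostly boils down to intersecting rectangles; a minimal sketch (Rect is just a hypothetical x/y/width/height struct):

#include <algorithm>

struct Rect { int x, y, w, h; };

// Intersect a child rectangle (in absolute coordinates) with its parent's
// visible area. A zero-sized result means the child is completely clipped.
Rect clipTo(const Rect& child, const Rect& parent) {
    int left   = std::max(child.x, parent.x);
    int top    = std::max(child.y, parent.y);
    int right  = std::min(child.x + child.w, parent.x + parent.w);
    int bottom = std::min(child.y + child.h, parent.y + parent.h);

    Rect r;
    r.x = left;
    r.y = top;
    r.w = std::max(0, right - left);
    r.h = std::max(0, bottom - top);
    return r;
}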

The description of what you want to do seems spot on. Go for it.

If you want to learn how to do it, then don't use a 3rd-party library; if you just want a GUI, then do use one. My point being, there is no better way to learn than to do.

I am also writing a simple GUI system for use in my own 2D game, and when I first started thinking about how to structure it I came up with something similar to yours. My first idea was something like this: I started with a window which had a number of child windows. These child windows would be rendered in an OpenGL viewport that was sized and positioned to match the parent window, and windows attached to each of these child windows would in turn be drawn with that child's viewport. That way I wouldn't have to worry about clipping, since OGL would handle it for me.

But that meant a lot of calls to glViewport() and glOrtho(). I had to re-position the viewport every time I wanted to draw a child window, and I also had to keep track of a number of other things, which made me realize that this was not the best way to do it. It felt like a hack right from the start, to be honest. :)

So I decided to clip the windows and widgets myself. What I have now is a class Widget, which is essentially a 2D rectangle on the screen. Each widget has a number of child widgets that are drawn on top of the parent widget. Each widget also has a number of "primitives", which are the graphical elements that make up the widget. These primitives are then clipped against the viewable area of the widget they belong to. Right now I have two types of primitives, lines and boxes, which are just lines and quads with a given color. I'm also going to add textured-quad and text primitives when I need them.

When I draw the widgets, I start from the top of the widget tree and draw each widget's primitives inside a clipping area that I pass along to the drawing function. The clipping area covers the whole GUI area to start with, and it is then narrowed by clipping the current clipping area against each widget as it is drawn. The clipping area is also going to be used to pass input to the right widget.
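Roughly, that pass could look like this sketch, reusing the Rect/clipTo idea from the clipping sketch above (Widget and Primitive here are placeholder types, loosely matching what I described):

#include <vector>

// Rect and clipTo come from the rectangle-clipping sketch earlier in the thread.
struct Primitive { void drawClipped(const Rect& clip) const; /* line, box, ... */ };

struct Widget {
    Rect bounds;                        // absolute screen rectangle of the widget
    std::vector<Primitive> primitives;  // the graphical elements of the widget
    std::vector<Widget*> children;
};

void drawWidget(const Widget& w, const Rect& parentClip) {
    Rect clip = clipTo(w.bounds, parentClip);   // narrow the allowed area
    if (clip.w == 0 || clip.h == 0)
        return;                                 // completely clipped away
    for (size_t i = 0; i < w.primitives.size(); ++i)
        w.primitives[i].drawClipped(clip);      // clip lines/quads against 'clip'
    for (size_t i = 0; i < w.children.size(); ++i)
        drawWidget(*w.children[i], clip);       // children inherit the tighter clip
}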

Clipping rectangles or quads is not that big of a deal. What can be tricky is clipping lines and textured quads.
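For the textured-quad case, one way is to shrink the texture coordinates by the same proportion as the screen rectangle; a sketch (TexQuad is a placeholder type, Rect as in the earlier clipping sketch):

#include <algorithm>

struct TexQuad { float x, y, w, h; float u0, v0, u1, v1; };

TexQuad clipTexQuad(const TexQuad& q, const Rect& clip) {
    float cx = std::max(q.x,       (float)clip.x);
    float cy = std::max(q.y,       (float)clip.y);
    float cr = std::min(q.x + q.w, (float)(clip.x + clip.w));
    float cb = std::min(q.y + q.h, (float)(clip.y + clip.h));

    float du = (q.u1 - q.u0) / q.w;     // texture coordinates per screen pixel
    float dv = (q.v1 - q.v0) / q.h;

    TexQuad out = q;
    out.x = cx;
    out.y = cy;
    out.w = std::max(0.0f, cr - cx);
    out.h = std::max(0.0f, cb - cy);
    out.u0 = q.u0 + (cx - q.x) * du;    // shift UVs by the amount clipped off
    out.v0 = q.v0 + (cy - q.y) * dv;
    out.u1 = out.u0 + out.w * du;
    out.v1 = out.v0 + out.h * dv;
    return out;
}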

I hope my ramble could help you in some way or at least give you some ideas. Now I'm off coding the z-sorting. :)

Edit: To answer your question: you could use glCopyTexImage2D to render the child windows to a texture, which you then render to the screen. NeHe OpenGL tutorial #36
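A rough sketch of that approach, assuming the window contents have already been drawn into the back buffer; the fixed 256x256 size is only an example (older cards want power-of-two textures):

#include <GL/gl.h>

// Copies a 256x256 region of the framebuffer (lower-left corner at x, y)
// into a new texture that can be reused until the window changes.
GLuint cacheWindowToTexture(GLint x, GLint y) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // ... the window has already been drawn into the back buffer at (x, y) ...

    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, x, y, 256, 256, 0);
    return tex;
}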

Quote:
Original post by Redien
You could use glCopyTexImage2D to render the child windows to a texture, which you then render to the screen. NeHe OpenGL tutorial #36


Another option would be to use OpenGL framebuffer objects (FBOs), which are supposed to be more efficient than drawing and then copying with glCopyTexImage2D. With FBOs you can draw directly to a texture. FBOs may not be as widely supported, though:
Framebuffer Objects 101
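A minimal sketch of the FBO setup, assuming the EXT_framebuffer_object entry points are already loaded (e.g. via glewInit()) and 'tex' is an existing texture of the right size:

#include <GL/glew.h>

GLuint makeWidgetFbo(GLuint tex) {
    GLuint fbo;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
            != GL_FRAMEBUFFER_COMPLETE_EXT) {
        // not supported / misconfigured: fall back to glCopyTexImage2D
    }

    // While this FBO is bound, everything rendered goes into 'tex'.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  // back to the window framebuffer
    return fbo;
}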

When you initialize a framebuffer object, is it the same size as your rendering context (i.e. the size of your window)? So basically, can it be a non-power-of-two size?

Quote:
Original post by Redien

... lots of text ...


Why not use glScissor for clipping?

Quote:

The glScissor function defines the scissor box.

void glScissor(GLint x, GLint y, GLsizei width, GLsizei height);

Parameters
x, y
The lower-left corner of the scissor box. Initially (0,0).

width, height
The width and height of the scissor box. When an OpenGL context is first attached to a window, width and height are set to the dimensions of that window.


Remarks
The glScissor function defines a rectangle, called the scissor box, in window coordinates. The first two parameters, x and y, specify the lower-left corner of the box. The width and height parameters specify the width and height of the box.

The scissor test is enabled and disabled using glEnable and glDisable with argument GL_SCISSOR_TEST. While the scissor test is enabled, only pixels that lie within the scissor box can be modified by drawing commands. Window coordinates have integer values at the shared corners of framebuffer pixels, so glScissor(0,0,1,1) allows only the lower-left pixel in the window to be modified, and glScissor(0,0,0,0) disallows modification to all pixels in the window.

When the scissor test is disabled, it is as though the scissor box includes the entire window.

The following functions retrieve information related to glScissor:

glGet with argument GL_SCISSOR_BOX

glIsEnabled with argument GL_SCISSOR_TEST

Error Codes
The following are the error codes generated and their conditions.

GL_INVALID_VALUE either width or height was negative.
GL_INVALID_OPERATION glScissor was called between a call to glBegin and the corresponding call to glEnd.
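In practice it is only a couple of calls around each widget; a minimal sketch (Rect is a hypothetical widget rectangle stored with a top-left origin):

#include <GL/gl.h>

struct Rect { int x, y, w, h; };   // widget rectangle, top-left origin

// glScissor works in window coordinates with the origin at the lower-left,
// so a top-left-origin rectangle has to be flipped against the window height.
void drawWidgetScissored(const Rect& widget, int windowHeight) {
    glEnable(GL_SCISSOR_TEST);
    glScissor(widget.x,
              windowHeight - (widget.y + widget.h),  // flip y
              widget.w,
              widget.h);
    // ... draw the widget and its children here ...
    glDisable(GL_SCISSOR_TEST);
}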

Quote:
Original post by ggp83
Why not use glScissor for clipping?


I wasn't aware that such a function existed. Thanks for the suggestion, but I'd still like to do the clipping by hand, since I could actually learn something from it and it would keep the OGL state as untouched as possible. :)

Quote:
Original post by Katie
I'd advise against using OpenGL's matrix stack (push/pop) for two reasons:

1. When I used it in the past it wasn't brilliantly fast.

2. You only need translations in 2 dimensions and full matrix stuff seems overkill.

The aggravation will come with clipping -- when you draw components in X or Win32, the rectangle is a window and clips its children at its edges. You won't get that clipping for free if your rectangle is just a rectangle; you'll have to do it yourself, and that gets boring rapidly. You could simply ignore this and not do the clipping, but clipping becomes important for scrolling lists and similar controls.


Full matrices would let you add lots of cool 3D effects, and I don't think it would slow anything down, imho. Of course it would be quite a bit more complicated than keeping it simple, but you can do lots of fancy stuff :). GUIs are a game's first impression, and this kind of improvement could add an edge.
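As a sketch of what I mean: each widget pushes its own transform, so children come out positioned relative to their parent, and a rotate or scale slipped in there gives those 3D effects (Widget here is just a placeholder with a relative position, a draw routine and a child list):

#include <GL/gl.h>
#include <vector>

struct Widget {                        // placeholder widget, not a real API
    float x, y;                        // position relative to the parent
    std::vector<Widget*> children;
    void drawSelf() const;             // widget-specific drawing at the local origin
};

void drawWithTransform(const Widget& w) {
    glPushMatrix();
    glTranslatef(w.x, w.y, 0.0f);
    // e.g. glRotatef(angle, 0.0f, 1.0f, 0.0f); here for a card-flip effect
    w.drawSelf();
    for (size_t i = 0; i < w.children.size(); ++i)
        drawWithTransform(*w.children[i]);   // children inherit the parent's transform
    glPopMatrix();
}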

Quote:
Original post by nmi
What about using a library that fits your needs:
http://libufo.sourceforge.net/

Maybe you also want to help those people to improve it.


Well, I'm not against using libs, but right now I'd like to do this myself.

Quote:
Original post by Redien
Lots of stuff :P


After I made my post I went on searching for answers (the Red Book mainly :P), and yes, you helped me put some order into my thoughts XD. I had seen the glScissor tool in the Red Book and was coming happily to tell you, but someone did that already :P.

Quote:
Original post by venzon
Quote:
Original post by Redien
You could use glCopyTexImage2D to render the child windows to a texture, which you then render to the screen. NeHe OpenGL tutorial #36


Another option would be to use OpenGL framebuffer objects (FBOs), which are supposed to be more efficient than drawing and then copying with glCopyTexImage2D. With FBOs you can draw directly to a texture. FBOs may not be as widely supported, though:
Framebuffer Objects 101


Right, I had found exactly that article, and I was wondering if I could actually rely on such extensions for a GUI (given that I probably shouldn't leave the game without a GUI when they're not supported :P).

I'm starting to realize there is no simple answer to my problem. Maybe I should go the glCopyTexImage2D way?

Is there a way to blit a pixel buffer to another pixel buffer (and not to the screenbuffer or a texture) so I can then do a glDrawPixels, or something like that? (This is supposing I totally drop the idea of making a 3D GUI.)

And I had another idea that may be too crazy (or plain stupid XD) to achieve that: if there is no such way, maybe I could have SDL do all those intermediate blits to "surface" objects and only at the end blit the full composition to the OpenGL screenbuffer.

What do you think?

PS: Thanks everyone for your help. :)

You might want to check out some more libs, just to see how other people have done it.

I have been following guichan [http://guichan.sf.net] development for some time; nicely written code, in my opinion. They have both an SDL and an OpenGL backend, so you can see either way (software blitting and glXXX) in action.

Personally I wouldn't use any extensions in this case, partially because of the need to provide a fallback (if a card doesn't support it) but mainly because the GUI shouldn't cost that many vertices. I like the flexibility that immediate mode (glVertex3f, ...) offers, but that may also be because I lack experience with the newer stuff (my hardware isn't exactly new).

If you really want to blit buffers into buffers SDL_BlitSurface is a good choice; though from what I understood from your first post I don't see why you want to.

If you 'glClear' your screen each frame you have to redraw the complete GUI anyway; and then you don't want to recursively blit buffers (every frame!), so you need some kind of caching. You might want to skip software blitting entirely.

Relative positions and showing/hiding of child nodes seem like features of your tree and nodes; I don't get why this would require a particular drawing mechanism (blitting).

Best regards,
s

Quote:
Original post by sunky
You might want to check out some more libs, just to see how other people have done it.

I have been following guichan [http://guichan.sf.net] development for some time; nicely written code, in my opinion. They have both an SDL and an OpenGL backend, so you can see either way (software blitting and glXXX) in action.

Personally I wouldn't use any extensions in this case, partially because of the need to provide a fallback (if a card doesn't support it) but mainly because the GUI shouldn't cost that many vertices. I like the flexibility that immediate mode (glVertex3f, ...) offers, but that may also be because I lack experience with the newer stuff (my hardware isn't exactly new).

If you really want to blit buffers into buffers SDL_BlitSurface is a good choice; though from what I understood from your first post I don't see why you want to.

If you 'glClear' your screen each frame you have to redraw the complete GUI anyway; and then you don't want to recursively blit buffers (every frame!), so you need some kind of caching. You might want to skip software blitting entirely.

Relative positions and showing/hiding of child nodes seem like features of your tree and nodes; I don't get why this would require a particular drawing mechanism (blitting).

Best regards,
s


Agreed about extensions.

About blitting, I'd do that to:
- draw every window component at the right depth, respecting the tree structure, and optionally for clipping.
- cache all widgets that have not changed relative to their parents.

So if I'm about to render a new frame, I glClear, then draw the game (optionally), and then draw the GUI on top. If nothing in the GUI tree changed, then the tree's root will have a cached image of exactly what the GUI should look like, so I only have to blit that image on top.

Isn't that the way windows work? When you drag a window, all its graphical content is moved as if it were just a single image, but it is probably composed of maybe 100 controls and buttons, and it is not being completely redrawn, right? Just guessing here.
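A rough sketch of that caching idea, with a dirty flag on the tree and one cached texture; every name here is a hypothetical helper, not a real API:

#include <GL/gl.h>

// Only re-render the cached GUI texture when something in the tree is dirty;
// otherwise the GUI costs a single textured quad per frame.
void drawGui(Widget& root, GLuint guiTexture) {
    if (root.subtreeChanged()) {                 // some widget set a dirty flag
        redrawGuiIntoTexture(root, guiTexture);  // via an FBO or glCopyTexImage2D
        root.clearChangedFlags();
    }
    drawFullscreenTexturedQuad(guiTexture);      // one blit of the cached GUI
}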

Quote:
Original post by sunky
Personally I wouldn't use any extensions in this case, partially because of the need to provide a fallback (if a card doesn't support it) but mainly because the GUI shouldn't cost that many vertices. I like the flexibility that immediate mode (glVertex3f, ...) offers, but that may also be because I lack experience with the newer stuff (my hardware isn't exactly new).

Don't use immediate mode. Use display lists or vertex arrays instead if you want something widely supported. You probably don't need to support OpenGL versions older than 1.2. It's probably better to do things the right way (with VBOs) and then provide a fallback for hardware that doesn't support the extension.
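A minimal sketch of drawing GUI quads from a vertex array with plain OpenGL 1.1 calls, no extensions involved (GuiVertex is just an example vertex layout):

#include <GL/gl.h>
#include <vector>

struct GuiVertex { GLfloat x, y; GLubyte r, g, b, a; };

void drawGuiQuads(const std::vector<GuiVertex>& verts) {
    if (verts.empty()) return;

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    glVertexPointer(2, GL_FLOAT, sizeof(GuiVertex), &verts[0].x);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(GuiVertex), &verts[0].r);

    // Four vertices per widget rectangle, filled in while walking the tree.
    glDrawArrays(GL_QUADS, 0, (GLsizei)verts.size());

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}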

