
Making a GUI system with C++ and OpenGL

Recommended Posts

ElPeque2    122
All this could sound extremely noob, you have been warned. :P

I'd like the GUI to be object-oriented and, of course, represented as a tree structure: a window has more windows within it, and those windows can in turn contain more windows, buttons, and all the other widgets. I was thinking it would be nice to give widgets the ability to draw themselves to their parent widget's surface rather than to the screen buffer, so that:

- if I decide to hide/move the parent, all of its children hide/move with it;
- I only have to render a widget once, until the next time it changes or moves relative to its parent;
- all children share the parent's z position globally (while still having an order among that parent's children). In other words, the depth of the widgets respects the tree structure.

So, looking at what OpenGL has to offer (and asking for advice about everything :P), I can draw raster graphics (glDrawPixels and such) or go the 3D way (textured quads, which would give me a nice way to handle relative widget positions, since OpenGL's matrix stack already feels tree-ish enough :P).

The problem is (besides me being a noob) that I don't know how, or whether it is even possible or reasonable, to use that to draw into some image data held by each widget rather than directly to the screen buffer. Can anybody shed some light on this for me? Better all-around ideas? I made a GUI system that worked very well in Director MX 2004 by drawing with "copypixels", but OpenGL seems to me like quite a different paradigm from just blitting to the screen.

Thanks for reading. :)
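
To make it concrete, here's a rough sketch of the tree I have in mind; all names are made up, nothing from a real library:

#include <vector>

struct Widget {
    int x, y, w, h;                   // position/size relative to the parent
    bool visible;
    std::vector<Widget*> children;    // drawn after (i.e. on top of) the parent

    Widget() : x(0), y(0), w(0), h(0), visible(true) {}
    virtual ~Widget() {}

    void draw(int parentX, int parentY) {
        if (!visible) return;                    // hiding a parent hides the subtree
        int ax = parentX + x, ay = parentY + y;  // absolute screen position
        drawSelf(ax, ay);
        for (size_t i = 0; i < children.size(); ++i)
            children[i]->draw(ax, ay);           // children inherit the parent's depth
    }

    virtual void drawSelf(int ax, int ay) { /* issue the GL calls here */ }
};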

Katie    2244
I'd advise against using OpenGL's matrix stack (push and pop) for two reasons:

1. When I used it in the past it wasn't brilliantly fast.

2. You only need translations in 2 dimensions and full matrix stuff seems overkill.

The aggravation will come with clipping: when you draw components in X or Win32, the rectangle is a window, and it clips the children at its edges. You won't get that clipping for free if your rectangle is just a rectangle; you'll have to do it yourself, and that gets boring rapidly. You could simply ignore this and not clip at all, but clipping becomes genuinely useful for scrolling lists and similar controls.
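
The by-hand version boils down to intersecting each child's rectangle with its parent's clip rectangle before drawing. A minimal sketch, with a made-up Rect struct:

#include <algorithm>

struct Rect { int x, y, w, h; };

// Intersect two rectangles; a zero-sized result means "fully clipped away".
Rect intersect(const Rect& a, const Rect& b) {
    int x1 = std::max(a.x, b.x);
    int y1 = std::max(a.y, b.y);
    int x2 = std::min(a.x + a.w, b.x + b.w);
    int y2 = std::min(a.y + a.h, b.y + b.h);
    Rect r;
    r.x = x1;
    r.y = y1;
    r.w = x2 > x1 ? x2 - x1 : 0;
    r.h = y2 > y1 ? y2 - y1 : 0;
    return r;
}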

dave    2187
The description of what you want to do seems spot on. Go for it.

If you want to learn how to do it, don't use a 3rd-party library; if you just want a GUI, do use one. My point being: there is no better way to learn than to do.

Redien    122
I am also writing a simple GUI system for use in my own 2D game, and when I first started thinking about how to structure it I came up with something similar to yours. My first idea was this: I started with a window which had a number of child windows. These child windows would be rendered in an OpenGL viewport sized and positioned to match the parent window, and windows attached to each of these child windows would in turn be drawn with that child's viewport. That way I wouldn't have to worry about clipping, since OGL would handle it for me.

But that meant a lot of calls to glViewport() and glOrtho(). I had to re-position the viewport every time I wanted to draw a child window, and I also had to keep track of a number of other things, which made me realize this was not the best way to do it. It felt like a hack right from the start, to be honest. :)

So I decided to clip the windows and widgets myself. What I have now is a Widget class, which is essentially a 2D rectangle on the screen. Each widget has a number of child widgets that are drawn on top of it. Each widget also has a number of "primitives", the graphical elements that make up the widget. These primitives are clipped against the viewable area of the widget they belong to. Right now I have two types of primitives, lines and boxes, which are just lines and quads with a given color. I'm also going to add textured-quad and text primitives when I need them.

When I draw the widgets I start from the top of the widget tree and draw each widget's primitives inside a clipping area that I pass along to the drawing function. The clipping area covers the whole GUI area to start with, and is then narrowed by clipping the current area against each widget as it is drawn. The clipping area is also going to be used to route input to the right widget.
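
In sketch form, the recursive draw looks roughly like this (assuming a Rect struct and an intersect() helper like the one sketched above, plus made-up Widget members absoluteRect() and drawPrimitives()):

void Widget::draw(const Rect& parentClip) {
    // Narrow the clip: only the part of this widget inside the parent's
    // visible area may be drawn.
    Rect clip = intersect(parentClip, absoluteRect());
    if (clip.w == 0 || clip.h == 0)
        return;                        // fully outside: skip the whole subtree

    drawPrimitives(clip);              // each primitive clips itself to 'clip'
    for (size_t i = 0; i < children.size(); ++i)
        children[i]->draw(clip);       // pass the narrowed area down the tree
}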

Clipping rectangles or quads is not that big of a deal. What can be tricky is clipping lines and textured quads.
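
For a textured quad, clipping means shrinking the quad and remapping its 0..1 texture coordinates proportionally. A sketch, reusing the made-up Rect/intersect from above (assumes y grows upward, matching GL window coordinates):

#include <GL/gl.h>

void drawClippedTexturedQuad(int qx, int qy, int qw, int qh, const Rect& clip) {
    Rect quad; quad.x = qx; quad.y = qy; quad.w = qw; quad.h = qh;
    Rect v = intersect(clip, quad);
    if (v.w == 0 || v.h == 0) return;
    // Map the visible sub-rectangle back into the texture's 0..1 range.
    float u0 = (v.x - qx)       / (float)qw;
    float v0 = (v.y - qy)       / (float)qh;
    float u1 = (v.x + v.w - qx) / (float)qw;
    float v1 = (v.y + v.h - qy) / (float)qh;
    glBegin(GL_QUADS);
        glTexCoord2f(u0, v0); glVertex2i(v.x,       v.y);
        glTexCoord2f(u1, v0); glVertex2i(v.x + v.w, v.y);
        glTexCoord2f(u1, v1); glVertex2i(v.x + v.w, v.y + v.h);
        glTexCoord2f(u0, v1); glVertex2i(v.x,       v.y + v.h);
    glEnd();
}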

I hope my ramble could help you in some way or at least give you some ideas. Now I'm off coding the z-sorting. :)

Edit: To answer your question: you could use the glCopyTexImage2D function to render the child windows into a texture which you then render to the screen. See NeHe OpenGL tutorial #36.
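
The rough shape of that approach, as a sketch; the 256x256 size is arbitrary (older cards want power-of-two textures) and drawWidgetSubtree() is a made-up placeholder:

#include <GL/gl.h>

GLuint cacheTex = 0;

void cacheWidgetToTexture() {
    if (cacheTex == 0) {
        glGenTextures(1, &cacheTex);
        glBindTexture(GL_TEXTURE_2D, cacheTex);
        // Allocate storage only; pixels are filled in by the copy below.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }
    drawWidgetSubtree();   // 1. draw the widget into the back buffer's corner
    glBindTexture(GL_TEXTURE_2D, cacheTex);
    // 2. copy that 256x256 framebuffer region into the cached texture
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, 256, 256, 0);
    // 3. clear, then draw cacheTex as a quad wherever the widget really sits
}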

venzon    256
Quote:
Original post by Redien
You could use the glCopyTexImage2D function to render the child windows into a texture which you then render to the screen. See NeHe OpenGL tutorial #36.


Another option would be to use OpenGL framebuffer objects (FBOs), which are supposed to be more efficient than drawing and then copying with glCopyTexImage2D; with FBOs you draw directly into a texture. FBOs may not be as widely supported, though:
Framebuffer Objects 101
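
A minimal setup, sketched with the EXT_framebuffer_object spelling since it's an extension (entry points loaded via GLEW or similar; width and height are placeholders):

GLuint fbo, tex;

void createRenderTarget(int width, int height) {
    // Create the texture that will receive the rendering.
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // Create the FBO and attach the texture as its color buffer.
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
        // fall back to the glCopyTexImage2D path
    }
    // Everything drawn while the FBO is bound lands in 'tex', not on screen.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   // back to the window
}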

soconne    105
When you initialize a framebuffer object, is it the same size as your rendering context (i.e. the size of your window)? So basically, can it be a non-power-of-two size?

ggp83    122
Quote:
Original post by Redien

... lots of text ...


Why not use glScissor for clipping?

Quote:

The glScissor function defines the scissor box.

void glScissor(GLint x, GLint y, GLsizei width, GLsizei height);

Parameters
x, y
The lower-left corner of the scissor box. Initially (0,0).

width, height
The width and height of the scissor box. When an OpenGL context is first attached to a window, width and height are set to the dimensions of that window.


Remarks
The glScissor function defines a rectangle, called the scissor box, in window coordinates. The first two parameters, x and y, specify the lower-left corner of the box. The width and height parameters specify the width and height of the box.

The scissor test is enabled and disabled using glEnable and glDisable with argument GL_SCISSOR_TEST. While the scissor test is enabled, only pixels that lie within the scissor box can be modified by drawing commands. Window coordinates have integer values at the shared corners of framebuffer pixels, so glScissor(0,0,1,1) allows only the lower-left pixel in the window to be modified, and glScissor(0,0,0,0) disallows modification to all pixels in the window.

When the scissor test is disabled, it is as though the scissor box includes the entire window.

The following functions retrieve information related to glScissor:

glGet with argument GL_SCISSOR_BOX

glIsEnabled with argument GL_SCISSOR_TEST

Error Codes
The following are the error codes generated and their conditions.

GL_INVALID_VALUE either width or height was negative.
GL_INVALID_OPERATION glScissor was called between a call to glBegin and the corresponding call to glEnd.
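
For the GUI case that boils down to something like the following (widget coordinates and drawChildren() are made up; note that y is measured from the bottom of the window):

glEnable(GL_SCISSOR_TEST);
glScissor(widgetX, widgetY, widgetW, widgetH);  // only pixels in this box change
drawChildren();                                 // anything outside is discarded
glDisable(GL_SCISSOR_TEST);

One caveat: the scissor box doesn't nest, so for child widgets you still have to intersect the boxes yourself before each glScissor call.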

Redien    122
Quote:
Original post by ggp83
Why not use glScissor for clipping?


I wasn't aware that such a function existed. Thanks for the suggestion, but I'd still like to do the clipping by hand, since I could actually learn something from it and it would keep the OGL state as untouched as possible. :)

ElPeque2    122
Quote:
Original post by Katie
I'd advise against using OpenGL's matrix stack (push and pop) for two reasons:

1. When I used it in the past it wasn't brilliantly fast.

2. You only need translations in 2 dimensions and full matrix stuff seems overkill.

[...]


Full matrices would let you add lots of cool 3D effects, and I don't think they would slow anything down, imho. Of course it would be quite a bit more complicated than keeping it simple, but you could do lots of fancy stuff. :) A GUI is a game's first impression, and this kind of improvement could add an edge.
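
The hierarchy maps naturally onto the matrix stack, too; a sketch, reusing the made-up Widget from my first post:

// Each widget pushes its own transform; children inherit it automatically,
// and glPopMatrix() restores the parent's frame afterwards.
void drawWithMatrixStack(Widget& w) {
    glPushMatrix();
    glTranslatef((GLfloat)w.x, (GLfloat)w.y, 0.0f);
    // room for the fancy stuff: glRotatef() for a card-flip, glScalef() for zooms...
    w.drawSelf(0, 0);
    for (size_t i = 0; i < w.children.size(); ++i)
        drawWithMatrixStack(*w.children[i]);
    glPopMatrix();
}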

Quote:
Original post by nmi
What about using a library that fits your needs:
http://libufo.sourceforge.net/

Maybe you also want to help those people to improve it.


Well, I'm not against using libraries, but right now I'd like to do this myself.

Quote:
Original post by Redien
Lots of stuff :P


After I made my post I went searching for answers (the Red Book, mainly :P) and yes, you helped me put some order into my thoughts. XD I had seen glScissor in the Red Book and was happily coming back to mention it, but someone already did. :P

Quote:
Original post by venzon
Another option would be to use OpenGL framebuffer objects (FBOs), which are supposed to be more efficient than drawing and then copying with glCopyTexImage2D; with FBOs you draw directly into a texture. FBOs may not be as widely supported, though:
Framebuffer Objects 101


Right, I had found exactly that article, and I was wondering whether I could actually rely on such extensions for a GUI (given that I probably shouldn't ship the game without a GUI when the extension isn't supported :P).

I'm starting to realize there is no simple answer to my problem. Maybe I should go the glCopyTexImage2D way?

Is there a way to blit one pixel buffer into another pixel buffer (rather than into the screen buffer or a texture), so I can then do a glDrawPixels, or something like that? (This is supposing I totally drop the idea of a 3D GUI.)

And I had another idea, maybe too crazy (or plain stupid XD), to achieve that: if there is no such way, maybe I could have SDL do all those intermediate blits to "surface" objects and just blit the full composition to the OpenGL screen buffer at the end.

What do you think?

PS: Thanks everyone for your help. :)

sunky    136
You might want to check out some more libs, just to see how other people have done it.

I have been following guichan [http://guichan.sf.net] development for some time; nicely written code, in my opinion. They have both an SDL and an OpenGL backend, so you can see either way (software blitting and glXXX) in action.

Personally I wouldn't use any extensions in this case, partially because of the need to provide a fallback (if a card doesn't support it) but mainly because the GUI shouldn't cost that many vertices. I like the flexibility that immediate mode (glVertex3f, ...) offers, but that may also be because I lack experience with the newer stuff (my hardware isn't exactly new).

If you really want to blit buffers into buffers, SDL_BlitSurface is a good choice, though from what I understood of your first post I don't see why you would want to.

If you glClear your screen each frame you have to redraw the complete GUI anyway, and then you don't want to recursively blit buffers (every frame!), so you need some kind of caching. You might want to skip software blitting entirely.

Relative positions and show/hide of child nodes seem like features of your tree and its nodes; I don't see why they would require a particular drawing mechanism (blitting).

Best regards,
s

ElPeque2    122
Quote:
Original post by sunky
[...] Personally I wouldn't use any extensions in this case, partially because of the need to provide a fallback (if a card doesn't support it) but mainly because the GUI shouldn't cost that many vertices. [...]

If you glClear your screen each frame you have to redraw the complete GUI anyway, and then you don't want to recursively blit buffers (every frame!), so you need some kind of caching. You might want to skip software blitting entirely.

Relative positions and show/hide of child nodes seem like features of your tree and its nodes; I don't see why they would require a particular drawing mechanism (blitting).


Agreed about extensions.

About blitting, I'd do that to:
- draw every window component at the right depth, respecting the tree structure (and optionally for clipping);
- cache all widgets that have not changed relative to their parents.

So if I'm about to render a new frame, I glClear, then draw the game (optionally), and then draw the GUI on top. If nothing in the GUI tree changed, the tree's root holds a cached image of exactly what the GUI should look like, so I only have to blit that one image on top.

Isn't that how OS windows work? When you drag a window, all its graphical content moves as if it were a single image, even though it is probably composed of maybe 100 controls and buttons, and it isn't being completely redrawn. Right? Just guessing here.
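
The caching I have in mind is basically a dirty flag per node; a sketch with made-up helpers:

// 'dirty' is set on a widget, and on all of its ancestors, whenever the
// widget changes; clean subtrees are drawn from their cached image.
void renderCached(Widget& w) {
    if (w.dirty) {
        redrawSubtreeIntoCache(w);   // recompose w and its children off-screen
        w.dirty = false;             // clean until something below w changes
    }
    blitCacheToParent(w);            // cheap blit of the pre-composed image
}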

apatriarca    2365
Quote:
Original post by sunky
Personally I wouldn't use any extensions in this case, partially because of the need to provide a fallback (if a card doesn't support it) but mainly because the GUI shouldn't cost that many vertices. I like the flexibility that immediate mode (glVertex3f, ...) offers, but that may also be because I lack experience with the newer stuff (my hardware isn't exactly new).

Don't use immediate mode. Use display lists or vertex arrays instead if you want something widely supported; you probably don't need to support OpenGL versions older than 1.2. It's probably better to do things the right way (with VBOs) and then provide a fallback for hardware that doesn't support the extension.
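
For comparison, the vertex-array version of a single GUI quad; this is core since OpenGL 1.1, no extensions needed (coordinates are arbitrary):

#include <GL/gl.h>

void drawGuiQuad() {
    // One quad as a client-side vertex array: four 2D positions.
    GLfloat quad[] = {
          0.0f,  0.0f,
        100.0f,  0.0f,
        100.0f, 50.0f,
          0.0f, 50.0f,
    };
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, quad);   // 2 floats per vertex, tightly packed
    glDrawArrays(GL_QUADS, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);
}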

