
OpenGL Rendering a GUI efficiently


I did some profiling the other day and found that drawing my GUI over my 3D scene increased my frame time by up to 30%. I'm using GWEN for my GUI, but it lets you write a custom renderer, which I did. I used the same terrible techniques as the sample renderer but was able to squeeze out a little more performance by integrating it better into my engine, which removed a few OpenGL calls.

The problem with the renderer is the way it works: I derive a renderer class and override a DrawTexturedRect function, and that is the only way I get geometry. In the sample implementation (and mine), each call places two triangles forming a quad into a CPU-side buffer; when everything is finished drawing, or the buffer is full, a flush function uploads the buffer to the GPU, draws it, and then empties it. The problem is that this happens every frame, and I have no way of knowing whether the geometry has actually changed between frames, so I have no choice but to do everything all over again.

So I was thinking of ways to change this awful setup while only being allowed to use DrawTexturedRect, and I thought about instancing a single 1x1 quad (everything is a quad with the same texcoords) and then scaling and translating it in the vertex shader. Unfortunately, instanced rendering is only core in OpenGL 3.1+, and my minimum is GL 2.1, where it is only available as an extension (and probably not on Intel cards).

So my question boils down to: is it faster to draw individual quads (one draw call per quad) that are already in GPU memory, or to use the terrible buffering solution? Any other general GUI drawing advice would be appreciated as well.
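For reference, the flush setup looks roughly like this in sketch form (simplified; the real DrawTexturedRect in GWEN takes a texture pointer and a Gwen::Rect, so treat the exact signature and names like MaxVerts as illustrative):

#include <cstring> // memcpy
// Assumes a GL 2.1 context and that the GL headers are already included.

struct Vert { float x, y, u, v; };

class MyRenderer // would derive from Gwen::Renderer::Base
{
    static const size_t MaxVerts = 4096;
    Vert   m_verts[MaxVerts];
    size_t m_numVerts;
    GLuint m_vbo; // created once at startup with glGenBuffers

public:
    MyRenderer() : m_numVerts(0), m_vbo(0) {}

    // GWEN hands us one quad at a time; this is all the geometry we ever see.
    void DrawTexturedRect(float x, float y, float w, float h,
                          float u1, float v1, float u2, float v2)
    {
        if (m_numVerts + 6 > MaxVerts)
            Flush();

        const Vert quad[6] = {
            { x,     y,     u1, v1 }, { x + w, y,     u2, v1 },
            { x + w, y + h, u2, v2 }, { x,     y,     u1, v1 },
            { x + w, y + h, u2, v2 }, { x,     y + h, u1, v2 },
        };
        std::memcpy(&m_verts[m_numVerts], quad, sizeof(quad));
        m_numVerts += 6;
    }

    // Re-uploads and redraws everything, every frame -- this is the cost
    // I'm complaining about, since nothing tells me whether anything changed.
    void Flush()
    {
        if (m_numVerts == 0)
            return;
        glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
        glBufferData(GL_ARRAY_BUFFER, m_numVerts * sizeof(Vert),
                     m_verts, GL_STREAM_DRAW);
        glDrawArrays(GL_TRIANGLES, 0, (GLsizei)m_numVerts);
        m_numVerts = 0;
    }
};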



Gwen apparently supports caching (take a look at the ICacheToTexture interface in the base renderer). Unfortunately I haven't tried to implement it yet so I can't say anything about its usefulness.
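I haven't dug into GWEN's version, but the general idea behind any cache-to-texture scheme is plain render-to-texture: draw a control into an offscreen texture once, when it changes, then just blit that texture every frame. A rough sketch (core FBO names; on GL 2.1 you'd use the GL_EXT_framebuffer_object equivalents):

GLuint fbo, cacheTex; // one pair per cached control

void CreateCache(int w, int h)
{
    glGenTextures(1, &cacheTex);
    glBindTexture(GL_TEXTURE_2D, cacheTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, cacheTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void RepaintCache() // only when the control is dirty
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    // ... run the normal draw path for this control ...
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

// Every frame afterwards: draw a single quad textured with cacheTex
// instead of re-submitting the control's geometry.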



Gwen apparently supports caching (take a look at the ICacheToTexture interface in the base renderer). Unfortunately I haven't tried to implement it yet so I can't say anything about its usefulness.

Hmm, that's very interesting. Depending on how it works, that may be ideal. Unfortunately, none of the samples seem to use it.

Edited by ic0de


Each call places two triangles forming a quad into a CPU-side buffer; when everything is finished drawing, or the buffer is full, a flush function uploads the buffer to the GPU, draws it, and then empties it.

This waiting (draw other things, then draw GUI) might waste some parallelism.
Do you need normal 3D scene drawing while there is a dialog in front of it? A freeze frame (copy last frame to a texture, then draw it as a full-screen quad) should be cheaper.
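In sketch form (assuming a screen-sized texture was allocated up front; glCopyTexSubImage2D has been around since GL 1.1, so it fits your 2.1 minimum):

GLuint freezeTex; // screen-sized RGBA texture, allocated once at startup

// Right after rendering the last "live" frame, before the dialog opens:
void CaptureFreezeFrame(int screenW, int screenH)
{
    glBindTexture(GL_TEXTURE_2D, freezeTex);
    // Copy the back buffer straight into the texture.
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, screenW, screenH);
}

// While the dialog is up, skip the 3D scene entirely: draw freezeTex as a
// full-screen quad, then draw the GUI on top of it.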


This waiting (draw other things, then draw GUI) might waste some parallelism.

Do you need normal 3D scene drawing while there is a dialog in front of it? A freeze frame (copy last frame to a texture, then draw it as a full-screen quad) should be cheaper.

Unfortunately, the 3D scene must remain dynamic while GUI elements are being displayed.



I dunno if the way I have my stuff set up is optimal, but I have a simple sprite batching class that I use to draw quads. I don't really do anything special at all.

SpriteBatch manages the state, etc. It's dead simple and really doesn't do that much work internally: just tracking a few things, starting new batches, and finally issuing the draw calls. I pre-fill the index buffer and cap how large batches can get. Adding a sprite to the batch just needs a Rect for the position and another for the texture coordinates.
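Filling the index buffer is a one-time job, since every quad uses the same 4-vertex / 6-index pattern. Something like this (u16/u32 stand for the fixed-width integer typedefs; the sprite cap is whatever you choose):

#include <cstdint>
#include <vector>

typedef uint16_t u16;
typedef uint32_t u32;

// 16-bit indices cap a buffer at 65536 / 4 = 16384 sprites.
std::vector<u16> BuildQuadIndices(u32 maxSprites)
{
    std::vector<u16> indices(maxSprites * 6);
    for (u32 i = 0; i < maxSprites; ++i)
    {
        u16  base = (u16)(i * 4);    // 4 unique verts per quad
        u16* p    = &indices[i * 6]; // 6 indices per quad
        p[0] = base + 0; p[1] = base + 1; p[2] = base + 2; // first triangle
        p[3] = base + 0; p[4] = base + 2; p[5] = base + 3; // second triangle
    }
    return indices; // uploaded once into a GL_ELEMENT_ARRAY_BUFFER
}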

A batch is a simple struct like this:

// u32 is a typedef for uint32_t; Vertex2D and Texture are the engine's
// own vertex and texture types.
struct Batch
{
    Vertex2D* verts;      // CPU-side vertex data, memcpy'd to the vertex buffer at draw time
    Texture*  texture;    // the texture every sprite in this batch shares
    u32       numSprites;
    u32       numVerts;   // 4 per sprite
    u32       numIndices; // 6 per sprite
};

Then keep a vector of them. Batches hang around until you explicitly purge them, so once you add a bunch of quads you don't need to re-add them; the only thing that needs to happen each frame is the draw call (and a prior memcpy() to push the batch verts into the underlying vertex buffer).
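So per frame it boils down to something like this (a sketch: Bind() stands in for whatever texture binding the engine does, and the vertex format and pre-filled index buffer are assumed to be set up already):

#include <vector>

void DrawBatches(const std::vector<Batch>& batches, GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    for (size_t i = 0; i < batches.size(); ++i)
    {
        const Batch& b = batches[i];
        // Push this batch's CPU-side verts into the shared vertex buffer.
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        b.numVerts * sizeof(Vertex2D), b.verts);
        b.texture->Bind(); // assumed engine helper wrapping glBindTexture
        glDrawElements(GL_TRIANGLES, b.numIndices, GL_UNSIGNED_SHORT, 0);
    }
}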

It's not fancy or super robust, but I don't see why I couldn't draw an entire UI with just one or two draw calls. Drawing thousands of textured quads costs practically nothing, and my framerate is still well into the thousands. What's GWEN doing that's taking so long?

Edited by clashie

