
# OpenGL ES 2.0, FBO, low performance

## 6 posts in this topic

Hi

I added FBO support, but the performance looks much lower than I expected. I'm using two screens (the scene is rendered for two cameras). Without an FBO I get 55 fps; after adding one (1024x768 texture) it drops to 26 fps.

I also checked different configurations:

1. "Slower" - scene rendered 4 times + 2 quads with texture:

Camera1 to FBO1

Camera2 to FBO2

Screen1: Camera1 - fullscreen, Camera2 - FBO2, shown on small quad

Screen2: Camera2 - fullscreen, Camera1 - FBO1, shown on small quad

Results (for different texture sizes):

256x256 - 50 fps
1024x768 - 30 fps
2048x2048 - 9 fps

2. "Faster" - scene rendered 2 times + 4 quads with texture:

Camera1 to FBO1

Camera2 to FBO2

Screen1: Camera1 - FBO1, fullscreen, Camera2 - FBO2, shown on small quad

Screen2: Camera2 - FBO2, fullscreen, Camera1 - FBO1, shown on small quad

256x256 - 43 fps
1024x768 - 23 fps
2048x2048 - 8 fps

I'm using a device with a Vivante GC2000 GPU.

Part of the code:

```c
// ## rendering to texture 1 ##

glViewport(0, 0, 1024, 768);
glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// drawing scene

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, 1024, 768);

// ## rendering to texture 2 ##
// (the same steps, performed for the 2nd FBO)

eglMakeCurrent(...);  // setting screen 1
// drawing fullscreen quad with FBO1's texture
// drawing small quad with FBO2's texture
eglSwapBuffers(...);  // for screen 1

eglMakeCurrent(...);  // setting screen 2
// drawing fullscreen quad with FBO2's texture
// drawing small quad with FBO1's texture
eglSwapBuffers(...);  // for screen 2
```


Do I need to change/add something, or is the device simply not fast enough for using FBOs?


##### Share on other sites

Have you tried a power-of-two texture size, say 1024x1024? It shouldn't matter on desktop hardware, but on embedded hardware it might.


##### Share on other sites

I tried 1024x1024 - 22 fps, and 512x512 - 36 fps.

I also checked it with a single FBO (still rendering twice, with the same number of bind/unbind operations, etc.) - 30 fps instead of 23 (at 1024x768). Strange. Maybe when another buffer is used, the data has to be moved somewhere and that's where the performance goes?

Edited by _OskaR

##### Share on other sites

FPS should not be used as a performance metric during development. Have you measured the actual CPU time incurred by the FBO path?


##### Share on other sites

Looks like the rendering takes only a small part of each frame. I'm losing ~90% of the performance to calling the eglMakeCurrent function twice per frame.

Drawing the scene takes 1-2 ms; eglMakeCurrent takes 10-20 ms.

I found information saying it's better not to call it so often, but I need to draw on two screens.


##### Share on other sites

I don't think you should be using two contexts. You should probably use a single context, render to two off-screen FBO textures, and then composite both FBOs into the actual context's render buffer.


##### Share on other sites
I second the above.
There is no reason to have two contexts in this situation. At most you have one per thread, and in any case you set the context(s) as active once and then never touch them again.

The number of screens has no relationship to the number of contexts you should have. And if you have only one thread (as you do), there is nothing you can do on that thread with two contexts that you can't do with one; the second context is superfluous.

L. Spiro
