Optimizing OpenGL ES 2.0 performance on low-end mobiles

I've just got the latest build of my OpenGL ES 2.0 jungle game working on my Android devices (a Nexus 7 2012 tablet and a Cat B15 phone) and am currently deciding how best to address performance. (More details here: https://www.gamedev.net/blog/2199/entry-2262867-sound-particles-water-etc/)
All the graphics seem to be working fine, but the biggest issues appear to be fillrate, texture samples and shader complexity.
So far I've identified the biggest culprits:
  1. dynamic water shader
  2. particles
  3. pcf shadows
[attachment=35790:dynamic.jpg]
I'm aiming for 60fps even on low-end phones, if possible. It seems to me that I should offer graphics options so the user can get the best graphics / performance trade-off for their device.
 
Some of the issues are a consequence of using a scrolling pre-rendered background, with a colour texture and a custom depth texture (as depth textures are not supported on some devices). When the viewer moves around I currently render the background in two passes, one for the colour and one that writes the depth into an RGBA texture. Then in realtime I render dynamic objects (e.g. the animals) on top, reading from the depth texture, decoding it and comparing it against the fragment z value.
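Roughly speaking, the encode / decode is the standard pack-into-RGBA8 trick, something along these lines (a sketch only; the texture and varying names here are illustrative, not my exact code):

    // Encode a depth value in [0,1) into an RGBA8 render target.
    vec4 packDepth(float depth)
    {
        vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
        enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
        return enc;
    }

    // Decode it again when testing a dynamic object against the background.
    float unpackDepth(vec4 enc)
    {
        return dot(enc, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
    }

    // In the dynamic object's fragment shader:
    // float bgDepth = unpackDepth(texture2D(uBackgroundDepth, vScreenUV));
    // if (gl_FragCoord.z > bgDepth) discard; // hidden behind the background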
 
One obvious speedup is to remove the depth comparison against the background for shaders that do not require it. The particles look much nicer when they are hidden by trees / vegetation, but still look acceptable without it.
 
I always suspected the PCF shadows were going to be a problem. I was using PCF shadows on the pre-rendered scrolling background (which only needs refreshing every few frames) and on the animals, as they get shaded by trees etc. Taking this down to a single sample greatly sped up the shader, so it is obviously a bottleneck. Single-sample shadows look very bad, however, so I think the options are:
  • turning them off for the animals
  • simplifying them for the background, or using some kind of pre-calculation
  • randomized jitter / a rotated sample window, to get a softer shadow with less of a performance hit (see the sketch below this list)
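For the jittered option, I'm imagining something like this (a sketch only; uShadowMap and uShadowTexelSize are illustrative names, and it reuses unpackDepth from the earlier snippet):

    // 4-tap PCF with a pseudo-randomly rotated sample cross.
    float shadowJittered(vec2 uv, float z)
    {
        // Cheap per-fragment pseudo-random rotation angle.
        float a = fract(sin(dot(uv, vec2(12.9898, 78.233))) * 43758.5453) * 6.2831853;
        vec2 r = vec2(cos(a), sin(a)) * uShadowTexelSize;
        float lit = 0.0;
        lit += step(z, unpackDepth(texture2D(uShadowMap, uv + r)));
        lit += step(z, unpackDepth(texture2D(uShadowMap, uv - r)));
        lit += step(z, unpackDepth(texture2D(uShadowMap, uv + vec2(-r.y, r.x))));
        lit += step(z, unpackDepth(texture2D(uShadowMap, uv + vec2(r.y, -r.x))));
        return lit * 0.25; // 0 = fully shadowed, 1 = fully lit
    }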
The biggest question I am still facing is how to do the water. Is it actually *feasible* to run a complex water shader covering the whole screen on these devices (the worst case for sea areas), or do they lack the horsepower? I am actually considering pre-rendering static water as part of the background, then bodging in some kind of deep blue tint for the parts of animals that are below the surface each frame. It won't look amazing, but it should be super fast. I could even add some particles or something on the water surface to make it look at least a little dynamic.
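The underwater tint could be as simple as a couple of lines at the end of the animal fragment shader, something like this (uWaterHeight and uWaterColour are hypothetical uniforms, and it assumes the world position is passed down as a varying):

    // Tint the parts of an animal that are below the water line.
    if (vWorldPos.y < uWaterHeight)
    {
        float depthBelow = uWaterHeight - vWorldPos.y;
        colour.rgb = mix(colour.rgb, uWaterColour, clamp(depthBelow * 2.0, 0.0, 0.8));
    }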
This is what static water might look like:
[attachment=35791:simpleshader.jpg]
I am currently just rendering a giant quad for the water, then using depth testing against the custom depth texture to handle visibility. This is a bottleneck, as are the calculations of the water colour. I have considered drawing polys only over the rough areas where water will be (around the shores etc.) rather than the whole screen, but that only helps in the best-case scenarios, not the worst. Maybe there is a cheaper way of deciding where to draw the water? I would use the standard z-buffer, but that option does not appear to be open, given that I am using a custom encoded depth texture and the shaders cannot write to the standard z-buffer without an OpenGL extension (which may or may not be present).
 
I could maybe wangle another background luminance layer or something to mark where to draw realtime water, but this seems like a lot of effort for not much reward (it would only save decoding the depth texture and doing a comparison).
 
Another question that occurs to me is whether all of these are simple bottlenecks, or whether I am stalling the pipeline somewhere with a dependency, in which case I could double / triple buffer the resources to alleviate the problem.
 
Anyway, sorry for this long rambling post, but I would welcome any thoughts / ideas, particularly on whether these things should actually be causing such problems, and any ways around them, especially the water. In fact, any suggestions for super fast, simple water shaders would be useful. I suspect just adding two scrolled, tiled textures might produce something usable enough, if the texture reads are faster than calculations within the shader.
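For what it's worth, the two-scrolled-textures idea would look something like this (all the names, tiling factors and scroll speeds are placeholders to tune):

    precision mediump float;
    uniform sampler2D uWaterTex; // tileable water texture
    uniform float uTime;
    varying vec2 vUV;

    void main()
    {
        // Two copies of the same tiled texture, scrolling in different directions.
        vec2 uv1 = vUV * 4.0 + vec2(uTime * 0.020, uTime * 0.013);
        vec2 uv2 = vUV * 7.0 - vec2(uTime * 0.015, uTime * 0.021);
        vec3 a = texture2D(uWaterTex, uv1).rgb;
        vec3 b = texture2D(uWaterTex, uv2).rgb;
        // Average the layers and tint towards a deep-water blue.
        vec3 water = mix(a, b, 0.5) * vec3(0.3, 0.5, 0.7);
        gl_FragColor = vec4(water, 1.0);
    }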


You could pre-render the water normals texture for a set of frames so that it loops, or even use some fancy Perlin noise texture for that.

 

 

I don't get PCF.

 

You should draw into three different framebuffers.

Yes, depth needs to be written to a texture, so you have two passes for each framebuffer: one colour buffer and one depth buffer.

Use the same dimensions for all the framebuffers, along with the same view and projection matrices.

 

Draw all the shadow-casting objects first (in your example the trees, then the animals) into the depth buffer, from the light's view. Not the hardware depth buffer, but the shader-written one: the pass that draws the depth texture.

Then draw the terrain into its depth buffer and test it against the depth we wrote to the texture before (again from the light's view).

You could even draw a translucent quad where the water is, to apply a simple shadow on the water surface.

 

Be aware though, such shadows can reduce framerate by 50% or more.


You could pre-render the water normals texture for a set of frames so that it loops, or even use some fancy Perlin noise texture for that.

Ah yes, good thinking! The simplest solution is to pre-render some frames. I actually did this in the first version long ago, but had forgotten!

I don't get PCF.

PCF is just basic shadow mapping, but taking multiple samples and averaging them:
http://fabiensanglard.net/shadowmappingPCF/
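i.e. instead of one hard in / out test, you average several taps, along the lines of this sketch (uShadowMap, uShadowTexelSize and the light-space varyings are illustrative names, and unpackDepth is the usual RGBA depth decode):

    // 3x3 box PCF: nine shadow map taps averaged for a soft edge.
    float shadow = 0.0;
    for (int x = -1; x <= 1; ++x)
    {
        for (int y = -1; y <= 1; ++y)
        {
            vec2 offset = vec2(float(x), float(y)) * uShadowTexelSize;
            shadow += step(vShadowZ, unpackDepth(texture2D(uShadowMap, vShadowUV + offset)));
        }
    }
    shadow /= 9.0;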
 

You should draw into three different framebuffers.
Yes, depth needs to be written to a texture, so you have two passes for each framebuffer: one colour buffer and one depth buffer.
Use the same dimensions for all the framebuffers, along with the same view and projection matrices.


It kind of works like this already, though a little more complex, as it has a wrapping, tiling background bigger than the screen, and it handles the ground textures separately.
 

Draw all the shadow-casting objects first (in your example the trees, then the animals) into the depth buffer, from the light's view. Not the hardware depth buffer, but the shader-written one: the pass that draws the depth texture.
Then draw the terrain into its depth buffer and test it against the depth we wrote to the texture before (again from the light's view).
You could even draw a translucent quad where the water is, to apply a simple shadow on the water surface.

Be aware though, such shadows can reduce framerate by 50% or more.


This is how it does things already with the shadows, except that I am not casting shadows from the animals at this stage, as I figured that would be too expensive; I'll probably just add simple blob shadows for them. The animals do receive shadows from the trees etc., though. The shadow map only needs to be regenerated as you move across the map; it is not rendered every frame.

With the dynamic shadows received on the animals turned off, the shadows on the terrain are essentially free for most frames, but they do cost something when scrolling to a new tile.

This afternoon I implemented the static water as part of the background (although I have not yet added the blue tint for underwater animals). It doesn't look too bad on my low-end phone, and it is now rendering at mostly 60fps. There are occasional dropped frames when scrolling to new tiles, but I'll see if I can address that.

I will also see if I can add random jitter to the terrain shadows, to make them look better with fewer samples.


On low-end devices, overdraw and shader instruction counts are the bigger problems.

I doubt turning off depth testing for particles will matter.

Try single-sample shadows instead of multiple samples.

Reduce the particle count and crop the particle geometry to be smaller.

Don't use triangles that are too big! Anything over 10x10 pixels may start to hurt, depending on the hardware.

Simplify per-fragment calculations.


On low-end devices, overdraw and shader instruction counts are the bigger problems.

Yes, definitely, I've been finding this. It has made me very glad I went with pre-rendering the scrolling background, as rendering all those sprites every frame would have killed performance. Most of the work in a frame is just drawing one big screen-sized quad for the background. The 'big work' happens when rendering a new row or column of the background, which only occurs every few frames and is limited to a small viewport, so it minimizes the fillrate requirements.

See here: (video embed) which shows it working on the ground texture.

I doubt turning off depth testing for particles will matter.

As well as hardware depth testing (so the particles interact with the animals), the particles and models can also do a depth check against the custom encoded RGBA depth texture for the background, so they go behind trees etc. This is an extra texture read plus extra calculations in the fragment shader, so it did give a speedup when turned off.

Try single-sample shadows instead of multiple samples.

Yup, I definitely found this to be the case.

