
Debugging precision issues in OpenGL ES 2


I'm currently debugging compatibility issues with my OpenGL ES 2.0 shaders across several different Android devices.

One of the biggest problems I'm finding is working out how the GLSL precision qualifiers (lowp, mediump, highp) map to actual precision in the hardware. To that end I've been using glGetShaderPrecisionFormat to query the log2 precision of each qualifier for vertex and fragment shaders, and displaying the results in-game on the game screen.

On my PC, running natively under Linux or in the Android Studio emulator, the precision comes back as 23, 23, 23 for all three (lowp, mediump, highp). On my tablet it is also 23, 23, 23. On my phone it comes back as 8, 10, 23; if I hit a precision issue there I can always bump up to the next qualifier to cure it. However, the fun comes on my Android TV box (Amlogic S905X), which seems to only support 10, 10, 0 for fragment shaders. That is, it doesn't support high precision in fragment shaders at all.

However, since it's the only device with this problem, it is incredibly difficult to debug the shaders, as I can't attach it via USB (unless I can get it connected over the LAN, which I haven't tried yet). I'm having to compile the APK, put it on a USB stick, take it into the other room, install and run. Which is ridiculous. :o

My question is: what method do other people use to debug these precision issues? Is there a way to get the emulator to emulate having rubbish precision? That would seem the most convenient solution (and if not, why hasn't it been implemented?). Otherwise it seems I need to buy some old phones / tablets off eBay, or 'downgrade' the precision in the shader (to mediump) and debug it on my phone...



I don't know how to solve your problem, but I encountered the same problem in fragment shaders. I did what I had to do: I used a texture that stored one float per 4 pixels and read that back instead of reading colours. Maybe this helps...


So splitting up the precision into multiple pixels? I did wonder about this but couldn't see an easy way of getting it to work in my case, as I'd still need to combine the parts at some point, and I was only getting 10 bits of precision to do that with.

For the reference of anyone else who comes up against the same problem, after more research and struggling I found a couple of useful articles:



In my particular case the precision issue arose because I am generating texture coordinates in the fragment shader. This was a problem because my texture was 2048 texels in size, and 10 bits of precision isn't enough to address every texel. Typically this exhibits as something that looks like point filtering when you sample the texture. But in my case I was using a procedural method where neighbouring texels are completely different, and calculations that were out by one texel led to wildly different results.

This raises the question: why are there hardware devices that only offer 10 bits of precision, yet support large textures (2048 or 4096), if they can't even address all the texels? I suspected they were using a separate high-precision path for varyings that are not touched in the fragment shader. My suspicions were confirmed by a comment in one of the ARM articles:


We have one special "fast path" for varyings used directly as texture coordinates which is actually fp24.

There are good reasons for using texture coordinates 'as is' from the varying: as far as I know, the hardware can then start the texture lookup ahead of time. Whenever you generate texture coords in the fragment shader (a dependent texture read), I believe there can be a penalty. However, it is necessary in some shaders.

One of the problems I am still facing is that on OpenGL ES 2.0 many devices don't support texture wrapping on non-POT (non-power-of-two) textures. I still needed wrapping for my use case, so I was having to do something like this manually in the fragment shader:

uv.x = fract(uv.x);
uv.y = fract(uv.y);

Even this, I suspect, 'breaks' the high-precision fast path. The alternative I am looking into now is doing the wrapping in the vertex shader, which will require duplicate verts at the 1.0 / 0.0 boundary. If there is any other cunning way of doing the wrapping I'd love to hear it! :)

In general it has been proving a nightmare to debug, because of the apparent inability to emulate low precision on the PC. I've had to go with the approach of deliberately setting mediump and debugging on my phone, plus trying to work it out in my head.

What doesn't help is that I'm not exactly sure how the 10-bit precision floating-point format works and at what ranges it works best. And since the OpenGL ES specs only mandate minimums, the hardware could actually be doing anything.


I haven't got any suggestions for the precision issue itself, but if your PC graphics card supports 16-bit floats you can use those for testing, as 16-bit floats have 10 bits of mantissa - the same precision supported by the Mali-400 series GPU in your TV box. You have to define your fragment shader variables as float16_t (or the equivalent vector version).

26 minutes ago, dave j said:

I haven't got any suggestions for the precision issue itself, but if your PC graphics card supports 16-bit floats you can use those for testing, as 16-bit floats have 10 bits of mantissa - the same precision supported by the Mali-400 series GPU in your TV box. You have to define your fragment shader variables as float16_t (or the equivalent vector version).

Good idea! :D I will investigate!


Maybe I'm missing something, but I still haven't seen where you mentioned the actual precision issue you are experiencing, aside from the fact that the hardware exposes different levels of precision. Are you seeing visual anomalies, incorrect textures, jitter, etc.? The OpenGL ES Shading Language specification covers the behaviour of the different precisions, but for the most part these are more or less hints.

Oops, disregard the part about not specifying the issue - I saw that in a post further down. In either case, with regard to precision, it's about more than just the computation itself, as the values returned from samplers are also limited by precision. Are you sure the issue you are seeing is related to the precision of the UV computation, and not just that the texture samplers on this hardware are atrocious in terms of filtering quality? I've seen cases where the exact same texture using the same shader looks completely different on different hardware, and the only conclusion I could draw was that the sampling/filtering logic of each is what causes the difference.

17 hours ago, cgrant said:

Are you sure the issue you are seeing is related to the precision of the UV computation, and not just that the texture samplers on this hardware are atrocious in terms of filtering quality? I've seen cases where the exact same texture using the same shader looks completely different on different hardware, and the only conclusion I could draw was that the sampling/filtering logic of each is what causes the difference.

I was in the same boat previously: while most of my texturing was fine, on the problem hardware one of the textures in the offending shader looked as though it was being filtered incorrectly (as if it were using point filtering instead of linear). I had assumed my filtering states were wrong, but on further investigation I now believe it is down to the precision of the texture coordinate calculations. The ARM articles suggest that on the most basic hardware, calculated tex coords get 10 bits of precision (presumably a half float), while directly passed coordinates (the far more common case) get a fast fp24 path. According to the specs, I believe you could theoretically have hardware with only the 10-bit path (although texture filtering would look pretty bad).

I actually pinned down the problem in another, more complex procedural terrain shader where it was far more pronounced.

I will know the answer soon, as I am altering the code to calculate the tex coords in the vertex shader, but I have yet to try it on the offending hardware. Hopefully it will solve the problems. :)

>> EDIT: Confirmed. It was the precision. Moving the texture coordinate calculation into the vertex shader cured the 'filtering' issues on the TV box, as expected.

Edited by lawnjelly



