lawnjelly

Debugging precision issues in OpenGL ES 2

Recommended Posts

I'm currently debugging compatibility issues with my OpenGL ES 2.0 shaders across several different Android devices.

One of the biggest problems I'm finding is how the precision qualifiers in GLSL (lowp, mediump, highp) map to actual precision in the hardware. To that end I've been using glGetShaderPrecisionFormat to query the log2 relative precision of each qualifier for vertex and fragment shaders, and outputting this in-game to the game screen.

On my PC the precision comes back as 23, 23, 23 for all three (lowp, mediump, highp), whether running natively under Linux or in the Android Studio emulator. On my tablet it is 23, 23, 23 as well. On my phone it comes back as 8, 10, 23. If I hit a precision issue on the phone I can always bump the qualifier up a level to cure it. However, the fun comes on my Android TV box (Amlogic S905X), which only seems to support 10, 10, 0 for fragment shaders. That is, it doesn't support high precision in fragment shaders at all.
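For anyone puzzled by those numbers: the value reported by glGetShaderPrecisionFormat is the log2 of the relative precision, i.e. roughly the number of mantissa bits. A quick sketch (plain Python, purely to illustrate the arithmetic; the 8/10/23 figures are the ones my phone reports):

```python
def relative_step(mantissa_bits):
    """Smallest relative step a float with this many mantissa bits can
    resolve: one unit in the last place, relative to the value itself."""
    return 2.0 ** -mantissa_bits

# The three precisions reported by my phone (8, 10, 23):
for name, bits in [("lowp", 8), ("mediump", 10), ("highp", 23)]:
    print(f"{name:8s} {bits:2d} bits -> 1 part in {2 ** bits}")

# mediump resolves ~1 part in 1024 -- already too coarse to tell
# neighbouring texels of a 2048-wide texture apart.
```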

However, as it's the only device with this problem, it is incredibly difficult to debug the shaders, because I can't attach it via USB (unless I can get it connected via the LAN, which I haven't tried yet). I'm having to compile the APK, put it on a USB stick, take it into the other room, and install and run. Which is ridiculous. :o

My question is: what method do other people use to debug these precision issues? Is there a way to get the emulator to emulate having rubbish precision? That would seem the most convenient solution (and if not, why hasn't it been implemented?). Other than that, it seems I need to buy some old phones/tablets off eBay, or 'downgrade' the precision in the shader (to mediump) and debug it on my phone...

 


I don't know how to solve your problem, but I encountered the same problem in fragment shaders. I did what I had to do: I used a texture that stored 1 float per 4 pixels and read that back, instead of reading colours directly. Maybe this helps...


So splitting up the precision into multiple pixels? I did wonder about this, but couldn't see an easy way of getting it to work in my case, as I'd still have to combine the parts somewhere, and that combining calculation itself only gets 10 bits of precision.

For the reference of anyone else who comes up against the same problem: after more research and struggling I found a couple of useful articles:

https://community.arm.com/graphics/b/blog/posts/benchmarking-floating-point-precision-in-mobile-gpus

https://community.arm.com/graphics/b/blog/posts/at-home-on-the-range---why-floating-point-formats-matter-in-graphics

In my particular case the precision issue arose because I am generating texture coordinates in the fragment shader. This was a problem because my texture was 2048 texels in size, and 10 bits of precision isn't enough to address all the texels. This typically shows up looking like point filtering when you use the texture map. But in my case I was using a procedural method where neighbouring texels are completely different, so calculations that were out by 1 texel led to wildly different results.
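To make the failure concrete, here is a small sketch (plain Python, modelling a mediump-style 10-bit mantissa; texel index 1500 is just an arbitrary example) showing that texel centres in the upper half of [0, 1) aren't even representable, so a computed coordinate snaps onto the boundary between two texels:

```python
import math

def quantize(x, mantissa_bits=10):
    """Round x to the nearest float with the given number of explicit
    mantissa bits (round-half-to-even), modelling mediump arithmetic."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                  # x = m * 2**e, 0.5 <= |m| < 1
    scale = 2.0 ** (mantissa_bits + 1)    # 1 implicit + n explicit bits
    return round(m * scale) / scale * 2.0 ** e

TEX_SIZE = 2048
# Centre of texel 100: exactly representable at 10 mantissa bits.
print(quantize(100.5 / TEX_SIZE) * TEX_SIZE)   # 100.5
# Centre of texel 1500: snaps to the texel *edge* at 1500.0, so the
# sampler can land on either of two wildly different neighbours.
print(quantize(1500.5 / TEX_SIZE) * TEX_SIZE)  # 1500.0
```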

This raises the question: why are there hardware devices that only offer 10 bits of precision, yet support large textures (2048 or 4096), if they can't even address all the texels? I suspected they were using a separate high-precision path for varyings that are not modified in the fragment shader. My suspicions were confirmed by a comment on one of the ARM articles:

Quote

*We have one special "fast path" for varyings used directly as texture coordinates which is actually fp24.

There are good reasons for using texture coordinates 'as is' from the varying: as far as I know, the hardware can then schedule the texture lookup ahead of time. Whenever you generate texture coordinates in the fragment shader (a dependent texture read), I believe there can be a penalty. However, it is necessary in some shaders.

One of the problems I am still facing is that in OpenGL ES 2.0 many devices don't support texture wrapping on non-power-of-two (non-POT) textures. I still needed texture wrapping for my use case, so I was having to do something like this manually in the fragment shader:

uv = fract(uv); // fract() is component-wise, so this wraps both axes

Even this, I suspect, 'breaks' the high-precision fast path. The alternative I am looking into now is doing the wrapping in the vertex shader, which will require duplicating verts at the 1.0 / 0.0 boundary. If there is any other cunning way of doing the wrapping I'd love to hear it! :)

In general it has been proving a nightmare to debug, because of the seeming inability to emulate low precision on the PC. I've had to go with the approach of deliberately setting mediump and debugging on my phone, plus trying to work it out in my head.

What doesn't help is that I'm not exactly sure how the 10-bit-precision floating point format works, or at what ranges it works best. Plus, with the vague OpenGL specs, the hardware could actually be doing anything.
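For what it's worth, if the mediump format is anything like an IEEE half float (1 implicit + 10 explicit mantissa bits; the spec only mandates minimums, so this is an assumption), its relative precision is constant but its absolute step size grows with magnitude. A sketch of the spacing under that assumption:

```python
import math

def step_near(x, mantissa_bits=10):
    """Absolute gap between adjacent representable values near x, for a
    float with the given number of explicit mantissa bits (assumes an
    IEEE-style normalised format; denormals and specials ignored)."""
    _, e = math.frexp(x)                # x = m * 2**e, 0.5 <= m < 1
    return 2.0 ** (e - 1 - mantissa_bits)

print(step_near(0.75))     # 1/2048 -- finest near the top of [0, 1)
print(step_near(1.5))      # 1/1024
print(step_near(1500.0))   # 1.0    -- a whole unit per step
```

If this model holds, it suggests keeping UVs in [0, 1) rather than in texel units (0..2048), where one mediump step is a whole texel.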


I haven't got any suggestions for the precision issue itself, but if your PC graphics card supports 16-bit floats you can use those for testing: 16-bit floats have 10 bits of mantissa, which is the same precision as the Mali-400 series GPU in your TV box supports. You have to define your fragment shader variables as float16_t (or the equivalent vector types).

26 minutes ago, dave j said:

I haven't got any suggestions for the precision issue itself, but if your PC graphics card supports 16-bit floats you can use those for testing: 16-bit floats have 10 bits of mantissa, which is the same precision as the Mali-400 series GPU in your TV box supports. You have to define your fragment shader variables as float16_t (or the equivalent vector types).

Good idea! :D I will investigate!
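In the meantime, a software-only stopgap I might try (just a sketch, not the GPU float16_t route suggested above): round-trip each intermediate result through IEEE half precision on the CPU to see where a calculation falls apart. Python's struct module has had a half-float format code ('e') since 3.6:

```python
import struct

def as_half(x):
    """Round-trip x through IEEE binary16 (10-bit mantissa), roughly
    emulating a mediump intermediate result."""
    return struct.unpack('e', struct.pack('e', x))[0]

# A generated texture coordinate, full precision vs emulated mediump
# (the 0.123 and 11.7 inputs are arbitrary example values):
u_full = (0.123 * 11.7) % 1.0
u_med = as_half(as_half(0.123) * as_half(11.7)) % 1.0
print(u_full, u_med)  # see how far the emulated result drifts
```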


Maybe I'm missing something, but I still haven't seen where you mentioned the actual precision issue you are experiencing, aside from the fact that the hardware exposes/supports different levels of precision. Are you seeing visual anomalies, incorrect textures, jitter, etc.? The OpenGL ES Shading Language specification covers the behaviour of the different precisions, but for the most part these are more or less hints.

Oops, disregard the part about not actually specifying the issue; I saw that in a post further down. In either case, with regard to precision, it's more than just the computation itself, as the values returned from samplers are also limited by precision. Are you sure the issue you are seeing is related to the precision of the uv computation or just that the texture samplers on these HW are just atrocious in terms of filtering quality and whatnot ? I've seen cases where the same exact texture using the same shader looks completely different on different HW and the only conclusion I could draw from this observation is that the texture sampling/filtering logic for each is what is causing the difference.

17 hours ago, cgrant said:

Are you sure the issue you are seeing is related to the precision of the uv computation or just that the texture samplers on these HW are just atrocious in terms of filtering quality and whatnot ? I've seen cases where the same exact texture using the same shader looks completely different on different HW and the only conclusion I could draw from this observation is that the texture sampling/filtering logic for each is what is causing the difference.

I was in the same boat previously: while most of my texturing was fine, on the problem hardware one of the textures in the offending shader looked as though it was being filtered incorrectly (as if it was using point filtering instead of linear). I had assumed my filtering states were wrong, but on further investigation I now believe it is down to the precision of the texture coordinate calculations. The ARM articles suggest that on the most basic hardware, calculated tex coords get 10-bit precision (presumably a half float), while directly passed coordinates (which are far more common) get a fast fp24 path. According to the specs, I believe you could theoretically have hardware with only the 10-bit path (although texture filtering would look pretty bad on it).

I actually pinned down the problem in another, more complex procedural terrain shader where it was far more pronounced.

I will know the answer soon, as I am altering the code to calculate the tex coords in the vertex shader, but I have yet to try it on the offending hardware. Hopefully it will solve the problems. :)

EDIT: Confirmed, it was the precision. Moving the texture coordinate calculation into the vertex shader cured the 'filtering' issues on the TV box, as expected.

Edited by lawnjelly

