
C0lumbo

Member Since 02 Nov 2012
Offline Last Active Today, 01:41 PM

Posts I've Made

In Topic: Getting start with OpenGL ES 2.0 and using NDK in C++

26 September 2016 - 01:51 PM

I think the OP is asking how to use OpenGL ES 2.0 from native code as part of a Java application (as opposed to a NativeActivity), rather than asking how GLES2 itself works.

 

Myself, I've always used GLSurfaceView to manage the EGL side of things (context creation, etc.), and made calls into native code from there. I'm not aware of a good solid tutorial, let alone an entire book, that showcases that sort of approach.
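To illustrate the approach, here's a minimal sketch of the native side; all class and function names are illustrative, not from an actual project. The Java side creates a GLSurfaceView, installs a GLSurfaceView.Renderer, and forwards onSurfaceCreated / onSurfaceChanged / onDrawFrame to these JNI entry points, so EGL context creation and buffer swaps are handled entirely by GLSurfaceView:

```cpp
#include <jni.h>
#include <GLES2/gl2.h>

extern "C" {

JNIEXPORT void JNICALL
Java_com_example_game_GameRenderer_nativeSurfaceCreated(JNIEnv*, jclass) {
    // Called from Renderer.onSurfaceCreated: the EGL context is current on
    // the GL thread here, so it's safe to create GL resources.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}

JNIEXPORT void JNICALL
Java_com_example_game_GameRenderer_nativeSurfaceChanged(JNIEnv*, jclass,
                                                        jint width, jint height) {
    // Called from Renderer.onSurfaceChanged.
    glViewport(0, 0, width, height);
}

JNIEXPORT void JNICALL
Java_com_example_game_GameRenderer_nativeDrawFrame(JNIEnv*, jclass) {
    // Called from Renderer.onDrawFrame; GLSurfaceView swaps buffers
    // automatically after this returns.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... issue draw calls here ...
}

} // extern "C"
```

The nice part of this split is that the fiddly EGL lifecycle (context loss on pause/resume, surface recreation) stays in well-tested framework Java code, while all the actual rendering lives in C++.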


In Topic: Sharpen shader performance

24 September 2016 - 08:30 AM

If you were targeting mobile GPUs, I'd say you could expect another big performance gain from calculating the UV coordinates in the vertex shader and passing them through as vec2 varyings (well, seven vec2s and one vec4, because you only get 8 varyings on some mobile GPUs). There'd be two big gains: first, you'd skip a bunch of per-fragment calculations, and second, you'd minimize dependent texture reads.
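As a hedged sketch of the idea, simplified to a 3-tap horizontal sharpen rather than your full kernel (all names here are illustrative): compute the sample UVs once per vertex and let the fragment shader read them straight from varyings.

```glsl
// --- vertex shader ---
attribute vec2 aPos;
attribute vec2 aUV;
uniform vec2 uTexelSize;      // 1.0 / texture resolution
varying vec2 vUVCentre;
varying vec4 vUVNeighbours;   // left and right taps packed into one vec4

void main() {
    gl_Position = vec4(aPos, 0.0, 1.0);
    vUVCentre = aUV;
    vUVNeighbours = vec4(aUV - vec2(uTexelSize.x, 0.0),
                         aUV + vec2(uTexelSize.x, 0.0));
}

// --- fragment shader ---
precision mediump float;
uniform sampler2D uTex;
varying vec2 vUVCentre;
varying vec4 vUVNeighbours;

void main() {
    // Sampling with UVs taken directly from varyings avoids dependent
    // texture reads: some older mobile GPUs can prefetch the texels
    // before the fragment shader even runs.
    vec3 c = texture2D(uTex, vUVCentre).rgb * 3.0
           - texture2D(uTex, vUVNeighbours.xy).rgb
           - texture2D(uTex, vUVNeighbours.zw).rgb;
    gl_FragColor = vec4(c, 1.0);
}
```

The weights (3, -1, -1) sum to 1, so overall brightness is preserved; your real kernel would pack its remaining taps into the other varyings the same way.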

 

As you're targeting desktop, I'm not so confident it'll have any measurable effect, but it's a simple enough experiment, so if I were in your shoes I'd give it a go.


In Topic: OpenGl Es analysis

23 September 2016 - 06:04 AM

Turning depth writes/tests off is not enough on the tile-based hardware typical of mobile devices. I believe both PowerVR and Mali chips (and probably Qualcomm's too) perform hidden surface removal in hardware, which will eliminate the hidden layers even if there's no depth buffer at all.

 

I would imagine that projecting vertices to a tiny point would work.

 

Measuring the cost of varyings will be hard. Apart from anything else, you're limited to 8 on some devices, so it's difficult to really stress them in isolation.


In Topic: OpenGl Es analysis

22 September 2016 - 11:40 PM

To stress the vertex shader, I'd render a high-polygon mesh (if you can't get access to one, make something in code, perhaps by subdividing a tetrahedron or icosahedron into a sphere). Then render it multiple times, but use the scissor rectangle so that you only rasterize a tiny portion of the screen.

 

With your fragment shader test, can I just check that each of your screen quads has transparency? If not, mobile hardware often has hidden surface removal, which will invalidate your test, as only the top quad will actually be rendered.

 

I have no idea how you would measure the cost of rasterization separately from the fragment shader, or whether trying to do so even makes sense.


In Topic: how to differentiate performance on android devices

30 June 2016 - 11:40 AM

This is a hard problem, and one of the reasons more people make their games free on the Play Store than on Apple's App Store. There are just so many devices out there that it's effectively impossible to avoid selling your game to a customer who won't be able to play it.

 

One manifest tool to add to Nanoha's suggestion is the screen-size filter: <supports-screens android:smallScreens="false" /> ditches tiny phones, which are probably underpowered. In general, though, it's hard to do much in the manifest beyond OS version, screen size and GLES version.
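Put together, those three manifest filters look something like this (the attribute names are the standard ones; the specific minimum values shown are just illustrative):

```xml
<!-- Exclude small screens, which tend to be underpowered devices. -->
<supports-screens android:smallScreens="false" />

<!-- Hide the app from devices without GLES 2.0 (0x00020000). -->
<uses-feature android:glEsVersion="0x00020000" android:required="true" />

<!-- Require a minimum OS version. -->
<uses-sdk android:minSdkVersion="15" />
```

Note these are store-level filters: they stop the game being offered to unsuitable devices, but they can't distinguish between a low-end and high-end device that both pass the checks.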

 

At runtime, I assess a device's graphics capabilities mainly by proxy, looking at information like:

 

* Number of CPU cores
* Amount of RAM
* Whether highp is supported in fragment shaders
* Whether GLES3 is supported
* Whether depth textures are supported

 

And I choose a graphics option based on that, although I let the user override my choice.

