Using GLES2/3 depending on hardware (Android)


Recommended Posts

These are a few highly related questions, but they boil down to:

TLDR - Can I get my app to use GLES2 at runtime on older phones that don't support GLES3?

I have made an app (written in C++ using Visual Studio's Native Application project) which sorta grows fractals (some 2D, some 3D). I originally made it using GLES1, which simply wasn't good enough for what I needed, so I set about getting it to work with GLES3, which I did. When it was 'done' I decided to try it on some other phones I had lying around, and it crashed on start. I checked Logcat and found this:

The phone doesn't support it, so I then tried compiling with GLES2, but that doesn't support vertex array objects. That wasn't so bad: I had abstracted those away thanks to suggestions in another thread, and I was easily able to make the object work like a VAO without OpenGL VAO support. I have a simple preprocessor define that selects the appropriate code, so I can switch between GLES2 and GLES3 when needed (it needs a rebuild though). Unfortunately, I then rediscovered the other reason I wanted GLES3: GLES2 only supports unsigned shorts as indices, which massively limits the detail I can show on my fractals. I am stuck with ~65k vertices, whereas my GLES3-supporting phone can easily handle ~2 million without issue. When GLES2 is enabled I limit the vertex count to ~65k, and with GLES3 I have much higher (hard-coded) limits. I do this by having a GLES2 and a GLES3 configuration, each linking the appropriate library, so I can build one or the other or both.

I can split up my vertex buffer and use multiple element arrays to get around that 65k limit but it really makes things messy, particularly for my 3d fractals.
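For what it's worth, the splitting can be kept reasonably tidy if it lives in one helper. This is only a sketch under assumed names (`IndexChunk` and `BuildLineStripChunks` are made up here), for a line-segment index pattern like the one shown later in the thread: each chunk gets a base vertex offset and 16-bit local indices, and adjacent chunks share one vertex so no segment is lost.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One chunk of a line strip: a base offset into the shared vertex buffer,
// plus 16-bit local indices (all < 65536, so usable with GL_UNSIGNED_SHORT).
struct IndexChunk {
    uint32_t baseVertex;
    std::vector<uint16_t> indices;
};

// Split a line strip of `numVertices` vertices (numVertices - 1 segments)
// into chunks whose local indices fit in an unsigned short.
std::vector<IndexChunk> BuildLineStripChunks(uint32_t numVertices)
{
    const uint32_t kMaxVerts = 65536;          // 2^16 local indices per chunk
    std::vector<IndexChunk> chunks;
    uint32_t start = 0;
    while (start + 1 < numVertices) {
        uint32_t end = std::min(start + kMaxVerts, numVertices); // exclusive
        IndexChunk chunk;
        chunk.baseVertex = start;
        for (uint32_t v = start; v + 1 < end; ++v) {
            chunk.indices.push_back(static_cast<uint16_t>(v - start));
            chunk.indices.push_back(static_cast<uint16_t>(v - start + 1));
        }
        chunks.push_back(std::move(chunk));
        start = end - 1;   // reuse the last vertex as the next chunk's first
    }
    return chunks;
}
```

At draw time each chunk becomes one glDrawElements call with its vertex attribute pointers offset by `baseVertex`; on GLES3 you would skip all of this and use one unsigned-int index buffer.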

My options therefore are to ship two versions of the app, or to get both working in one app. Although I link the appropriate lib when building, the error message suggests the libraries are actually loaded dynamically by the app on the device, so can I get it to pick the appropriate one at runtime?

Can I get the app to fail cleanly? That library loading seems to happen before anything I control runs, so I can't give the user any feedback saying the device is not supported. I see I can declare which versions of Android I support in the manifest, which should help. I made this as a demo project of sorts, so getting it to behave is important to me, but I have limited experience in this department.

I've tried it on 3 devices:
Nexus 5 (this is what I developed it on, so it works perfectly here).
Alcatel One Touch Pop C1 (I have no idea what this phone is or where it came from; I found it lying around my house). This works rather well with GLES2, but I do get some weird behaviour, which is another question.
HTC Desire (apparently there are many versions of this; I think this one was an original). VS complained that it's too old, something about the SDK not being supported, so I'm happy to skip this one and change my supported version in the manifest as appropriate.

I've also tried it using an emulator though I've messed with the settings and it's difficult to know what device it was supposed to emulate now :/. I believe it was a Galaxy Note - it was the original/default device on the Android Virtual Device manager.

Share on other sites

One way could be to check the GL version through Java, and then load the right library depending on that.

(having two versions of your JNI library in the APK, one built and linked to gles2 and the other to gles3)

http://developer.android.com/guide/topics/graphics/opengl.html#version-check

You might have to abandon native activity then though, and write your own, to have better control over when the System.loadLibrary call happens from the Java side.

Another way is to build two different APKs, one supporting 2.0 and another supporting 3.0

You can use

<uses-feature
android:glEsVersion="0x00030000"
android:required="true" />

In your AndroidManifest.xml to block an APK from being installed on a device without 3.0.

I'm a bit unsure how Google Play would handle that though, since you then technically can use the gles2 version on a gles3 device still... hopefully it is smart enough to install the gles3 version on gles3-devices if such an apk exists, but you'd have to look that up.

It is at least possible to have several APKs active at once with different configurations, if you enable "advanced mode" in Google Play.
Edited by Olof Hedman

Share on other sites
I'm not saying this would work (I'm making an educated guess here).

Now, I'm not sure how you are creating your OpenGL context. Since you are using the NDK, I assume you are doing the part where you create an EGL context yourself with the function eglCreateContext.

What I think you could do is try to create a context with EGL_CONTEXT_CLIENT_VERSION 3 and, if the creation fails, fall back to EGL_CONTEXT_CLIENT_VERSION 2. Then set some kind of flag saying whether you should be using ES 3 or ES 2 render commands.

I'm a bit unsure how Google Play would handle that though, since you then technically can use the gles2 version on a gles3 device still

This is also very true, though I don't think you want to do this. You could build and run your code for ES 2, and at least that way your application would run on both ES 2 and ES 3 devices.

Share on other sites

Olof, I like your idea of simply having two versions with the manifest; it's definitely the simplest way to do it (and probably the idea I am most keen on going with myself). I like the idea of loading the libraries myself on the fly, but this is on the fringes of what I know how to do at the moment.

NoodleBowl, I will try your suggestion, right now I do:

#ifdef USING_GLES3
const EGLint attribs2[] = { EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE };
#else
const EGLint attribs2[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
#endif
context = eglCreateContext(display, config, NULL, attribs2);

So I could change that quite easily, though I think I will hit the problem of the lib trying to load anyway, giving me my initial error. I will try it though.

Edit: Unfortunately it doesn't get as far as that without failing first (from not finding the lib). It doesn't even get as far as launching the main thread.

I see suggestions in a Stack Overflow thread about dynamically linking GLES3, but it doesn't show how to do it. If I can do that, then your suggestion should work.

I found this:
dlopen returned me a value for "libGLESv2.so", so I think I'm onto something. Now I just need to dynamically get the addresses of the GLES3 functions I use (since I can't statically link them).
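A minimal sketch of what that might look like, assuming the usual dlopen/dlsym approach. `DetectGLESVersion` and the function pointer name are illustrative (only one ES3 entry point is shown), and on a machine without libGLESv3.so this simply reports 2:

```cpp
#include <dlfcn.h>

// Function pointer type mirroring the GLES3 glGenVertexArrays prototype
// (GLsizei = int, GLuint = unsigned int).
typedef void (*PFNGLGENVERTEXARRAYS)(int n, unsigned int* arrays);

static PFNGLGENVERTEXARRAYS pglGenVertexArrays = nullptr;

// Probe for the ES3 library at runtime. Returns 3 if it loaded and the
// symbol resolved, else 2 (fall back to the statically linked libGLESv2.so).
// On success the handle is deliberately kept open for the app's lifetime.
int DetectGLESVersion()
{
    void* handle = dlopen("libGLESv3.so", RTLD_NOW);
    if (handle != nullptr) {
        pglGenVertexArrays =
            (PFNGLGENVERTEXARRAYS)dlsym(handle, "glGenVertexArrays");
        if (pglGenVertexArrays != nullptr)
            return 3;
        dlclose(handle);
    }
    return 2;
}
```

The rest of the code then calls through the function pointers whenever the detected version is 3, exactly like the flag-based approach suggested above.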
Edited by Nanoha

Share on other sites

#ifdef USING_GLES3
const EGLint attribs2[] = { EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE };
#else
const EGLint attribs2[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
#endif
context = eglCreateContext(display, config, NULL, attribs2);

This would not work, because USING_GLES3 is a preprocessor check, so what you have here is a compile-time thing.
You want a runtime thing. I was thinking something like this:


// The EGL display and config setup happen before this, as usual
EGLint attribsES3[] = { EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE };
context = eglCreateContext(display, config, NULL, attribsES3);
if (context == EGL_NO_CONTEXT)
{
    // We failed to create the ES 3 context. Try ES 2
    EGLint attribsES2[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    context = eglCreateContext(display, config, NULL, attribsES2);
    if (context == EGL_NO_CONTEXT)
    {
        // Something is wrong... Failed to create the ES 2 context
        LogError("Failed to create a supported ES context");
    }
}



Share on other sites

Now, I'm not sure how you are creating your OpenGL context. Since you are using the NDK, I assume you are doing the part where you create an EGL context yourself with the function eglCreateContext.

What I think you could do is try to create a context with EGL_CONTEXT_CLIENT_VERSION 3 and, if the creation fails, fall back to EGL_CONTEXT_CLIENT_VERSION 2. Then set some kind of flag saying whether you should be using ES 3 or ES 2 render commands.

This is a very good suggestion and I highly recommend it (the approach I take is similar: I start with the highest available version and reduce the major and minor version until context creation succeeds). If this is not currently being done, that's one source of your problem. Also, the response made to NoodleBowl's comment is somewhat incorrect, as that is a static solution that requires a rebuild, resulting in two different applications. One should ALWAYS check for any GL functionality used instead of assuming it is present. VAO is available on some GLES2 platforms as an extension, so ruling out GLES2, or assuming that no OpenGL ES2 implementation supports VAO, is erroneous. In a nutshell: check for feature support; if it's available, use it; if not, use a suitable fallback. That's just good software practice.
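The feature check described above can be sketched like this. On ES2 the extension list comes back from glGetString(GL_EXTENSIONS) as one space-separated string; `HasExtension` is a hypothetical helper name, and a plain strstr is not enough because one extension name can be a prefix of another, so whole tokens are matched:

```cpp
#include <cstring>

// Returns true if `name` appears as a whole, space-delimited token in the
// extension string (as returned by glGetString(GL_EXTENSIONS) on ES2).
bool HasExtension(const char* extensions, const char* name)
{
    if (extensions == nullptr)
        return false;
    const size_t len = strlen(name);
    const char* p = extensions;
    while ((p = strstr(p, name)) != nullptr) {
        const bool startOk = (p == extensions) || (p[-1] == ' ');
        const bool endOk = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk)
            return true;        // matched a full token, not a prefix
        p += len;
    }
    return false;
}
```

Used as, e.g., `HasExtension((const char*)glGetString(GL_EXTENSIONS), "GL_OES_vertex_array_object")` before deciding whether the VAO path is available on an ES2 device.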

Share on other sites

Ok I am well on my way but I have one problem.

So far I statically link against libGLESv2.so. Then I try to dynamically load libGLESv3.so. If that works, then GLES3 is available:

// Work out what version of GLES the device supports
DeviceCapabilities* device = DeviceCapabilities::GetInstance();
// GLES2 is our minimum (set in the manifest)
device->SetGLESVersion(2);
// Try to load the GLES3 lib
void* handle = dlopen("libGLESv3.so", RTLD_NOW);
// If it loaded, then GLES3 is available
if (handle != NULL)
{
    Logger::Log(LoggerPriorityInfo, "GLES3 support detected");
    device->SetGLESVersion(3);
}
else
{
    Logger::Log(LoggerPriorityInfo, "GLES2 support only");
}

I also use the version to determine the maximum number of vertices I allow (65k for GLES2, a custom value of about 2 million for GLES3). But now I have a problem: since GLES2 only supports unsigned shorts as indices while GLES3 lets us use unsigned ints, I have been doing this:

INDEX_TYPE* indices = (INDEX_TYPE*)malloc(numSegments * 2 * sizeof(INDEX_TYPE));
where INDEX_TYPE is either unsigned short or unsigned int, depending on my previous USING_GLES3 define. I know I can solve this using templates, but that means putting my class code in a header, doesn't it? And I have 6 classes that need to switch between types :/. Is there a neater way to do this? I am just using them as arrays, e.g.:

unsigned int numSegments = m_NumLineSegments * 2;
INDEX_TYPE* indices = (INDEX_TYPE*)malloc(numSegments * 2 * sizeof(INDEX_TYPE));
for (unsigned int i = 0; i < numSegments; i++)
{
    indices[i * 2 + 0] = i;
    indices[i * 2 + 1] = i + 1;
}

I will still be able to allocate enough memory, since I can work out the size dynamically, but if I can't hard-code the type, how do I assign a value to something that might be 2 bytes or might be 4 bytes? Unless I assume it will be 4 bytes, assign the value, and chop the extra 2 bytes off if I need to (since I am limiting the size of the value elsewhere, I'll never chop off anything I need).

I need a function that does something like this:
void AssignValue(void* variable, size_t bytes, unsigned int value);
Then I can do:
AssignValue(indices + offset, 2, index); (where I would pass either 2 or 4)
Is it possible to do that?

Thanks for the help so far, it definitely led me down the right path. I've been scratching my head for days thinking how to tackle this; I ask the one question and suddenly make leaps and bounds.
Edited by Nanoha

Share on other sites

I tried the following, which works in the test but not in practice :/ Visual Studio reports a segmentation fault when I use it in my actual class.

void AssignValue(void* index, size_t indexSize, unsigned int value)
{
    if (indexSize == sizeof(unsigned short))
    {
        unsigned short* ind = (unsigned short*)index;
        *ind = value;
    }
    else if (indexSize == sizeof(unsigned int))
    {
        unsigned int* ind = (unsigned int*)index;
        *ind = value;
    }
}


unsigned int uInt[4];
unsigned short uShort[4];
for (unsigned int i = 0; i < 4; i++)
{
    uInt[i] = i;
    uShort[i] = i;
}
char* uIntMem = (char*)uInt;
char* uShortMem = (char*)uShort;
size_t uIntSize = sizeof(unsigned int);
size_t uShortSize = sizeof(unsigned short);
for (unsigned int i = 0; i < 4; i++)
{
    AssignValue(uIntMem + i * uIntSize, uIntSize, 3 - i);
    AssignValue(uShortMem + i * uShortSize, uShortSize, 3 - i);
}

It just reverses the values from 0, 1, 2, 3 to 3, 2, 1, 0. Which it does in the test.
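One likely cause of the segfault in the real class: the test above addresses the buffer through a char*, but the earlier call site was `AssignValue(indices + offset, 2, index)` with a typed pointer. If indices is an INDEX_TYPE*, then `indices + offset` already advances by `offset * sizeof(INDEX_TYPE)`, so mixing element offsets and byte offsets writes past the allocation. A hedged sketch of a variant that takes the element index and does the byte addressing itself (`AssignIndex` is a made-up name):

```cpp
#include <cstdint>
#include <cstring>

// Writes `value` into a buffer of either 16-bit or 32-bit indices.
// Taking the element index and computing the byte offset internally avoids
// the pointer arithmetic trap described above; memcpy sidesteps any
// alignment concerns with the cast-and-dereference approach.
void AssignIndex(void* buffer, size_t elementIndex, size_t indexSize,
                 uint32_t value)
{
    char* dst = (char*)buffer + elementIndex * indexSize;
    if (indexSize == sizeof(uint16_t)) {
        uint16_t v = (uint16_t)value;   // deliberate truncation for GLES2
        memcpy(dst, &v, sizeof(v));
    } else {
        memcpy(dst, &value, sizeof(value));
    }
}
```

With this shape the caller never touches raw byte offsets: `AssignIndex(indices, i, indexSize, someIndex)` works for both index widths.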

Share on other sites

Ok I have no idea why or how but it actually works. This can only be black magic. The same app runs on both my phones and uses VAO + unsigned int indices on the Nexus 5 and uses normal binding and unsigned short indices on the rubbish phone I have.

I am very impressed with myself and your help. I am worried I am doing something stupid with the way I am dealing with memory though...

Edited by Nanoha

Share on other sites

... depending on my previous USING_GLES3 define

IMO, you need to get away from this train of thought.

You are not making 2 builds. You have one platform, which is Android.
You need to be able to keep track of that new glesVersion flag you have. The purpose of that flag is to know if you are running ES 2 or ES 3. That flag should be the deciding factor of whether you use shorts or ints.

You could do something like:

unsigned int* AllocIndicesBufferES3(int numSegments)
{
    unsigned int* indices = (unsigned int*)malloc(numSegments * 2 * sizeof(unsigned int));
    for (int i = 0; i < numSegments; i++)
    {
        indices[i * 2 + 0] = i;
        indices[i * 2 + 1] = i + 1;
    }
    return indices;
}

unsigned short* AllocIndicesBufferES2(int numSegments)
{
    unsigned short* indices = (unsigned short*)malloc(numSegments * 2 * sizeof(unsigned short));
    for (int i = 0; i < numSegments; i++)
    {
        indices[i * 2 + 0] = (unsigned short)i;
        indices[i * 2 + 1] = (unsigned short)(i + 1);
    }
    return indices;
}

// Wherever in your code you would normally have this
void ThatOneMethod()
{
    if (glesVersion == 3)
    {
        indicesES3 = AllocIndicesBufferES3(numSegments);
    }
    else
    {
        indicesES2 = AllocIndicesBufferES2(numSegments);
    }
}


Edited by noodleBowl
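On the earlier worry about having to move whole classes into headers: only the filling code needs to be a template, and the class itself can keep a raw byte buffer and dispatch once on the runtime flag. A sketch along those lines (`BuildSegmentIndices` is a made-up name):

```cpp
#include <cstdint>
#include <vector>

// Fill a segment index list once, as a small header template; the runtime
// glesVersion branch then only decides which instantiation to call.
template <typename Index>
std::vector<Index> BuildSegmentIndices(uint32_t numSegments)
{
    std::vector<Index> indices(numSegments * 2);
    for (uint32_t i = 0; i < numSegments; ++i) {
        indices[i * 2 + 0] = static_cast<Index>(i);
        indices[i * 2 + 1] = static_cast<Index>(i + 1);
    }
    return indices;
}
```

At the branch: `if (glesVersion == 3)` use `BuildSegmentIndices<uint32_t>(n)`, else `BuildSegmentIndices<uint16_t>(n)`. Only this small function has to live in a header, not the six classes.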
