We're about to release on Android and are supporting 4.0+. Version 3.* was never widely distributed, so you can pretty much pretend it doesn't exist. Version 2.* is still used by about 5% of devices, but we decided to ditch it because it doesn't support setPreserveEGLContextOnPause, which we're using to keep our OpenGL context alive after multitasking.
Picking an API level has little to do with handling low-performance devices. That's a whole other kettle of fish. I'd use the above link to identify a few popular low-end devices, and try to hit a variety of GPU manufacturers. Also, borrow as many devices from friends as possible. It took testing on about 20 devices before the flow of weird device-specific issues dried up for me, but then I'm writing an engine from scratch. If you're using Unity or something, you won't have to worry about all the quirky workarounds for various device-specific issues.
I think Jihodg is on the right track. Your code that's hardcoding a test value for texture two's alpha is only modifying the top mip.
If the lower mips are already generated, then they will still be opaque. Try regenerating the lower mips after hardcoding that transparency, or modify your code so that you iterate through all the mips and hardcode an alpha value.
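To make the "fix every level" option concrete, here's a minimal CPU-side sketch. The MipChain struct and forceAlphaAllMips are hypothetical names invented for illustration; with real GL you'd re-upload each level (e.g. with glTexSubImage2D per level), or simply call glGenerateMipmap again after editing level 0.

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// Hypothetical CPU-side mip chain: level 0 is size x size RGBA8, and each
// following level halves the dimensions down to 1x1.
struct MipChain {
    std::vector<std::vector<uint8_t>> levels; // RGBA8 pixels per level
    explicit MipChain(int size) {
        for (int s = size; s >= 1; s /= 2)
            levels.push_back(std::vector<uint8_t>(s * s * 4, 255)); // opaque
    }
};

// Hardcode the test alpha on EVERY mip level, not just the top one,
// so the lower mips don't stay opaque.
void forceAlphaAllMips(MipChain& tex, uint8_t alpha) {
    for (auto& level : tex.levels)
        for (size_t i = 3; i < level.size(); i += 4) // every 4th byte = A
            level[i] = alpha;
}
```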
Sounds like a precision issue to me. Your math is being done at mediump, which may have a range of only -16384 to 16384. That's quite a lot more than your 250-300, but bear in mind that the distance function almost certainly has to square those values internally, and 300² = 90,000 is well outside that range. In fact, maybe the compiler is smart enough to remove the square root from the 'distance' function and compare against your radius squared (which can be calculated once per draw call) instead of using the radius.
First up, if you have a device that supports highp in the pixel shader try just changing 'precision mediump float;' to 'precision highp float;'. If that fixes it then we're on the right track.
But I wouldn't stop there because lots of Android devices don't support that. Try doing your lighting math in normalized screen coordinates instead of pixel coordinates maybe? (i.e. divide both the pixel pos and the light pos by the screen height or the screen width)
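Here's a quick CPU sketch of the arithmetic, assuming the spec-minimum mediump range of ±16384 and a hypothetical 720-pixel screen height: the squared pixel-space distance blows past the mediump range, while the same math in normalized coordinates stays far below 1.

```cpp
// Squared distance between two 2D points, as a 'distance' implementation
// would have to compute internally before the square root.
static float distSq(float ax, float ay, float bx, float by) {
    float dx = ax - bx, dy = ay - by;
    return dx * dx + dy * dy;
}

// Minimum guaranteed mediump range per the GLSL ES spec (2^14).
static const float MEDIUMP_MAX = 16384.0f;

// Light ~300 pixels from the fragment, in raw pixel coordinates:
//   distSq(100, 100, 300, 324) = 90176, which overflows mediump.
// Same positions divided by a hypothetical 720-pixel screen height:
//   the squared distance is ~0.17, comfortably in range.
```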
I would check out the Mikey games (Mikey Shorts, Mikey Boots) for examples of how to do it right. Basically, you want left and right buttons for your left thumb, and a jump button plus at most one other button for your right thumb. I wouldn't recommend any more than that.
Extra points if you make the button size/positioning customizable.
This is wrong. Your camera movement should be tied to the frame time, just like all movement. Otherwise, a 30 fps game would have more sluggish camera rotation than the same game running at 60 fps.
In any case, it doesn't explain the issue.
Edit: Unless, of course, you're not taking input snapshots per frame, but rather updating input at a frame-independent rate, such as by pushing Windows messages directly into your engine.
We're talking about mouse cursor movement.
If we were talking about control-pad analog sticks rotating the camera, then a time delta should be applied, but we're talking about mouse movement, which should definitely not have a time delta applied (as demonstrated by the fact that removing it fixed the bug!)
The best way to start would be to force everything to use highp and see if the problem goes away. If it does, then you can start the process of figuring out what needs to be highp and what can be mediump or lowp.
IMO, using a blanket precision declaration at the top (the line "precision mediump float;") is bad practice; in mobile fragment shaders you ought to think about the required precision of every operation. But in this case it makes life easier, because all you need to do to confirm whether it's a precision problem is change that line to "precision highp float;". You should also go into your vertex shader and make sure the texture_coord varying is output as highp too.
If that fixes it, then that's great, but bear in mind that not all Android GPUs support high precision floats in their fragment shader. However, you might be able to get it all working at mediump. The most likely place where precision is being lost is the iterative adjustment of textCoo; you might see an improvement by calculating textCoo afresh on each iteration rather than applying a delta. (Change the line "textCoo -= deltaTextCoord;" to something like "textCoo = texture_coord.st - (deltaTextCoord * (i + 1));", or, if you tweak things correctly before you enter your loop, you could use the far more optimal "textCoo = texture_coord.st + (deltaTextCoord * i);".)
As an aside, attempting 128 samples might be too many unless you're targeting only the extreme high end devices.
BC2 is a bit of an odd duck, and frankly is rarely used nowadays. It stores RGBA data, using BC1 for the RGB part and a straight 4 bits per pixel for the alpha channel. The alpha part doesn't use any endpoints-and-indices scheme; it just stores explicit per-pixel values. But since each alpha value is only 4 bits, there are just 16 distinct levels of alpha, which causes severe banding and makes smooth gradients and soft edges very hard to represent. Like BC3, it totals 16 bytes per block. As far as I can tell, there's no reason ever to use this format, since BC3 does a better job in the same amount of memory. I include it here just for historical reasons.
What I'd do is transform all 8 corner points of the cube to 2D screen space, and then use a 2D convex hull algorithm to compute which edges form the outside of the projected silhouette. That solution has the appeal of using only generic "stock" algorithms (3D-to-2D projection and 2D convex hull), so it will be very easy to reason about and maintain in the long run.
Way easier than calculating silhouette edges, I think.
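A sketch of that approach, under the assumption of an orthographic projection (here hardcoded to look straight down the (1,1,1) body diagonal, so the cube projects to a hexagon). The hull step is Andrew's monotone chain, a standard "stock" algorithm.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

struct P2 { double x, y; };

// Cross product of (a - o) and (b - o); > 0 means a counter-clockwise turn.
static double cross(const P2& o, const P2& a, const P2& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Andrew's monotone chain; returns hull vertices in counter-clockwise order,
// dropping duplicate and collinear points.
std::vector<P2> convexHull(std::vector<P2> pts) {
    std::sort(pts.begin(), pts.end(), [](const P2& a, const P2& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    size_t n = pts.size(), k = 0;
    std::vector<P2> h(2 * n);
    for (size_t i = 0; i < n; ++i) {                        // lower hull
        while (k >= 2 && cross(h[k-2], h[k-1], pts[i]) <= 0) --k;
        h[k++] = pts[i];
    }
    for (size_t i = n - 1, t = k + 1; i > 0; --i) {         // upper hull
        while (k >= t && cross(h[k-2], h[k-1], pts[i-1]) <= 0) --k;
        h[k++] = pts[i-1];
    }
    h.resize(k - 1);
    return h;
}

// Orthographic projection onto a view plane perpendicular to (1,1,1),
// using the basis u = (1,-1,0)/sqrt(2), v = (1,1,-2)/sqrt(6).
P2 project(double x, double y, double z) {
    return { (x - y) / std::sqrt(2.0), (x + y - 2.0 * z) / std::sqrt(6.0) };
}
```

For a perspective camera you'd project each corner through the full view-projection transform instead; the hull step stays the same.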
Sounds like silhouette calculation. For each face of the cube you can calculate whether it is facing toward the camera or away from the camera. Then for each edge of the cube you can calculate if it connects a face that's pointing away and a face that's pointing toward the camera. If so, then it forms part of the silhouette.
Tricky to implement though, because you need to deal with floating point precision issues, and once you've figured out the silhouette edges, working out the correct ordering you want might be fiddly.
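For the face-facing test itself, here's a minimal sketch for an axis-aligned cube with an orthographic view direction. It deliberately skips the floating-point epsilon handling mentioned above (a view direction exactly parallel to a face makes that face's dot product zero) and doesn't order the silhouette edges; it only counts them.

```cpp
struct V3 { double x, y, z; };
static double dot(const V3& a, const V3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Each edge of an axis-aligned cube runs along one axis and is shared by
// exactly two faces whose outward normals point along the other two axes.
// An edge is on the silhouette when one adjacent face faces the camera
// and the other faces away.
int countSilhouetteEdges(const V3& viewDir) {
    const V3 axes[3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };
    int count = 0;
    for (int k = 0; k < 3; ++k) {               // axis the edge runs along
        int i = (k + 1) % 3, j = (k + 2) % 3;   // the other two axes
        for (int si = -1; si <= 1; si += 2)
            for (int sj = -1; sj <= 1; sj += 2) {
                // Outward normals of the two faces sharing this edge.
                V3 ni = { axes[i].x * si, axes[i].y * si, axes[i].z * si };
                V3 nj = { axes[j].x * sj, axes[j].y * sj, axes[j].z * sj };
                bool frontI = dot(ni, viewDir) < 0.0;  // facing the camera
                bool frontJ = dot(nj, viewDir) < 0.0;
                if (frontI != frontJ) ++count;         // one front, one back
            }
    }
    return count;
}
```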
"Also, if I subdivide the cube into 8, would the same relative points for the smaller cubes also form an outline for each smaller cubes?" - With an orthographic view, I think yes. With a projection view, I think no.