You should be happy to be back at the build-something-useful stage. Obsessing over details is the worst, and probably not that uncommon. One moment you're thinking about programming a game, the next you want to know how the compiler and linker work, then what the OS and graphics driver do, what the BIOS is for, what happens the very moment the computer gets turned on, how logic gates fit together to make a CPU, what types of transistors there are and how they make a gate, how a transistor works, what chemicals are inside, how the holes and electrons in there manage to cross that supposedly impassable terrain, and before you know it you're reading about electric fields, subatomic particles, then quantum theory, how it relates to the theory of relativity, and finally you arrive at string theory and loop quantum gravity and wonder whether there is anything more to know and how that could even happen.
That's called a 'scientific mindset', I suppose. Natural curiosity is the greatest thing ever. But you *really* need a way to control it, because otherwise, yup, you'll end up with a lot of knowledge but without any significant results, just ready for the next 'let-me-know-that-too' dive.
Where depthMVP is: light projection matrix * light view matrix * model matrix (that is, the projection and view matrices of a 'camera' placed at the light source, and the object's own model matrix).
And then you use an empty fragment shader to finish your depth rendering pass.
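A minimal sketch of that depth pass, with both shaders embedded as libGDX-style source strings (the names u_depthMVP and a_position are my own assumptions, not from the thread):

```java
// Sketch of the depth-pass shaders described above, GLSL embedded as
// Java strings, as is common in libGDX. All names are assumptions.
public class DepthPassShaders {

    // Vertex shader: transform by depthMVP, i.e. the combined
    // projection * view * model matrix built from the light's point of view.
    public static final String VERT =
          "attribute vec3 a_position;\n"
        + "uniform mat4 u_depthMVP;\n"
        + "void main() {\n"
        + "    gl_Position = u_depthMVP * vec4(a_position, 1.0);\n"
        + "}\n";

    // Empty fragment shader: depth is written to the depth buffer
    // automatically, so no color output is needed.
    public static final String FRAG =
          "void main() {\n"
        + "}\n";
}
```

With a real ShaderProgram you would compile these two sources and render your shadow casters into the FBO's depth attachment.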
So basically you bind your FBO, render your objects into the depth buffer, unbind it, then bind your main framebuffer, bind the shadow map from the previous stage as a 2D texture, and sample depth from it. Correct me if I'm wrong.
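That ordering can be written down as a plain list of steps; this is only a sketch of the sequence described above, recorded as strings rather than real GL calls:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two-pass shadow-mapping order from the post above.
// Each step is recorded as a string instead of issuing real GL calls.
public class ShadowPassOrder {
    public static List<String> steps() {
        List<String> s = new ArrayList<>();
        s.add("bind depth FBO");
        s.add("render scene with the depth shaders (from the light)");
        s.add("unbind FBO");
        s.add("bind main framebuffer");
        s.add("bind shadow map (the FBO's depth attachment) as a 2D texture");
        s.add("render scene, sampling the shadow map in the fragment shader");
        return s;
    }
}
```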
Also, is your u_lightPosition uniform in the camera fragment shader in the same space as your vertex position (which is in fact the MVP matrix multiplied by vec4( a_vertexPos, 1.0 ))?
You're sending that uniform as light.combined. What matrix do you have there?
agleed is absolutely right about the 'length()' stuff. You want the projected coordinates there to sample from the shadow map, not a distance.
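To make the "coords, not length()" point concrete, here is a sketch of a fragment shader that samples the shadow map. It assumes v_shadowCoord is the vertex position transformed by the light's depthMVP in the vertex shader; every name here is my own, not from the thread:

```java
// Sketch of a shadow-sampling fragment shader (GLSL as a Java string,
// libGDX style). Assumes v_shadowCoord = u_depthMVP * vec4(position, 1.0)
// was computed in the vertex shader.
public class ShadowSampleShaders {
    public static final String FRAG =
          "uniform sampler2D u_shadowMap;\n"
        + "varying vec4 v_shadowCoord;\n"
        + "void main() {\n"
        // perspective divide, then map from [-1,1] clip space to [0,1]
        // texture space: these are the coords you sample with
        + "    vec3 c = (v_shadowCoord.xyz / v_shadowCoord.w) * 0.5 + 0.5;\n"
        // depth stored from the light's point of view
        + "    float stored = texture2D(u_shadowMap, c.xy).r;\n"
        // compare with this fragment's depth; small bias against acne
        + "    float lit = (c.z - 0.005 <= stored) ? 1.0 : 0.4;\n"
        + "    gl_FragColor = vec4(vec3(lit), 1.0);\n"
        + "}\n";
}
```

The key line is the sampling one: c.xy (coordinates in the shadow map) goes into texture2D, and c.z (depth) is what you compare, which is why length() has no business there.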
Alright, after some digging and experimenting, here's the complete solution:
To avoid gaps between objects or graphics entities in a libGDX application, one should:
1. use floats everywhere, even for storing the target screen resolution, to avoid unwanted casts and the precision loss that follows; OpenGL uses floats for positioning, and so should you;
2. use power-of-two textures, i.e. those with side lengths equal to 2^x: 32x32, 16x64, 128x256 and so on; this minimizes memory usage on the GPU and helps avoid texture-coordinate distortions caused by float-int casts that may or may not happen within the rendering process (watch your shaders and data types);
3. use POT textures even if the documentation says you don't have to; POTs are great;
4. try NEAREST filtering for loaded textures and texture atlases; this protects you from the wrong 'pixel mix', where OpenGL averages your color-filled pixels with transparent ones; for me, however, it showed no significant results;
5. ask your artist to ensure that none of your full-frame graphics elements have transparent or half-transparent pixels at the edges; sith happens, you know;
6. use tight alignment and/or scale your objects up just a little, so that they overlap each other.
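Point 1 is easy to demonstrate in plain Java: truncating float positions to int gives tiles of uneven width, which is exactly the kind of one-pixel gap or overlap you see on screen. A standalone sketch, not libGDX code:

```java
// Demonstrates the float-to-int truncation from point 1 above:
// edge positions cast to int produce tiles of uneven width.
public class CastGapDemo {
    public static int[] intWidths(float tileWidth, int count) {
        int[] widths = new int[count];
        for (int i = 0; i < count; i++) {
            // left/right edges after an int cast (truncation, not rounding)
            int left  = (int) (i * tileWidth);
            int right = (int) ((i + 1) * tileWidth);
            widths[i] = right - left;
        }
        return widths;
    }

    public static void main(String[] args) {
        // an 800 px wide screen split into 3 tiles of ~266.67 px each
        int[] w = intWidths(800f / 3f, 3);
        System.out.println(java.util.Arrays.toString(w)); // widths are uneven
    }
}
```

Keeping positions as floats all the way to the batch (and letting OpenGL rasterize) avoids accumulating these truncation errors yourself.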