So, I need to use a couple of texture arrays rather than packing everything into a single array. I'm baking lightmaps/normals/etc. into images for most of the background objects; since the player doesn't interact with them, there's little need to calculate shadows/normals in real time. I've found that baking from higher-resolution images gives much better results, especially since I'm scaling some of these objects rather large. But since I don't want huge textures for everything, I thought I'd use two arrays: one for regular objects, which have their shadows mapped and separate normal/specular images, and one for the higher-resolution images with the shadows/normal lighting baked in.
However, I'm struggling a little with the mipmap levels, I think. It's possible I'm erring elsewhere, but I'll try to explain:
When I draw two different objects, each using a different texture array, the object using the array that was loaded first (the one with smaller images) becomes darker, especially when viewed from a distance, as if it's being improperly mipmapped. Moving closer to the object makes it more visible, but it's still a bit wonky. If I drop the mip level count to 1 for the first-loaded array, the darkness goes away (but then I have no mipmapping on the textures in that array).
Here's a stripped-down example. Forgive the naming conventions; I'm terrible at coming up with things on the fly, and I needed to rewrite these in a simpler form than they actually exist in:
void Textures::LoadTextures()
{
    CreateTextureArray();
    LoadImages();
    CreateHDTextureArray();
    LoadHDImages();
}
void Textures::CreateTextureArray()
{
    glGenTextures(1, &TextureArray);
    glBindTexture(GL_TEXTURE_2D_ARRAY, TextureArray);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 11, GL_RGBA8, 1024, 1024, 18);
}
void Textures::CreateHDTextureArray()
{
    glGenTextures(1, &HDTextureArray);
    glBindTexture(GL_TEXTURE_2D_ARRAY, HDTextureArray);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 11, GL_RGBA8, 4096, 4096, 1);
}
void Textures::LoadImageToArray(sf::Image &imageIN, const char* imageName, int layerNumber)
{
    if(!imageIN.loadFromFile(imageName))
    {
        std::cout << imageName << " is borked..." << std::endl;
    }
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layerNumber, imageIN.getSize().x,
                    imageIN.getSize().y, 1, GL_RGBA, GL_UNSIGNED_BYTE, imageIN.getPixelsPtr());
}
When drawing:
void EnableTexture(Object &ObjectIN, GLuint &textureArrayIN, glm::vec3 &imageIndexIN)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D_ARRAY, textureArrayIN);
    glUniform1i(ObjectIN.gTextureArrayID(), 0);
    glUniform1i(ObjectIN.gLayerNumID1(), imageIndexIN.x);
    glUniform1i(ObjectIN.gLayerNumID2(), imageIndexIN.y);
    glUniform1i(ObjectIN.gLayerNumID3(), imageIndexIN.z);
}
The OpenGL references are a little light on information for glTexStorage3D, but from what I gather, the maximum number of mipmap levels is log2 of the largest dimension, plus one (in this case 11 for the 1024 textures, and I believe 13 for the 4096 textures).
My question is: when binding a new array, does that override the settings for the previously bound array (specifically the mipmap level counts for both arrays)? Or am I perhaps going about this entirely incorrectly?
Is there a step I'm missing when binding the array for use in the shader?
Additionally, I've read that I could just use the single larger array and sample a smaller mipmap level when I need a smaller image. Is there any advantage to that over this approach? I imagine there's some benefit, since I wouldn't be rebinding the arrays, but if it's marginal, two arrays seems like the simpler approach.
If I'm not providing enough information, or if some images would help explain, please let me know.
I appreciate any advice.
Edit:
Here are a couple of images that might help explain. Forgive the overwhelming redness; it's just a hodgepodge of stuff thrown together as an example, with none of the shaders balanced.