oblivion81

Array Textures and mipmapping issue


Hi all,
I'm experiencing a very nasty issue using GL_TEXTURE_2D_ARRAY with automatic mipmap generation.
I don't really understand what's going on.

Essentially I create a texture array of 6 layers using

glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, 512, 384, 6, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);


then I fill it with

glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0, 512, 384, 6, GL_RGB, GL_UNSIGNED_BYTE, data);

where data is a linear memory area correctly filled from files.

Then I generate mipmaps calling

glGenerateMipmap(GL_TEXTURE_2D_ARRAY);


What I experience is that accessing the texture at layer 0 in the fragment shader is never a problem,
at any LOD level.
Accessing layers > 0 gives the correct texture when sampling LOD 0 with the textureLod function,
but at higher LODs it reads what looks like random texture memory. I mean that accessing levels > 0
somehow pulls in texels from other layers of the array (with some evident glitches!), just as if I'm overflowing into nearby memory. :S

My shader code is very simple:

uniform sampler2DArray DiffuseTexture;
in vec2 uv;
out vec4 out_Color;

void main(void)
{
    out_Color = texture(DiffuseTexture, vec3(uv, LAYER)); // LAYER = hardcoded layer index
}


I've tried a lot of solutions, but none has changed the outcome! I'm really stuck!

(I'm using ATI drivers on Linux with an ATI Mobility Radeon HD 2400).

No error is logged using glGetError.

So before I start posting detailed pieces of code here, maybe someone has already seen similar behaviour,
or has some idea for fixing it. I'm afraid I'm completely missing something. (Even though my code works perfectly for standard
2D textures...)


Thanks a lot!!

Daniele

Hi,

The OpenGL 2, 3 and 4 reference pages say that glTexSubImage3D accepts only GL_TEXTURE_3D as its first parameter, so watch out for that. It seems strange that there would be no way to fill a sub-region of such a texture; your code looks legit, just not according to the manual. Maybe there's an extension for that? Or maybe a backwards-compatible version 3 or 4 context might help?

I've got an update on this topic!

I had this issue on an ATI Mobility HD 2400 with Catalyst 9.1 for Linux. (This is kind of embedded HW, which is why I've got old drivers!)

Then I tried at home, on the latest version of Ubuntu with Catalyst 9.8 and an ATI Mobility HD 4500, and it works perfectly!

So this is very likely a driver problem! I'll try updating Catalyst to the latest version on the same HW and see what happens.

I was also thinking... could it be an OpenGL lib issue? Maybe I should check the OpenGL lib version on both platforms.

Could the HW also affect it? I mean... the HW shouldn't be involved in mipmap generation in any way, should it?

About this:

Hi,

OpenGL 2, 3 and 4 references say that glTexSubImage3D accepts only GL_TEXTURE_3D as the first parameter,


I've seen that sentence in the reference and I thought it was some kind of mistake (and my experience above seems to prove it!), because
if you take a look here
http://www.opengl.org/wiki/Texture

you'll read: "glTexImage3D can only be used with GL_TEXTURE_3D, GL_TEXTURE_2D_ARRAY, and GL_TEXTURE_CUBE_MAP_ARRAY targets."


Thanks a lot anyway :-)

bye

Daniele
