
Posted (edited)

I have textures that are created while the application is running (depth textures, for example), and I need to pass them to a fragment shader, where the data is read in a loop. Currently I have implemented it like this:

uniform int activeMaps;
uniform sampler2D maps[N];
...
for (int i = 0; i < activeMaps; i++) {
    ...

    float someData = texture(maps[i], texCoord).r;
    ...
}


This code works well with OpenGL 3.3 on Nvidia graphics cards, but on Intel and AMD I can get the following error: `error: sampler arrays indexed with non-constant expressions are forbidden in GLSL 1.30 and later`. After switching to OpenGL 4.0, everything works fine. My first question: will this work well on other video cards?

Second question. If I use GL_TEXTURE_2D_ARRAY, how can I add textures without reading pixels back from the GPU, and how can I add a ready-made texture by its id? The sample code assumes that the texture data is already on the CPU and simply uploads it with glTexSubImage3D. Here is the code:


GLuint texture = 0;

GLsizei width = 2;
GLsizei height = 2;
GLsizei layerCount = 2;
GLsizei mipLevelCount = 1;

// Read your texels here. In the current example, we have 2*2*2 = 8 texels, with each texel being 4 GLubytes.
GLubyte texels[32] = 
{
     // Texels for first image.
     0,   0,   0,   255,
     255, 0,   0,   255,
     0,   255, 0,   255,
     0,   0,   255, 255,
     // Texels for second image.
     255, 255, 255, 255,
     255, 255,   0, 255,
     0,   255, 255, 255,
     255, 0,   255, 255,
};

glGenTextures(1,&texture);
glBindTexture(GL_TEXTURE_2D_ARRAY,texture);
// Allocate the storage.
glTexStorage3D(GL_TEXTURE_2D_ARRAY, mipLevelCount, GL_RGBA8, width, height, layerCount);
// Upload pixel data.
// The first 0 refers to the mipmap level (level 0, since there's only 1)
// The following 2 zeroes refers to the x and y offsets in case you only want to specify a subrectangle.
// The final 0 refers to the layer index offset (we start from index 0 and have 2 layers).
// Altogether you can specify a 3D box subset of the overall texture, but only one mip level at a time.
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0, width, height, layerCount, GL_RGBA, GL_UNSIGNED_BYTE, texels);

// Always set reasonable texture parameters
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);

 

Thanks in advance for your help.

Edited by congard

Posted (edited)

Hi!

Beware that a texture array is a single texture with one ID. All its slices have the same width, height, format and everything else, similar to a 3D texture (forgive me for the simplification). It isn't really an "array" of 2D textures; it's rather a "deep" 2D texture. That means you can't "add" textures to it, you have to copy into its individual layers. In the sample, the data is uploaded from the CPU using glTexSubImage3D.

You can copy to it on the GPU using glCopyTexSubImage3D (from a framebuffer) or glCopyImageSubData (from another texture); see https://www.khronos.org/opengl/wiki/Texture_Storage#Texture_copy
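For example, copying one existing 2D texture into a layer of the array without a CPU round trip might look like this (a sketch; `depthTex`, `arrayTex`, `width`, `height` and `layer` are hypothetical names, both textures must have compatible sizes and formats, and glCopyImageSubData requires GL 4.3 or ARB_copy_image):

```cpp
// Copy the whole 2D texture `depthTex` into layer `layer` of the
// GL_TEXTURE_2D_ARRAY `arrayTex`, entirely on the GPU.
// Source z/depth are 0/1 for a plain 2D texture; the destination z
// selects the destination layer.
glCopyImageSubData(depthTex, GL_TEXTURE_2D,       0, 0, 0, 0,
                   arrayTex, GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                   width, height, 1);
```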

Edited by pcmaster

Posted (edited)

For the first part: if you write

uniform sampler2D maps[N];

in your GLSL shader, you have an array of 2D textures, not a Texture2DArray (that's sampler2DArray in GLSL). On many real GPUs it's problematic to access the individual 'maps' by an index that isn't known at compile time, and that's why the compiler complains. It's a complicated matter: older cards had a limited number of "slots" for each texture, and you couldn't just iterate over the textures bound to the individual slots. Perhaps somebody can answer whether OpenGL 4.0 guarantees this to work with all vendors; I'd guess it doesn't (bindless textures?)
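For contrast, here is a sketch of how the original loop could look with a sampler2DArray (assuming all maps share the same size and format). The layer index is just a texture coordinate, not a sampler index, so indexing it dynamically is not a problem even in GLSL 3.30:

```glsl
uniform int activeMaps;
uniform sampler2DArray maps;  // one texture object bound to one unit
...
for (int i = 0; i < activeMaps; i++) {
    // The layer is selected by the third texcoord component,
    // not by indexing into an array of samplers.
    float someData = texture(maps, vec3(texCoord, float(i))).r;
    ...
}
```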

Edited by pcmaster

54 minutes ago, pcmaster said:


And if I use a texture array instead of an array of textures and copy on the GPU, will I lose performance? If so, how much?


I bypassed this by changing N to, let's say, $NumOfTexAtOnceAvil.

At startup, once you have initialized the proper OpenGL subsystem, you check how many texture units you can handle in one render pass. Then, when you load the shader source, you search for this $var_name placeholder and replace it with the static constant you queried earlier.

Then you compile the shader.
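The substitution step described above could be sketched like this (substitutePlaceholder is a hypothetical helper name; the value would come from glGetIntegerv with GL_MAX_TEXTURE_IMAGE_UNITS at startup):

```cpp
#include <string>

// Replace every occurrence of a placeholder such as "$NumOfTexAtOnceAvil"
// in the shader source with a number queried at startup, before the
// source is handed to glShaderSource/glCompileShader.
std::string substitutePlaceholder(std::string source,
                                  const std::string& name,
                                  int value) {
    const std::string replacement = std::to_string(value);
    std::size_t pos = 0;
    while ((pos = source.find(name, pos)) != std::string::npos) {
        source.replace(pos, name.size(), replacement);
        pos += replacement.size();
    }
    return source;
}
```

For example, substitutePlaceholder("uniform sampler2D maps[$N];", "$N", 16) yields "uniform sampler2D maps[16];".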

9 minutes ago, _WeirdCat_ said:


Yes, I use the same approach to "build" shaders. But I need the value to be dynamic.


From my understanding, one way to bypass this is to create a large enough array plus an additional uniform that stores the actual size. But you still need to know how large the array has to be - I'm assuming reloading the shader is not an option?
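A sketch of that idea, assuming the shader still targets GLSL 4.00 so that the dynamically uniform index into the sampler array is allowed:

```glsl
#define MAX_MAPS 16               // worst-case size, chosen up front
uniform sampler2D maps[MAX_MAPS];
uniform int activeMaps;           // actual count, updated from the CPU
...
for (int i = 0; i < activeMaps; i++) {
    // Slots beyond activeMaps are never sampled.
    float someData = texture(maps[i], texCoord).r;
    ...
}
```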

