OpenGL GL_TEXTURE_3D or GL_TEXTURE_2D?


Hi!
I'm developing a game that manages a lot of textures per frame. To avoid heavy use of glBindTexture, I realized these textures could be unified into a single big texture, since many of them share the same size, and now the theoretical problem comes:
1) Using a 2D texture: if I have 8 256x256 textures, I can pack them into a single 256x2048 texture (see the sketch after this list). This can be a problem because some video cards don't support big textures (mine supports 8192x8192). If the object has only one texture, its texture will simply be 256x256.
2) Using a 3D texture: if I have 8 256x256 textures, I can pack them into a single 256x256x8 texture, and I think there are no size problems this way. But if the object has only one texture, the 3D texture can't be 256x256x1 (the official OpenGL site tells me a texture can't have odd dimensions), so I'd need to put it in a 2D texture, call glEnable(GL_TEXTURE_2D), apply it to my 3D model, and then call glEnable(GL_TEXTURE_3D) again when I need the 3D texture (I read somewhere that you have to call glEnable every time you switch between 1D, 2D and 3D textures).
3) Keep using my current method: a lot of 256x256 textures, a lot of glBindTexture calls, glUniform to upload the texture ID to the fragment shader, and sacrificed performance in exchange for avoiding the problems in points 1 and 2.
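For reference, a minimal sketch of option 1, assuming the eight source images are already decoded into a hypothetical pixels[i] array of RGBA buffers:

[code]
/* Option 1 sketch: pack eight 256x256 RGBA images into one 256x2048 atlas.
 * `pixels[i]` is a stand-in for the i-th decoded image. */
GLuint atlas;
glGenTextures(1, &atlas);
glBindTexture(GL_TEXTURE_2D, atlas);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256 * 8, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);           /* allocate only */
for (int i = 0; i < 8; ++i)
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 256 * i,        /* slot i's row  */
                    256, 256, GL_RGBA, GL_UNSIGNED_BYTE, pixels[i]);
/* Shader side: remap v for sub-texture `index`:  v' = (v + index) / 8.0 */
[/code]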

What am I supposed to do in this situation? Are there other simple ways to improve the engine? What is the best choice?
I'm waiting for replies.

If you're using OpenGL 3.0 or above, 'texture arrays' (search for them) are designed specifically to do this. They're a bit like 3D textures, except that filtering only occurs on the layer you are using.
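A minimal sketch of that approach, assuming a GL 3.0 context (GL_TEXTURE_2D_ARRAY; the layer count goes in glTexImage3D's depth parameter, and pixels[layer] is a hypothetical source buffer):

[code]
/* Texture array sketch: eight independent 256x256 layers. */
GLuint texArray;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, 256, 256, 8, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
for (int layer = 0; layer < 8; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, 256, 256, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);
/* GLSL 1.30: uniform sampler2DArray tex;
 *            texture(tex, vec3(uv, float(layer)));
 * unlike a 3D texture, the layer coordinate is NOT normalized. */
[/code]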

[quote name='mark ds' timestamp='1341863096' post='4957386']
If you're using OpenGL 3.0 or above, 'texture arrays' (search for them) are designed specifically to do this. They're a bit like 3D textures, except that filtering only occurs on the layer you are using.
[/quote]
The page [url="http://www.opengl.org/wiki/Array_Texture"]http://www.opengl.org/wiki/Array_Texture[/url] describes texture2DArray being used for this purpose, but isn't z = (1.0f/texDepth*index) the same thing, without breaking compatibility with OpenGL < 3.0? And the problem of the 3D texture needing at least 2 px of depth remains...
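(One caveat with that formula, for the record: the z coordinate addresses texel centres, so index / texDepth lands exactly on the boundary between slices; the conventional form adds half a slice:)

[code]
/* Centre of slice `index` in a 3D texture of depth `texDepth`: */
float z = (index + 0.5f) / texDepth;
[/code]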

Don't forget that you can't mipmap your 2D textures if they're stored in a 3D texture.

Edit - unless you want to manually calculate the correct mipmap by interpolating across two 3D textures, that is.

Also, you can have one layer in a 3D texture, but depending on your hardware, it may require a power of 2.
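To illustrate the mipmap point with the 256x256x8 case:

[code]
/* Each 3D mip level halves *all three* dimensions, so slices get averaged
 * into each other as you go down the chain:
 *   level 0: 256 x 256 x 8
 *   level 1: 128 x 128 x 4
 *   level 2:  64 x  64 x 2
 *   level 3:  32 x  32 x 1   (then 16x16x1 ... 1x1x1)
 * A 2D texture array keeps all 8 layers at every level instead. */
[/code]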

What hardware are you running on?

[quote name='mark ds' timestamp='1341866283' post='4957407']
Don't forget that you can't mipmap your 2D textures if they're stored in a 3D texture. [...] What hardware are you running on?
[/quote]

I'm using an ATI Radeon 4870 and it supports textures of any size up to a maximum of 8192x8192. That's fine, but I want to be sure that compatibility with older video cards won't break the executable because of unoptimized code (for example, I rewrote part of the shaders because my video card supports bitwise operations but my notebook with an Intel HD 3000 wouldn't compile them). I'm planning to finish the core in OpenGL and then port the code to OpenGL ES, and I want to be sure that most things will remain unchanged.
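A minimal sketch of the kind of runtime probing that avoids those surprises (glGetString(GL_EXTENSIONS) is the pre-3.0 way to test for an extension):

[code]
#include <string.h>

/* Query limits and extensions at runtime instead of assuming them. */
GLint maxTex = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTex);
const char *ext = (const char *)glGetString(GL_EXTENSIONS);
int hasNPOT = ext && strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL;
[/code]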

Texture arrays would be the preferred route, yes, but even if you can't use them on account of your minimum specs, it's not all bad news.

Firstly, older hardware isn't that crap - really. The last desktop video card that could only handle 256x256 textures was a 3dfx back in the 1990s - I'm reasonably certain that you don't want to aim [i]that[/i] low!

A more reasonable starting point would be somewhere in the GL 1.5 to 2.x class, which more or less guarantees you a minimum of 2048x2048 (there's one Intel in this class that only does 1024x1024, but I don't know of any other). You say that you're using shaders so that puts you in the guaranteed 2048x2048 bracket.

If you're still in doubt, a good place to jump off from might be this OpenGL capabilities database: [url="http://feedback.wildfiregames.com/report/opengl/"]http://feedback.wildfiregames.com/report/opengl/[/url] - here's the page for GL_MAX_TEXTURE_SIZE: [url="http://feedback.wildfiregames.com/report/opengl/feature/GL_MAX_TEXTURE_SIZE"]http://feedback.wildfiregames.com/report/opengl/feature/GL_MAX_TEXTURE_SIZE[/url].

You'll see that ~2% of their listed hardware supports less than 2048x2048. Jumping down to the actual cards ([url="http://feedback.wildfiregames.com/report/opengl/feature/GL_MAX_TEXTURE_SIZE#1024"]http://feedback.wildfiregames.com/report/opengl/feature/GL_MAX_TEXTURE_SIZE#1024[/url]) you'll see that a huge majority (495 out of 553) of this 2% are "Microsoft GDI Generic" - in other words they don't even have an OpenGL driver installed. Not supporting 2048x2048 textures is going to be the least of their problems. So you can quite confidently set that as a minimum requirement, and be certain to have 100% coverage of the end-users you care about.

For mobile platforms I'm not sure how things work out, so you can probably run with a hybrid solution. Instead of restricting yourself to a choice between a single 2048x2048 texture or 64 256x256 textures, why not create 4 1024x1024 textures? You'll still get much of the benefit of reduced draw calls, but still also fit within the texture budget of the hardware. Likewise, if you've determined that the hardware only supports 512x512 textures then create 16 textures at this resolution.
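That split can be computed at load time; a minimal sketch, with numTiles standing in for however many 256x256 images you actually have:

[code]
/* Hybrid packing: size each atlas to the queried limit, then split the
 * tiles across as many atlases as that implies. */
GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
int tilesPerSide  = maxSize / 256;                  /* e.g. 2048/256 = 8 */
int tilesPerAtlas = tilesPerSide * tilesPerSide;    /* e.g. 64           */
int numTiles      = 64;                             /* placeholder count */
int numAtlases    = (numTiles + tilesPerAtlas - 1) / tilesPerAtlas;
[/code]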

Specifically for 3D textures as a solution to this, there are some things to watch out for. The max 3D texture size [i]may[/i] be smaller than the max 2D size - I don't [i]think[/i] that OpenGL specifies this, but it can and does happen. Also, you need to be careful of your filtering modes - basically, set up any kind of linear filter and you'll be filtering between different slices in the 3D texture, which is not what you want in this case (a texture array wouldn't have this problem) - it's GL_NEAREST all the way here. Finally, and only if you're [i]really[/i] being ultra-conservative, 3D textures are an OpenGL 1.2 feature, so they're not going to be available on some of the more prehistoric hardware (which nobody really has any more so it's a non-issue; just mentioning for info). Are 3D textures available on mobile platforms? Don't know - someone else will have to tell you that.
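Concretely, for the 3D-texture route that means something like this (the parameters apply to whichever texture is currently bound to GL_TEXTURE_3D):

[code]
/* Check the separate (and possibly smaller) 3D limit, and force
 * GL_NEAREST so lookups never blend neighbouring slices. */
GLint max3D = 0;
glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &max3D);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
[/code]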

[quote name='mhagain' timestamp='1341879695' post='4957461']
Texture arrays would be the preferred route, yes, but even if you can't use them on account of your minimum specs, it's not all bad news. [...]
[/quote]

You opened my eyes.
I was looking for a website exactly like that! It's interesting how much of this stuff was already present in 10-year-old cards. Maybe I care too much about backwards compatibility.
Well, my framework is for a 2D port where I already have almost all the graphic resources on 256x256 textures. Another guy is working on double-size textures, so the final textures will be 512x512 (but the player can choose to play in the old Megadrive style, which internally uses the 256x256 textures), so I need to be sure that even with quad-size textures my framework will still run on graphics cards with a maximum texture size of 2048x2048. I chose to grow the texture only in height because I need to recreate it every 2 frames, and I don't want to realign 8 256x256 textures into a single 1024x1024 each time; I'd rather pack them into a 256x2048 texture.
GL_EXT_texture_array seems to be supported only by OpenGL 3+ and not by OpenGL ES, so I decided not to use it.
GL_ARB_texture_non_power_of_two is supported by 91% of video cards, so I decided to use 3D textures with 1 pixel of depth in the case where an object has only one texture. A lot of current smartphones also support GL_OES_texture_npot.
Implementing a hybrid solution is a good idea, but at this point 3D textures seem like the best choice for high compatibility. As I replied to [b]mark ds[/b], can't z = (1.0f/texDepth*index) be a replacement for texture2DArray? In any case I'm using GL_NEAREST on every texture; I don't like filters on a 2D game =P and GL_LINEAR breaks CLUT textures.
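For what it's worth, a minimal sketch of that setup (glTexImage3D is core since GL 1.2; pixels[i] is again a hypothetical source buffer):

[code]
/* 256x256x8 3D atlas, GL_NEAREST only, slices picked with a centred z. */
GLuint tex3D;
glGenTextures(1, &tex3D);
glBindTexture(GL_TEXTURE_3D, tex3D);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 256, 256, 8, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
for (int i = 0; i < 8; ++i)
    glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, i, 256, 256, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels[i]);
/* GLSL 1.20: texture3D(tex, vec3(uv, (float(i) + 0.5) / 8.0)); */
[/code]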
