billouparis

OpenGL texture size issue / OpenGL ES / impossible to generate mipmaps


Hello,

I'm rather a newbie with OpenGL / OpenGL ES.
I have an application that renders lots of objects in the same scene, with lots of texture maps based on images.

When I render my scene with 50 objects (all using different textures), performance is fine (30 ms). When I render one more object (so 51 objects), I suddenly get a big drop in performance (500 ms), just for one more object added to the scene!

I have very large texture images: about ten textures are 1024x1024, some are 512x512, and very few are 128x128 or 256x256.
If I reduce the very large textures from 1024x1024 to 512x512, then performance with 51 objects is fine again (30 ms). That basically works around my problem, but I would really like to understand why there is this drop in performance for just one more texture to draw.

I have also tried to use mipmapped textures, by calling the following methods:

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, textures[indexTextureBuffer]);
glGenerateMipmap(GL_TEXTURE_2D);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST_MIPMAP_NEAREST);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image2.width(), image2.height(), 0,
             GL_RGBA, GL_UNSIGNED_BYTE, image2.bits());

This code compiles fine, but the application crashes at runtime. So I removed the glGenerateMipmap(GL_TEXTURE_2D) line and changed the two filter lines back to:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);


I understand I have a problem with large texture images, but what I don't understand is why it happens so suddenly. Why is there such a drop in performance when I simply add one more texture (even a large one, when other large textures are already in use), and why isn't the slowdown progressive?
Can somebody point me to an article that explains what OpenGL is doing at that moment to cause this drop in performance?

Thank you,
Bill

If you run the card out of texture memory, your OpenGL driver will start trying to swap textures.

If you try to use one which is not currently on the card, the driver will throw one out (probably the least recently used) and upload a copy again.

Performance obviously takes a hit because the upload can take a while. This is why it's fast and then suddenly slower. You'll probably find it plateaus at that point, because during drawing the LRU replacement will nastily tend to throw out the very next texture you need, so you end up re-uploading all the textures.

Imagine you have space for three textures and you draw A, then B, then C in a frame, followed by A, B, C the next frame, and so on. Everything is OK. If you add a fourth texture, D, then your drawing sequence might be A B C... then D, at which point A is thrown out and D uploaded. The next frame starts and you request A... which now needs uploading. B is now the least recently used, despite being the very next one you want...
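
To make that concrete, here is a toy simulation of the pattern described above (plain C, no OpenGL; a deliberately simplified model of LRU eviction, not how any real driver is written). With room for three textures, drawing A B C every frame uploads only during warm-up, while drawing A B C D uploads on every single texture use:

#include <stdio.h>
#include <string.h>

#define SLOTS 3                    /* pretend the card only has room for 3 textures */

static char slots[SLOTS];          /* slots[0] = most recently used */
static int uploads = 0;

static void use_texture(char t)
{
    int i;
    for (i = 0; i < SLOTS && slots[i] != t; i++)
        ;
    if (i == SLOTS) {              /* miss: evict the LRU entry and "upload" t */
        uploads++;
        i = SLOTS - 1;
    }
    memmove(&slots[1], &slots[0], i);  /* shift down to make room at the front */
    slots[0] = t;                      /* t is now the most recently used */
}

int main(void)
{
    memset(slots, 0, sizeof slots);
    for (int f = 0; f < 10; f++)
        for (const char *p = "ABC"; *p; p++)
            use_texture(*p);
    printf("10 frames of ABC : %d uploads\n", uploads);  /* 3 (warm-up only) */

    uploads = 0;
    memset(slots, 0, sizeof slots);
    for (int f = 0; f < 10; f++)
        for (const char *p = "ABCD"; *p; p++)
            use_texture(*p);
    printf("10 frames of ABCD: %d uploads\n", uploads);  /* 40 (every use misses) */
    return 0;
}

One texture over capacity turns zero steady-state uploads into an upload per draw, which is exactly the sudden cliff rather than a gradual slowdown.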

You could probably help the driver out by noticing textures which are only distantly visible and downsizing them. You can re-send them at a larger size when the objects come closer again.

Hello Katie,

From what I've read here and there, I had come to much the same conclusion you describe in your post; thank you so much for detailing the mechanism that may be at work.

Now I still need to go further and dig deeper to find answers to the following questions:

1 - How can I know the total amount of texture RAM managed by my GPU?

2 - How can I change the total amount of texture RAM managed by my GPU?

3 - What about texture units: what are they used for exactly, and how could they help me get better performance?

4 - Where is this texture RAM? Is it simply a partition of my board's main RAM, or does the GPU have its own RAM? If it is only a partition, why can't my GPU simply access the other parts of my RAM, and why does the upload/swapping take so much time, since all my texture images are loaded into RAM in the first place?

5 - Why can't I simply use mipmaps in OpenGL ES 2.0?

6 - If I were to use different image sizes for the same texture, choosing one depending on the distance of the object on screen, how would I know that distance so I can bind the right texture?

Thank you,
Bill

"5 - Why can't I simply use mipmaps in OpenGL ES 2.0?"

For mipmapping you should be using:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

The only valid values for GL_TEXTURE_MAG_FILTER are GL_LINEAR or GL_NEAREST; GL_TEXTURE_MIN_FILTER is where you set your mipmapping param.

http://www.opengl.or...exParameter.xml

You're also calling glGenerateMipmap before you specify your texture; move it to after your glTexImage2D call and it should work.
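
Putting the two fixes together, the upload code from the original post would become something like the following sketch (image2, textures, and indexTextureBuffer are the poster's own names; the power-of-two note reflects the core ES 2.0 spec):

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, textures[indexTextureBuffer]);

/* Specify the base level first... */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image2.width(), image2.height(), 0,
             GL_RGBA, GL_UNSIGNED_BYTE, image2.bits());

/* ...then generate the mipmap chain from it. */
glGenerateMipmap(GL_TEXTURE_2D);

/* Mipmap selection happens through the MIN filter only;
   the MAG filter must be GL_NEAREST or GL_LINEAR. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* Note: core OpenGL ES 2.0 only supports GL_REPEAT wrapping and mipmaps
   on power-of-two textures (1024x1024 etc. are fine). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);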

"1 - How can I know the total amount of texture RAM managed by my GPU?"

Generally you can't. If texture swapping is a problem for your program (and it may not be once you fix your mipmapping, since the driver should then only need to swap in the mip levels required rather than the full texture), there are a few other things you can do.

To expand on Katie's example: if you're drawing 16 objects with textures A, B, C and D then, depending on the structure of your program, you might be using textures in the order A B C D A B C D A B C D A B C D. This makes the problem a lot worse, so to resolve it you group your objects by texture and draw them in A A A A B B B B C C C C D D D D order. You still get swapping, but you've hugely reduced the number of times per frame it happens. That's just one example with one solution; others available to you include using texture compression, and breaking really large texture atlases into smaller ones (swapping in a 256x256 texture incurs less overhead and makes it less likely that another texture must be swapped out to make room for it).
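
As a sketch of what that grouping might look like in code (the Object struct, its field names, and draw_object are hypothetical, purely for illustration): sort the draw list by texture name once, then rebind only when the texture changes.

#include <stdlib.h>
#include <GLES2/gl2.h>

typedef struct {
    GLuint texture;                 /* hypothetical per-object data */
    /* ... vertex buffers, transform, etc. ... */
} Object;

static int by_texture(const void *a, const void *b)
{
    GLuint ta = ((const Object *)a)->texture;
    GLuint tb = ((const Object *)b)->texture;
    return (ta > tb) - (ta < tb);   /* overflow-safe comparison */
}

void draw_scene(Object *objects, size_t count)
{
    /* Sort so objects sharing a texture are adjacent: AAAA BBBB CCCC DDDD. */
    qsort(objects, count, sizeof(Object), by_texture);

    GLuint bound = 0;               /* 0 = nothing bound yet (glGenTextures never returns 0) */
    for (size_t i = 0; i < count; i++) {
        if (objects[i].texture != bound) {   /* rebind only when the texture changes */
            glBindTexture(GL_TEXTURE_2D, objects[i].texture);
            bound = objects[i].texture;
        }
        /* draw_object(&objects[i]);  -- hypothetical draw call */
    }
}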

"6 - If I were to use different image sizes for the same texture, choosing one depending on the distance of the object on screen, how would I know that distance so I can bind the right texture?"

I would recommend that you walk away from this idea right now. Fix your mipmapping first (see above) and let the hardware do the job. If nothing else, doing it in software will be horribly inefficient, it won't look as good (mipmapping does a bit more than just selecting a downsized texture), and the state changes required will kill your performance.

Really? I thought this would be a neat wheeze.

(Not that I've tried it, though.)



Hello mhagain, and thank you for your answers.
I tried the mipmapping solution; unfortunately it really does not work for me.
I moved my glGenerateMipmap call as you suggested, which got rid of the crash I had, which is a good thing.
I used the glTexParameteri parameters you suggested too.
But the result is quite messy: my textures are no longer rendered correctly, with parts of textures missing or completely mixed up.
The rendering performance is also very slow, even with the smaller textures that worked fine without mipmaps.
Maybe I forgot to set some other state?

Bill

I think I found the problem with my mipmaps: apparently I also have to call glGenerateMipmap just after binding the texture when rendering. Am I right?
Bill
