
OpenGL Expense of modifying/replacing loaded textures


How well does OpenGL (3.2, if it matters) handle modifying or replacing textures that have already been uploaded to the GPU?

 

I've always had the impression that modifying textures is much more expensive than replacing VBO data, even when the texture is far smaller in byte size, but I'm starting to think this might not be the case.

 

Is there perhaps a list somewhere that shows what sort of operations are most expensive in OpenGL?

 

For context, I'm working on a terrain system where the terrain colors can change frequently. I figure I can either use texture mapping and modify the texture pixels, or put the colors in the terrain vertices and rebuild the VBO on each change. I'm accustomed to continually rebuilding dynamic VBOs for other tasks, but in this case I'm thinking a texture would be a lot less data to upload than the block of geometry.
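
For concreteness, here's roughly what I have in mind for the texture route (just a sketch; the 256x256 size and the names are made up):

// Hypothetical sketch: keep a CPU-side copy of the terrain colors and
// re-upload them when something changes.
GLuint terrainTex;                    // created once with glTexImage2D
GLubyte colors[256 * 256 * 4];        // CPU-side RGBA color data

void onTerrainColorsChanged(void)
{
    // ... modify the bytes in colors for the cells that changed ...
    glBindTexture(GL_TEXTURE_2D, terrainTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,   // target, mip level
                    0, 0, 256, 256,     // x, y, width, height
                    GL_RGBA, GL_UNSIGNED_BYTE, colors);
}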

 


You may need to test this...

 

For example, I have had reasonably good success streaming video into textures (as many as 8 video streams at a time), but on some older hardware (~10 years ago) this would put some hurt on the performance.

This is one of those things that could be reasonably cheap or could be horribly expensive, depending on how you need to do the update, what formats you use, and so on.

The first thing is to batch up your updates. A single glTexSubImage call updating the entire texture will perform better than lots of calls updating tiny subrects of it.

Try to divide your workload into updating, then drawing. If you need to update/draw/update/draw/etc., your performance will tank, especially on Intel or AMD hardware (NV tolerates this pattern better, but it's still a slower path).

Watch your formats; you absolutely must match the data you feed to glTexSubImage with the driver's preferred internal representation. Normally that means using a BGRA format. Don't fall into the trap of using RGB because you think it will "save memory" - the driver will need to unpack and reswizzle the data in a slow software process, and again your performance will tank. Experiment with different values for the type parameter - GL_UNSIGNED_INT_8_8_8_8_REV can work well here.
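
A minimal sketch of the above (one whole-texture update per frame, fed in the driver-friendly format; the size and names are illustrative):

#define TEX_W 256
#define TEX_H 256

GLuint tex;                          // created elsewhere with internalformat GL_RGBA8
GLubyte pixels[TEX_W * TEX_H * 4];   // CPU-side staging copy, BGRA byte order

void uploadAllChanges(void)
{
    // Accumulate every change for this frame into pixels first, then issue
    // a single glTexSubImage2D covering the whole texture before drawing.
    glBindTexture(GL_TEXTURE_2D, tex);
    // GL_BGRA + GL_UNSIGNED_INT_8_8_8_8_REV usually matches the driver's
    // internal layout for GL_RGBA8, so no software swizzle is needed.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, TEX_W, TEX_H,
                    GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
}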

Finally, if you can schedule the update a few frames ahead of when the texture is needed, consider using a PBO to get an asynchronous transfer. If you need to update in the same frame as the texture is used, don't bother with the PBO - it's just extra overhead.
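
Sketched out, the PBO path looks something like this (assuming the upload can be issued a frame or two before the texture is drawn; names reuse the sketch above):

GLuint pbo;   // pixel unpack buffer, created once at startup
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, TEX_W * TEX_H * 4, NULL, GL_STREAM_DRAW);

// Frame N: copy the new pixels into the PBO...
void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(dst, pixels, TEX_W * TEX_H * 4);   // needs <string.h>
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

// ...then kick off the transfer. With a PBO bound, the last parameter is a
// byte offset into the buffer rather than a client pointer, so the call can
// return before the copy has completed.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, TEX_W, TEX_H,
                GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, (const GLvoid *)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

// Frame N+1 or N+2: draw with the texture.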


The OpenGL driver is one factor, which means it depends on a very wide range of variables, so what works best for you may not work best for someone else.

 

One universal truth, however, is that if you update a texture while it is still in use by the GPU, the driver will perform a synchronous flush so that you can update the texture safely on the CPU side.

 

This will be terrible for performance no matter what API you use or what hardware you have.

 

If you need to update the texture frequently or if your updates always cover the whole body of the texture, you should double-buffer the texture, or possibly triple-buffer it.

 

 

L. Spiro


> One universal truth, however, is that if you update a texture while it is still in use by the GPU, the driver will perform a synchronous flush so that you can update the texture safely on the CPU side.

 

I would hope that in this case drivers are at least reasonably intelligent and copy off the data to temp storage for updating at a later time when the resource is no longer in use (similar to the D3D10+ case for UpdateSubresource where there is contention, although unfortunately OpenGL doesn't seem to specify a behaviour here).


> One universal truth, however, is that if you update a texture while it is still in use by the GPU, the driver will perform a synchronous flush so that you can update the texture safely on the CPU side.

 

> I would hope that in this case drivers are at least reasonably intelligent and copy off the data to temp storage for updating at a later time when the resource is no longer in use (similar to the D3D10+ case for UpdateSubresource where there is contention, although unfortunately OpenGL doesn't seem to specify a behaviour here).

 

When messing around with multithreaded OpenGL and texture uploading (with rendering and texture uploading done in different threads), I observed some interesting behaviors:

The glTexImage2D() and glCompressedTexImage2D() calls completed immediately;

within the main render thread, it would often take up to several seconds before the texture image actually appeared (until then, it showed the prior contents and/or garbage).

 

Typically, having the uploader threads call glFinish() would stall them temporarily, but resulted in all of the textures being correct by the next frame.

 

Judging by this, I suspect the driver was lazily updating the textures in this case.

 

 

I am less certain what happens in the single-threaded case, but I have made another general observation:

if you do something and try to make immediate use of the result, there will be a stall.

 

This seems to happen both with VBOs and with things like occlusion queries, as if there were a delay between when the upload call is made and when the result can be bound or used.

Typically, in my case this means "batching" things and doing them in a partly interleaved order (say, issuing all the draws for occlusion queries before fetching any of the results, or uploading all the updated VBOs before the rest of the drawing pass begins, ...), as in the sketch below.
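
For the occlusion query case, the ordering looks something like this (a sketch; NUM_OBJECTS, drawBoundingBox(), and visible[] are made up):

GLuint queries[NUM_OBJECTS];
glGenQueries(NUM_OBJECTS, queries);

// Issue every query first...
for (int i = 0; i < NUM_OBJECTS; i++) {
    glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
    drawBoundingBox(i);                 // hypothetical helper
    glEndQuery(GL_SAMPLES_PASSED);
}

// ...do other rendering work here so the GPU can catch up...

// ...then fetch all the results in a second pass.
for (int i = 0; i < NUM_OBJECTS; i++) {
    GLuint samples = 0;
    glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &samples);
    visible[i] = (samples > 0);
}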

 

Granted, I am not sure how much of this is driver/hardware specific; in this case, I am using a GeForce GTX 460...

Edited by cr88192


> If you need to update the texture frequently or if your updates always cover the whole body of the texture, you should double-buffer the texture, or possibly triple-buffer it.

 

Very interesting, I didn't know textures could be double-buffered. Do you have any links on doing this in OpenGL?


> If you need to update the texture frequently or if your updates always cover the whole body of the texture, you should double-buffer the texture, or possibly triple-buffer it.

 

> Very interesting, I didn't know textures could be double-buffered. Do you have any links on doing this in OpenGL?

 

There's no specific OpenGL technique for this. Instead, you allocate two textures (call them 0 and 1), then update texture 0 and draw with texture 1, and vice versa, on alternate frames.
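
In code it's just a ping-pong over two texture objects (a sketch; TEX_W/TEX_H, pixels, and drawTerrain() are illustrative, as in the earlier sketches). Note the built-in one-frame latency: what you upload this frame is drawn the next.

GLuint tex[2];        // two identical textures, created with glTexImage2D
int write = 0;        // index of the texture being updated this frame

void frame(void)
{
    int read = 1 - write;

    // Update the texture the GPU is NOT sampling from this frame.
    glBindTexture(GL_TEXTURE_2D, tex[write]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, TEX_W, TEX_H,
                    GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);

    // Draw with the other one.
    glBindTexture(GL_TEXTURE_2D, tex[read]);
    drawTerrain();    // hypothetical draw call

    write = read;     // swap roles for the next frame
}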


> There's no specific OpenGL technique for this. Instead, you allocate two textures (call them 0 and 1), then update texture 0 and draw with texture 1, and vice versa, on alternate frames.

 

But I'm assuming you can't have both textures bound for this to work, right? If you updated texture 0, and the shaders knew to only sample from texture 1 this frame, would the GPU really know that texture 0 wasn't accessed? Or would you still end up with a synchronous flush from updating texture 0?


> There's no specific OpenGL technique for this. Instead, you allocate two textures (call them 0 and 1), then update texture 0 and draw with texture 1, and vice versa, on alternate frames.

 

> But I'm assuming you can't have both textures bound for this to work, right? If you updated texture 0, and the shaders knew to only sample from texture 1 this frame, would the GPU really know that texture 0 wasn't accessed? Or would you still end up with a synchronous flush from updating texture 0?

 

AFAICT, "the magic" happens when you try to bind and draw using the texture or similar.

So, if the texture is updated but never used, there won't really be a stall (and the driver will apply the update at some later point).

Edited by cr88192
