Ubik

OpenGL Vertices and indices in the same buffer?


I'm weighing a single Buffer concept against a separate concept for each of VertexBuffer, IndexBuffer and so on, mainly from the perspective of type safety (i.e. one type "Buffer" vs. types "VertexBuffer", "IndexBuffer" and so on). Both the OpenGL and Direct3D documentation say it is possible to mix different kinds of data in the same buffer, but also that it might come with a performance penalty.

OpenGL's ARB_vertex_buffer_object extension text says:
 

Note that it is expected that implementations may have different
memory type requirements for efficient storage of indices and
vertices.  For example, some systems may prefer indices in AGP
memory and vertices in video memory, or vice versa; or, on
systems where DMA of index data is not supported, index data must
be stored in (cacheable) system memory for acceptable
performance.  As a result, applications are strongly urged to
put their models' vertex and index data in separate buffers, to
assist drivers in choosing the most efficient locations.

But that text is very old, apparently from 2003, so it might be completely outdated. I then looked into what the D3D documentation says. Here's what the Remarks section of D3D11_BIND_FLAG says:
 

In general, binding flags can be combined using a logical OR (except the constant-buffer flag); however, you should use a single flag to allow the device to optimize the resource usage.

D3D11 is from 2008, so its documentation may not be very relevant in this regard anymore either. I don't know what search terms would turn this up, and I don't really have a good chance of testing on a multitude of platforms myself, so I'm asking you now.

I guess I could ask this in two ways:

1) Is it (more often than not) a bad idea to make heterogeneous buffers (I have no idea if there is an official term for this) today?
2) Is it (more often than not) a good idea to make heterogeneous buffers today?

For more specific context, I'm personally interested in how things are with current hardware (and drivers) in the PC world, not mobile or console, but the not-so-recent HW interests me too. (Though I have access to one not exactly new GPU myself whenever I boot up my own 'puter, to personally test things out...) My API of choice is OpenGL. But let's not restrict this to my circumstances; it would be interesting to know how things are in general.

---

I believe that on current (and recent) hardware it's all just "memory", and that the days when a driver might prefer to store different buffer types differently are behind us.  This is definitely confirmed by e.g. the Mantle documentation:

 

 

In the Mantle API, specialized vertex and constant buffers are removed in favor of more general buffer support. An application can use regular buffers to store vertex data, shader constants, or any other data.

 

In any event there may be organizational or data reasons to keep the buffers separate, but it appears as though you no longer have to.

 

Under D3D12, since it has feature levels going down to 9, I would expect that at some stage you'll want to keep them separate for performance reasons too, but I haven't looked at the D3D12 documentation in great detail.

---

With GL the vertex layout is defined by the vertex attrib function calls, so you can mix different data types in the same buffer as long as you use the correct offsets/strides and pay attention to data alignment.

However, I don't know if you can bind a buffer to several "type bind points" at the same time, i.e. GL_ELEMENT_ARRAY_BUFFER and GL_ARRAY_BUFFER, or GL_ELEMENT_ARRAY_BUFFER and GL_SHADER_STORAGE_BUFFER. I never read that it was forbidden by the spec, but I don't know the spec by heart.
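
The experiment would look something like this - just a sketch with made-up names (assumes a GL loader and a bound VAO; I haven't verified the simultaneous binding against the spec):

```cpp
// Sketch: one buffer object holding vertex data first, then index data.
// Assumes a current GL context, a loader (e.g. GLEW/GLAD) and a bound VAO.
GLuint makeCombinedBuffer(const void* vertexData, GLsizeiptr vertexBytes,
                          const void* indexData, GLsizeiptr indexBytes)
{
    GLuint buf = 0;
    glGenBuffers(1, &buf);

    // Allocate once, then upload both regions into the same data store.
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferData(GL_ARRAY_BUFFER, vertexBytes + indexBytes, nullptr, GL_STATIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, vertexBytes, vertexData);
    glBufferSubData(GL_ARRAY_BUFFER, vertexBytes, indexBytes, indexData);

    // Attribute 0: tightly packed vec3 positions starting at offset 0.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);

    // The open question: the very same buffer bound as the index buffer too.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buf);
    return buf;
}

// At draw time the indices are addressed by their byte offset in the buffer:
//   glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT,
//                  (const void*)vertexBytes);
```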

 

On D3D12 it is definitely possible, since you only pass a GPU virtual address; the GPU doesn't know the data belongs to the same buffer from the OS's or the app's perspective.
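
For example (a sketch only - resource creation and upload omitted, and all names illustrative), the two views are just GPU addresses into one ID3D12Resource:

```cpp
// Sketch: one ID3D12Resource with vertices at offset 0 and indices at
// indexByteOffset. All sizes/offsets are illustrative.
#include <d3d12.h>

void recordDraw(ID3D12GraphicsCommandList* cmdList, ID3D12Resource* buffer,
                UINT vertexBytes, UINT vertexStride,
                UINT indexByteOffset, UINT indexBytes, UINT indexCount)
{
    const D3D12_GPU_VIRTUAL_ADDRESS base = buffer->GetGPUVirtualAddress();

    D3D12_VERTEX_BUFFER_VIEW vbv = {};
    vbv.BufferLocation = base;                   // vertex region starts the buffer
    vbv.SizeInBytes    = vertexBytes;
    vbv.StrideInBytes  = vertexStride;

    D3D12_INDEX_BUFFER_VIEW ibv = {};
    ibv.BufferLocation = base + indexByteOffset; // index region, same resource
    ibv.SizeInBytes    = indexBytes;
    ibv.Format         = DXGI_FORMAT_R16_UINT;

    cmdList->IASetVertexBuffers(0, 1, &vbv);
    cmdList->IASetIndexBuffer(&ibv);
    cmdList->DrawIndexedInstanced(indexCount, 1, 0, 0, 0);
}
```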

---

Back in the day, GPUs weren't actually able to read indices from GPU memory; they were always streamed from main memory. But that is ancient history. Nowadays, on at least some hardware, I've heard that uniform buffers, texture buffers, and possibly shader storage buffers get special handling. The glBufferStorage API (which you should be using, not glBufferData) requires you to pick a specific buffer target, not a mask. However, on closer reading this appears to be simply a question of the buffer's binding point, not anything to do with the actual usage (which doesn't kick in until glBindBuffer). I don't see anything offhand that says you can't bind the same buffer to more than one binding point, and from there it's just a question of offsets.

 

So yeah, go for it. I would be inclined to keep constant/uniform buffers and texture buffers separate from everything else, though.
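
For illustration, a minimal sketch of that point (GL 4.4+; totalBytes and data are placeholders):

```cpp
// Immutable storage: the target passed here is just a binding point, not a
// lifetime commitment to one kind of usage. Placeholders: totalBytes, data.
GLuint buf = 0;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferStorage(GL_ARRAY_BUFFER, totalBytes, data, 0); // 0 = GPU-only, immutable

// Later, nothing stops you from binding the same object elsewhere:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buf);
```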

Edited by Promit

---

@mhagain: Good point about the new APIs revealing much about the current state of the hardware, and about them being largely agnostic about buffer contents (as far as I can tell too).

 

@vlj: Your wording about it being forbidden by the spec brought up an OpenGL-related thought I've had in the background: the API may permit things that don't actually work well outside the common ways of use.

 

@unbird: Thanks for testing this out!

 

@Promit: I may very well be reading between the lines incorrectly, but I get the impression that mixing different kinds of data in a single buffer (in old or current GL or D3D anyway) is not something you've done or seen done, at least enough to be a somewhat common pattern?

---


I get the impression that mixing different kinds of data in a single buffer (in old or current GL or D3D anyway) is not something you've done or seen done

 

unbird is the only one I've heard of. Without a clear indication that performance is drastically enhanced, it would be a bit of a pain to fill the buffer correctly (vertex data + offset + index data, or index data + offset + vertex data) if a D3D11 DrawIndexed call were used for rendering. In spite of the advertised behavior, I was surprised when I (also) experimented and found that D3D11 didn't complain when the same buffer was bound to the input assembler as both a vertex and an index buffer! I didn't go further than that - i.e., I didn't calc the offset, fill the buffer, etc.
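
The experiment amounts to something like this (a sketch; device, context, packedData and the sizes are all illustrative):

```cpp
// One ID3D11Buffer created with both bind flags, then bound to the IA stage
// as vertex and index buffer at once. Vertices first, then indices.
#include <d3d11.h>

void testCombinedBuffer(ID3D11Device* device, ID3D11DeviceContext* context,
                        const void* packedData, UINT vertexBytes, UINT vertexStride,
                        UINT indexBytes, UINT indexCount)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = vertexBytes + indexBytes;
    desc.Usage     = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_INDEX_BUFFER;

    D3D11_SUBRESOURCE_DATA init = { packedData, 0, 0 };
    ID3D11Buffer* buf = nullptr;
    device->CreateBuffer(&desc, &init, &buf);

    UINT offset = 0;
    context->IASetVertexBuffers(0, 1, &buf, &vertexStride, &offset);
    context->IASetIndexBuffer(buf, DXGI_FORMAT_R16_UINT, vertexBytes); // index region
    context->DrawIndexed(indexCount, 0, 0);
}
```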

---

On modern OpenGL 4: you can use glBufferStorage to create a single chunk of memory that can be used for vertex, index, constant, UAV and texture buffers. The first binding point the memory is used with can serve as a hint to the driver for performance, but it's just a hint (and afaik... ignored by modern implementations).

On D3D12: See OpenGL 4.

On D3D11: you can use the same buffer for vertex & index data, but constant buffers must have their own. UAV and texture buffers have their own binding points as well.

 

On WebGL: every buffer can be used for only one specific purpose, for security reasons. The browser can assume what contents the buffer will contain and perform a lot of validation when you upload data; as long as you don't modify the buffer again, the validation doesn't need to re-run. If WebGL allowed multiple binding points, the validation would have to trigger every time the buffer was bound differently or with a different range, which could cripple performance.

---

@vlj: Your wording about it being forbidden by the spec brought up an OpenGL-related thought I've had in the background: the API may permit things that don't actually work well outside the common ways of use.

The recommended use (AZDO) encourages one buffer for everything, bound at multiple binding points with different offsets. This is not "uncommon".
Of course, if you bind the same region (or an overlapping region) as a vertex buffer and as an SSBO with write access, don't expect it to run glitch-free (because that's literally a hazard); same if you bind a buffer at an unaligned offset, etc. etc.
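
A sketch of the binding-with-offsets part (GL 4.3+; bigBuffer and the offsets are illustrative):

```cpp
// Bind sub-ranges of one big buffer at different binding points.
// UBO offsets must respect GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT.
static GLintptr alignUp(GLintptr value, GLint alignment)
{
    return (value + alignment - 1) / alignment * alignment;
}

void bindRanges(GLuint bigBuffer, GLintptr rawUboOffset, GLsizeiptr uboBytes,
                GLintptr vertexOffset, GLsizei vertexStride)
{
    GLint uboAlign = 256;  // safe fallback; query the real value
    glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &uboAlign);

    // Uniform data somewhere in the middle of the buffer, aligned as required.
    glBindBufferRange(GL_UNIFORM_BUFFER, 0, bigBuffer,
                      alignUp(rawUboOffset, uboAlign), uboBytes);

    // The same buffer simultaneously feeds vertex attribute binding 0.
    glBindVertexBuffer(0, bigBuffer, vertexOffset, vertexStride);
}
```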

---


@Promit: I may very well be reading between the lines incorrectly, but I get the impression that mixing different kinds of data in a single buffer (in old or current GL or D3D anyway) is not something you've done or seen done, at least enough to be a somewhat common pattern?

Correct, but that's not to suggest there's any issue with it. I've just never seen any of the documentation from the IHVs suggest that you should do it, so it never really crossed my mind to try it. I think it's a good idea if GPU memory fragmentation is a concern. 

 

This was a few years ago: https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/gdc12/Efficient_Buffer_Management_McDonald.pdf

---

It seems kind of curious that using big combined buffers apparently wasn't really a thing in the pre-AZDO world, considering they could maybe have been used to minimize state changes when editing buffer contents, if the "AZDO approach" is to use them as larger memory pools. How come it wasn't a thing earlier? I guess there just isn't that much benefit (with the older GL API at least), when combining buffers of a single usage type already creates large enough blocks of memory, and it keeps things conceptually easier to grasp. Oh well, I'm not an advanced enough graphics programmer to make any strong conclusions.

 

Anyway, I think I'll choose to have just a general Buffer type instead of artificially separated VertexBuffer, UniformBuffer et cetera (the separated approach actually still entices me - I'm a little bent on having strict compile-time sanity and validity checks), though mostly from the viewpoint of there being less code in general.
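
If I later miss the compile-time checks, I imagine a thin typed wrapper could be layered over the general Buffer. A purely hypothetical sketch:

```cpp
// Hypothetical sketch: one general Buffer underneath, with zero-cost
// phantom-typed wrappers restoring compile-time checks where wanted.
#include <cstddef>

struct Buffer {              // the single general buffer concept
    unsigned name = 0;       // GL buffer object name
    std::size_t size = 0;
};

enum class Usage { Vertex, Index, Uniform };

template <Usage U>
struct TypedBuffer {         // tags a Buffer with its intended usage
    Buffer buffer;
};

// APIs can then demand the right kind of buffer at compile time:
void drawIndexed(const TypedBuffer<Usage::Vertex>& vertices,
                 const TypedBuffer<Usage::Index>& indices);
```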

 

Thanks for your feedback!


