Funkymunky

OpenGL Set constant buffers every frame?

Recommended Posts

Funkymunky    1413

Do I have to call VSSetConstantBuffers/PSSetConstantBuffers every frame after calling VSSetShader/PSSetShader?  With OpenGL, you can call glBindBufferBase once to set up a binding point and then you don't have to call it again unless you want to bind a different buffer to the program.  But it seems like that's not the case with DirectX...?

Hodgman    51220
You're binding the buffer to the device, not binding it to the shader.
If different shaders require different buffers, then yes, you have to rebind them.

The GL idea where the shader program object can have values bound to it is a leftover from the days when shader variables didn't actually exist in the hardware, so setting new values required the driver to recompile the shader.
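As a minimal illustration (assuming a D3D11 `context`, an already-created `cameraBuffer`, and two compiled vertex shaders -- all hypothetical names), here's a sketch of how a slot binding survives shader changes:

#include <d3d11.h>

// Sketch: the buffer is bound to slot b0 of the device context, not to either
// shader, so switching shaders does not disturb the binding.
void DrawWithSharedCamera(ID3D11DeviceContext* context,
                          ID3D11Buffer* cameraBuffer,
                          ID3D11VertexShader* shaderA,
                          ID3D11VertexShader* shaderB,
                          UINT vertexCountA, UINT vertexCountB)
{
    ID3D11Buffer* buffers[] = { cameraBuffer };
    context->VSSetConstantBuffers(0, 1, buffers);   // bind once to slot b0

    context->VSSetShader(shaderA, nullptr, 0);
    context->Draw(vertexCountA, 0);                 // shader A reads b0

    context->VSSetShader(shaderB, nullptr, 0);
    context->Draw(vertexCountB, 0);                 // shader B reads the same b0, no rebind needed
}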

Funkymunky    1413

Okay.  It still seems excessive, since I'm not binding the buffer to the "device", I'm binding it to a program that exists within the device context.  It shouldn't change just because I've bound a different program and different buffers to that program.  The program still exists within the context, and as such any bindings to it should be maintained... but I can rebind them if that's the way DirectX works.

Hodgman    51220


Quote:
Okay. It still seems excessive, since I'm not binding the buffer to the "device", I'm binding it to a program that exists within the device context.

There are times when each abstraction is more useful.

e.g.

#1: Say you've got a prop in a level, and you need to set its position once; after that it doesn't move. It's nice for that prop to have its own "shader instance", which contains shader code but also contains this positional data. Each time you render the prop, you can just tell GL to use this program.

#2: Say you've got a camera, and you need to set its position every frame. The vertex shader of every object needs to know the camera position. It's nice that you can put this data in a buffer and bind it to a particular slot on the device. Then when rendering every object, they automatically know about the camera, without the objects themselves being modified.

 

The first design (the GL2.x design) is fairly easy to emulate in D3D if you want to. Make your own structure that contains shader program pointers and constant buffer pointers. Make a function that accepts this structure and then binds all the resources inside it.
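Very rough sketch of what that structure could look like in D3D11 (all names here are made up for illustration):

#include <d3d11.h>

// Hypothetical "shader instance" that bundles programs with their constant buffers,
// roughly mimicking the GL2.x model where the program owns its uniform values.
struct ShaderInstance
{
    ID3D11VertexShader* vs = nullptr;
    ID3D11PixelShader*  ps = nullptr;
    ID3D11Buffer*       vsConstants[4] = {};   // e.g. per-instance transform data
    ID3D11Buffer*       psConstants[4] = {};   // e.g. per-instance material data
};

// Bind everything the instance owns in one go, like glUseProgram would in GL2.x.
void BindShaderInstance(ID3D11DeviceContext* context, const ShaderInstance& inst)
{
    context->VSSetShader(inst.vs, nullptr, 0);
    context->PSSetShader(inst.ps, nullptr, 0);
    context->VSSetConstantBuffers(0, 4, inst.vsConstants);
    context->PSSetConstantBuffers(0, 4, inst.psConstants);
}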

 

The second design (the D3D design) is really hard to emulate in GL2.x -- if you've got some data that is shared between 1000 "shader instances", you have to repeatedly set that same data 1000 times, instead of setting it just once.

 

Because of this, I have to say I prefer the D3D API design, because it lets you quite efficiently write code that works like #1 or #2, whereas GL2.x is horribly inefficient when you try to use use-case #2 with it (data shared between many instances). N.B. with GL3.x, you also have the option of binding shader data in a D3D-like manner.
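For reference, a sketch of that GL3.x route (uniform buffer objects), assuming a linked `program` whose vertex shader declares a `CameraBlock` uniform block, plus a `CameraData` struct and a filled-in `cameraData` instance (all illustrative names):

// Create the buffer and attach it to binding point 0 once.
GLuint cameraUbo;
glGenBuffers(1, &cameraUbo);
glBindBuffer(GL_UNIFORM_BUFFER, cameraUbo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(CameraData), nullptr, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, cameraUbo);

// Point each program's CameraBlock at that binding point (once per program).
GLuint blockIndex = glGetUniformBlockIndex(program, "CameraBlock");
glUniformBlockBinding(program, blockIndex, 0);

// Per frame: update the buffer once; every program using binding point 0 sees it.
glBindBuffer(GL_UNIFORM_BUFFER, cameraUbo);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(CameraData), &cameraData);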

mhagain    13430

With the GL2.x design you can use glVertexAttrib calls to sort-of-kind-of emulate the D3D design; it's not perfect owing to the more limited number of attrib slots and the fact that it's a VS-only solution, but it can be done.
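Roughly, the trick is to leave the attribute array for a chosen slot disabled and use its "current" value as a tiny per-draw constant; a sketch (the slot number and tint variables are arbitrary/assumed):

// GL2.x: with no array enabled for attribute 7, every vertex in the next draw
// sees this single value, so it behaves like a small shader constant.
glDisableVertexAttribArray(7);
glVertexAttrib4f(7, tintR, tintG, tintB, tintA);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);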

 

With D3D10+ constant buffer bindings belong to the device (context), not the program, as Hodgman has pointed out.  In theory that means that there are only two times you ever need to call *SetConstantBuffers: once during startup, and once again if your display mode changes and you need to chuck the current state/bindings.  Of course, that assumes that you design your cbuffers so that the number of them you use is more limited, i.e. instead of a different cbuffer type for different object types, you design a common cbuffer type for all objects.
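In code that could look something like the following sketch (the slot convention and names are made up for illustration), run once at startup:

#include <d3d11.h>

// Hypothetical fixed slot convention shared by every shader in the project.
enum ConstantBufferSlot
{
    SLOT_PER_FRAME  = 0,   // register(b0) in HLSL
    SLOT_PER_OBJECT = 1,   // register(b1) in HLSL
};

// Called once at startup (and again after a device/mode reset). After this the
// buffers stay bound regardless of which shaders are subsequently set.
void BindCommonConstantBuffers(ID3D11DeviceContext* context,
                               ID3D11Buffer* perFrame,
                               ID3D11Buffer* perObject)
{
    ID3D11Buffer* buffers[] = { perFrame, perObject };
    context->VSSetConstantBuffers(SLOT_PER_FRAME, 2, buffers);
    context->PSSetConstantBuffers(SLOT_PER_FRAME, 2, buffers);
}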

 

D3D10+ has absolutely no concept whatsoever of binding a buffer to a "program" (the concept of a "program object" as GL defines it doesn't even exist in any version of D3D).  Constant buffers in D3D10+ work the same way as vertex and index buffers do in both D3D and OpenGL: set them once and they're available to all shaders, and changing the shader doesn't affect that.

 

This is honestly a design problem in your code rather than an API problem.  It reads to me as though the lower-level API-specific stuff has been allowed to bubble-up and influence the design of your higher-level abstraction layer.  Instead of an abstraction layer that models renderable objects, you've probably got one that resembles a wrapper around OpenGL, and now you're trying to shoehorn an API with some different thinking behind it into it.  If that's the case, then you really should go back and fix that first, otherwise other API differences are just going to continue to bite you as you proceed.

Edited by mhagain


Hi, in DirectX you do not need to bind constant buffers every frame. A good practice is to create three types of constant buffers:

A) Updated only when needed (for example, to pass the screen size to a shader)

B) Updated once per frame (for the camera and other data that changes once per frame)

C) Updated once per model (for per-model data such as materials or the world matrix)

 

Ordering your rendering properly, and avoiding switching constant buffers, textures, other buffers and shaders more often than necessary, improves performance.
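A sketch of how buckets B and C might be updated in D3D11 (the struct layouts and names are illustrative only, and both buffers are assumed to have been created with D3D11_USAGE_DYNAMIC and bound once at startup):

#include <cstring>
#include <d3d11.h>
#include <DirectXMath.h>

// Illustrative layouts for buckets B and C (keep members 16-byte aligned for HLSL).
struct PerFrameData  { DirectX::XMFLOAT4X4 viewProj; DirectX::XMFLOAT4 cameraPos; };
struct PerObjectData { DirectX::XMFLOAT4X4 world;    DirectX::XMFLOAT4 materialTint; };

// Overwrite the contents of a dynamic constant buffer.
template <typename T>
void UpdateConstantBuffer(ID3D11DeviceContext* context, ID3D11Buffer* buffer, const T& data)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, &data, sizeof(T));
        context->Unmap(buffer, 0);
    }
}

// Per frame: call UpdateConstantBuffer once for the per-frame buffer, then once
// per model for the per-object buffer before each draw.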

