OpenGL Set constant buffers every frame?


Funkymunky    1413

Do I have to call VSSetConstantBuffers/PSSetConstantBuffers every frame after calling VSSetShader/PSSetShader?  With OpenGL, you can call glBindBufferBase once to set up a binding point and then you don't have to call it again unless you want to bind a different buffer to the program.  But it seems like that's not the case with DirectX...?

Hodgman    51234
You're binding the buffer to the device, not binding it to the shader.
If different shaders require different buffers, then yes, you have to rebind them.

The GL idea where the shader program object can have values bound to it is a leftover from the days when shader variables didn't actually exist in the hardware, so setting new values required the driver to recompile the shader.
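A minimal sketch of what "bound to the device" means in D3D11 (assuming the shaders and buffer were created elsewhere, and that both shaders read their camera data from slot b0):

```cpp
#include <d3d11.h>

// The cbuffer binding lives on the context's slot, not on the shader,
// so it survives a shader change as long as both shaders use slot b0.
void DrawWithSharedCamera(ID3D11DeviceContext* ctx,
                          ID3D11Buffer* cameraCB,
                          ID3D11VertexShader* vsA,
                          ID3D11VertexShader* vsB)
{
    ctx->VSSetConstantBuffers(0, 1, &cameraCB); // bind once to VS slot b0

    ctx->VSSetShader(vsA, nullptr, 0);          // b0 still holds cameraCB
    // ... draw objects that use vsA ...

    ctx->VSSetShader(vsB, nullptr, 0);          // still bound; no rebind needed
    // ... draw objects that use vsB ...
}
```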

Funkymunky    1413

Okay.  It still seems excessive, since I'm not binding the buffer to the "device", I'm binding it to a program that exists within the device context.  It shouldn't change just because I've bound a different program and different buffers to that program.  The program still exists within the context, and as such any bindings to it should be maintained... but I can rebind them if that's the way DirectX works.

Hodgman    51234


Quote: "Okay. It still seems excessive, since I'm not binding the buffer to the 'device', I'm binding it to a program that exists within the device context."
There are times when each abstraction is more useful. For example:

#1: Say you've got a prop in a level, and you need to set its position once; after that it doesn't move. It's nice for that prop to have its own "shader instance", which contains the shader code but also this positional data. Each time you render the prop, you can just tell GL to use this program.
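A rough GL2.x sketch of that (uniform name and program handle are illustrative):

```cpp
#include <GL/glew.h>  // assumes GLEW (or another loader) supplies GL2 entry points

// Set the prop's position once; the value is stored on the program object.
void SetupProp(GLuint propProgram, float x, float y, float z)
{
    glUseProgram(propProgram);
    glUniform3f(glGetUniformLocation(propProgram, "u_Position"), x, y, z);
}

// Every frame, using the program brings the stored position with it.
void DrawProp(GLuint propProgram)
{
    glUseProgram(propProgram);
    // ... issue the draw call ...
}
```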

#2: Say you've got a camera, and you need to set its position every frame. The vertex shader of every object needs to know the camera position. It's nice that you can put this data in a buffer and bind it to a particular slot on the device. Then when rendering, every object automatically knows about the camera, without the object being modified.
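And a rough D3D11 sketch of that (struct layout and slot are illustrative; assumes cameraCB was created with D3D11_USAGE_DEFAULT and sized for CameraData):

```cpp
#include <d3d11.h>

struct CameraData { float viewProj[16]; float position[4]; };

// Update the shared buffer once per frame; everything reading slot b0
// sees the new camera data without any per-object work.
void BeginFrame(ID3D11DeviceContext* ctx, ID3D11Buffer* cameraCB,
                const CameraData& cam)
{
    ctx->UpdateSubresource(cameraCB, 0, nullptr, &cam, 0, 0);
    ctx->VSSetConstantBuffers(0, 1, &cameraCB); // could even be bound once at startup
}
```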

 

The first design (the GL2.x design) is fairly easy to emulate in D3D if you want to. Make your own structure that contains shader program pointers and constant buffer pointers. Make a function that accepts this structure and then binds all the resources inside it.
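Something like this (the slot counts and struct shape are just an assumption for the sketch):

```cpp
#include <d3d11.h>

// A GL-style "shader instance": programs bundled with their buffers.
struct ShaderInstance
{
    ID3D11VertexShader* vs;
    ID3D11PixelShader*  ps;
    ID3D11Buffer*       vsBuffers[4]; // cbuffers for VS slots b0..b3
    ID3D11Buffer*       psBuffers[4]; // cbuffers for PS slots b0..b3
};

// One call binds everything the instance needs.
void BindShaderInstance(ID3D11DeviceContext* ctx, const ShaderInstance& si)
{
    ctx->VSSetShader(si.vs, nullptr, 0);
    ctx->PSSetShader(si.ps, nullptr, 0);
    ctx->VSSetConstantBuffers(0, 4, si.vsBuffers);
    ctx->PSSetConstantBuffers(0, 4, si.psBuffers);
}
```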

 

The second design (the D3D design) is really hard to emulate in GL2.x -- if you've got some data that is shared between 1000 "shader instances", you have to repeatedly set that same data 1000 times, instead of setting it just once.
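For illustration, that repetition looks something like this in GL2.x (the uniform name is illustrative):

```cpp
#include <vector>
#include <GL/glew.h>  // assumes GLEW (or another loader) supplies GL2 entry points

// Shared per-frame data costs one upload per program, because each value
// lives on a program object rather than in a shared buffer.
void PushViewProjToAll(const std::vector<GLuint>& programs,
                       const float viewProj[16])
{
    for (GLuint prog : programs) // e.g. 1000 "shader instances"
    {
        glUseProgram(prog);
        glUniformMatrix4fv(glGetUniformLocation(prog, "u_ViewProj"),
                           1, GL_FALSE, viewProj);
    }
}
```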

 

Because of this, I have to say I prefer the D3D API design: it lets you quite efficiently write code that works like #1 or #2, whereas GL2.x is horribly inefficient when you use it for use-case #2 (data shared between many instances). N.B. with GL3.x, you also have the option of binding shader data in a D3D-like manner.
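A sketch of that GL3.x route (block name and binding point are illustrative):

```cpp
#include <GL/glew.h>  // assumes a GL3.x context and a loader for the UBO entry points

// Attach a program's uniform block to binding point 0, then bind one UBO
// there, much like a D3D constant buffer slot shared by all programs.
void SetupCameraUBO(GLuint program, GLuint cameraUBO)
{
    GLuint blockIndex = glGetUniformBlockIndex(program, "CameraBlock");
    glUniformBlockBinding(program, blockIndex, 0);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, cameraUBO); // done once, shared by all programs
}
```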

mhagain    13430

With the GL2.x design you can use glVertexAttrib calls to sort-of-kind-of emulate the D3D design; it's not perfect owing to the more limited number of attrib slots and the fact that it's a VS-only solution, but it can be done.
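A sketch of that trick (the attribute index is illustrative):

```cpp
#include <GL/glew.h>  // assumes GLEW (or another loader) supplies GL2 entry points

// With no array enabled on the slot, glVertexAttrib4f supplies a constant
// value to every vertex, loosely like a tiny VS-only constant buffer.
void SetFakeConstant(GLuint attribIndex, float x, float y, float z, float w)
{
    glDisableVertexAttribArray(attribIndex); // use the current value, not an array
    glVertexAttrib4f(attribIndex, x, y, z, w);
}
```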

 

With D3D10+, constant buffer bindings belong to the device (context), not the program, as Hodgman has pointed out. In theory that means there are only two times you ever need to call *SetConstantBuffers: once during startup, and again if your display mode changes and you need to chuck the current state/bindings. Of course, that assumes you design your cbuffers so that the number you use stays small, i.e. instead of a different cbuffer type for each object type, a common cbuffer type for all objects.
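A sketch of that "bind once" scheme (the slot layout is an assumption):

```cpp
#include <d3d11.h>

// Bind all cbuffers to fixed slots at startup; per frame you only update
// their contents, never the bindings.
void BindOnceAtStartup(ID3D11DeviceContext* ctx,
                       ID3D11Buffer* perApp,    // b0: rarely changes
                       ID3D11Buffer* perFrame,  // b1: once per frame
                       ID3D11Buffer* perObject) // b2: per draw call
{
    ID3D11Buffer* buffers[3] = { perApp, perFrame, perObject };
    ctx->VSSetConstantBuffers(0, 3, buffers);
    ctx->PSSetConstantBuffers(0, 3, buffers);
}
```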

 

D3D10+ has absolutely no concept whatsoever of binding a buffer to a "program" (the concept of a "program object" as GL defines it doesn't even exist in any version of D3D).  Constant buffers in D3D10+ work the same way as vertex and index buffers do in both D3D and OpenGL: set them once and they're available to all shaders, and changing the shader doesn't affect that.

 

This is honestly a design problem in your code rather than an API problem. It reads to me as though the lower-level API-specific stuff has been allowed to bubble up and influence the design of your higher-level abstraction layer. Instead of an abstraction layer that models renderable objects, you've probably got one that resembles a wrapper around OpenGL, and now you're trying to shoehorn an API with some different thinking behind it into that wrapper. If that's the case, you really should go back and fix it first, otherwise other API differences will just keep biting you as you proceed.



Hi, in DirectX you do not need to bind constant buffers every frame. The best practice is to create three types of constant buffer (see the sketch after this list):

A) Updated only when needed (for example, if you need to pass the screen size to the shader)

B) Updated every frame (for passing the camera and other data that changes once per frame)

C) Updated per model (for passing per-model data like materials or the world matrix)
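A minimal sketch of those three frequencies (struct layouts are illustrative; matching HLSL cbuffers would sit at b0/b1/b2):

```cpp
#include <d3d11.h>

struct RarelyChanged { float screenSize[2]; float pad[2]; };       // (A)
struct PerFrame      { float viewProj[16];  float cameraPos[4]; }; // (B)
struct PerObject     { float world[16]; };                         // (C)

// (B): uploaded once per frame.
void UpdatePerFrame(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                    const PerFrame& data)
{
    ctx->UpdateSubresource(cb, 0, nullptr, &data, 0, 0);
}

// (C): uploaded once per model/draw.
void UpdatePerObject(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                     const PerObject& data)
{
    ctx->UpdateSubresource(cb, 0, nullptr, &data, 0, 0);
}
```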

 

Ordering your rendering the right way, and avoiding switching constant buffers, textures, buffers, and shaders more often than necessary, improves performance.


