Camera and Shader Constant Buffers Untied

How can one make a Camera class that isn't tied to shaders? Obviously cameras should be modular and work with any shader (at least any that isn't effect-specific), but it's the shader that specifies where and how the projection and view matrices are laid out. In my case the shaders I use have a changeOnResize buffer with the projection and a changeOnce buffer with the view, bound to slots b1 and b2 in the shader I'm currently working on:
[source lang="cpp"]
cbuffer cbChangeOnce : register(b1)
{
    // camera data
    matrix mView;
};

cbuffer cbChangeOnResize : register(b2)
{
    // screen resolution
    matrix mProjection;
};
[/source]
I'm assuming it's common/good practice to make the camera responsible for the view, the projection, and the viewport, but how much responsibility should a camera class have?
I was thinking of passing two ID3D11Buffer pointers to the constructor so the camera can upload its data to the shader by itself when needed, but then I'd need a rule that shaders always expose two buffers for the camera. If I'm making a rule anyway, why not have the camera itself create the two buffers, with the rule now being that b1 and b2 are reserved for them? (See the sketch below.) Or I could give the camera just getter methods for the view and projection, but then I don't know who would take over that responsibility, and how. (In the end I can't see any way around having a "must-follow guide" for shaders that want to work with my renderer.)
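For illustration, here's a rough sketch of that second option; the Camera class shape, its member names, and the use of UpdateSubresource are all hypothetical, just to show the idea of reserving b1/b2:

[source lang="cpp"]
#include <d3d11.h>
#include <DirectXMath.h>

class Camera
{
public:
    explicit Camera(ID3D11Device* device)
    {
        DirectX::XMStoreFloat4x4(&mView, DirectX::XMMatrixIdentity());
        DirectX::XMStoreFloat4x4(&mProjection, DirectX::XMMatrixIdentity());

        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth = sizeof(DirectX::XMFLOAT4X4); // 64 bytes, 16-byte multiple
        desc.Usage     = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
        device->CreateBuffer(&desc, nullptr, &mViewCB);       // backs b1
        device->CreateBuffer(&desc, nullptr, &mProjectionCB); // backs b2
    }

    ~Camera()
    {
        if (mViewCB)       mViewCB->Release();
        if (mProjectionCB) mProjectionCB->Release();
    }

    // Upload both matrices and (re)bind the two reserved slots.
    // The matrices are assumed to be pre-transposed for HLSL's
    // default column-major packing.
    void Apply(ID3D11DeviceContext* context)
    {
        context->UpdateSubresource(mViewCB, 0, nullptr, &mView, 0, 0);
        context->UpdateSubresource(mProjectionCB, 0, nullptr, &mProjection, 0, 0);
        ID3D11Buffer* buffers[2] = { mViewCB, mProjectionCB };
        context->VSSetConstantBuffers(1, 2, buffers); // slots b1 and b2
    }

private:
    ID3D11Buffer* mViewCB = nullptr;
    ID3D11Buffer* mProjectionCB = nullptr;
    DirectX::XMFLOAT4X4 mView;
    DirectX::XMFLOAT4X4 mProjection;
};
[/source]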

(I think the same question applies to lights too; lights aren't tied to materials, but it's the shaders that define those.)

How do frameworks approach this? Like NVIDIA FX Composer or even Unity. Is it solved with rules in the documentation?
I don't think a camera should have to care about how your data is laid out in constant buffers. In any renderer I've ever worked with or written, the camera is just responsible for creating the view + projection matrices every frame. You can let some other part of your engine/renderer deal with constant buffers, and then have some mechanism for mapping the camera matrices to the variables in your constant buffer. A complicated way might be to reflect the constant buffers in a shader, and then apply the camera matrices if they match a specific name or annotation. A simpler method would be to just put both camera matrices in one constant buffer that the renderer knows the layout of, and then share that same constant buffer layout in all shaders that need to access them.
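For example, here's a minimal sketch of that simpler approach in plain D3D11 and DirectXMath; the struct name PerCameraCB, the b0 slot, and the helper function are illustrative assumptions, not from any particular framework:

[source lang="cpp"]
#include <d3d11.h>
#include <DirectXMath.h>

// Must match the HLSL declaration that every shader shares, e.g.:
// cbuffer cbPerCamera : register(b0) { matrix mView; matrix mProjection; };
struct PerCameraCB
{
    DirectX::XMFLOAT4X4 view;
    DirectX::XMFLOAT4X4 projection;
};

// The renderer owns the buffer and knows the layout; the camera
// only supplies the two matrices each frame.
void UpdateCameraBuffer(ID3D11DeviceContext* context,
                        ID3D11Buffer* cameraCB,
                        DirectX::FXMMATRIX view,
                        DirectX::CXMMATRIX projection)
{
    PerCameraCB data;
    // DirectXMath is row-major; HLSL defaults to column-major
    // packing, hence the transposes.
    DirectX::XMStoreFloat4x4(&data.view, DirectX::XMMatrixTranspose(view));
    DirectX::XMStoreFloat4x4(&data.projection, DirectX::XMMatrixTranspose(projection));
    context->UpdateSubresource(cameraCB, 0, nullptr, &data, 0, 0);

    ID3D11Buffer* buffers[1] = { cameraCB };
    context->VSSetConstantBuffers(0, 1, buffers); // the agreed-upon b0
}
[/source]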
If a cbuffer is "shared" between shaders (i.e. exactly the same cbuffer is declared in each shader), do I need to re-set the ID3D11Buffer that backs this cbuffer after setting a new shader?

There's a VSSetConstantBuffers and a PSSetConstantBuffers, so I believe that between the VS and the PS you do need to set the cbuffer again, even if it's the same shared one (please correct me if I'm wrong... I don't quite get why; is different memory being accessed?).

But what about a new VS? If I call VSSetShader, do I have to call VSSetConstantBuffers again for an identical cbuffer that was already set for the previous VS?
When you bind a constant buffer to a shader stage, it is available to any shader using that stage. So if you bind the constant buffer to the vertex shader stage, any vertex shader you set can use that constant buffer. It doesn't matter if you set a new vertex shader. If you also want to use that constant buffer in the pixel shader stage, you need to bind it separately to that stage. You have to bind it separately for each stage because it's possible that the vertex shader and the pixel shader use a different set of constant buffers, even within the same draw call. It's the same way for textures, samplers, etc.
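To illustrate (the variable and function names here are assumed; the calls themselves are plain D3D11):

[source lang="cpp"]
#include <d3d11.h>

// Draw with two different vertex shaders while the same camera cbuffer
// stays bound at slot b0 of both stages.
void DrawWithSharedCameraCB(ID3D11DeviceContext* context,
                            ID3D11Buffer* cameraCB,
                            ID3D11VertexShader* vsA,
                            ID3D11VertexShader* vsB)
{
    ID3D11Buffer* cbs[1] = { cameraCB };

    // One bind per stage; each stage keeps its own set of bindings.
    context->VSSetConstantBuffers(0, 1, cbs); // any VS can now read b0
    context->PSSetConstantBuffers(0, 1, cbs); // separate bind for the PS stage

    context->VSSetShader(vsA, nullptr, 0);
    // ...draw something...

    context->VSSetShader(vsB, nullptr, 0); // cameraCB is still bound at b0;
    // ...draw something else, no second VSSetConstantBuffers call needed...
}
[/source]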
