I've done a few toy landscape render programs in the past, but until now I always used a "traditional" OOP approach of making models, landscape segments, UI elements etc. all implement an IDrawable interface, something like this -
class Model : public IDrawable
{
public:
virtual void render(Device& device, Camera& camera);
};
where the render function uses the D3D11 device to draw itself directly.
After reading the post above I entirely understand how it might be better to instead have drawable objects produce a Mesh, which can then be drawn separately by the rendering system. It simplifies the code, separates concerns much better, and potentially makes the code much more reusable without lots of unnecessary abstraction.
Instead of drawing things directly, the scene node items will put the data for a rendering operation onto a queue... including mesh, camera and so on.
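To make that concrete, here's a rough sketch of the kind of queue I have in mind (all the names and handle types are placeholders, not real D3D11 types):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Placeholder handle types standing in for real D3D11 resources.
using MeshHandle   = std::uint32_t;
using ShaderHandle = std::uint32_t;

// One queued draw: everything the renderer needs to issue the call later.
struct RenderOperation
{
    MeshHandle   mesh   = 0;
    ShaderHandle shader = 0;
    float        world[16] = {}; // world transform; camera data would ride along too
};

// The scene graph pushes operations here; the renderer drains it each frame.
class RenderQueue
{
public:
    void submit(const RenderOperation& op) { ops_.push_back(op); }
    std::size_t size() const { return ops_.size(); }
    void clear() { ops_.clear(); }

private:
    std::vector<RenderOperation> ops_;
};
```

The renderer then just iterates the queue and issues draws, without knowing where the operations came from.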
Now the problem I have is this -
Inside my Model class the constructor would previously have obtained references to the shaders and textures it used, and when the render function is called, it knows which shaders it's using, knows what parameters those shaders require, and can fill in a constant buffer and set it directly. Nothing outside of the Model class needs any knowledge of the shaders it uses, or what data they require. Which is a good thing.
Except that it doesn't fit with the new way of drawing things.
I'm thinking I could create a structure for the parameters of each "class" of shader and pass that into the render function as opaque data to hand to the shader, but that seems ugly. I could create a constant buffer for each class of shader and get my geometry classes to fill it in when they create the render operations, but that feels ugly too.
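The opaque-data version I was picturing is something like this (everything here is made up for illustration; the key point is the renderer only sees bytes plus a size, never the layout):

```cpp
#include <array>
#include <cstddef>
#include <cstring>
#include <type_traits>

// Hypothetical opaque blob of shader constants. The renderer only ever sees
// bytes + size; only the object that filled it in knows the actual layout.
struct ShaderParamBlock
{
    static constexpr std::size_t kMaxBytes = 256; // arbitrary cbuffer budget

    std::array<unsigned char, kMaxBytes> bytes{};
    std::size_t size = 0;

    template <typename T>
    void set(const T& params)
    {
        static_assert(std::is_trivially_copyable<T>::value, "cbuffer data must be POD");
        static_assert(sizeof(T) <= kMaxBytes, "params too large for the block");
        std::memcpy(bytes.data(), &params, sizeof(T));
        size = sizeof(T);
    }
};

// Example per-shader layout that only the Model needs to know about.
struct TerrainShaderParams
{
    float tint[4];
    float heightScale;
    float pad[3]; // mimic 16-byte cbuffer alignment
};
```

The Model fills in a TerrainShaderParams and calls set(); the renderer would just memcpy the bytes into whatever constant buffer the shader handle maps to.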
How do people suggest that the code which wants to draw a model passes the shader parameters (and the material in general, I guess) to the renderer in an elegant way? Although I want my code to be for D3D11, I'd like to keep an eye on doing things in a way that would make it easy to change in future, so some form of abstraction is needed here, even if it's only passing a handle to a constant buffer or something...
Does this even make any sense?
edit: To explain further, what makes this hard is that there is no real common structure. Each shader requires different numbers of vectors, values, textures etc., and the IRender interface shouldn't really have to know anything about the shader... I could pass a big map of name/value pairs for them all, but doing dynamic memory allocation for a map inside a renderer doesn't seem like a great idea.
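For what it's worth, the allocation-free version of the map idea I was picturing looks something like this (a fixed-capacity table keyed on a name hash; all the names and sizes are invented):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Map-like name/value lookup with a fixed capacity, so no heap allocation
// happens while building render operations. Sizes here are arbitrary.
struct ParamTable
{
    static constexpr std::size_t kMaxParams = 8;

    struct Entry
    {
        std::uint32_t nameHash = 0;
        float         value[4] = {};
    };

    std::array<Entry, kMaxParams> entries{};
    std::size_t count = 0;

    bool set(std::uint32_t nameHash, const float (&v)[4])
    {
        if (count == kMaxParams)
            return false; // table full; a real version might assert instead
        entries[count].nameHash = nameHash;
        for (int i = 0; i < 4; ++i)
            entries[count].value[i] = v[i];
        ++count;
        return true;
    }

    const float* find(std::uint32_t nameHash) const
    {
        for (std::size_t i = 0; i < count; ++i)
            if (entries[i].nameHash == nameHash)
                return entries[i].value;
        return nullptr;
    }
};
```

A linear scan over eight-ish entries is almost certainly cheaper than a heap-backed std::map per draw, but it still leaves the renderer knowing about named parameters, which is what I'd like to avoid.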