My current demos' render loops are typically something like this:
begin scene
set up states
set shader
render model (sets textures, grabs the vertex buffer of the mesh, then calls DIP)
set another shader
render another model
set yet another shader
render GUI tree
end scene
This lets me do pretty much what I want in each demo, but I have to worry about shaders/states far more than I'd like, and things like shadows/reflections need to be handled separately for each object drawn. One reason for this is that my vertex structures are hard-coded. So...
#1 I'd like a little guidance on how the shaders can dictate the vertex format, as opposed to the vertex format dictating which shaders can be used (and requiring the user to set them up each frame).
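One way to flip that dependency is to have each shader publish the vertex attributes it expects, and let the engine derive the vertex layout (offsets and stride) from that list instead of hard-coding a vertex struct per format. A minimal sketch of that idea, with all names and attribute sizes being illustrative assumptions rather than any real API:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical attribute description; semantics and byte sizes are
// illustrative, not tied to any particular graphics API.
struct VertexAttribute {
    std::string semantic;   // e.g. "POSITION", "NORMAL", "TEXCOORD0"
    std::size_t byteSize;   // size in bytes of one element of this attribute
};

// A shader publishes the inputs it requires, in order.
struct ShaderInputs {
    std::vector<VertexAttribute> attributes;
};

// The layout the engine derives: one offset per attribute, plus the stride.
struct VertexLayout {
    struct Element { std::string semantic; std::size_t offset; };
    std::vector<Element> elements;
    std::size_t stride = 0;
};

// Walk the shader's declared inputs and accumulate offsets; the shader
// now dictates the vertex format rather than the other way around.
VertexLayout BuildLayoutFromShader(const ShaderInputs& shader) {
    VertexLayout layout;
    for (const VertexAttribute& attr : shader.attributes) {
        layout.elements.push_back({attr.semantic, layout.stride});
        layout.stride += attr.byteSize;
    }
    return layout;
}
```

In a real engine the attribute list could come from shader reflection or from metadata alongside the shader file; either way, binding a shader is then enough to know which vertex declaration to build or fetch from a cache.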
I'm thinking of composing a queue of render calls, perhaps built from a scene graph, which would allow sorting of calls by shader, material, texture, etc., and also building instanced DIP calls where possible. A command would contain a mesh, shader, material params, and a world transform, plus flags for whether the object is a shadow caster, emits light, is alpha-blended, etc.
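The queue idea above is often implemented by packing the expensive-to-change state into a single integer sort key, with the most expensive state (shader) in the top bits. A minimal sketch, where the IDs are hypothetical handles into resource pools rather than real engine types:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative per-object flags from the post.
enum RenderFlags : std::uint32_t {
    kCastsShadow  = 1u << 0,
    kEmitsLight   = 1u << 1,
    kAlphaBlended = 1u << 2,
};

// One queued draw call; IDs are assumed handles into resource pools.
struct RenderCommand {
    std::uint16_t shaderId;
    std::uint16_t materialId;
    std::uint16_t textureId;
    std::uint16_t meshId;
    std::uint32_t flags;
    // world transform and material params would live here too
};

// Pack state into one sortable key: shader in the top bits so it changes
// least often, then material, then texture, then mesh. Identical meshes
// end up adjacent, which makes instancing candidates easy to spot.
std::uint64_t SortKey(const RenderCommand& c) {
    return (std::uint64_t(c.shaderId)   << 48) |
           (std::uint64_t(c.materialId) << 32) |
           (std::uint64_t(c.textureId)  << 16) |
            std::uint64_t(c.meshId);
}

void SortQueue(std::vector<RenderCommand>& queue) {
    std::sort(queue.begin(), queue.end(),
              [](const RenderCommand& a, const RenderCommand& b) {
                  return SortKey(a) < SortKey(b);
              });
}
```

Alpha-blended objects usually need to bypass this and sort back-to-front by depth instead, so one common design keeps them in a separate queue keyed on distance.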
#2 Does anyone have any advice for implementing such a system, or see any pitfalls I might encounter?
I can't think of anything more to ask atm, but will return when I do... in the meantime, thanks again.
[edited by Phantom: removing code tags to stop long text segments breaking forum layout]
[Edited by - phantom on December 27, 2010 1:36:39 PM]