RenderState also doesn't sound like something that should exist in a component, although I'm not sure what it is.
Oops! That one isn't used anymore. I had it to differentiate Entities that had been submitted for rendering (it held two booleans, uploaded and visible), but I take care of that differently now.
It seems I should have explained what those components do (or rather, what kind of data they hold), so I'll try:
The same could probably be applied to the Orientation/WorldTransform/MVPTransform.
Orientation is part of WorldTransform, and WorldTransform is part of MVPTransform. "View" and "Projection" actually shouldn't exist in a component anywhere, other than one attached to a camera entity.
This is a consequence of having no idea what I'm d... errr, my "design decisions"
Orientation holds three floats: yaw, pitch and roll. Those get composed into a rotation matrix and applied to the WorldTransform each frame for Tags.Dynamic objects, and only once for Tags.Static ones (or at least they did until I looked at it a minute ago and noticed I forgot to set the 'dirty' flag on the transforms, so that's a new bug fixed, thanks for that!).
MVPTransform just holds the result of the model * view * projection matrix multiplication for each entity that will be drawn. It's what gets passed to the renderer and uploaded as a uniform (a cbuffer in D3D parlance, I believe).
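To make that flow concrete, here's roughly what the transform update boils down to. Just a sketch: I'm using JOML's Matrix4f as a stand-in for the engine's actual math types, and the component shapes are illustrative, not the real classes:

import org.joml.Matrix4f;

// Illustrative component shapes; the real ones are plain data holders.
class Orientation { float yaw, pitch, roll; }
class WorldTransform { Matrix4f matrix = new Matrix4f(); boolean dirty = true; }
class MVPTransform { Matrix4f matrix = new Matrix4f(); }

class TransformUpdateSystem
{
    // Runs each frame for Tags.Dynamic entities, only once for Tags.Static ones.
    void update ( Orientation ori, WorldTransform world, MVPTransform mvp,
                  Matrix4f view, Matrix4f projection )
    {
        if ( world.dirty )
        {
            // Compose rotation from yaw/pitch/roll and apply it to the world
            // matrix (translation/scale handling elided for the sketch).
            world.matrix.rotationYXZ( ori.yaw, ori.pitch, ori.roll );
            world.dirty = false;
        }
        // MVP = projection * view * model, ready to upload as a uniform.
        mvp.matrix.set( projection ).mul( view ).mul( world.matrix );
    }
}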
I still don't have any sort of shading (literally, the only two materials there are Material.TEXTURED_NO_SHADING and Material.COLORED_NO_SHADING). It's all in its infancy yet. The "Material" thingy will get revamped eventually, with material parameters stored in .yaml files and assembled into a map of materials at startup.
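For reference, the whole "material system" today boils down to something like this:

// Every material that exists right now; each one just maps to a shader.
public enum Material
{
    TEXTURED_NO_SHADING,
    COLORED_NO_SHADING
}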
Now, here is the TL;DR part in which I'll explain why I didn't go with the "Model/Spatial/etc" component approach. Hell, I'll even put titles on it for teh lulz (I'm starting to think I miss writing in my journal).
Rendering Steps
The renderer has a two-step process to draw things. I'll grab the most annoying case, a textured mesh with indexed drawing:
First step: Upload the necessary data.
- id of the Entity (this will identify the instance)
- Geometry (vertices, tex coords, normals)
- GeometryIndices (self-explanatory)
- Texture (texture data, width, height, mipmaps)
- Material (maps to a shader)
- MVP matrix (uploaded to the shader)
The renderer uses a "GLResourceManager" to compose a VAO out of the data if it hasn't been uploaded before (otherwise it just grabs a VAO from the Geometry-VAO map) and assembles a "GLRenderable" mapped to that entity id. If there isn't a RenderBatch for that material, one gets created (not queued for rendering just yet). All we have at this point is a GLRenderable and the data uploaded to the GPU.
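The caching part of GLResourceManager is essentially just a map lookup. A rough sketch of the idea (names made up, buffer setup elided, LWJGL-style GL calls assumed):

import java.util.HashMap;
import java.util.Map;
import static org.lwjgl.opengl.GL30.*;

// Sketch of the Geometry-VAO map; the real GLResourceManager differs,
// this only shows the caching idea.
class GLResourceManager
{
    private final Map<Geometry, Integer> vaoCache = new HashMap<>();

    int getFor ( Geometry geo, GeometryIndices ind )
    {
        Integer vao = vaoCache.get( geo );
        // Uploaded before? Just reuse the existing VAO.
        if ( vao != null ) return vao;

        int newVao = glGenVertexArrays();
        glBindVertexArray( newVao );
        // ... create VBOs, upload vertices/tex coords/normals plus the
        // index buffer, and set up the vertex attribute pointers ...
        glBindVertexArray( 0 );

        vaoCache.put( geo, newVao );
        return newVao;
    }
}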
Second step: Add the Entity to the render queue; a rough culling system does the job:
- Out of all entities that are in the world, the cull system assembles a visible set.
- For every entity in the visible set, the cull system calls "renderer.addToQueue(entityId)".
With that id, the renderer grabs a GLRenderable from the map and adds it to the RenderBatch for that material. The renderer adds the RenderBatch to the queue for rendering if it isn't already there.
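In code the whole second step is tiny. A sketch (the visibility test stands in for whatever rough culling is in place):

// Sketch of the culling pass that feeds the render queue.
class CullSystem
{
    void cullAndSubmit ( Iterable<Integer> worldEntities, Renderer renderer )
    {
        for ( int entityId : worldEntities )
        {
            // Skip anything outside the visible set.
            if ( !isVisible( entityId ) ) continue;
            // Visible entities get handed to the renderer by id.
            renderer.addToQueue( entityId );
        }
    }

    boolean isVisible ( int entityId ) { /* rough culling test, elided */ return true; }
}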
For each spatial rendering type (Textured, Colored, IndexedTextured, IndexedColored) there is a SpatialSubmitterSystem that grabs the proper components and sends them to the renderer.
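Each of those submitters ends up trivial, since it matches exactly one component combination. Roughly (sketch, not tied to any particular ECS base class):

// Sketch of the indexed+textured submitter; each variant maps 1:1 to an
// upload overload on the renderer, so there's no branching anywhere.
class IndexedTexturedSubmitterSystem
{
    void process ( int entityId, Geometry geo, GeometryIndices ind,
                   Texture tex, Material mat, CmpMat4f mvp, Renderer renderer )
    {
        renderer.upload( entityId, geo, ind, tex, mat, mvp );
    }
}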
Drawbacks
With a single "Model" component, I'd have to put a bunch of logic in there to say "if model.usesTexture() upload as texture", "if model.hasIndices() upload as indexed" and so on. Given that it's all pretty much barebones, it will probably get much more complex once I start adding either more ways of rendering (say, instancing) or more complex state settings in the renderer.
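In other words, this sketch is the kind of thing I'm trying to avoid, and it only grows with each new variant:

// The branching a single "Model" component would force on the renderer (sketch).
void upload ( int id, Model model )
{
    if ( model.hasIndices() )
    {
        if ( model.usesTexture() ) { /* indexed + textured path */ }
        else { /* indexed + colored path */ }
    }
    else
    {
        if ( model.usesTexture() ) { /* textured path */ }
        else { /* colored path */ }
    }
    // ... and it gets worse with instancing, extra render states, etc.
}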
Maybe you can help me with this? If I found a clean way to add renderables to the renderer, it wouldn't be complex to fold all the components into a single "Spatial" component. I've seen this done in some open source projects: they just put a 'switch' in the renderer with a 'case' for each variant, around 80 to 100 lines of those and other 'ifs', which is simply fugly IMO.
With this method the renderer is quite a bit simpler; here is the upload method for a textured indexed model:
public void upload ( int id, Geometry geo, GeometryIndices ind, Texture tex, Material mat, CmpMat4f mvp )
{
    // Compose renderable with resources.
    GLRenderable renderable = new GLRenderable( glResources.getFor( geo, ind ), mat, mvp, glResources.getFor( tex ) );
    // Set render call with texture binding logic and indexed draw call.
    renderable.renderCall = (r) ->
    {
        tracker.active( texUnits[0] );
        tracker.bind( r.texture );
        RenderCalls.indices.accept( r );
    };
    // Associate entity id with the renderable.
    renderMap.put( id, renderable );
    // Create batch if material hasn't been passed before.
    createBatchIfAbsent( mat );
}

// And then to add them to the queue.
public void addToQueue ( int id )
{
    // Get renderable associated with the entity.
    GLRenderable glr = renderMap.get( id );
    // Get batch for the renderable's material.
    RenderBatch batch = materialBatches[glr.material.ordinal()];
    // An empty batch isn't in the render batch queue yet, so add it.
    if ( batch.renderQueue.isEmpty() ) renderBatches.add( batch );
    // Add renderable to the batch's queue.
    batch.renderQueue.add( glr );
}
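And at draw time the queue just gets drained batch by batch, roughly like this (sketch; I'm assuming renderCall is a Consumer<GLRenderable> like the lambda above suggests, and batch.material plus bindMaterial are made-up names):

// Sketch of how the queued batches get drawn and cleared each frame.
public void render ()
{
    for ( RenderBatch batch : renderBatches )
    {
        // Bind the batch's shader/state once per material.
        bindMaterial( batch.material );
        // Issue each renderable's stored render call.
        for ( GLRenderable glr : batch.renderQueue )
            glr.renderCall.accept( glr );
        // Clear the batch for the next frame.
        batch.renderQueue.clear();
    }
    renderBatches.clear();
}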
The bad side is what you saw: I need plenty of components to differentiate each case...