I'm using DX11, but at the DX9 feature level. I aim to build the engine around tessellation and displacement maps (I know this is better done at the DX11 feature level, but my machine sits just below the DX11 line, so I can't use those features, let alone debug them). Regardless, some (not heavy-hitting) tessellation is my aim for the LOD effect.
ANYWAY! I have a few questions about the base of my engine (I've got it functional, so all of this is for the sake of expanding it):
1.) Model class layout.
My current layout is along the lines of: (pseudo-ish code)
struct MeshPart   //buffers plus the indices of the bones relevant to this part (passed to the shader)
{
    VertexBuffer     Vertices;
    IndexBuffer      Indices;
    std::vector<int> BoneIndices;
};

struct Mesh       //split into parts in case of a large mesh
{
    std::vector<MeshPart> Parts;
};

struct Bone       //current matrix and a pointer to the parent bone (for getting its matrix for the full transform)
{
    Bone*  Parent;
    Matrix Transform;
};

struct Material   //pointers to textures and an array of parameters relevant to its intended shader
{
    Texture*           Diffuse;
    Texture*           Specular;
    Texture*           Normal;
    std::vector<float> Parameters;
};

struct ModelPart  //model data per mesh (mesh, bones, material)
{
    Mesh              MeshData;
    std::vector<Bone> Bones;
    Material          Mat;
};

struct Model      //what is publicly used
{
    std::vector<ModelPart> Parts;
};
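For context on the Bone array, here's how I picture combining a bone's matrix with its parents' by walking the parent pointers. This is a self-contained sketch, so it re-declares a minimal `Bone` and a made-up 4x4 `Matrix` type in place of the engine's real ones, and it assumes a row-vector (child * parent) multiplication convention:

```cpp
#include <cassert>

// Hypothetical minimal 4x4 matrix; stand-in for the engine's Matrix type.
struct Matrix {
    float m[4][4];
    static Matrix Identity() {
        Matrix r{};
        for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
        return r;
    }
    static Matrix Translation(float x, float y, float z) {
        Matrix r = Identity();
        r.m[3][0] = x; r.m[3][1] = y; r.m[3][2] = z; // row-vector convention
        return r;
    }
    Matrix operator*(const Matrix& b) const {
        Matrix r{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r.m[i][j] += m[i][k] * b.m[k][j];
        return r;
    }
};

struct Bone {
    Bone*  Parent;   // null for the root bone
    Matrix Local;    // this bone's matrix relative to its parent
};

// Walk the parent chain: apply the bone's own matrix, then each ancestor's.
Matrix WorldTransform(const Bone& bone) {
    Matrix world = bone.Local;
    for (const Bone* p = bone.Parent; p != nullptr; p = p->Parent)
        world = world * p->Local;   // row-vector order: child * parent
    return world;
}
```

One thing I want to get right: caching these per frame so each bone's world matrix is computed once, instead of re-walking the whole chain for every MeshPart that references it.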
What else should I be taking into account? What may I have overlooked or got wrong?
2.) I'd like to write my own converter to change models into my own format. Does anyone know which input formats are worth taking into account, or is there anything I should be aware of while doing this?
3.) I have a fair plan for how I'm going to lay out texture channels: specular alpha acting as specular intensity; in the second texture, spherical normals in XY, displacement in Z, and a shader parameter (e.g. glow) in A. Is it worth making a program to merge textures into these formats, or should I just do it as they load?
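Either way, the merge itself is just channel interleaving. A sketch of what I mean, packing four single-channel 8-bit maps into one RGBA texel array following the layout above (function and parameter names are illustrative):

```cpp
#include <cstdint>
#include <vector>

// Pack four single-channel 8-bit maps into interleaved RGBA:
// R,G = spherical normal XY, B = displacement, A = shader parameter (e.g. glow).
std::vector<uint8_t> PackRGBA(const std::vector<uint8_t>& normalX,
                              const std::vector<uint8_t>& normalY,
                              const std::vector<uint8_t>& displacement,
                              const std::vector<uint8_t>& parameter) {
    const std::size_t texels = normalX.size();  // all inputs assumed equal size
    std::vector<uint8_t> out(texels * 4);
    for (std::size_t i = 0; i < texels; ++i) {
        out[i * 4 + 0] = normalX[i];
        out[i * 4 + 1] = normalY[i];
        out[i * 4 + 2] = displacement[i];
        out[i * 4 + 3] = parameter[i];
    }
    return out;
}
```

My understanding of the trade-off: merging offline in a tool means loads stay fast and only one texture ships per material, while merging at load keeps the source maps individually editable without re-running a tool every time an artist tweaks one.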
Any and all help is much appreciated,
Thanks in advance,
Bombshell