Design portable vertex declaration

6 comments, last by rocklobster 10 years, 9 months ago

Hi all,

I am a bit stuck in my project and I thought I would come by and seek internet wisdom :)

Here is where I am: I have a generic Mesh. It contains a generic VertexBuffer that has separate vectors for positions, normals, texcoords, etc. As you can see, this is platform agnostic.

So I am at the junction between the agnostic code and the platform-specific code. I am wondering if I should just hand the generic vertex buffer to the OpenGL/D3D mesh (via a common interface, i.e. virtual functions), or if I should construct some platform-agnostic vertex declaration to pass to the specific implementation along with the vertex buffer.

Anybody have ideas on this?

Thanks

Emmanuel


or if I should construct some platform-agnostic vertex declaration to pass to the specific implementation along with the vertex buffer

I use this approach. My model import tools read in config files like below (actually Lua code), which are used to describe the input-layouts/vertex-declarations/etc that will be required, in a platform-agnostic way.

--how the data is stored in the vertex buffers
StreamFormat("standardStream",
{
    {--stream 0: just positions
        { Float, 3, Position },
    },
    {--stream 1: normal/tangent/texcoords interleaved
        { Float, 3, Normal },
        { Float, 3, Tangent },
        { Float, 2, TexCoord, 0 },
    },
})

--the input structure to the vertex shader
VertexFormat("standardVertex",
{
    { "position", float3, Position },
    { "texcoord", float2, TexCoord, 0 },
    { "normal",   float3, Normal },
    { "tangent",  float3, Tangent },
})

--a statement that the above stream format can be used with the above vertex-shader structure
InputLayout( "standardStream", "standardVertex" )

When importing a particular mesh, I look at which shader is assigned to its material, which tells me which vertex-shader input structure will be required. From there I can generate a list of compatible stream formats, and then pick the best one depending on which attributes the artist has exported on that mesh.
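
To give a rough idea (the type names here are made up for illustration, not lifted from my actual engine), the runtime side of such an agnostic declaration could look something like:

#include <vector>

// Illustrative only: an engine-side, API-agnostic element description.
enum AttribType  { Float, Half, UByte };                  // component type
enum AttribUsage { Position, Normal, Tangent, TexCoord }; // semantic

struct VertexElement
{
    AttribType  type;        // component type
    int         components;  // 1..4
    AttribUsage usage;       // semantic
    int         usageIndex;  // e.g. TexCoord 0, TexCoord 1
    int         stream;      // which vertex buffer this element lives in
    int         offset;      // byte offset within that stream's vertex
};

struct VertexDeclaration
{
    std::vector<VertexElement> elements;
    // Each backend walks this and builds its native object:
    // D3D9  -> D3DVERTEXELEMENT9[] + CreateVertexDeclaration
    // D3D11 -> D3D11_INPUT_ELEMENT_DESC[] + CreateInputLayout
    // GL    -> a series of glVertexAttribPointer calls
};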

Yes, that is what I am juggling with. I am trying to have a good generic representation, but one that still allows some platform-specific optimization. For example, in the case of the index buffer, OpenGL can use uint8 indices whereas the smallest index type for D3D is uint16. I know that is specific to the index buffer, but it would be similar for bone indices, where you might want to use a smaller type.

I don't think your system can handle that? Or am I wrong?
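
(To illustrate what I mean, with made-up names: the generic mesh could record the smallest index type that fits its vertex count, and each backend would clamp it to what it actually supports.)

#include <cstddef>

enum IndexType { Index_U8, Index_U16, Index_U32 };

// Smallest type that can address every vertex in the mesh.
// Conservative bounds: keeps 0xFF / 0xFFFF free for restart/strip-cut values.
IndexType SmallestIndexType(std::size_t vertexCount)
{
    if (vertexCount <= 0xFF)   return Index_U8;
    if (vertexCount <= 0xFFFF) return Index_U16;
    return Index_U32;
}

// D3D has no 8-bit index buffers, so that backend promotes U8 to U16;
// the GL backend can keep U8.
IndexType ClampForD3D(IndexType t)
{
    return (t == Index_U8) ? Index_U16 : t;
}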

My model importing system runs at build-time, not runtime (and isn't shipped to the user), so I can do as much platform specific optimizations as I like in it ;)
The flip side is that the data files that I ship for my MacOS build will be different to the data files that I ship for my Windows build.

If some feature is available on one platform but not others, you can have built-in fallbacks.
E.g. if you specify an 11_11_10 format for normals, but it's not available on the target platform, you could fall back to 16_16_16_16...
Alternatively, if you want to make very specific optimizations by hand, you could specify a generic format, but also provide hand-written overrides for certain platforms.
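
E.g. something along these lines in the data compiler (illustrative pseudo-C++, not my actual tool code):

// Preference-ordered candidates for storing normals; the exporter takes the
// first one the target platform reports as supported.
enum AttribFormat { Fmt_11_11_10_Float, Fmt_16_16_16_16_Float, Fmt_32_32_32_Float };

AttribFormat PickNormalFormat(bool (*isSupportedOnTarget)(AttribFormat))
{
    const AttribFormat preference[] = {
        Fmt_11_11_10_Float,     // smallest, if the target has it
        Fmt_16_16_16_16_Float,  // the fallback mentioned above
        Fmt_32_32_32_Float,     // plain floats always work
    };
    for (AttribFormat f : preference)
        if (isSupportedOnTarget(f))
            return f;
    return Fmt_32_32_32_Float;
}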

P.S. You have to be careful with OpenGL seemingly supporting features when the GPU doesn't actually support them. With something like 8-bit indices, if the GPU doesn't support them, the driver will perform the 8-to-16-bit conversion itself when you send the data to GL... In the worst case, you can be attempting to perform some unavailable operation per-pixel, which results in the driver executing your pixel shader on the CPU!


My model importing system runs at build-time, not runtime (and isn't shipped to the user), so I can do as much platform specific optimizations as I like in it ;)

That is interesting. I guess you use Lua to generate C++ code? Do you explain your system somewhere?

Thanks for the warning about OpenGL. I am more used to DirectX, but I guess OpenGL has a way to test those capabilities?

I would assume the C++ code is already written, and the Lua function is bound to it.

That is interesting. I guess you use Lua to generate C++ code? Do you explain your system somewhere?

No, the Lua/Tool code generates data files. I write my tools mostly in C#, using Lua for data/config files.

For example, to create a vertex declaration object in D3D9, you pass an array of D3DVERTEXELEMENT9 entries to the device. The Lua/Tool code can generate this array and save it to a file. The C++ game code can then load this file, and pass the contents of the file to the device in order to create an IDirect3DVertexDeclaration9. The C++ code is very simple and never needs to change, no matter what kind of vertex declaration is being created. The tool code also just gets written once and never changes. If I want to use a new type of vertex declaration, the only thing that changes is data.
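
A minimal sketch of that runtime side (assuming the tool writes the raw D3DVERTEXELEMENT9 array straight to disk, terminating D3DDECL_END() entry included):

#include <d3d9.h>
#include <cstdio>
#include <vector>

// Load a vertex declaration that the build tool wrote as a raw
// D3DVERTEXELEMENT9 array (ending with the D3DDECL_END() entry).
IDirect3DVertexDeclaration9* LoadVertexDecl(IDirect3DDevice9* device, const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return NULL;

    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);

    std::vector<D3DVERTEXELEMENT9> elements(size / sizeof(D3DVERTEXELEMENT9));
    std::fread(&elements[0], 1, size, f);
    std::fclose(f);

    // The runtime never interprets the elements -- it just forwards whatever
    // the data compiler generated for this platform.
    IDirect3DVertexDeclaration9* decl = NULL;
    device->CreateVertexDeclaration(&elements[0], &decl);
    return decl;
}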

The pipeline from artist/designers generating raw content, to data being loaded into the game looks like this:

Content files (Collada, PNG, simple Lua data files like in my example, etc)
  ||
  \/
Data compilation tools (C#/Lua)
  ||
  \/
Processed data files (custom binary formats)
  ||
  \/
Game/Engine runtime (C++)

The data compiler tools are run for a specific platform -- i.e. they might output different results depending on whether you specify that you're performing a build for Windows/DX, or Xbox360, or MacOS/GL, etc... The C++ code would also be different for each platform (e.g. a DX renderer for the Windows version or a GL renderer for the MacOS version).

Just thought I'd link this: http://www.gamedev.net/topic/634564-catering-for-multiple-vertex-definitions/#entry5002042

A similar thread I started quite a while ago. Roughly the same stuff, but maybe you'll find some more useful info in there.

