
Design portable vertex declaration



#1 monamimani   Members   -  Reputation: 134


Posted 15 July 2013 - 08:32 PM

Hi all,

 

I am a bit stuck in my project and I thought I would come by and seek some internet wisdom :)

 

Here is where I am: I have a generic Mesh. It contains a generic VertexBuffer that has separate vectors for positions, normals, texcoords, etc. As you can see, this is platform agnostic.

So I am at the junction between the agnostic code and the platform-specific code. I am wondering if I should just give the generic vertex buffer to the OGL/D3D mesh (via a common interface, i.e. a virtual function) or if I should construct some platform-agnostic vertex declaration to give to the specific implementation along with the vertex buffer.
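To make it concrete, the platform-agnostic declaration I have in mind would be something along these lines (just a rough sketch, all the names are placeholders):

#include <vector>

// One entry per vertex attribute, with no API-specific types in sight.
enum class AttribSemantic { Position, Normal, Tangent, TexCoord, BoneIndex };
enum class AttribType     { Float32, Float16, UInt16, UInt8 };

struct VertexElement
{
    AttribSemantic semantic;
    AttribType     type;
    int            componentCount; // e.g. 3 for a float3 position
    int            streamIndex;    // which vertex buffer the attribute lives in
    int            semanticIndex;  // e.g. TexCoord 0, TexCoord 1, ...
};

struct VertexDeclaration
{
    std::vector<VertexElement> elements;
};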

 

Does anybody have ideas on this?

 

Thanks

Emmanuel




#2 Hodgman   Moderators   -  Reputation: 31800


Posted 15 July 2013 - 09:55 PM

or if I should construct some platform-agnostic vertex declaration to give to the specific implementation along with the vertex buffer

I use this approach. My model import tools read config files like the one below (actually Lua code), which describe, in a platform-agnostic way, the input-layouts/vertex-declarations/etc. that will be required.
--how the data is stored in the vertex buffers
StreamFormat("standardStream",
{
    {--stream 0: just positions
        { Float, 3, Position },
    },
    {--stream 1: normal/tangent/texcoords interleaved
        { Float, 3, Normal },
        { Float, 3, Tangent },
        { Float, 2, TexCoord, 0 },
    },
})

--the input structure to the vertex shader
VertexFormat("standardVertex",
{
    { "position", float3, Position },
    { "texcoord", float2, TexCoord, 0 },
    { "normal",   float3, Normal },
    { "tangent",  float3, Tangent },
})

--a statement that the above stream format can be used with the above vertex-shader structure
InputLayout( "standardStream", "standardVertex" )
When importing a particular mesh, I look at which shader is assigned to its material, which tells me which vertex-shader input structure will be required. From there I can generate a list of compatible stream formats, and then pick the best one depending on which attributes the artist has exported on that mesh.
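The compatibility check itself is trivial; here is a minimal sketch of the idea (illustrative types only, not my actual tool code):

#include <vector>

struct Attribute { int semantic; int semanticIndex; };

struct StreamFormat { std::vector<std::vector<Attribute>> streams; };
struct VertexFormat { std::vector<Attribute> inputs; };

// A stream format can feed a vertex-shader input structure if every
// attribute the shader reads is provided by some stream.
bool IsCompatible(const StreamFormat& sf, const VertexFormat& vf)
{
    for (const Attribute& in : vf.inputs)
    {
        bool provided = false;
        for (const std::vector<Attribute>& stream : sf.streams)
            for (const Attribute& a : stream)
                if (a.semantic == in.semantic && a.semanticIndex == in.semanticIndex)
                    provided = true;
        if (!provided)
            return false; // the shader reads an attribute no stream supplies
    }
    return true;
}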

#3 monamimani   Members   -  Reputation: 134


Posted 16 July 2013 - 06:53 PM

Yes, that is what I am juggling with. I am trying to have a good generic representation, but one that will still allow me some platform-specific optimizations. For example, in the case of the index buffer, OpenGL can have index buffers of uint8, whereas the smallest for D3D is uint16. I know this example is about the index buffer, but it would be similar for bone indices, where you might want a smaller-sized type.
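For concreteness, the kind of per-platform decision I mean is something like this (a made-up sketch, not real code from my engine):

#include <cstddef>

enum class IndexType { U8, U16, U32 };

// Build-time choice of index width: GL will accept GL_UNSIGNED_BYTE
// indices, while the smallest index format in D3D9 is 16-bit.
IndexType ChooseIndexType(std::size_t vertexCount, bool platformSupportsU8Indices)
{
    if (vertexCount <= 0xFF && platformSupportsU8Indices)
        return IndexType::U8;
    if (vertexCount <= 0xFFFF)
        return IndexType::U16;
    return IndexType::U32;
}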

 

I don't think your system can handle that? Or am I wrong?



#4 Hodgman   Moderators   -  Reputation: 31800


Posted 16 July 2013 - 08:19 PM

My model importing system runs at build-time, not runtime (and isn't shipped to the user), so I can do as many platform-specific optimizations as I like in it ;)
The flip side is that the data files that I ship for my MacOS build will be different to the data files that I ship for my Windows build.

If some feature is available on one platform but not others, you can have built-in fallbacks.
E.g. if you specify the 11_11_10 format for normals, but it's not available on the target platform, you could fall back to 16_16_16_16...
Alternatively, if you want to make very specific optimizations by hand, you could specify a generic format, but also provide hand-written overrides for certain platforms, as in the sketch below.
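A sketch of that fallback idea (the format names and the platformSupports callback are hypothetical):

enum class Format { R11G11B10, R16G16B16A16_Float, R32G32B32_Float };

// Hypothetical fallback chain: if a packed format is unavailable on the
// target platform, widen it until something is supported.
Format FallbackFor(Format f)
{
    switch (f)
    {
        case Format::R11G11B10:          return Format::R16G16B16A16_Float;
        case Format::R16G16B16A16_Float: return Format::R32G32B32_Float;
        default:                         return f; // widest; assumed always available
    }
}

Format ChooseFormat(Format preferred, bool (*platformSupports)(Format))
{
    Format f = preferred;
    while (!platformSupports(f) && FallbackFor(f) != f)
        f = FallbackFor(f);
    return f;
}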

P.S. You have to be careful with OpenGL seemingly supporting features when the GPU doesn't actually support them. With something like 8-bit indices, if the GPU doesn't support them, the driver will perform the 8-bit to 16-bit conversion itself when you send the data to GL... In the worst case, you can be attempting to perform some unavailable operation per-pixel, which results in the driver executing your pixel shader on the CPU!
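You can query what GL exposes (limits and extensions), but there is no query for "is this hardware-native", which is exactly the trap. A sketch, assuming a valid compatibility-profile context:

#include <GL/gl.h>
#include <cstring>

// GL will tell you whether an extension string is advertised...
bool HasExtension(const char* name)
{
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts != nullptr && std::strstr(exts, name) != nullptr;
}
// ...but even an advertised feature may be emulated by the driver, so
// the only reliable answer comes from profiling on real hardware.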

#5 monamimani   Members   -  Reputation: 134


Posted 17 July 2013 - 06:12 PM


My model importing system runs at build-time, not runtime (and isn't shipped to the user), so I can do as much platform specific optimizations as I like in it ;)

 

That is interesting. I guess you use Lua to generate C++ code? Are you explaining your system somewhere?

 

Thanks for the warning about OpenGL; I am more used to DirectX. But I guess OpenGL has a way to test those capabilities?



#6 Burnt_Fyr   Members   -  Reputation: 1248


Posted 17 July 2013 - 09:48 PM

I would assume the C++ code is already written, and the Lua function is bound to it.



#7 Hodgman   Moderators   -  Reputation: 31800


Posted 17 July 2013 - 11:46 PM

That is interesting. I guess you use Lua to generate C++ code? Are you explaining your system somewhere?

No, the Lua/Tool code generates data files. I write my tools mostly in C#, using Lua for data/config files.
 
For example, to create a Vertex Declaration object in D3D9, you pass an array of D3DVERTEXELEMENT9 entries to the device. The Lua/tool code can generate this array and save it to a file. The C++ game code can then load this file and pass its contents to the device in order to create an IDirect3DVertexDeclaration9. The C++ code is very simple and never needs to change, no matter what kind of vertex declaration is being created. The tool code also just gets written once and never changes. If I want to use a new type of vertex declaration, the only thing that changes is data.
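The runtime side of that is only a few lines. Roughly (a sketch, assuming the tool wrote the element array, including the D3DDECL_END() terminator, straight into the file):

#include <d3d9.h>
#include <vector>

// The file contents *are* the D3DVERTEXELEMENT9 array; the C++ code
// never knows or cares what kind of declaration it is creating.
IDirect3DVertexDeclaration9* CreateDeclFromFile(IDirect3DDevice9* device,
                                                const std::vector<char>& fileData)
{
    const D3DVERTEXELEMENT9* elements =
        reinterpret_cast<const D3DVERTEXELEMENT9*>(fileData.data());

    IDirect3DVertexDeclaration9* decl = nullptr;
    if (FAILED(device->CreateVertexDeclaration(elements, &decl)))
        return nullptr;
    return decl;
}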

The pipeline from artists/designers generating raw content to data being loaded into the game looks like this:
Content files (Collada, PNG, simple Lua data files like in my example, etc.)
  ||
  \/
Data compilation tools (C#/Lua)
  ||
  \/
Processed data files (custom binary formats)
  ||
  \/
Game/Engine runtime (C++)
The data compiler tools are run for a specific platform -- i.e. they might output different results depending on whether you specify that you're performing a build for Windows/DX, Xbox 360, MacOS/GL, etc... The C++ code would also be different for each platform (e.g. a DX renderer for the Windows version or a GL renderer for the MacOS version).

#8 rocklobster   Members   -  Reputation: 415


Posted 18 July 2013 - 05:10 AM

Just thought I'd link this: http://www.gamedev.net/topic/634564-catering-for-multiple-vertex-definitions/#entry5002042

 

A similar thread I started quite a while ago. Roughly the same stuff, but maybe you'll find some more useful info in there.








