Vertex declaration in a mesh chunk for new graphics engines

Started by
4 comments, last by andur 15 years, 11 months ago
Hi there. I have been looking at some ideas from James Long, where he defines a MeshChunk (basically mesh data: vertices, diffuse colors, texture coordinates, tangents) and builds it around the approach I believe both D3D and OpenGL use, namely vertex data elements or descriptions, which define what data a single vertex holds. I would like to know what you people have been using. Do you use separate lists to define a mesh, or do you assign descriptions to a vertex and then separate the data as it is added?

Old-school way: define a mesh object in object space:

class Mesh
{
    vector3 *vertList;
    vector4 *diffuse;
    uvcoord *uvCoords;
    vector3 *normals;
    vector3 *tangent;
};

Now, if I want an object to have only position and color, should I really keep all of those members, even if they are NULL? And how would I pass the used info to the API: using a flags system that tells me which attributes the mesh has?

Description way:

mesh *mesh = new mesh;
mesh->AddVertexDescription( POSITION, FLOAT3 );
mesh->AddVertexDescription( DIFFUSE, FLOAT4 );
mesh->AddVertexDescription( TEXCOORD0, FLOAT2 );
mesh->AddData( verticesData );
mesh->AddData( diffuseData );
mesh->AddData( uvData );

I'm not sure this would be the most robust or even the most flexible way, but I am willing to hear your ideas on this one. I think this is an important issue to solve before going deep into other graphics rendering programming. Thanks in advance.

I think most folks just stick to the vertex declaration mechanism of the platform they're using, be it VertexElements or Descriptions. Unless you're 100% certain you're going to need a cross-platform OpenGL/DirectX approach, the flexibility you described isn't that important. In practice, you'll typically depend on one or two vertex formats dictated by the importer for 99% of your mesh art assets anyway.
Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
Well, generally I keep a large-ish Vertex struct that contains all of the information needed for a vertex. I've been tempted recently by the thought of splitting the texture coordinates out of it, so I can have an arbitrary number of different texture coordinates per vertex, but I haven't had any need for that yet.

I use an array of those Vertex structs on the CPU side.

GPU-wise, I have in the past just reflected into my shaders to figure out what information they need, and built a single vertex buffer with only that information. However, I've found that this starts getting wasteful once you start doing things like shadow mapping or deferred rendering, where you want to render the same mesh with a bunch of different shaders.

So, in my current in-dev engine, I've split each vertex element into a separate buffer, and each shader has a vertex declaration associated with it. The vertex buffers that a mesh's assigned shaders need are then constructed on demand, if they don't already exist (so if nothing needs, say, uv2 coordinates, that buffer never gets created). The renderer then uses these declarations to set up multiple vertex streams.

If I change shaders or anything on a mesh, then I get any missing information from that Vertex array sitting in regular memory.

I find this is nice, as it's always optimal in video memory, it's flexible, and it doesn't waste GPU bandwidth sending information like normals to a depth-only pass.
thanks for your answers.

remi:
Well, I don't have much interest in cross-platform since I basically use Windows/D3D, but I would still like to keep it flexible enough at this level.

andur:
So basically you're using the old-school Mesh declaration, with separate buffers for each vertex element, and you keep a shader ID for each mesh that tells you what info to feed down the pipeline to the shaders?

Say, for a simple per-pixel diffuse lighting shader, you just pick the *vertexList, *vertexNormals, and *diffuseColor buffers, create the D3D buffers, and send them down to the video card?

The idea I got from looking at John's code was that he creates a vertex buffer and index buffer per object with the needed VertexDeclaration, but like you said, if you want to render one object with different shaders, you would have to duplicate it, right?

Yeah that's essentially what I do.

You can do combined vertex buffers where you toss everything into one. If you never change shaders on a mesh, or all of the shaders you use require the same inputs, then a single combined buffer is the best approach.

If you change your shaders around, you either end up creating multiple vertex buffers with duplicated information in them (which is just plain bad from a memory point of view), or you end up making one vertex buffer that contains all of the inputs needed by every shader you want to use on that mesh (which is also inefficient).

Of course, my approach is tailored for a system with these requirements:
- Shaders on a mesh can be changed at any time by the end user, which can require changing the vertex buffers.
- My scenes just barely fit in a 512 MB video card, making memory minimization important.
- I do shadow mapping, which a position-only buffer suits nicely.

