File format of custom mesh


Hello

What is a common way of storing vertices, indices, UV maps etc to a file?

I am looking over some old code from 2001 and they have a UV map with indices in its own block, and then afterwards comes a really funky vertex layout. A vertex struct is basically "split" between two areas of the file, so I have to add (+) them together to get the final vertex.
 FinalVertex = Vert1 + Vert2; 

Why would someone make a layout like that? They also do some transforms before adding them together in some examples.

Was/is there some standard in file formats you see across the board?


Every file format will likely have its own quirks to deal with. Overall, though, the description doesn't sound all that unusual, given that you can get nice optimizations in vertex buffers by splitting up the streams in certain circumstances. For instance, say you have a character with different customization areas on the mesh, so this section uses material A, this other section uses material B, etc. If you end up with a lot of such items, it often makes sense to go with an indexed format instead of a flat stream. What this means is that you split the mesh data into global mesh information and per-material information, along the lines of the following:

struct Stream0
{
    Vector3 Position;
    Color3  Color;
};

struct Stream1
{
    Vector3 Normal;
    Vector2 Uv0;
};

When submitting a triangle you actually submit a pair of indices: the first index selects the position/color portion and the second selects the per-material normal/uv0 portion. This is one possible reason they could have split things up. Others include supporting both CPU and GPU skinning, where you keep the bits that are dynamically updated on the CPU (normal/position) separate from the static bits such as uv/color/etc., and then use a static VB for one and a dynamic (probably double-buffered) VB for the other.

There are many, many reasons to split things up in various ways. It may not always make sense by the time you finally ship, but by that point you probably don't want to mess with the old formats and update everything.

Hmm. But in this example the position itself is split:
 FinalVertex.x = Vert1.x + Vert2.x; 

When I extract this code from the example and use it in my own project, most of my triangles are in the center (origin), which is really weird. Have you had any experience which resulted in most triangles being centered or "mixed up"?


Well, there are possible reasons, such as morph targets, subsection displacement morphs, etc., which could be involved. It would help to have an example of the format, because I can think of plenty of reasons this may be the case. Heck, in actual "editor" formats every channel can be unique, and by channel I mean a primitive type such as int/float/double, such that you get the X positions in one stream, the X normals in another stream, etc., and the final bit of the file says which pieces go where and how to properly combine things into a usable result.

I think I found the source of the problem. This old program (DX8) uses only two bones for each vertex in the skinning process, hence the sum of two weighted positions. And somehow most faces are stored in their own local space. Is it common for a hand mesh to be deformed like that before skinning?

Like a hand: the bracelet, palm, fingers, etc. are all centered around zero. I did manage to find the bone hierarchy and will do the translations shortly. The offset transforms are for something else, methinks...

I thought it was common that a complete hand mesh was in its own local space, and had bones applied to different vertices afterwards during animation.


Well, not to sound like a broken record, but there really is no "standard". There are common forms and uncommon forms based on what the engine needs. In the case of skinning, there are possible reasons they stored it in local space, involving matrix optimizations done in vertex shaders; or they simply goofed when doing the load/transform and never got around to fixing the assets and loading code to avoid that additional step. Whatever the reason, at least you understand it now; that's a pretty tricky thing to figure out sometimes.
