3D file efficiency


#1 romomo   Members   -  Reputation: 103


Posted 25 June 2012 - 10:55 PM

Hello,
So I've created a custom binary export format for 3DS. Everything works fine, but I would now like to make it a bit more compressed. I'm considering one or more of the following: shared vertices (currently every face exports 3 floats; shared verts are not taken into account), converting verts to whole numbers and exporting them as shorts (not sure if this will have too much impact on low-poly model quality), and last but not least, exporting ONLY changed verts in each frame (though it would seem I would then need to key it by indices, meaning it will help in SOME cases, but make things worse in most).

I'm interested in hearing your suggestions, and if possible, an example implementation.

Thanks!
Rob


#2 szecs   Members   -  Reputation: 2142


Posted 25 June 2012 - 11:02 PM

"converting verts to unit length"?

That means you will export unit length spheres.

#3 romomo   Members   -  Reputation: 103


Posted 25 June 2012 - 11:10 PM

I'm sorry, typo on my part. What I meant was converting them to whole numbers in short range, e.g. 26.013 becomes 26. (fixed typo)

#4 Krohm   Crossbones+   -  Reputation: 3119


Posted 26 June 2012 - 03:20 AM

How much do you want to save?
I have seen something apparently like what you need in OpenCTM. Guess what? I'm not suggesting you do that. In my case, it would save about a meg. Not worth it IMHO.
Sharing common vertex blobs is a must, and it is often useful for building better batches as well. I suggest doing only that.
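A minimal sketch of that shared-vertex merge, assuming bit-identical positions count as "shared" (a real exporter would compare full vertices including UVs and normals):

```cpp
#include <cstdint>
#include <map>
#include <vector>

struct Vert {
    float x, y, z;
    bool operator<(const Vert& o) const {
        if (x != o.x) return x < o.x;
        if (y != o.y) return y < o.y;
        return z < o.z;
    }
};

// Deduplicate the raw per-face vertex stream into a unique vertex
// buffer plus an index buffer. Bit-identical positions are merged,
// so each shared vertex is written once and referenced by index.
void buildIndexed(const std::vector<Vert>& raw,
                  std::vector<Vert>& verts,
                  std::vector<uint32_t>& indices) {
    std::map<Vert, uint32_t> seen;
    for (const Vert& v : raw) {
        auto it = seen.find(v);
        if (it == seen.end()) {
            it = seen.emplace(v, (uint32_t)verts.size()).first;
            verts.push_back(v);
        }
        indices.push_back(it->second);
    }
}
```

Two triangles sharing an edge then export as 4 vertices + 6 indices instead of 6 full vertices.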

#5 L. Spiro   Crossbones+   -  Reputation: 13600


Posted 26 June 2012 - 05:45 AM

My engine uses a proprietary model format with the extension .LSM. To satisfy the needs of mobile devices and next-generation data sets, it has been aggressively compacted.

I have documented it here.

For any model format, you need to merge shared vertices instead of exporting them twice or more. This is standard.

Normals should be exported using only 65 bits each instead of 96. You could even get away with exporting them using 16-bit floats, at 33 bits per normal.
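Those bit counts work out if you store only two components plus a sign bit and reconstruct the third from the unit-length constraint (2 × 32 + 1 = 65, 2 × 16 + 1 = 33). A sketch of that idea, using 16-bit fixed point in place of 16-bit floats; this is an illustration, not necessarily the .LSM scheme:

```cpp
#include <cmath>
#include <cstdint>

// Pack a unit normal as x, y plus a sign bit for z; z is recovered
// from |n| == 1, i.e. z^2 = 1 - x^2 - y^2. With x and y quantized to
// signed 16-bit fixed point this is 33 bits of payload per normal.
struct PackedNormal {
    int16_t x, y;
    bool zNegative;
};

PackedNormal packNormal(float nx, float ny, float nz) {
    PackedNormal p;
    p.x = (int16_t)std::lround(nx * 32767.0f);
    p.y = (int16_t)std::lround(ny * 32767.0f);
    p.zNegative = nz < 0.0f;
    return p;
}

void unpackNormal(const PackedNormal& p, float& nx, float& ny, float& nz) {
    nx = p.x / 32767.0f;
    ny = p.y / 32767.0f;
    float z2 = 1.0f - nx * nx - ny * ny;   // unit-length constraint
    nz = z2 > 0.0f ? std::sqrt(z2) : 0.0f;
    if (p.zNegative) nz = -nz;
}
```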

One of the major areas in which I save the most space is with index compression. On average, my entire index buffer is ~1.5% of a raw 32-bit index buffer. The routine is explained in my article.

LZW-based compression routines work best on repeating data, so if you are using one, my article also explains how you can get face data down to about the same size (~1.5%).

For an example of the results, this Chevy has 20,347 triangles, 18,905 vertices, and is fully textured with normal-maps.
[Image: screenshot of the textured Chevy model]
The (binary) .FBX file is 33.406 megabytes in size, including all textures, 6.106 megabytes without textures.
The .LSM file is 6.279 megabytes with embedded textures, about 1.2 megabytes without textures.

The way in which I was able to pack 27.3 megabytes of textures into 5 megabytes is also documented on my site.


L. Spiro

Edited by L. Spiro, 26 June 2012 - 05:45 AM.

It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#6 romomo   Members   -  Reputation: 103


Posted 26 June 2012 - 07:55 AM

Krohm and L. Spiro,
Thank you both, this was exactly the info I was looking for. I'm curious, as you mentioned mobile devices (this is precisely what I'm developing for: Android and iOS)... how are you storing animation for models with such high tri-counts?

#7 L. Spiro   Crossbones+   -  Reputation: 13600


Posted 26 June 2012 - 08:12 AM

You mentioned storing only changed vertices etc., which means you are basically already off to a bad start.
Animations contain no vertex data; they are bones and joints with key-frames. Already this is about 1% of the size of storing the changed vertex data. While there is a place and a time for per-vertex animation, I highly doubt this is your case.

Look into skinning.

But even joint animation with keyframes can add up, so you still need a few tricks to keep the sizes down.
One is redundant joint elimination. First, get all the key-frame data. Whether you are working with Maya or the FBX SDK, each provides a way to get the joints at any time during the animation.
Run your own simulation alongside the SDK's simulation, and if your joints are not correct, add your own key-frame there. This is necessary because Maya allows rotations from 0 to 360 to 720, etc., whereas your data will consist of radians clamped in range. In the world of radians, 0, 360, and 720 degrees are the exact same value, so interpolating a rotation from 0 degrees to 720 degrees, with no key-frames between, would result in no rotation in your radian-based implementation.
Then run the simulation again between every second key-frame. If, interpolating between only those two key-frames (excluding the middle key-frame), the joints at the time of the middle key-frame are a match, you can eliminate the middle key-frame.

This is the most major way of reducing animation data, but can of course be further enhanced by clever compression methods. You can get inspiration for that kind of compression from my article.
And again, joints, not per-vertex animations. Research “skinning”.
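The middle-key-frame elimination above can be sketched for a single scalar channel; this assumes linear interpolation, and a real exporter would compare whole joint transforms within a tolerance rather than one float:

```cpp
#include <cmath>
#include <vector>

struct Key { float time, value; };

// Drop any key whose value is reproduced (within eps) by interpolating
// directly between its neighbours; repeat until nothing more can go.
std::vector<Key> eliminateRedundant(std::vector<Key> keys, float eps) {
    bool removed = true;
    while (removed) {
        removed = false;
        for (size_t i = 1; i + 1 < keys.size(); ++i) {
            const Key& a = keys[i - 1];
            const Key& b = keys[i + 1];
            float t = (keys[i].time - a.time) / (b.time - a.time);
            float predicted = a.value + t * (b.value - a.value);
            if (std::fabs(predicted - keys[i].value) <= eps) {
                keys.erase(keys.begin() + i);
                removed = true;
                break;  // container changed; restart the scan
            }
        }
    }
    return keys;
}
```

A constant-speed run of keys collapses to just its endpoints, while keys where the motion actually changes survive.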


L. Spiro

#8 romomo   Members   -  Reputation: 103


Posted 26 June 2012 - 09:34 AM

I'm aware of skinning; I have a separate implementation for that. The purpose of the per-vertex key-frames is to accommodate older/less powerful mobile devices, specifically those with minimal to no GPU. Simple interpolation between vertices seems to produce much faster results (for better-looking models, I might add), and as I don't intend to use this particular implementation for real-time physics/animation, I can produce much nicer deforms. This is the whole reason I'm looking into ways to minimize the size of the output file.

#9 samoth   Crossbones+   -  Reputation: 4783


Posted 26 June 2012 - 12:39 PM

What I meant was converting them to whole numbers in short range... eg: 26.013 becomes 26. (fixed typo)

A much higher-quality solution would be to find the middle point of the point cloud, and then the most distant point.
Normalize your data set so this most remote point has a distance from the center of exactly 1.0 -- this turns all distances into numbers between 0.0 and 1.0 (and, since each component is at most as long as the hypotenuse, the components as well).
Now convert to short by multiplying by 65535.0 and truncating; this yields 0.0 = 0 and 1.0 = 65535. You must save the scale factor (a single number) with the model to be able to restore it. As an optimization, find the maximum squared distance rather than the distance (saving a square root per vertex), and only take the square root at the very end.

Alternatively (and computationally cheaper since there is no horizontal add, and probably even more accurate), iterate once over all vertices, calculating the maximum value of all components (as measured from the center point). Then scale using that total maximum and proceed as above.

If you just truncate to the nearest integer, this works well only if your coordinates are large. It does not matter if all coordinates are in the hundreds or thousands, but it makes a big difference if they're all in the lower tens. 4 vs. 4.4 is a 10% error...
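A sketch of this normalize-and-quantize scheme, using the cheaper per-component-maximum variant (all names are illustrative; components are mapped from [-scale, scale] into [0, 65535]):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct V3 { float x, y, z; };

struct QuantizedMesh {
    V3 center;                    // centroid of the point cloud
    float scale;                  // largest |component| after recentring
    std::vector<uint16_t> data;   // x,y,z triples, each in [0, 65535]
};

// Recentre the cloud, scale by the largest component extent, then
// quantize each component to 16 bits. center and scale (four floats)
// are stored with the model so the loader can reverse the mapping.
QuantizedMesh quantize(const std::vector<V3>& verts) {
    QuantizedMesh q{};
    float n = (float)verts.size();
    for (const V3& v : verts) { q.center.x += v.x; q.center.y += v.y; q.center.z += v.z; }
    q.center = { q.center.x / n, q.center.y / n, q.center.z / n };
    for (const V3& v : verts)
        q.scale = std::max({ q.scale,
                             std::fabs(v.x - q.center.x),
                             std::fabs(v.y - q.center.y),
                             std::fabs(v.z - q.center.z) });
    auto pack = [&](float c, float center) {
        float t = (c - center) / q.scale * 0.5f + 0.5f;  // [-1,1] -> [0,1]
        return (uint16_t)std::lround(t * 65535.0f);
    };
    for (const V3& v : verts) {
        q.data.push_back(pack(v.x, q.center.x));
        q.data.push_back(pack(v.y, q.center.y));
        q.data.push_back(pack(v.z, q.center.z));
    }
    return q;
}

float dequantComponent(const QuantizedMesh& q, uint16_t u, float center) {
    return (u / 65535.0f * 2.0f - 1.0f) * q.scale + center;
}
```

The worst-case error per component is about scale / 65535, i.e. relative to the model's extent rather than to the raw coordinate magnitude, which avoids the "4 vs. 4.4" problem.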

#10 romomo   Members   -  Reputation: 103


Posted 26 June 2012 - 06:54 PM

@Samoth
This is perfect, exactly what I was thinking... I just couldn't think how to go about it.



