
# The Strongest Graphics Compression Algorithm


## Recommended Posts

Hello, the name says it all. What format or algorithm is the most compact, storage-wise? I want to find a good format for my character models, or at least a good springboard.

##### Share on other sites
The thing about compression of data (any data, not just graphics) is the more you know about that data, the better your choice/customisation of algorithm will be.

So really, which algorithm is "most compact" depends entirely on your data.

The most compact overall would be to define the geometry procedurally or as curved surfaces with detail in a displacement map or bump map.

However, if you're talking about taking an arbitrary set of vertices and triangles (even a total polygon soup) and making it take less space on disk or in memory, then a common approach taken by systems such as Java3D is:

1) use look-up tables and octant tricks for things such as normals. A normal only ever represents a unit direction - you rarely _really_ need three 32-bit floats to store that.

2) split each vertex up into its components and have an array of each - you get better locality of reference, and can even use cheap RLE style schemes for things such as vertex colours.

3) store the vertex positions as deltas from neighbouring vertices - this increases the likelihood that data in the input stream will be duplicated and so improves "compressibility".

4) round, scale and quantize vertex positions into integers, dropping down to (say) 16 bits per element. An individual model rarely needs the whole range a float can represent, so the exponent and much of the mantissa can be discarded.

5) compress the resultant data with a standard data compression algorithm such as LZ77. Plain "float" data doesn't compress well on its own, because the exponent tends to vary from float to float and so the mantissas of two neighbouring values can be very different (i.e. "noisy") and so don't compress at all well with standard compression algorithms. Deltas that have been converted to integer work *much* better with standard compression algorithms.
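The normal trick in step 1 can be sketched in Python. This is a hedged illustration with function names of my own invention, not code from Java3D or any shipping engine: three sign bits select the octant, two components are quantized inside it, and the third is recovered from unit length.

```python
import math

def encode_normal(nx, ny, nz, bits=6):
    # Three sign bits pick the octant; |x| and |y| are quantized to
    # `bits` bits each inside it.  15 bits total here, versus 96 bits
    # for three raw 32-bit floats.
    octant = ((nx < 0) << 2) | ((ny < 0) << 1) | (nz < 0)
    scale = (1 << bits) - 1
    return octant, round(abs(nx) * scale), round(abs(ny) * scale)

def decode_normal(octant, qx, qy, bits=6):
    scale = (1 << bits) - 1
    x, y = qx / scale, qy / scale
    # Recover |z| from the unit-length constraint x^2 + y^2 + z^2 = 1.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    if octant & 4: x = -x
    if octant & 2: y = -y
    if octant & 1: z = -z
    return x, y, z
```

A full look-up-table scheme would index a fixed table of directions instead, but the bit budget works out similarly.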
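Step 2's cheap RLE for per-component arrays can be sketched like this (a minimal illustration, assuming flat-shaded-style colour data where long runs of identical values occur):

```python
def rle_encode(values):
    # Collapse runs of identical values into [count, value] pairs.
    runs = []
    for v in values:
        if runs and runs[-1][1] == v:
            runs[-1][0] += 1
        else:
            runs.append([1, v])
    return runs

def rle_decode(runs):
    out = []
    for count, v in runs:
        out.extend([v] * count)
    return out

# Split vertices into per-component arrays (positions, colours, ...)
# rather than interleaving them; an array of vertex colours like this
# is where the RLE pays off.
colours = [(255, 0, 0)] * 100 + [(0, 255, 0)] * 50
runs = rle_encode(colours)  # 150 colours collapse to 2 runs
```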
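The delta scheme in step 3 is a few lines in Python (a sketch using tuples of per-axis values; a real implementation would pick neighbours from mesh connectivity rather than plain array order):

```python
def delta_encode(positions):
    # First vertex verbatim; every later vertex as the difference
    # from its predecessor.  Neighbouring vertices are usually close,
    # so the deltas are small and repetitive.
    out = [positions[0]]
    for prev, cur in zip(positions, positions[1:]):
        out.append(tuple(c - p for p, c in zip(prev, cur)))
    return out

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(tuple(p + c for p, c in zip(out[-1], d)))
    return out
```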
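Step 4's round-scale-quantize can be sketched as a bounding-box remap (my own minimal version; the bounding box must travel with the data so the loader can undo the mapping):

```python
def quantize_positions(positions, bits=16):
    # Map each axis into the model's bounding box and round to a
    # `bits`-bit unsigned integer; the float exponent and most of the
    # mantissa are thrown away.
    lo = [min(p[i] for p in positions) for i in range(3)]
    hi = [max(p[i] for p in positions) for i in range(3)]
    scale = (1 << bits) - 1
    quant = [
        tuple(
            round((p[i] - lo[i]) / ((hi[i] - lo[i]) or 1.0) * scale)
            for i in range(3)
        )
        for p in positions
    ]
    return quant, lo, hi  # lo/hi must be stored to dequantize

def dequantize_positions(quant, lo, hi, bits=16):
    scale = (1 << bits) - 1
    return [
        tuple(lo[i] + q[i] / scale * (hi[i] - lo[i]) for i in range(3))
        for q in quant
    ]
```

At 16 bits the worst-case error per axis is half a quantization step, i.e. (extent / 65535) / 2.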
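Step 5's point about floats versus integer deltas is easy to demonstrate with zlib, whose DEFLATE format is LZ77 plus Huffman coding, so it stands in here for the LZ77 family. This is a synthetic smooth stream, not real mesh data:

```python
import math, struct, zlib

# A smooth coordinate stream, stored as raw little-endian float32 bytes...
floats = [math.sin(i * 0.01) * 5.0 for i in range(3000)]
raw = struct.pack(f"<{len(floats)}f", *floats)

# ...and the same stream quantized to a 16-bit range and delta-encoded,
# stored as int32 so both streams occupy 12000 bytes before compression.
ints = [round((f + 5.0) / 10.0 * 65535) for f in floats]
deltas = [ints[0]] + [b - a for a, b in zip(ints, ints[1:])]
packed = struct.pack(f"<{len(deltas)}i", *deltas)

# The float mantissas are "noisy", while the integer deltas are small
# and repetitive, so the latter should compress noticeably better.
print(len(zlib.compress(raw)), len(zlib.compress(packed)))
```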

There are of course some very different methods of geometry compression - one of the most interesting "alternative" techniques I've seen in recent years has been Hugues Hoppe's "Geometry Images": http://research.microsoft.com/%7Ehhoppe/

BTW: re-ordering the vertices of your mesh for good triangle strip efficiency increases locality of reference of the data stream too and so can improve compression for things like LZ77 since there are more likely to be "similar" vertices in the compression window.

http://www.3dcompression.com/

http://164.195.100.11/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/srchnum.htm&r=1&f=G&l=50&s1='5,793,371'.WKU.&OS=PN/5,793,371

Do beware though that data compression is a patent minefield.

Simon O'Connor
Game Programmer &
Microsoft DirectX MVP

[edited by - s1ca on June 5, 2004 9:27:40 PM]

[edited by - s1ca on June 5, 2004 10:58:59 PM]

##### Share on other sites
You mean to tell me there's a whole website on this? Thank you.

##### Share on other sites
Just wanted to say from my own experience that for character models, and especially key-frame animation where you store 10-25 frames, you get fantastic compression ratios when you pack the position data into integers. Texture coordinates and normals can be packed the same way. I'm assuring you that you won't spot a difference on a regular model with 500-2500 triangles.
Then you can start experimenting with bit packing, i.e. why store 16 bits per coordinate when 10 bits might do? You'll have to experiment a bit to find the point at which the model isn't distorted.
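The bit packing described above can be sketched as follows. This is a minimal illustration of the idea using Python big integers, not production code:

```python
def pack_bits(coords, bits=10):
    # Concatenate `bits`-bit unsigned values into one byte string:
    # four 10-bit coordinates fit in 5 bytes instead of 8 at 16 bits.
    acc = 0
    for i, c in enumerate(coords):
        assert 0 <= c < (1 << bits)  # value must fit in `bits` bits
        acc |= c << (i * bits)
    return acc.to_bytes((len(coords) * bits + 7) // 8, "little")

def unpack_bits(data, count, bits=10):
    acc = int.from_bytes(data, "little")
    mask = (1 << bits) - 1
    return [(acc >> (i * bits)) & mask for i in range(count)]
```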

Another way to easily pack the data is to scale the original model so that it fits a given range. If you scale the model into the range 0-255, you can get away with only one byte per coordinate! If you consider that originally the xyz triple took 3*6 = 18 bytes, after packing it takes only 3*1 bytes! That's a 6:1 ratio, and for me that is pretty enough!
Sure, some models get distorted, but all of my models in the Avenger game got away with one byte per coordinate, so I was able to fit the whole game, with all its key-frame animations, under a 10 MB download.
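The one-byte-per-coordinate scheme can be sketched like this (a hedged sketch with my own function names; it also makes the distortion bound explicit, which is what decides whether a given model "gets away with it"):

```python
def pack_to_bytes(positions):
    # Scale the model into 0..255 and store one byte per coordinate.
    # Worst-case error per axis is (extent / 255) / 2.
    lo = [min(p[i] for p in positions) for i in range(3)]
    hi = [max(p[i] for p in positions) for i in range(3)]
    data = bytes(
        round((p[i] - lo[i]) / ((hi[i] - lo[i]) or 1.0) * 255)
        for p in positions for i in range(3)
    )
    return data, lo, hi

def unpack_from_bytes(data, lo, hi):
    return [
        tuple(lo[i] + data[v + i] / 255 * (hi[i] - lo[i]) for i in range(3))
        for v in range(0, len(data), 3)
    ]
```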

As for texture coordinates, they are good candidates for bit packing. 8 bits are usually enough for 256x256 textures; for 512x512 you need 9 bits for the same precision. If you need more precision, use 2 bytes per u/v coordinate and you still take just 2*2 bytes per uv pair compared to the original 2*6. Still a good deal, huh?
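A minimal sketch of the uv quantization (assuming coordinates in [0, 1]; the bit counts above come from needing one code per texel row or column):

```python
def quantize_uv(u, v, bits=8):
    # 8 bits resolve a 256x256 texture to the texel; a 512x512 texture
    # needs 9 bits for the same per-texel precision.
    scale = (1 << bits) - 1
    return round(u * scale), round(v * scale)

def dequantize_uv(qu, qv, bits=8):
    scale = (1 << bits) - 1
    return qu / scale, qv / scale
```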

If your models still look good with one byte per coordinate, you can squeeze out still more - just divide your model into two halves, and voila: you get either double the precision on each half of the model, or you can get away with 4 bits per coordinate!

What's even better, it takes less than a day to implement, yet the gains in storage space are enormous and loading times become a fraction of what they used to be!

Avenger 3D game (Last update MAR-26)

##### Share on other sites
Sorry, I forgot to mention the last and greatest gain of the compression scheme above: performance.

If you feed this compressed data into a vertex shader and decompress it there, you use just a fraction of the original bandwidth, which frees room for more bump mapping or other great effects!

Avenger 3D game (Last update MAR-26)
