

This topic is now archived and is closed to further replies.


compression schemes

This topic is 5467 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

Say I've got the following structure: struct somestruct { float m[4][3]; };. Does anyone know how I would compress this kind of data if there was, say, somestruct somearray[12000] or something along those lines? Thanks in advance, guys.

Share this post

Link to post
Share on other sites
Compress it for what purpose? To save it to disk? To take up less space in memory while still having access to each matrix? To upload it to a video card?

How appropriate. You fight like a cow.

RLE is the easiest compression to use.

For your example you'd allocate two one-dimensional arrays of chars equal to the size of the data you want to compress. Into the first one you'd memcpy your array of structs.

You now have an array of bytes.

You start at the beginning and see what the value of the byte is. You then progress through the array and count how many of the value appear in a row.

In the second array you then put array[0]=value of byte and array[1]=number of bytes

If you have a run of more than 255 (the most a one-byte count can hold) then you just put the remainder in the next pair of bytes.

Keep track of how many bytes you use in the array holding the compressed data and then, when you're done, copy it to wherever and delete the two arrays.

To uncompress you just read array[0] and then write array[1] bytes of that value. And so on.

If the size of your array of structs can vary then you'd want to use a header to tell how many bytes will be in the uncompressed version.

When you're done uncompressing you simply memcpy from the array of chars to the array of structs.


[ IcarusIndie.com | recycledrussianbrides.com | Got Linux? ]

Will Post For Food

RLE for floating-point data? Beyond runs of zeroed floats, that's likely to produce very little compression. But it might be improved by taking the atom size for the algorithm to be more than a byte... either the size of a float, or the size of twelve floats.

How appropriate. You fight like a cow.

RLE wouldn't work on the floating-point data, would it? Or not very well?

Basically I want to keep the data in memory, so I need it to be as small as possible.

Any other ideas, guys? Thanks again.

One way could be to check the previous data; if it's the same, then reference that array at that point. But the problem remains that the array is already allocated, so that wouldn't help, unless I was to allocate the [4][3] only when required. But what kind of processing cost would that have?

I want to do a replay in a game. At the moment what I'm doing is storing, every frame, the matrix that contains the Up, At, Right and Pos values, then just playing them back, hence the

float m[4][3]

However, this is taking up a lot of memory, so I want to compress it. I had thought of only storing changed values and referencing previously used ones, but that's going to mean lots of re-allocating memory on the fly.

Okeydoke, now we're getting somewhere. First off, matrices are a stupid way to represent the data. There's too much redundancy in them, and it would be a pain in the butt to compress it out in one step.

Ask yourself: What do the matrices represent? If the only matrices multiplied into the destination matrices are transformation and rotation matrices, that's a good thing. Break out each matrix into a vector and a quaternion, and store them that way... already we're down to 7 floats. (Six plus a bool if you want to do cute things with the quaternion... but let's leave that alone for now.) From there, it's up to you whether you want to do lossy compression by converting the quaternions into slerps and squads, and the vectors into linear keyframes or splines, or just store it as is. If you don't use the lossy compression, you could use the RLE method KalvinB mentioned... just use the entire quaternion+vector as a unit, and don't expect too much more from it.
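A rough sketch of that decomposition step, assuming the m[4][3] really is a pure rotation plus translation, with the basis vectors in the first three rows and the position in the last. Only the well-conditioned trace > 0 branch of the standard matrix-to-quaternion conversion is shown, and the exact signs depend on your row/column-major convention:

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };
struct Vec3 { float x, y, z; };

// The position is just the last row.
Vec3 toPos(const float m[4][3]) { return {m[3][0], m[3][1], m[3][2]}; }

// Standard trace-based rotation-matrix-to-quaternion conversion.
// A full implementation also handles the three diagonal-dominant
// cases when the trace is not positive.
Quat toQuat(const float m[4][3]) {
    float trace = m[0][0] + m[1][1] + m[2][2];
    float s = std::sqrt(trace + 1.0f) * 2.0f;
    Quat q;
    q.w = 0.25f * s;
    q.x = (m[1][2] - m[2][1]) / s;
    q.y = (m[2][0] - m[0][2]) / s;
    q.z = (m[0][1] - m[1][0]) / s;
    return q;
}
```

Storing the Vec3 and Quat instead of the matrix drops each record from 48 bytes to 28 before any further compression.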

How appropriate. You fight like a cow.

1) a. Convert the rotations into unit quaternions. These can quantise reasonably accurately down to as little as 4 bytes.

b. So assuming your matrices only have rotations and translations, with the translations taking a full 4-byte float for each component, the equivalent transformation could be stored in the space of 4 floats (translation x, y, z plus the quantised quaternion).

c. You'll find details of compressing rotations (inc. using quaternions) in resources that present compressed and/or tabled normals.
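A naive illustration of (a), using simple per-component quantisation rather than the tighter "smallest three" packing: each component of a unit quaternion lies in [-1, 1], so it can be mapped onto one signed byte, giving 4 bytes per rotation:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Map a unit-quaternion component in [-1, 1] onto a signed byte.
// (A "smallest three" scheme recovers more precision from the same
// 4 bytes, but this shows the basic idea.)
int8_t packComponent(float c) {
    return static_cast<int8_t>(std::lround(c * 127.0f));
}

float unpackComponent(int8_t b) {
    return b / 127.0f;
}
```

Four calls to packComponent turn the quaternion into 4 bytes; after unpacking you'd renormalise the quaternion to absorb the rounding error.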

2) a. If you don't have too serious cache-coherency worries, I'd split the rotation part and the translation part into two separate arrays.

b. If you''re sampling at a high rate, then store each record as a delta (i.e. the difference with the previous record).

c. I'd even take a look at storing each component of the rotation and each of the translation as a separate array (i.e. 7 separate arrays).

3) a. Try to get as much of your data as possible into normalised floats rather than full-range floats.

b. You can drop the exponent from each normalised float - so you save 8 bits for each.

c. Deltas and scaling by bounding volume sizes can get a lot of your data normalised at decent sample rates.

d. The mantissa of a normalised float is a fixed-point integer if done right.

e. This means most of the fixed-point values have a very similar range rather than each being scaled by its own exponent.

f. Each mantissa being scaled by its exponent is what makes floating-point values in a binary stream look "noisy", and is what normally makes it difficult to compress them with traditional bitwise data compression methods. So having the values essentially as integers is GOOD for traditional compression.
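A sketch of dropping the exponent: once a value is normalised into [0, 1), it can be stored as plain fixed point. The 16-bit width here is an assumption, to be picked from your own precision analysis:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Quantise a normalised value in [0, 1) to 16-bit fixed point.
// The float's exponent carries no information once the range is
// known, so only mantissa precision is kept.
uint16_t toFixed16(float v) {
    return static_cast<uint16_t>(v * 65535.0f + 0.5f);
}

float fromFixed16(uint16_t f) {
    return f / 65535.0f;
}
```

Values outside [0, 1) would need to be rescaled first (e.g. by the bounding volume, as suggested above).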

4) a. An advantage of things being in separate arrays ("streams") is values in a similar range are near (and often next to) each other in memory.

b. So you can now use RLE schemes on each of the arrays. If you went for 7 separate arrays, then say an object only translated in the X axis for 20 frames (samples) and had no other movement/rotation, then the other 6 arrays (streams) would have the *same* value in their stream for 20 frames. So for those 6 arrays you could simply store an RLE repeat marker.

c. Having stuff that's very similar near each other in memory is GOOD for traditional "window"-based compression schemes.

5) a. Analyse what precision you want out of your floats based on the physics modelling and maximum frame rate in your engine. You probably don't need all of those 23 bits of precision if you really think about it...

b. Rescale to the units of your minimum amount of movement. i.e. if your objects always move at least a world centimetre every frame, then there''s no point in storing the nanometres or even millimetres.

c. Without that rescaling, the lower-order bits of the mantissa in that case are just useless noise you don't need.

d. That useless noise also has a higher frequency.

e. The noise is often left over from computations performed in the modelling package the original 3D data was created in; the order of artist operations, manually placed vertices and frames etc.

6) After the above, compress each stream with a traditional compressor for pretty lightweight data.

7) a. The more you know about where your input data came from, the more you can do to compress it.

b. Determine how much you can feed back into the original creation routines rather than storing and trying to compress the full fat output. For example your physics may be taking some essentially 1D inputs in a few places (i.e. less than every frame) that you could store along with timestep values to avoid storing the result.

Simon O'Connor
3D Game Programmer &
Microsoft DirectX MVP

Guest Anonymous Poster
There was a tutorial somewhere (here or over at FlipCode?) which cut the size of a float down to 12 bits. I'm no math expert and don't know anything about it, but you probably "just" lose some precision. With game playbacks that shouldn't matter that much.

Depending on which kind of game it is, you could also just store the user input instead of the data. I.e. in the real game the player presses "up", and you store that as an "opcode" in your playback list. When you want to play it back, you just read the "user input" from there. If your game uses mouse input you could just store the relative position changes and put each one into a signed char (for x/y). Then you go on with your game almost as if it were played for real, but you block user input other than cancel etc.
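A sketch of that input-recording idea (all the names here are illustrative, not from any real engine; it only works if your game update is deterministic given the same inputs):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Record what the player did each frame instead of where every
// object ended up, and replay it through the game update.
enum class Op : uint8_t { Up, Down, Left, Right, Fire, MouseDelta };

struct InputEvent {
    Op op;
    int8_t dx, dy;  // relative mouse movement; zero for key opcodes
};

// One frame of replay is just the events recorded that frame: a few
// bytes instead of 48 bytes of matrix per object.
using ReplayFrame = std::vector<InputEvent>;
```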

