Grizwald

Lossy Vertex Compression



OK, if I have a buffer of vertices and I clamp all the values between 0.0 and 1.0 (or something similar), would it be possible to form a 2D array from the data (as close to perfectly square as possible) and then apply a JPEG-esque lossy compression algorithm to it? If so, is there a chance of severe vertex distortion? Or is there just a better way to do it altogether? I could even export all the vertex data to a raw file with dimensions sqrt(buffersize) x sqrt(buffersize) and then compress it with Photoshop or something, but then I don't know if I could decompress it properly in a vertex shader. I know I could create a CompressedVertexStream and feed it to a JPEG decoder, but I'd rather perform all of this on the GPU. Thanks!
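A minimal sketch of the packing step described above, in C++: normalize each vertex position against the mesh bounding box so every component lands in [0, 1], then lay the values out as the smallest square grid that holds them. The row-major, one-float-per-cell layout and all names here are illustrative assumptions, not an established format.

```cpp
// Sketch only: clamp-and-scale positions into [0, 1], then arrange the
// values as a roughly square 2D grid suitable for feeding to an image codec.
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<float> PackVerticesAsGrid(const std::vector<Vec3>& verts,
                                      Vec3 bboxMin, Vec3 bboxMax,
                                      std::size_t& outWidth)
{
    std::vector<float> values;
    values.reserve(verts.size() * 3);
    for (const Vec3& v : verts) {
        // Scale each component into [0, 1] (assumes a non-degenerate box).
        values.push_back((v.x - bboxMin.x) / (bboxMax.x - bboxMin.x));
        values.push_back((v.y - bboxMin.y) / (bboxMax.y - bboxMin.y));
        values.push_back((v.z - bboxMin.z) / (bboxMax.z - bboxMin.z));
    }

    // Smallest square grid that holds every value; zero-pad the tail.
    outWidth = static_cast<std::size_t>(
        std::ceil(std::sqrt(static_cast<double>(values.size()))));
    values.resize(outWidth * outWidth, 0.0f);
    return values;
}
```

The grid could then be handed to an image codec, but as the replies below note, anything JPEG-like will perturb the stored values, and those perturbations become position errors on decompression.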

I doubt you can do it with JPEG or anything similar. The theory behind JPEG compression is to adjust pixels so that the image still looks enough like the original that you don't notice. If you try to apply it to 3-dimensional data, you'll probably get some weird distortions when you decompress. It's an interesting idea, but you might have to tweak the algorithm to get it to work properly.

Edit: If you get it working, you could probably do something with a progressive scan and get different mip levels/detail levels easily.

[edited by - DukeAtreides076 on June 7, 2004 12:16:19 PM]

http://research.microsoft.com/~hhoppe/

Look under "Geometry images". It's not exactly the same thing, and your idea as stated wouldn't really work (the edges wouldn't match up due to compression artifacts), but it's very similar.
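For context, a geometry image stores remeshed vertex positions as the pixels of a regular 2D grid, so turning a texel back into a position is just a per-component lerp against the mesh bounding box. A rough C++ illustration of that decode step (in practice it would be a few lines of vertex shader; the types and names here are assumptions):

```cpp
// Illustration only: a geometry-image texel holds a position normalized to
// the mesh bounding box, so reconstruction is a per-component lerp.
struct Vec3 { float x, y, z; };

Vec3 DecodeGeometryImageTexel(Vec3 texelRGB, Vec3 bboxMin, Vec3 bboxMax)
{
    // position = bboxMin + texel * (bboxMax - bboxMin), per component
    return Vec3{ bboxMin.x + texelRGB.x * (bboxMax.x - bboxMin.x),
                 bboxMin.y + texelRGB.y * (bboxMax.y - bboxMin.y),
                 bboxMin.z + texelRGB.z * (bboxMax.z - bboxMin.z) };
}
```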

[edited by - GameCat on June 7, 2004 1:23:08 PM]

Yeah, I just realized that about JPEG. Plus I don't think JPEG decoding could be done in a shader quickly enough.

How good would the compression be on a quantized vertex buffer using a Huffman-like scheme? Is there a way to decode that in the shader fast enough?
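For reference, here is a hedged sketch of the quantization step that would normally precede any entropy coder: map each component to a 16-bit integer within the mesh bounding box. The dequantize function shows the arithmetic a vertex shader would have to mirror; the 16-bit width and all names are assumptions.

```cpp
// Sketch only: 16-bit quantization of positions against the mesh bounding
// box, plus the matching dequantization the shader would reproduce.
#include <cstdint>

struct Vec3 { float x, y, z; };
struct QuantizedVertex { std::uint16_t x, y, z; };

QuantizedVertex Quantize(Vec3 v, Vec3 bboxMin, Vec3 bboxMax)
{
    auto q = [](float value, float lo, float hi) {
        float t = (value - lo) / (hi - lo);                     // [0, 1]
        return static_cast<std::uint16_t>(t * 65535.0f + 0.5f); // round
    };
    return { q(v.x, bboxMin.x, bboxMax.x),
             q(v.y, bboxMin.y, bboxMax.y),
             q(v.z, bboxMin.z, bboxMax.z) };
}

Vec3 Dequantize(QuantizedVertex q, Vec3 bboxMin, Vec3 bboxMax)
{
    auto d = [](std::uint16_t value, float lo, float hi) {
        return lo + (value / 65535.0f) * (hi - lo);
    };
    return { d(q.x, bboxMin.x, bboxMax.x),
             d(q.y, bboxMin.y, bboxMax.y),
             d(q.z, bboxMin.z, bboxMax.z) };
}
```

Quantization alone brings a position down to 6 bytes; whether a bit-serial Huffman decode on top of that is practical in a vertex shader is exactly the question being asked here.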

How many vertices are you planning on having per 'vertex image'?
Instead of JPEG, I have another idea. Many vertices may use the same numbers in their coordinates. So how about this (see the sketch after this post):

identify all the floating-point numbers used for x, y, and z, and put them in an array

replace each (x, y, z) floating-point triple with 3 indices into the float array

To save more space, very similar numbers (within a threshold) can be fused into one number (e.g. .06 and .07 could both be replaced with .065). Numbers can also be encoded in 16-bit floating point, or in fixed point.

When it's all over, the resulting structure can be losslessly compressed using a Huffman coder (or whatever else you like).

I'm not sure if this can be done with shaders, though. Does it have to happen on the GPU? Are you planning on sending many of these compressed vertex buffers to the GPU at runtime, or is it just a file format that you load once?
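A hedged C++ sketch of the palette scheme described above: collect the distinct floats used by the coordinates, fusing values that fall within a tolerance, and store each vertex as three indices into that table. The tolerance, the 16-bit index type (which assumes fewer than 65,536 distinct values), and the linear search are illustrative choices only.

```cpp
// Sketch only: build a float palette with tolerance-based fusing, then
// replace every (x, y, z) triple with three palette indices. The palette
// and index stream could afterwards be Huffman-coded losslessly.
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct IndexedVertex { std::uint16_t ix, iy, iz; };

struct PalettizedMesh {
    std::vector<float> palette;           // unique (fused) float values
    std::vector<IndexedVertex> vertices;  // three palette indices per vertex
};

static std::uint16_t FindOrAdd(std::vector<float>& palette, float value, float tolerance)
{
    for (std::size_t i = 0; i < palette.size(); ++i)
        if (std::fabs(palette[i] - value) <= tolerance)
            return static_cast<std::uint16_t>(i);   // reuse a close-enough entry
    palette.push_back(value);
    return static_cast<std::uint16_t>(palette.size() - 1);
}

PalettizedMesh Palettize(const std::vector<Vec3>& verts, float tolerance)
{
    PalettizedMesh out;
    out.vertices.reserve(verts.size());
    for (const Vec3& v : verts) {
        out.vertices.push_back({ FindOrAdd(out.palette, v.x, tolerance),
                                 FindOrAdd(out.palette, v.y, tolerance),
                                 FindOrAdd(out.palette, v.z, tolerance) });
    }
    return out;
}
```

On the GPU side the palette would have to live somewhere the vertex shader can index (constant registers, for example), which may well be the limiting factor for hardware of this era.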

