Compression Technologies

Started by
64 comments, last by Prozak 21 years, 10 months ago
...also I noticed my solution is 36 bits with the header in it (33+3), so we have a tie.

[Hugo Ferreira][Positronic Dreams][Stick Soldiers]
"Redundant book title: Windows For Dummies."
"Camouflage condoms: so they won't see you coming."

Firstly, JPEG compression is not used to compress strings of text, mainly because it's lossy, which isn't a good thing when you are dealing with text. Well, I suppose it could be, but that's just silly and you'd end up with loads of typos, so let's not go there.

Secondly, and I don't mean to be cynical in any way, but I seriously doubt that you'd have come up with a better method for lossless compression. Why? Well, for a start, you seem to lack an understanding of some of the fundamental concepts of compression, not to mention that better and smarter people than you and me have worked on compression algorithms since before computers were invented.

Of course this doesn't mean that you haven't done it, just that it's doubtful; feel free to prove me wrong.

Also, I didn't see relative encoding mentioned. Although it probably isn't very useful for text files, it is a fairly widely used form of lossless encoding, usually applied to data transmitted over time: it simply involves transmitting only the changes between 'frames'. Video transmission is an immediately obvious application.

On another note, RLE can be performed at the byte level as well as at the bit level. So you could compress the characters using Huffman, then compress the resulting bitstring using RLE, then compress the resulting bitstream using Lempel-Ziv, and so on.
On a large file this may give better results. Compressing BCBDAAAA or whatever isn't a very good test of a compression algorithm; you'll need some chunky files to really see if it's any good.
quote: Original post by pentium3id
...and if you stuffed that sequence of bits into a file, that would be all you would need to later decompress it? Are you sure? Don't you have to define something before or after?

This isn't standard JPEG; remember, I said this was ad hoc. I'm just borrowing JPEG's method for encoding the DC values of each block. But yes, there's no code book before or after with this solution.

The encoding algorithm is fully described by:
- encode the first byte directly.
- encode the difference between each subsequent byte and its predecessor using JPEG's truncated Huffman code.

The decoder is fully described by:
- decode the first 8 bits in the stream as the first byte directly.
- the following bitstream is the JPEG Huffman code for differences; simply add each decoded difference to the prior decoded byte to form each new byte.

Since each half knows the JPEG Huffman codes, there's no need to transmit a dictionary. Huffman codes are instantaneous, so there's no need for header or footer bits.
Déjà vu? (Or possibly reja-du, depending on how cynical you're feeling.)

Search the forums for previous threads on this, and until you have a working system I'd hold off on looking into patenting, etc.
YAICAGCAITBHNWD.

Yet another "I've created a great compression algorithm in theory but have no working demo".

These show up all over the net, and invariably they either don't work or work less well than existing solutions when actually tested. I'm not trying to flame here, just trying to put things in perspective. If you aren't aware of the current state of the art in compression and have no grounding in information theory, entropy, etc., the chances of your solution actually working and being useful are slim to none. And why no working beta? Is it really that hard to write the code? If so, there are probably tons of things you are overlooking.

Sorry, sad truth.
You can't even understand the terminology of compression algorithms, yet you've been working on this on and off for 6 years? It seems highly unlikely that you just pulled an algorithm that works well out of your seat. I suggest doing actual research into compression.

Also, you say you are doing this for compression of images only, yet you don't even know what PNG uses (I would have hoped you would have looked at all the popular formats before assuming you could do better than their research). PNG is currently the best general lossless compressed image format available.

If you want uber compression, look into generating your textures using fractal methods. This requires computing the actual image using mathematical formulas and transformations; this is how 64kb demos do their magic. Granted, you are limited in what you can produce (since, after all, the texture is generated procedurally), but it should suffice for most generic textures (i.e. bricks, grass, wood, marble, concrete, etc.). Other images should be stored using the PNG format.

pentium3id, you MUST include the entire compressed file (header and data), otherwise the size is useless. I also highly doubt you have anything special.

Try compressing a real-world image instead of fake test cases that are nowhere near as complex or representative of real image data. One person who knows nothing about compression is going to come up with an algorithm better than a group of people who know compression quite well? I highly doubt it.

To test compression, just download software that supports PNG, currently the best truecolor lossless compressed image format.
This guy is actually trying to do something, and wants a little feedback. He got feedback alright, but not the kind all of you could have given...

Anyway, I suggest you, Pentium3id, write your demo and try it out on some files, THEN tell us about the results. Don't fool around with useless small strings like abracadabra; people can't give much help there. And the rest waits for it...
quote:Original post by coelurus
This guy is actually trying to do something, and wants a little feedback. He got feedback alright, but not the kind all of you could have given...

Note that the feedback didn't get negative until this comment:
"We'll, i don't want to sound arrogant or anything, but those compression schemes seem to me quite weak."

It'd be like me strolling into a physics newsgroup (because, you know, I took it for 3 semesters in college) and deciding to say that Einstein's theories were weak and I could do better. I would deserve every flame I got. And then some. And I'd fully expect some people to just laugh at me. Look around, and that's basically what's happening here.

I only chuckled at the original poster in my first response; since then I've merely shown him metrics by which he can evaluate his compression and compared his only example with one of my own that matches his efficiency. Believe me, I've been kinder than I could have been.

[edited by - Stoffel on June 6, 2002 4:26:13 PM]
There isn't anything more volatile than stating "I can do something incredible", then going off and offering no proof. I'm a realist, the first in the room to say "you're a fucking asshole! that will never work!"

If I believe I have something here, it's because it has roots to grow.

About the PNG comment... Read carefully: I said I re-started work on ANT (my compressor's name) after discovering I needed some sort of compression for my textures... I never said it was image-compression specific; in fact, I said various times that this is generalistic (any file) lossless compression, with a 2nd version in the oven to compress 24-bit true color BMPs (one of the few image file formats I know internally).

I am what some would call an "idiot-genius", because I shield myself from outside information, yet I develop some strange stuff in my spare time. That explains why I don't fully understand other compression algorithms, although I know I can match them occasionally.

I have the core compressor working now, in VC++ 6. Soon (1 month), I'll be able to show some results.

Bottom line, I don't blame anyone for being cynical about me; how can anyone believe miraculous claims, especially in compression, where teams of 20 fail to produce something new?

That's my hobby: discover something unique and work on it.
I like math. I dare you guys to a friendly game:
a function that returns the square root of a given number,
using only one internal variable, like so:

double my_sqrt(double n)
{
    double t;
    /* algorithm here */
    return t;
}

"n" cannot be used inside the function; its value cannot change.
I came up with this a few months back, working on prime numbers and the Goldbach conjecture.

This is goodbye for me. I'll retire in grace now, and come back in a month (or sooner, if answers to the puzzle appear, or some other post seems good enough to be replied to...).

Cya all soon, and thanks,

[Hugo Ferreira][Positronic Dreams][Stick Soldiers]
"Redundant book title: Windows For Dummies."
"Camouflage condoms: so they won't see you coming."

You're kidding, right?


    double my_sqrt(double n)
    {
        const double tolerance = 0.0001;
        double t = n;
        do { t = (t + n / t) / 2; } while (fabs(t * t - n) > tolerance);
        return t;
    }


Gosh, that was beyond easy. I could probably do better, but why waste my time here? I should be using my great math skills to come up with compression algorithms.

--TheMuuj

[edited by - themuuj on June 7, 2002 4:41:33 PM]

This topic is closed to new replies.
