Holographic principles for file compression?

Started by
3 comments, last by TheAdmiral 17 years, 2 months ago
I posted this before the site got whacked and people didn't quite understand, so perhaps this go-round will be better. Holograms store large amounts of image data in a small physical space. This is achieved by splitting a laser and having the two beams intersect after one passes through a binary representation of an image (on an SLM) at a given angle, creating interference patterns that are then saved to a crystal. To read the hologram, you shine a laser through the crystal at a given angle (the exact same angle that was used to make the interference pattern).

What I was pondering is whether we can replicate this process using only digital means, or whether it depends on the physical tools (the split laser and the crystal). Instead of storing the interference patterns in a crystal, we'd just use an arbitrary data matrix, and we could use vectors instead of lasers. If we could simulate the mathematical process of the two lasers and how they interact with the SLM, then we could in effect create a 100% digital hologram. I just don't know how to achieve this, but I think it could prove very useful. And before people say anything about how this can't be a good replacement for RAM: it isn't supposed to be a physical thing; it would just be a new compressed file type.
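For anyone curious what a purely digital version of the record/readout step might look like, here is a rough numerical sketch of off-axis Fourier holography using numpy. The image array stands in for the SLM pattern and a tilted plane wave stands in for the angled reference beam; this only illustrates the physics, not a compression scheme:

```python
import numpy as np

def record_hologram(image, tilt=0.25):
    """Store only the intensity of (object beam + tilted reference beam)."""
    n = image.shape[0]
    obj = np.fft.fftshift(np.fft.fft2(image))   # object beam: optical FT of the SLM pattern
    _, x = np.mgrid[0:n, 0:n]
    ref = np.exp(2j * np.pi * tilt * x)         # reference beam arriving at an angle
    return np.abs(obj + ref) ** 2               # a crystal/sensor records intensity, not phase

def reconstruct(hologram, tilt=0.25):
    """Re-illuminate with the same reference beam and transform back."""
    n = hologram.shape[0]
    _, x = np.mgrid[0:n, 0:n]
    ref = np.exp(2j * np.pi * tilt * x)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(hologram * ref)))

img = np.zeros((256, 256))
img[96:160, 96:160] = 1.0       # a simple square standing in for the SLM image
holo = record_hologram(img)     # the all-digital "interference pattern"
out = reconstruct(holo)         # contains a (shifted) copy of img among the diffraction terms
print(img.nbytes, holo.nbytes)  # the hologram array is no smaller than the original
```

Note that the recorded "hologram" array is at least as large as the original image, which already hints at the problem the replies below point out.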
You can certainly simulate any physical system you want, to an arbitrary degree of precision, given sufficient computing power. Nevertheless, this idea is a non-starter. Consider which number is greater: the number of bits in the file you would like to compress, or the number of atoms in the hologram you would like to simulate.
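To put rough numbers on that comparison (order-of-magnitude assumptions only: about 10^22 atoms per cubic centimetre of a typical solid, each needing even a crude classical state to simulate):

```python
file_bits = 8 * 10**9           # bits in a 1 GB file
crystal_atoms = 10**22          # atoms in ~1 cm^3 of a typical crystal (assumption)
sim_bits = crystal_atoms * 64   # state needed just to describe each atom crudely

print(f"file to compress : {file_bits:.1e} bits")
print(f"simulation state : {sim_bits:.1e} bits")
print(f"ratio            : {sim_bits / file_bits:.0e}x")
```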
Like I said, I'm not sure if this can work or if it is limited by the physical tools used. I'm not sure what mumbo jumbo goes on to actually create the interference patterns, nor how they get stored in the crystal.

Anybody know the specifics on those things?
There are three ways to look at this:

1. As a magic data compression algorithm like a million others before it, doomed to failure because data simply cannot be compressed beyond a certain point, period, once you take into account both the size of the compressed data and the size of the knowledge / device / code used to decompress it (see the counting sketch after this list).

2. As an encoding tool whose strength lies in the fact that the knowledge used to do the encoding / decoding is allowed to be huge, because it is not duplicated for each data unit, or because the amount of data using it is huge ... or perhaps one whose computational cost is very great but is dwarfed by the potential data gains.

3. As an encoding tool whose strength lies in the peculiar ability to compress certain types of data particularly well, but not all types ... much as RLE-style encoding (gif / png) is great for simple artificial graphics, jpg for photorealistic graphics, mp3 for waveform data, and so on.
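Here is the counting sketch referred to in point 1, showing why no lossless scheme can shrink every possible input:

```python
# Pigeonhole counting: there are 2**n distinct n-bit files but only 2**n - 1
# files that are strictly shorter, so some n-bit input must fail to shrink.
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))   # 2**0 + ... + 2**(n-1) = 2**n - 1
print(inputs, shorter_outputs, inputs > shorter_outputs)
```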

That said, I doubt it would help at all - although it never hurts to have more reversible mathematical algorithms on hand when processing data.

My idea of the future of compression: a very computationally intense process whereby the data is fed to multiple processors specializing in different known encoding schemes, whose main criteria are that (a) they work well for some data set, (b) they can be performed on a flowing stream, and (c) they require few pieces of control data. The data is then resubmitted, perhaps a limited number of times (say three), probably using a greedy algorithm so that at each pass the optimal version is chosen and submitted to the next pass. The header would then just carry the algorithm id and control data ... followed by the data, of course.
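As a minimal sketch of that multi-codec, greedy-per-pass idea (assuming the standard-library codecs zlib, bz2 and lzma stand in for the specialised processors, and a simple byte-per-pass header for the algorithm ids):

```python
import bz2, lzma, zlib

CODECS = {
    0: (lambda d: d, lambda d: d),        # "store" fallback so a pass never makes things worse
    1: (zlib.compress, zlib.decompress),
    2: (bz2.compress, bz2.decompress),
    3: (lzma.compress, lzma.decompress),
}

def compress(data: bytes, passes: int = 3) -> bytes:
    out, header = data, []
    for _ in range(passes):
        # Greedy step: try every codec and keep whichever output is smallest.
        cid, best = min(((cid, enc(out)) for cid, (enc, _) in CODECS.items()),
                        key=lambda t: len(t[1]))
        header.append(cid)
        out = best
    return bytes(header) + out            # algorithm ids first, then the payload

def decompress(blob: bytes, passes: int = 3) -> bytes:
    header, out = blob[:passes], blob[passes:]
    for cid in reversed(header):          # undo the passes in reverse order
        out = CODECS[cid][1](out)
    return out

sample = b"abcabcabc" * 5000
packed = compress(sample)
assert decompress(packed) == sample
print(len(sample), "->", len(packed))
```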

The beauty of this is that it could be implemented as a universal compression system, appropriate to all known data forms, by virtue of the fact that each necessary compression scheme would be included under its umbrella of choices (it would, of course, have to be versioned like anything else to support new ideas / algorithms ... much like USB 1.1, XML 2.0, or DirectX 9.0c). The "software" algorithm could even be implemented in hardware, just as graphics are on a GPU.

Maybe it wouldn't work anytime soon ... but someday.
It's good that you're using your imagination, but I'm afraid the task is impossible. The technique you describe depends entirely on the technology: in particular, on the quantum nature of the interaction between the laser and the crystal, and on its ability to handle huge amounts of information.

Digital processes exhibit no such quantum phenomena, so simulating one would require a huge (and exponentially growing) number of bits; you'd be losing out by a long shot. If you're not satisfied with this, take a look at Wikipedia's articles on information theory, information entropy and lossless compression. In particular, understand that what you propose would violate the entropy limits on lossless compression.

Admiral
Ring3 Circus - Diary of a programmer, journal of a hacker.
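For a concrete feel for the entropy limit TheAdmiral mentions, here is a small sketch (assuming a memoryless byte model) that compares the empirical entropy of some data with what zlib actually achieves:

```python
import math, os, zlib
from collections import Counter

data = os.urandom(100_000)                  # high-entropy data: ~8 bits per byte
counts = Counter(data)
probs = [c / len(data) for c in counts.values()]
entropy = -sum(p * math.log2(p) for p in probs)   # empirical bits per byte

print(f"empirical entropy : {entropy:.3f} bits/byte")
print(f"entropy floor     : {entropy * len(data) / 8:.0f} bytes")
print(f"zlib, level 9     : {len(zlib.compress(data, 9))} bytes")
```

On random data no encoder, holographic or otherwise, gets below that floor; the gains in practice come from exploiting structure the data actually has.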

This topic is closed to new replies.
