Separate Functions to Compress From Memory and From a File



In my compression module, I reworked the whole system to accept a stream as input and a stream as output. However, I also want to take advantage of the raw speed and simplicity of compressing from memory to memory. I can make a stream from an array of bytes and easily pass one array for input and one for output, but accessing it the same way I access a file would be slow. For instance, it is fast and easy to read through an array with a simple for loop and an incrementing index, as opposed to getting data from a stream a byte (or a few) at a time through function calls, while checking that we haven't prematurely hit the end of the stream or any stream error codes.

My point is, all of my algorithms have been rewritten to take stream sources and destinations, and I could make a stream from a buffer in memory. However, that would be much slower than accessing the memory directly in the special case of memory-to-memory compression/decompression. My problem is that to implement this specialization, I would need to write two versions of each function: one that takes streams, and one that takes a pointer to a block of memory. As my compression module would easily double in size to accommodate this, does anyone have a suggestion as to how to implement this duality?

You could implement compression from a memory buffer only, and have the file variant load from the file into memory and pass it to the compression function. Depending on whether you can compress partial buffers, you either load the whole file and compress it at once, or load and compress it in parts if you don't want the entire file in memory at once.
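That arrangement might look something like this sketch. All names here are assumed for illustration, and the "compressor" is a stand-in that merely copies its input:

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <sstream>
#include <vector>

// Hypothetical memory-to-memory entry point. A real algorithm would go
// here; this placeholder just copies the input bytes.
std::vector<unsigned char> compress_buffer(const unsigned char *data, std::size_t size)
{
    return std::vector<unsigned char>(data, data + size);
}

// File/stream variant: slurp everything into memory, then reuse the
// buffer version, as suggested above.
std::vector<unsigned char> compress_stream(std::istream &in)
{
    std::vector<unsigned char> data((std::istreambuf_iterator<char>(in)),
                                    std::istreambuf_iterator<char>());
    return data.empty() ? std::vector<unsigned char>()
                        : compress_buffer(&data[0], data.size());
}
```

The drawback, discussed further down the thread, is that the whole file must fit in memory at once.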


You could implement compression from a memory buffer only, and have the file variant load from the file into memory and pass it to the compression function. Depending on whether you can compress partial buffers, you either load the whole file and compress it at once, or load and compress it in parts if you don't want the entire file in memory at once.

Yup; this, or else consider memory-mapping the file.
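For what it's worth, a minimal POSIX sketch of the memory-mapping idea (the function name is assumed, and the compression step itself is elided):

```cpp
#include <cassert>
#include <cstddef>
#include <fstream>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map the file and hand the compressor a plain pointer, so the fast
// memory-to-memory path also serves files without an explicit read loop.
bool compress_mapped_file(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return false;

    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) {
        close(fd);
        return false;            // nothing to map (mmap rejects a zero length)
    }

    void *base = mmap(0, static_cast<std::size_t>(st.st_size),
                      PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                   // the mapping stays valid after close()
    if (base == MAP_FAILED)
        return false;

    const unsigned char *bytes = static_cast<const unsigned char *>(base);
    (void)bytes;                 // ...run the in-memory compressor over
                                 //    bytes[0 .. st.st_size) here...

    munmap(base, static_cast<std::size_t>(st.st_size));
    return true;
}
```

The OS pages the file in on demand, so memory use stays bounded even for large files, although address space is still a limit on 32-bit systems.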


You could implement compression from a memory buffer only, and have the file variant load from the file into memory and pass it to the compression function.

This is what I used to have, but it had issues: it took time to allocate a large buffer, read into it, then write it out and free it, on top of the fact that it restricts the largest file I can compress to the amount of free memory I have and to my address space. So I lifted the requirement that the data reside in memory, allowing me to do things like compress directly from one file to another with low memory overhead.

Depending on whether you can compress partial buffers, you either load the whole file and compress it at once, or load and compress it in parts if you don't want the entire file in memory at once.

I currently don't have an algorithm that can arbitrarily divide the input and compress the parts separately while still producing the contiguous block that the decompressor expects.

These are all good points, but for the general compression functions I made the trade-off of low overhead (streams) versus speed (arrays). For me, it is worth being a little slower rather than risking a failed allocation and never being able to compress at all.

However, when I don't have to make this trade-off (buffers allocated elsewhere), compressing from memory to memory is desirable, such as when I transmit positions of entities and Huffman-compress them into a logical packet in memory to send over the network.

For now, I made memory-to-memory functions that simply make streams from the two buffers and call the stream-to-stream functions. As I hone and develop the algorithms, if any are small enough to be duplicated for speed, I add a special case for those that support memory to memory directly. This way, if the module supports memory to memory for that algorithm, it will use it, and if it doesn't, it will simply fall back to the slower stream version.

I feel this approach makes temporary streams easier to work with by keeping them behind the scenes: if such an optimization comes into existence, the same calling code will run with the faster algorithm; otherwise it will take the compatibility route and use the existing stream algorithms.
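A rough sketch of that dispatch, with all names assumed and a trivial copy routine standing in for a real algorithm:

```cpp
#include <cassert>
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// An algorithm entry: the stream routine is mandatory, the direct
// memory-to-memory routine is optional (null when unsupported).
struct algorithm
{
    void (*stream_compress)(std::istream &, std::ostream &);
    void (*memory_compress)(const unsigned char *, std::size_t,
                            std::vector<unsigned char> &);
};

// A trivial stand-in "algorithm" that just copies input to output.
void copy_stream(std::istream &in, std::ostream &out)
{
    out << in.rdbuf();
}

// Memory-to-memory entry point: take the fast path when the algorithm
// provides one; otherwise wrap the buffers in string streams and reuse
// the stream-to-stream code.
void compress_memory(const algorithm &alg,
                     const unsigned char *in, std::size_t size,
                     std::vector<unsigned char> &out)
{
    if (alg.memory_compress) {
        alg.memory_compress(in, size, out);
        return;
    }
    std::istringstream src(std::string(reinterpret_cast<const char *>(in), size));
    std::ostringstream dst;
    alg.stream_compress(src, dst);
    const std::string &bytes = dst.str();
    out.assign(bytes.begin(), bytes.end());
}
```

Callers always use compress_memory; whether the fast path exists for a given algorithm is invisible to them.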

You can make the guts of your compression code a template function that is parametrized on the type of input, and pass it functor(s), also parametrized on the input type, that do whatever low-level access you need with that type: file reads and writes in the case of a file stream, and memory accesses in the case of a memory buffer.
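A minimal sketch of that suggestion, with names assumed and a plain copy loop standing in for the compression core. With the memory policy, the accesses compile down to array indexing; a file policy would do buffered reads and writes instead:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Reader policy for a memory buffer: get() is plain array indexing,
// which the compiler can inline, avoiding virtual stream calls.
struct memory_reader
{
    const unsigned char *data;
    std::size_t size;
    std::size_t pos;

    int get() { return pos < size ? data[pos++] : -1; }  // -1 signals end of input
};

// Writer policy backed by a growable buffer.
struct memory_writer
{
    std::vector<unsigned char> bytes;

    void put(unsigned char b) { bytes.push_back(b); }
};

// Stand-in for the compression core; a real implementation would do its
// modelling and coding here, written once against the two policies.
template <typename Reader, typename Writer>
void process(Reader &in, Writer &out)
{
    for (int c; (c = in.get()) != -1; )
        out.put(static_cast<unsigned char>(c));
}
```

The same process template would be instantiated with file-backed policies for the stream case, so the core logic is written only once.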


You can make the guts of your compression code a template function that is parametrized on the type of input, and pass it functor(s), also parametrized on the input type, that do whatever low-level access you need with that type: file reads and writes in the case of a file stream, and memory accesses in the case of a memory buffer.

I wish I had templates sometimes. Also, that sounds more or less like what I'm doing with the streams, just with the stream functionality moved directly into the compression module.

What is wrong with the suggestion to use memory mapping? It sounds like the perfect solution to me...


In my compression module, I reworked the whole system to accept a stream as input and a stream as output. However, I also want to take advantage of the raw speed and simplicity of compressing from memory to memory. I can make a stream from an array of bytes and easily pass one array for input and one for output, but accessing it the same way I access a file would be slow.

Turn the input interface on its head. Rather than having an abstraction for buffer filling, have an abstraction for returning pointers to successive ranges of bytes. In other words, not this (which I'm guessing is somewhat similar to what you have now):

class byte_source : uncopyable
{
public:
    virtual ~byte_source() { }

    virtual std::size_t read_bytes(unsigned char *outbuf, std::size_t outbuf_size) = 0;
};

struct byte_range
{
    byte_range() : begin(0), end(0) { }

    byte_range(const unsigned char *begin, const unsigned char *end)
        : begin(begin), end(end) { }

    bool empty() const { return begin == end; }

    const unsigned char * const begin;
    const unsigned char * const end;
};

class byte_source : uncopyable
{
public:
    virtual ~byte_source() { }

    virtual byte_range next_range() = 0;
};

This second byte_source interface can be implemented trivially to return the range for a memory buffer. It can also be implemented in terms of a memory-mapped file, even when the OS imposes a fixed-size window (I think Windows restricts you to 4 GB at a time).

You can also implement a source that does buffered file reads, returning the range corresponding to its internal buffer each time after refilling it.
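For illustration, possible implementations of those two sources might look like this. The byte_range/byte_source interface is repeated so the sketch stands alone, with the uncopyable base omitted for brevity:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>

struct byte_range
{
    byte_range() : begin(0), end(0) { }

    byte_range(const unsigned char *b, const unsigned char *e)
        : begin(b), end(e) { }

    bool empty() const { return begin == end; }

    const unsigned char * const begin;
    const unsigned char * const end;
};

class byte_source
{
public:
    virtual ~byte_source() { }

    virtual byte_range next_range() = 0;
};

// Memory buffer: the whole buffer is handed out as a single range on the
// first call; every later call returns an empty range, signalling the end.
class memory_source : public byte_source
{
public:
    memory_source(const unsigned char *begin, const unsigned char *end)
        : begin_(begin), end_(end) { }

    virtual byte_range next_range()
    {
        byte_range r(begin_, end_);
        begin_ = end_;
        return r;
    }

private:
    const unsigned char *begin_;
    const unsigned char *end_;
};

// Buffered file reads: each call refills the internal buffer and returns
// the range covering the bytes actually read; empty means end of file.
class file_source : public byte_source
{
public:
    explicit file_source(std::FILE *f) : file_(f) { }

    virtual byte_range next_range()
    {
        std::size_t n = std::fread(buffer_, 1, sizeof buffer_, file_);
        return byte_range(buffer_, buffer_ + n);
    }

private:
    std::FILE *file_;
    unsigned char buffer_[4096];
};
```

The compression loop only ever walks pointer ranges, so the memory case pays no per-byte function-call overhead.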


What is wrong with the suggestion to use memory mapping? It sounds like the perfect solution to me...

These are all good points, but for the general compression functions I made the trade-off of low overhead (streams) versus speed (arrays). For me, it is worth being a little slower rather than risking a failed allocation and never being able to compress at all.

This is what I used to have, but it had issues: it took time to allocate a large buffer, read into it, then write it out and free it, on top of the fact that it restricts the largest file I can compress to the amount of free memory I have and to my address space. So I lifted the requirement that the data reside in memory, allowing me to do things like compress directly from one file to another with low memory overhead.

For OSes that do support memory-mapping a file without actually having it reside in memory, I'm still limited by address space. If I can't map even 2 GB of contiguous memory on a 32-bit machine, then there is no way I can handle the maximum file size if I were to compress an entire file at once.

For OSes that do not support such mapping, I'm not only limited by the address space but also by the amount of memory available. If I don't need to limit my resources, then I see no need to.

Turn the input interface on its head. Rather than having an abstraction for buffer filling,

I don't understand. I have an abstraction for buffer filling? I have a function that compresses input as it reads from a stream; if I already have the buffer, then I'm basically memory-mapping. If I'm reading into a buffer through an abstract interface, how is that different from using a buffered stream? If I use an abstract interface that could be backed by a memory buffer or a file, and I use functions to read from it, how is that different from the streams I already have?
