
Separate Functions to Compress From Memory and From a File



#1 Ectara   Crossbones+   -  Reputation: 2763


Posted 27 November 2012 - 10:18 AM

In my compression module, I reworked the whole system to accept a stream as input and a stream as output; however, I also want to take advantage of the raw speed and simplicity of compressing from memory to memory. I can make a stream from an array of bytes and easily pass one array for input and one for output, but accessing it the same way I access a file would be slow. For instance, it is fast and easy to read through an array with a simple for loop and an incrementing index, as opposed to getting data from a stream a byte (or a few bytes) at a time through function calls while checking that we haven't prematurely hit the end of the stream or any stream error codes.

My point is, all of my algorithms have been rewritten to take stream sources and destinations, and I could make a stream from a buffer in memory. However, this would be much slower than accessing the memory directly in the special case of memory to memory compression/decompression. My problem is that to implement this specialization, I would need to write two versions of each function: one that takes streams and one that takes a pointer to a block of memory. As my compression module would easily double in size to accommodate this specialization, does anyone have a suggestion as to how to implement this duality?
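A minimal sketch of the overhead described above, assuming a hypothetical byte-oriented stream interface rather than the actual API; a trivial checksum stands in for real compression work so the comparison stays short:

#include <cstddef>

// Hypothetical stream interface standing in for the poster's stream type.
struct byte_stream
{
    virtual ~byte_stream() { }
    virtual int get_byte() = 0;   // next byte, or -1 at end of stream
};

// Stream path: one virtual call plus an end-of-stream check per byte.
unsigned long checksum_stream(byte_stream &in)
{
    unsigned long sum = 0;
    int c;
    while ((c = in.get_byte()) != -1)
        sum += static_cast<unsigned char>(c);
    return sum;
}

// Memory path: a plain loop with an incrementing index, no calls, no checks.
unsigned long checksum_memory(const unsigned char *buf, std::size_t len)
{
    unsigned long sum = 0;
    for (std::size_t i = 0; i < len; ++i)
        sum += buf[i];
    return sum;
}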


#2 Brother Bob   Moderators   -  Reputation: 7774


Posted 27 November 2012 - 10:30 AM

You could implement the compression from memory buffer only, and have the file-variant load from file into memory and pass it to the compression function. Depending on whether you can compress partial buffers, you either load the whole file and compress it at once, or load and compress it in parts if you don't want to hold the entire file in memory.
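A minimal sketch of this arrangement, with hypothetical names; compress_buffer here merely copies its input so the sketch is self-contained, where a real algorithm would do the actual work:

#include <cstddef>
#include <cstdio>
#include <vector>

// Stand-in for a real memory-to-memory compressor; it only copies the input
// so the sketch is self-contained.
std::vector<unsigned char> compress_buffer(const unsigned char *data, std::size_t size)
{
    std::vector<unsigned char> out;
    if (data && size)
        out.assign(data, data + size);
    return out;
}

// File variant layered on top: read the whole file into memory, delegate to
// the buffer version, then write the result out. (Compressing in parts would
// only work for algorithms that can handle partial buffers.)
bool compress_file(const char *in_path, const char *out_path)
{
    std::FILE *in = std::fopen(in_path, "rb");
    if (!in)
        return false;

    std::vector<unsigned char> data;
    unsigned char chunk[4096];
    std::size_t got;
    while ((got = std::fread(chunk, 1, sizeof chunk, in)) > 0)
        data.insert(data.end(), chunk, chunk + got);
    std::fclose(in);

    std::vector<unsigned char> packed =
        compress_buffer(data.empty() ? 0 : &data[0], data.size());

    std::FILE *out = std::fopen(out_path, "wb");
    if (!out)
        return false;
    bool ok = packed.empty()
        || std::fwrite(&packed[0], 1, packed.size(), out) == packed.size();
    std::fclose(out);
    return ok;
}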

#3 mhagain   Crossbones+   -  Reputation: 7413


Posted 27 November 2012 - 10:44 AM

You could implement the compression from memory buffer only, and have the file-variant load from file into memory and pass it to the compression function. Depending on whether you can compress partial buffers, you either load the whole file and compress it at once, or load and compress it in parts if you don't want to hold the entire file in memory.


Yup; this, or else consider memory-mapping the file.
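A sketch of the memory-mapping route, assuming POSIX mmap (Windows would use CreateFileMapping and MapViewOfFile instead); process() is a stand-in for whatever memory-to-memory routine consumes the bytes:

#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map a whole file read-only and hand the mapping to a memory-consuming
// routine, so the file can be treated exactly like a buffer in memory.
bool with_mapped_file(const char *path,
                      void (*process)(const unsigned char *, std::size_t))
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return false;

    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0)   // empty files skipped: mmap of length 0 fails
    {
        close(fd);
        return false;
    }

    std::size_t size = static_cast<std::size_t>(st.st_size);
    void *base = mmap(0, size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);   // the mapping keeps the underlying file referenced
    if (base == MAP_FAILED)
        return false;

    process(static_cast<const unsigned char *>(base), size);
    munmap(base, size);
    return true;
}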

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#4 Ectara   Crossbones+   -  Reputation: 2763


Posted 27 November 2012 - 10:51 AM

You could implement the compression from memory buffer only, and have the file-variant load from file into memory and pass it to the compression function.


This is what I used to have, but it had the issue that it took time to allocate a large buffer, read into it, then write it out and free it, on top of the fact that it restricts the largest file I can compress to the amount of free memory I have and my address space. So I lifted the requirement that the data reside in memory, allowing me to do things like compress directly from one file to another with low memory overhead.

Depending on whether you can compress partial buffers, you either load the whole file and compress it at once, or load and compress it in parts if you don't want to hold the entire file in memory.


I currently don't have an algorithm that can arbitrarily divide the input and compress the parts separately while still producing the contiguous block that the decompressor expects.

These are all good points, but I chose low overhead (streams) over speed (arrays) for the general compression functions. For me, it is worth being a little slower rather than risking an allocation failure and never being able to compress at all.

However, when I don't have to make this trade-off (the buffers are allocated elsewhere), compressing from memory to memory would be desirable: for example, if I am transmitting entity positions and Huffman-compressing them into a logical packet in memory to send over the network.

#5 Ectara   Crossbones+   -  Reputation: 2763


Posted 27 November 2012 - 12:14 PM

For now, I made memory-to-memory functions that wrap the two buffers in streams and call the stream-to-stream functions. As I hone and develop the algorithms, if any are small enough to be duplicated for speed, I add a special case for the ones that support memory to memory. This way, if the module supports memory to memory for that algorithm it will use it, and if it doesn't, it will simply fall back to the slower stream version.

I feel this approach makes temporary streams easier to use by keeping them behind the scenes: if a memory-to-memory optimization exists, the same calling code runs the faster path; otherwise it takes the compatibility route and uses the existing stream algorithms.
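A sketch of that dispatch, using hypothetical stand-ins for the stream API and the per-algorithm function table; only the fallback logic is the point here:

#include <cstddef>

// Hypothetical stand-ins for the poster's stream API; they are declared only
// to make the dispatch below readable.
struct stream;
stream *stream_from_memory(const unsigned char *data, std::size_t size);
stream *stream_from_writable_memory(unsigned char *data, std::size_t size);
void    stream_close(stream *s);

typedef int (*compress_stream_fn)(stream *in, stream *out);
typedef int (*compress_memory_fn)(const unsigned char *src, std::size_t src_len,
                                  unsigned char *dst, std::size_t dst_cap);

struct algorithm
{
    compress_stream_fn compress_stream;   // always present
    compress_memory_fn compress_memory;   // optional fast path; may be null
};

// Memory-to-memory entry point: use the specialised version when an algorithm
// provides one, otherwise wrap the buffers in streams and fall back to the
// general stream-to-stream path.
int compress_memory(const algorithm &alg,
                    const unsigned char *src, std::size_t src_len,
                    unsigned char *dst, std::size_t dst_cap)
{
    if (alg.compress_memory)
        return alg.compress_memory(src, src_len, dst, dst_cap);

    stream *in  = stream_from_memory(src, src_len);
    stream *out = stream_from_writable_memory(dst, dst_cap);
    int result = alg.compress_stream(in, out);
    stream_close(in);
    stream_close(out);
    return result;
}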

#6 jwezorek   Crossbones+   -  Reputation: 1606


Posted 27 November 2012 - 01:31 PM

You can make the guts of your compression code a template function that is parametrized on the input type, and pass it functor(s), also parametrized on the input type, that do whatever low-level access you need for that type, i.e. file reads and writes for a file stream and direct memory access for a memory buffer.
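A sketch of this suggestion (in C++, so it only illustrates the idea for a library that cannot use templates): the core loop is written once against a reader policy that hides whether the bytes come from memory or a file, with a checksum standing in for the real algorithm:

#include <cstddef>
#include <cstdio>

// Reader policy over a memory buffer: plain pointer arithmetic.
struct memory_reader
{
    const unsigned char *cur, *end;
    memory_reader(const unsigned char *begin, const unsigned char *finish)
        : cur(begin), end(finish) { }
    int next() { return cur != end ? *cur++ : -1; }   // -1 means exhausted
};

// Reader policy over a stdio file.
struct file_reader
{
    std::FILE *f;
    explicit file_reader(std::FILE *file) : f(file) { }
    int next()
    {
        int c = std::fgetc(f);
        return c == EOF ? -1 : c;
    }
};

// The algorithm core is written once against the policy's interface.
template <typename Reader>
unsigned long process(Reader reader)
{
    unsigned long sum = 0;
    for (int c = reader.next(); c != -1; c = reader.next())
        sum += static_cast<unsigned char>(c);          // stand-in for real work
    return sum;
}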

#7 Ectara   Crossbones+   -  Reputation: 2763


Posted 27 November 2012 - 01:37 PM

You can make the guts of your compression code a template function that is parametrized on the input type, and pass it functor(s), also parametrized on the input type, that do whatever low-level access you need for that type, i.e. file reads and writes for a file stream and direct memory access for a memory buffer.


I wish I had templates sometimes. Also, that sounds more or less like what I'm doing with the streams, just with the stream functionality moved directly into the compression module.

#8 Álvaro   Crossbones+   -  Reputation: 11859


Posted 27 November 2012 - 01:40 PM

What is wrong with the suggestion to use memory mapping? It sounds like the perfect solution to me...

#9 edd   Members   -  Reputation: 2105


Posted 27 November 2012 - 03:01 PM

In my compression module, I reworked the whole system to accept a stream as input and a stream as output; however, I also want to take advantage of the raw speed and simplicity of compressing from memory to memory. I can make a stream from an array of bytes and easily pass one array for input and one for output, but accessing it the same way I access a file would be slow.


Turn the input interface on its head. Rather than having an abstraction for buffer filling, have an abstraction for returning pointers to successive ranges of bytes. In other words, not this (which I'm guessing is somewhat similar to what you have now):

class byte_source : uncopyable
{
    public:
        virtual ~byte_source() { }
        virtual std::size_t read_bytes(unsigned char *outbuf, std::size_t outbuf_size) = 0;
};

But this, instead:

struct byte_range
{
    byte_range() : begin(0), end(0) { }
    byte_range(const unsigned char *begin, const unsigned char *end) : begin(begin), end(end) { }

    bool empty() const { return begin == end; }

    const unsigned char * const begin;
    const unsigned char * const end;
};

class byte_source : uncopyable
{
    public:
        virtual ~byte_source() { }
        virtual byte_range next_range() = 0;
};

This second byte_source interface can be implemented trivially to return the range for a memory buffer. It can also be implemented in terms of a memory-mapped file, even when the OS imposes a fixed-size window (I think Windows restricts you to 4Gb at a time).

You can also implement a source that does buffered file reads as well, returning the range corresponding to its internal buffer each time, after refilling it.
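A sketch of that buffered-file variant, assuming the byte_range and byte_source definitions above (and whatever uncopyable base they rely on):

#include <cstddef>
#include <cstdio>

class buffered_file_byte_source : public byte_source
{
    public:
        explicit buffered_file_byte_source(std::FILE *file) : file(file) { }

        byte_range next_range()
        {
            // Refill the internal buffer and hand back the filled portion;
            // an empty range signals end of file.
            std::size_t got = std::fread(buffer, 1, sizeof buffer, file);
            return byte_range(buffer, buffer + got);
        }

    private:
        std::FILE *file;
        unsigned char buffer[64 * 1024];
};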

Edited by edd, 27 November 2012 - 03:02 PM.


#10 Ectara   Crossbones+   -  Reputation: 2763


Posted 27 November 2012 - 03:49 PM

What is wrong with the suggestion to use memory mapping? It sounds like the perfect solution to me...

These are all good points, but I chose low overhead (streams) over speed (arrays) for the general compression functions. For me, it is worth being a little slower rather than risking an allocation failure and never being able to compress at all.

This is what I used to have, but it had the issue that it took time to allocate a large buffer, read into it, then write it out and free it, on top of the fact that it restricts the largest file I can compress to the amount of free memory I have and my address space. So I lifted the requirement that the data reside in memory, allowing me to do things like compress directly from one file to another with low memory overhead.

For OSes that do support memory-mapping a file without having it all reside in memory, I'm still limited by address space. If I can't map even 2 GB of contiguous memory on a 32-bit machine, then there is no way I can handle the maximum file size if I compress an entire file at once.

For OSes that do not support such mapping, I'm not only limited by the address space, but also by the amount of memory available. If I don't need to limit my resources, then I see no need to.

Turn the input interface on its head. Rather than having an abstraction for buffer filling,

I don't understand. I have an abstraction for buffer filling? I have a function that compresses input as it reads from a stream; if I already have the buffer, then I'm basically memory mapping. If I'm reading into a buffer through an abstract interface, how is this different than using a buffered stream? If I use an abstract interface that could be a memory buffer, or a file, and I use functions to read from it, how is this different than the streams I already have?

#11 edd   Members   -  Reputation: 2105


Posted 27 November 2012 - 04:26 PM

First, if my assumption about you doing buffering for files or anything else was wrong, please ignore the first 'byte_source' interface I mentioned. There was no code to analyse, so I had to guess. Concentrate on the second byte_source interface.


Turn the input interface on its head. Rather than having an abstraction for buffer filling,

I don't understand. I have an abstraction for buffer filling?

I was asking that, really (though this wasn't obvious -- apologies). Again, had to guess.

I have a function that compresses input as it reads from a stream; if I already have the buffer, then I'm basically memory mapping. If I'm reading into a buffer through an abstract interface,

I'm saying that you don't have to (manually) read into a buffer at all. You could argue memory mapping a file is doing this, but it should be doing it (near) optimally and it's handled for you, for the most part.

how is this different than using a buffered stream?

There's no need for buffers! You can implement a buffered stream derived from the second interface if you want, but it's not necessary for (memory-mapped) files or raw memory, which seem to be your primary data sources.

If I use an abstract interface that could be a memory buffer, or a file, and I use functions to read from it, how is this different than the streams I already have?


Let me attempt to clarify by implementing the byte_source interface for a region of memory:

class memory_byte_source : public byte_source
{
    public:
        memory_byte_source(const unsigned char *begin, const unsigned char *end) : begin(begin), end(end) { }

        byte_range next_range()
        {
            byte_range ret(begin, end);
            begin = end;
            return ret;
        }

    private:
        const unsigned char *begin;
        const unsigned char *end;
};

The first call to next_range() gives you the first and only byte_range (no copies, the range is the memory). The second call returns an empty range, telling you that the underlying data source is exhausted. The implementation for a memory mapped file will look very similar, except for the part where you create the mapping (in the constructor, perhaps).

The compression algorithm may of course have to be adapted for the new interface, but I don't imagine that would be too hard as you're now reading bytes through a pointer, regardless of the interface's implementation.
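For illustration, a consuming loop over this interface might look like the following sketch, where consume_byte() is a hypothetical stand-in for the algorithm's per-byte work; since byte_range's members are const, a fresh copy is taken each pass rather than assigning to one variable:

void consume_byte(unsigned char b);   // stand-in for the compressor's per-byte work

void drive(byte_source &source)
{
    for (;;)
    {
        const byte_range r = source.next_range();
        if (r.empty())
            break;                            // source exhausted
        for (const unsigned char *p = r.begin; p != r.end; ++p)
            consume_byte(*p);                 // bytes read through a plain pointer
    }
}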

#12 Ectara   Crossbones+   -  Reputation: 2763


Posted 27 November 2012 - 04:54 PM

I'm saying that you don't have to (manually) read into a buffer at all. You could argue memory mapping a file is doing this, but it should be doing it (near) optimally and it's handled for you, for the most part.

I already don't read into a buffer. I pass the stream to a compression function, and it reads from it directly, and writes to an output stream directly.

There's no need for buffers! You can implement a buffered stream derived from the second interface if you want, but it's not necessary for (memory-mapped) files or raw memory, which seem to be your primary data sources.

I have a buffered stream interface, but I'm not using it in this case. I'm also not memory mapping any files; I've decided that I will not be doing that, because it unnecessarily restricts the file size, and if I have to allocate it the naive way due to having no OS support for it, I chance not having enough memory.

The compression algorithm may of course have to be adapted for the new interface, but I don't imagine that would be too hard as you're now reading bytes through a pointer, regardless of the interface's implementation.

How is this different from using streams? My way, both are represented as files; this way, both are represented as an array of bytes in memory. It seems to be more or less what I already do, except that in mine type A is converted to type B before compression, and in the method presented type B is converted to type A. In my way, the data is read from the array through function calls, as if it were a file, creating a bottleneck. In this way, the data is read into an array either beforehand or during the read, creating a bottleneck. Both methods have one key flaw, and since I feel that compressing from memory to memory is less likely than compressing from any stream (including memory) to any stream (including memory), I choose the flaw that is least likely to be encountered.

#13 edd   Members   -  Reputation: 2105


Posted 27 November 2012 - 05:35 PM

I already don't read into a buffer. I pass the stream to a compression function, and it reads from it directly, and writes to an output stream directly.


The interface I've provided enables optimal access to both memory and files (note that when you memory-map a file, you can do it, say, 16 MB at a time; you don't have to do the whole lot at once).

Specifically, what I've presented solves this problem of yours:

My point is, all of my algorithms have been rewritten to take stream sources and destinations, and I could make a stream from a buffer in memory. However, this would be much slower than accessing the memory directly in the special case of memory to memory compression/decompression.


It solves this problem because the interface still allows direct access from memory, where it can be provided. For file access, it also allows either manual reads into a buffer or memory-mapping, whichever works best or is faster. In either case, it's the thinnest and fastest interface providing access to a range of bytes.

If that's not what you were asking for, I'm afraid I don't understand what you're after.

Or phrased another way: how about you present your actual interface and point to what you specifically don't like about it? Otherwise I fear we'll continue to talk past each other.

In this way, the data is read into an array either beforehand or during the read, creating a bottleneck.

There's no bottleneck with my method (beyond the unavoidable cost of disk access).

#14 mhagain   Crossbones+   -  Reputation: 7413


Posted 27 November 2012 - 07:45 PM

For OSes that do support memory-mapping a file without having it all reside in memory, I'm still limited by address space. If I can't map even 2 GB of contiguous memory on a 32-bit machine, then there is no way I can handle the maximum file size if I compress an entire file at once.


Any POSIX system supports memory mapping. Windows supports memory mapping.

This is the point at which you need to make decisions. Do you actually need to support other OSes? Do you actually need to support files > 2 GB? Are you just putting artificial obstacles in your own way? Are you hitting a point at which you're looking for a theoretical best-case solution at the expense of being pragmatic about stuff? Are you focusing on edge cases that will never happen at the expense of just getting the job done?

They're questions only you can answer, but I suspect that for all practical use cases memory mapping is going to work just fine.



#15 Ectara   Crossbones+   -  Reputation: 2763


Posted 27 November 2012 - 07:56 PM

The interface I've provided enables optimal access to both memory and files (note that when you memory-map a file, you can do it, say, 16 MB at a time; you don't have to do the whole lot at once).


I concede that your method will likely be faster. However, this now requires a whole new interface just for one module, whereas streams are used all over my library. Additionally, the library also targets platforms that don't support memory mapping, so it is emulated in the few places it is needed; I'd like this not to be one of them.

#16 Ectara   Crossbones+   -  Reputation: 2763


Posted 27 November 2012 - 07:57 PM


For OSes that do support memory-mapping a file without having it all reside in memory, I'm still limited by address space. If I can't map even 2 GB of contiguous memory on a 32-bit machine, then there is no way I can handle the maximum file size if I compress an entire file at once.


Any POSIX system supports memory mapping. Windows supports memory mapping.

This is the point at which you need to make decisions. Do you actually need to support other OSes? Do you actually need to support files > 2 GB? Are you just putting artificial obstacles in your own way? Are you hitting a point at which you're looking for a theoretical best-case solution at the expense of being pragmatic about stuff? Are you focusing on edge cases that will never happen at the expense of just getting the job done?

They're questions only you can answer, but I suspect that for all practical use cases memory mapping is going to work just fine.


I choose compatibility.

#17 Bacterius   Crossbones+   -  Reputation: 8134


Posted 27 November 2012 - 08:04 PM

For OSes that do support memory-mapping a file without having it all reside in memory, I'm still limited by address space. If I can't map even 2 GB of contiguous memory on a 32-bit machine, then there is no way I can handle the maximum file size if I compress an entire file at once.

You do know you can map the file in multiple stages - you don't have to open the entire file at once. That's what all the arguments you are used to setting to zero do :)
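A sketch of that staged mapping, assuming POSIX mmap (on Windows the equivalent offset arguments go to MapViewOfFile); the file is mapped one fixed-size, page-aligned window at a time, with process() standing in for the real per-window work:

#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

bool process_in_windows(const char *path,
                        void (*process)(const unsigned char *, std::size_t))
{
    const off_t window = 16 * 1024 * 1024;   // a multiple of the page size

    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return false;

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return false; }

    for (off_t offset = 0; offset < st.st_size; offset += window)
    {
        off_t remaining = st.st_size - offset;
        std::size_t len = static_cast<std::size_t>(remaining < window ? remaining : window);

        // The offset argument selects which slice of the file this window covers.
        void *base = mmap(0, len, PROT_READ, MAP_PRIVATE, fd, offset);
        if (base == MAP_FAILED) { close(fd); return false; }

        process(static_cast<const unsigned char *>(base), len);
        munmap(base, len);
    }

    close(fd);
    return true;
}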

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#18 Ectara   Crossbones+   -  Reputation: 2763


Posted 27 November 2012 - 10:52 PM


For OSes that do support memory-mapping a file without having it all reside in memory, I'm still limited by address space. If I can't map even 2 GB of contiguous memory on a 32-bit machine, then there is no way I can handle the maximum file size if I compress an entire file at once.

You do know you can map the file in multiple stages - you don't have to open the entire file at once. That's what all the arguments you are used to setting to zero do :)

I'm well aware that I can map blocks of it at a time. I've read the documentation.

Since it doesn't look like anyone has any suggestions other than memory mapping, which I can't and won't use (How do I do memory mapping for an encrypted file stream in an archive through the OS?), I consider this thread complete until someone has a new suggestion. I've thought long and hard about all of these things. While it wasn't through the OS, I had previously used memory mapping for all streams in order to compress them, resulting in cruft like reading the file into a memory map, compressing it to a new buffer, and writing it out, rather than the new "read from a stream, write to a stream". It added too many steps and took up too much memory, because when it comes down to it, native memory mapping solves only one problem: reading from a file descriptor given by the OS. Any other stream type I'd have to memory map myself, the naive way, and that puts me right back where I was before.

So my options are: stay where I am now (post #5); memory map a file when I see one and allocate memory to store the contents of every other stream type, so that everything uses memory like the poor design I had before; or create a new type of stream on top of the stream I already have, so that a stream (which could be a block of memory acting like a file stream) becomes another stream that acts like a block of memory.

I appreciate that people are trying to offer solutions other than what I'm already doing, but I'm not doing things the way I am now for lack of education on the subject; I sat down and thought these through, then came here to get a fresh pair of eyes on the project and see whether there are any alternatives I missed.

#19 et1337   Members   -  Reputation: 1363


Posted 27 November 2012 - 11:36 PM

How do I do memory mapping for an encrypted file stream in an archive through the OS?


Let me get this straight. You want to use your compression algorithm on an encrypted 2 GB file inside another compressed archive?

I can only assume that you're writing some kind of library that has to account for literally every possibility.

In an attempt to be at least somewhat useful, let me just recommend that you not try to re-invent the wheel. These problems have been solved at a lower level and to a higher degree of efficiency than you or I can likely achieve.

#20 Ectara   Crossbones+   -  Reputation: 2763


Posted 28 November 2012 - 09:56 AM

Let me get this straight. You want to use your compression algorithm on an encrypted 2 GB file inside another compressed archive?

No, the archive need not be compressed, and one usually compresses before encrypting, but that isn't the point. The stream does not expose how it is implemented. These solutions require knowing the backend in order to write a whole new stream interface, while only covering two types of streams. This will not work for my purposes.

In an attempt to be at least somewhat useful, let me just recommend that you not try to re-invent the wheel. These problems have been solved at a lower level and to a higher degree of efficiency than you or I can likely achieve.

You're right. I'm going to drop support for all of these features, remove several modules from my libraries, and use all the features that aren't supported on all of the platforms I own. I was waiting to see when the first "abandon the project" post would appear.



