virtual file system... decisions, decisions

Are there any reasons to use a virtual file system instead of the native OS file system, considering today's hardware? I'm not counting file protection, that's out of the discussion, just the pure data. All comments are welcome! Thanks for reading.

Operating systems themselves may have one or more filesystems mounted at any point in time, and some of these can even be virtual. Ultimately, you would use a virtual filesystem of your own if the ones provided by the operating system do not have the features you need (compression or encryption, for instance).

Quote:
Original post by Doro
Well, I don't need compression or encryption, just the files, lots of them. Is there any good reason to force me to use a VFS?


No good reason to do so, unless the underlying filesystem is so naive that it wastes memory when manipulating multiple small files.

If you know something about your file usage that the operating system does not, you can get better performance by using a virtual file system. For example, if you know that whenever you load file A you'll always load files B, C, and D afterwards, you can lay them out contiguously so they come in with one sequential read.

Another example: many file systems use self-balancing binary trees to store directory information. Worst-case performance for inserting a series of elements into a self-balancing binary tree often occurs when you insert an ordered sequence. So if you decide to generate a series of files f00001.data, f00002.data, ..., f99999.data in order into a single directory, you could see pretty bad performance on a native file system.
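
To make the first point concrete, here's a minimal sketch of the kind of pack file a VFS might build. The names (PackEntry, WritePack) and the layout are made up for illustration, not any particular engine's format: files that are always loaded together get written back to back, so loading them later is one seek into the pack plus one long sequential read instead of a separate open/seek/read per file.

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>
#include <utility>
#include <vector>

struct PackEntry {
    char     name[64];   // fixed-size name keeps the TOC trivially serializable
    uint64_t offset;     // byte offset of the payload inside the pack
    uint64_t size;       // payload size in bytes
};

// Write the given (name, data) pairs into one pack file, preserving their order.
bool WritePack(const char* packPath,
               const std::vector<std::pair<std::string, std::vector<uint8_t>>>& files)
{
    std::FILE* out = std::fopen(packPath, "wb");
    if (!out) return false;

    const uint32_t count = static_cast<uint32_t>(files.size());
    std::fwrite(&count, sizeof(count), 1, out);

    // Payloads start right after the count and the table of contents.
    uint64_t offset = sizeof(count) + uint64_t(count) * sizeof(PackEntry);
    for (const auto& f : files) {
        PackEntry e = {};
        std::strncpy(e.name, f.first.c_str(), sizeof(e.name) - 1);
        e.offset = offset;
        e.size   = f.second.size();
        std::fwrite(&e, sizeof(e), 1, out);
        offset += e.size;
    }

    // The payloads themselves, in load order, so A, B, C, D end up contiguous.
    for (const auto& f : files)
        std::fwrite(f.second.data(), 1, f.second.size(), out);

    std::fclose(out);
    return true;
}

Reading it back is one seek to an entry's offset and one read for the whole group, instead of an open/stat/read round trip per file, which is where the win over native directory lookups comes from.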

Quote:
Original post by ToohrVyk
Quote:
Original post by Doro
Well, I don't need compression or encryption, just the files, lots of them. Is there any good reason to force me to use a VFS?


No good reason to do so, unless the underlying filesystem is so naive that it wastes memory when manipulating multiple small files.


I'd wager that, unless something has quietly changed and gone unnoticed, for lots of small files (1000+, <32 KB each) Windows (any FS) will behave horribly, and just putting them into a tar and accessing that will give an incredible performance boost.

The simplest solution I've used for this was a no-compression zip file, and it decreased application loading times by a factor of 10.
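
For reference, reading everything back out of such a no-compression zip is only a few calls if you use something like the single-header miniz library; the archive name "assets.zip" below is just a placeholder, and this is a sketch rather than production code.

#include <cstdio>
#include <cstring>

#include "miniz.h"

int main()
{
    mz_zip_archive zip;
    std::memset(&zip, 0, sizeof(zip));
    if (!mz_zip_reader_init_file(&zip, "assets.zip", 0)) {
        std::fprintf(stderr, "failed to open archive\n");
        return 1;
    }

    // One pass over the central directory; each entry comes out as one read.
    const mz_uint count = mz_zip_reader_get_num_files(&zip);
    for (mz_uint i = 0; i < count; ++i) {
        mz_zip_archive_file_stat st;
        if (!mz_zip_reader_file_stat(&zip, i, &st))
            continue;

        size_t size = 0;
        void* data = mz_zip_reader_extract_to_heap(&zip, i, &size, 0);
        if (data) {
            std::printf("%s: %zu bytes\n", st.m_filename, size);
            mz_free(data);   // in a real loader you'd hand this buffer off instead
        }
    }

    mz_zip_reader_end(&zip);
    return 0;
}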

This isn't absolute, and where the bottleneck occurs varies, but IMHO Windows file systems aren't well suited to small-file access.

Linux file systems, however, tend to be the exact opposite: while insanely fast for "scratch" operations, they tend to be poorly suited to large block transfers. But for Linux there is no definitive answer, since there are so many different versions and variations of file systems, each with its own characteristics.

There was a test published a while back comparing different access patterns. I believe the best overall was XFS, with the ext? versions being best for frequent file creation and modification, and another FS being best suited for media-style disk access. I no longer have the benchmark, though; it would be about two years old by now, so recent enough to still be relevant.

As always, YMMV, but I'd say that shipping a single opaque archive file would be better.

A pitfall here is if your access to this file isn't sequential, or tends to be inherently random; there, performance may actually decrease compared to the native FS.

But I wouldn't dismiss at least a very simple archive. A nice example of this is Java: unpacking jars, or using lots of them (1000+, yes Maven, looking at you), will drastically increase start-up times.

You may find http://www.stud.uni-karlsruhe.de/~urkt/articles/study_thesis.pdf interesting.
Incidentally, the estimate of "10x" holds up on Windows. There are definitely large gains to be had by intelligently packing files into (compressed) archives.

It may be worth considering that, due to the time taken by disk access, compressed files may actually load faster, since the CPU can likely decompress data faster than the HDD can read uncompressed data.
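
As a rough, purely illustrative calculation (the numbers are assumptions, not measurements): with a disk that reads about 100 MB/s and an asset set that compresses 2:1, reading 200 MB uncompressed takes roughly 2 seconds, while reading the 100 MB compressed version takes roughly 1 second, and an LZ4-class decompressor running far faster than the disk's read speed adds comparatively little on top. That is the shape of the trade-off that makes the compressed path come out ahead, as long as decompression throughput stays above disk throughput.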

Another advantage is that a custom filesystem lets you attach whatever metadata your application requires to each file - localization information, patch version, etc.
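
As a hypothetical example of what such a table-of-contents entry could carry (field names made up for illustration, not taken from any real format):

#include <cstdint>

// Hypothetical TOC entry for a custom pack format, extended with
// application-specific metadata the native filesystem has no slot for.
struct AssetEntry {
    char     name[64];       // path of the asset inside the pack
    uint64_t offset;         // byte offset of the payload
    uint64_t size;           // payload size in bytes
    uint32_t patchVersion;   // which patch last rewrote this asset
    uint16_t languageId;     // localization: which locale this variant targets
    uint16_t flags;          // e.g. preload-at-startup, streamable, compressed
};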
