File I/O streaming performance.

11 comments, last by Tape_Worm 11 years, 7 months ago

It's ... bad mojo ... to even mention compression. And the word "lossless" is like a lie. Seriously. It's very strange and counterproductive from my perspective.

I'm not sure why you think that. There surely exist compression schemes such that the decompressed image is completely faithful to the original one.

[quote name='achild' timestamp='1347923198' post='4981055']
It's ... bad mojo ... to even mention compression. And the word "lossless" is like a lie. Seriously. It's very strange and counterproductive from my perspective.

I'm not sure why you think that. There surely exist compression schemes such that the decompressed image is completely faithful to the original one.
[/quote]
Heh... I was referring to where I work. In a lot of areas of the microbiology field, there is a stigma against any compression. I'm guessing someone with a lot of clout saw JPEG compression distort their data and said "compression is bad", and now we're stuck with that, unfortunately.

That's incredibly short-sighted of your employers.

Regardless, here's an update:
I tried a memory-mapped file, and nothing good came of it (I probably implemented it wrong, but I was getting horrid results). So I tried plain CreateFile with FILE_FLAG_SEQUENTIAL_SCAN, and I didn't see any improvement either.
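
For reference, here's a minimal sketch of the plain Win32 path with the sequential-scan hint (the file name and the 1 MB chunk size are just placeholders for illustration, not anything from the actual project):

[code]
#include <windows.h>
#include <vector>

// Sequential reads through the Win32 API with the FILE_FLAG_SEQUENTIAL_SCAN
// hint, which tells the cache manager to read ahead aggressively.
bool ReadWholeFileSequential(const wchar_t* path, std::vector<char>& out)
{
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, nullptr);
    if (file == INVALID_HANDLE_VALUE)
        return false;

    std::vector<char> buffer(1 << 20);   // 1 MB read chunks (placeholder size).
    DWORD bytesRead = 0;
    while (ReadFile(file, buffer.data(), static_cast<DWORD>(buffer.size()),
                    &bytesRead, nullptr) && bytesRead > 0)
    {
        out.insert(out.end(), buffer.data(), buffer.data() + bytesRead);
    }

    CloseHandle(file);
    return true;
}
[/code]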

However, one thing that did impro... Wait, wait. I'm getting ahead of myself. Let me tell you a story. You see, once upon a time there was an employee, a dashing and handsome employee of a company who inherited a real mess of a code base and was told to fix it up and optimize it. He toiled day and ... day... and started seeing results. However, he noticed his CPU would spike intermittently during rendering. He said "What the shit is this?? Might be the constant file I/O..." and so off he went to research the various forums and troll dwellings to find a more optimal solution. And so, here I am trying my damnedest to get this thing optimized from slide-show to interactive frame rates. I can't really go into detail about the project, nor can I say anything about why I'm so restricted (it's in the contract).

Anyway. That's the story.

So, what I noticed this morning was that the dude who was writing this before me was calling _fseeki64 before every single read, even on sequential reads. Maybe that's not so bad on its own, but I did notice an improvement after setting it up to seek only on random access, and it's actually not too bad now.
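
To illustrate the kind of change I mean (the reader struct and names here are made up for illustration, not the real project code), the idea is to track the current file position and only call _fseeki64 when the requested offset differs from where the stream already is:

[code]
#include <cstdio>
#include <cstdint>

// Hypothetical reader state: keep the current offset so sequential reads
// can skip the seek entirely.
struct VolumeReader
{
    FILE*   file          = nullptr;
    int64_t currentOffset = 0;
};

size_t ReadAt(VolumeReader& reader, int64_t offset, void* dest, size_t bytes)
{
    // Only seek when the caller actually jumps to a new offset.
    if (offset != reader.currentOffset)
    {
        if (_fseeki64(reader.file, offset, SEEK_SET) != 0)
            return 0;
        reader.currentOffset = offset;
    }

    size_t read = fread(dest, 1, bytes, reader.file);
    reader.currentOffset += static_cast<int64_t>(read);
    return read;
}
[/code]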

Regardless, 5 GB of data is too damned much, and I need to cull that down, so I'm thinking the compression idea is worth a shot. I considered using BC4 to compress it, but I found out that it requires the texture to be a power of 2 (at least, I think that's what I read), and the data I have (and will continue to receive) isn't guaranteed to come in powers of 2. It would be a monumental pain in the ass to resize a volume texture and encode it into BC4, so for now I'm trying out RLE compression (yeah, it's old and it's not hardware-native, but it's fast enough and it appears to be working...?). Since these volumes are all 8-bit values, it might be sufficient.
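
For anyone curious, the RLE I'm trying is roughly this shape (a generic byte-oriented sketch, not the exact project code): each run of identical 8-bit values is stored as a (count, value) pair, with the count capped at 255.

[code]
#include <cstdint>
#include <vector>

// Encode runs of identical bytes as (count, value) pairs.
std::vector<uint8_t> RleEncode(const std::vector<uint8_t>& src)
{
    std::vector<uint8_t> dst;
    size_t i = 0;
    while (i < src.size())
    {
        uint8_t value = src[i];
        uint8_t count = 1;
        while (i + count < src.size() && src[i + count] == value && count < 255)
            ++count;

        dst.push_back(count);
        dst.push_back(value);
        i += count;
    }
    return dst;
}

// Expand (count, value) pairs back into the original byte stream.
std::vector<uint8_t> RleDecode(const std::vector<uint8_t>& src)
{
    std::vector<uint8_t> dst;
    for (size_t i = 0; i + 1 < src.size(); i += 2)
        dst.insert(dst.end(), src[i], src[i + 1]);   // count copies of value
    return dst;
}
[/code]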

Anyway, that's where I am as of today.

