Fast data loading like Shenmue

Yesterday I played Shenmue on the Dreamcast. I was impressed by the amount of detail, the textures, and the loading speed (the Dreamcast has only an 8x CD-ROM drive). I'm currently writing an aeroplane game and I need to load a lot of data (heightmap, colour map, textures, objects, MP3 and WAV sounds...): approximately 16 MB per level. What is the best way to speed up loading time? What is the fastest image format? Should I use an archive format (zip? rar? ace?)?
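One approach console games of that era commonly used (and Shenmue's fast loads are usually attributed to this kind of trick) is to pack a whole level's files into a single archive with a small index, so loading becomes a few large sequential reads instead of many small file opens. Here is a minimal sketch in C++; the `Entry` layout and the function names are my own invention for illustration, not any standard format:

```cpp
#include <cstdint>
#include <cstring>
#include <fstream>
#include <map>
#include <string>
#include <vector>

// Toy pack-file layout: [uint32 count][count * Entry][raw file data].
struct Entry {
    char     name[32];  // zero-padded asset name
    uint32_t offset;    // byte offset of the data in the pack file
    uint32_t size;      // size of the data in bytes
};

// Write all assets into one archive, index first, then the blobs.
void write_pack(const std::string& path,
                const std::map<std::string, std::vector<uint8_t>>& files) {
    std::ofstream out(path, std::ios::binary);
    uint32_t count = (uint32_t)files.size();
    out.write((const char*)&count, sizeof count);

    uint32_t offset = sizeof count + count * (uint32_t)sizeof(Entry);
    for (const auto& [name, data] : files) {
        Entry e{};
        std::strncpy(e.name, name.c_str(), sizeof e.name - 1);
        e.offset = offset;
        e.size   = (uint32_t)data.size();
        out.write((const char*)&e, sizeof e);
        offset += e.size;
    }
    for (const auto& [name, data] : files)
        out.write((const char*)data.data(), (std::streamsize)data.size());
}

// Look an asset up in the index, then do one seek and one read.
std::vector<uint8_t> read_asset(const std::string& packPath,
                                const std::string& name) {
    std::ifstream in(packPath, std::ios::binary);
    uint32_t count = 0;
    in.read((char*)&count, sizeof count);
    for (uint32_t i = 0; i < count; ++i) {
        Entry e;
        in.read((char*)&e, sizeof e);
        if (name == e.name) {
            std::vector<uint8_t> buf(e.size);
            in.seekg(e.offset);
            in.read((char*)buf.data(), e.size);
            return buf;
        }
    }
    return {};  // not found
}
```

On optical media this matters even more than on a hard disk, because seeks are very slow; laying the entries out in the order the game reads them keeps the drive streaming.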

I would recommend Targa for images. Firstly, it has built-in alpha channel support, and secondly, the format supports (lossless) RLE compression, which means the saved file will be exactly the same quality as the original.

The alternative is to adopt a lossy compression method. I am not sure if you know, but JPEG compression consists of several stages. First, it converts the image to a luma/chroma format (YCbCr) and averages each chroma channel over blocks of four pixels. This quarters the two chroma channels, reducing the data by (2/3)*(3/4) = 1/2.

The second stage is a 2D discrete cosine transform on each 8x8 block, to analyse the frequencies in the image, generating a matrix of coefficients per block. The next stage is where the "compression slider" comes in (you know, when saving JPEGs): since the image is most likely to have little high-frequency content (i.e. the image is smoothish), the higher-frequency coefficients are probably low amplitude, so you divide the matrix (per element) by a quantization table and round, such that low-amplitude, high-frequency data gets lost. The process can be reversed, but obviously anything rounded to zero is unrecoverable.

The coefficients are then read out in a zigzag fashion, starting from the low horizontal/vertical frequency corner, where most of the data will be. This data is then run-length encoded, to eliminate the strings of zeros that the quantization generated, and finally it is packed using a lossless compression method (e.g. Huffman/entropy coding). This produces a much smaller, but marginally lower quality, image.

I suggest you either use the JPEG standard with low compression, or take some or all of the techniques JPEG uses and apply them to your own data, i.e. develop your own file format around this compression method. For example, if you fixed the quantization table and skipped the final step (entropy coding), i.e. stopped after RLE, the file would still be much smaller than raw data but very fast to decode, because only the cheap steps remain.
Well, anyway, I hope this helps... although I have a feeling it was a little more complex than you were looking for.
