How is jpg so much smaller than bmp

Hi. I am writing a platform game in DX8 and wanted to do it in 24bpp colour so that the quality would be high. I was worried that the download would become huge, since a fullscreen bitmap at 24bpp is about 1MB on disk. I was talking to a friend and he said that .jpg is smaller and I should try it, so I ran an experiment. I had a desktop picture of space and stars with lens flare etc. - quite a nice picture. It was something like 970KB in its 24bpp .bmp format. I then saved it as .jpg and looked at the file size: it was only 28KB. I opened both with IE and the picture quality looked almost the same. Perhaps the bmp was better, I don't know - I might have been convincing myself that it should be, considering the size difference. How is it that jpg can store a picture of approximately the same quality in such a small size?

Thanks,
Ciaran

"Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." --Albert Einstein

JPGs are compressed and lossy (information is discarded or changed). BMPs are uncompressed (in most cases) and lossless. I would not use a JPG image for a texture: while it looks OK when viewed at its original size on its own, as soon as you start stretching and tiling it the compression artifacts become very visible. Use a lossless format instead (PNG, TGA, BMP, etc.).

Hmm, I've used JPEGs as textures, and they looked fine (didn't notice any artifacts in game).

Some tricks JPEGs use to save space:
(all of this is "AFAIK" information, i.e. quite possibly at least partly incorrect)

The DCT (discrete cosine transform), a form of Fourier transform, is the heart of JPEG. Think of a sine or cosine wave. You could either save each point of that wave ("BMP") or save the mathematical constants that describe the wave ("JPEG"), e.g. 2*cos(5*x+3). Obviously the latter takes much less space, since it only needs to store three values (the amplitude, phase and frequency of the wave) for an infinitely long wave, whereas the former has to store every discrete point you want to keep.

What do cosine/sine waves have to do with images, then? Well, if you read the image from left to right, top to bottom, you can read the image's values sequentially, so we've linearized the two-dimensional problem (this is the part I'm most unsure of - JPEG actually uses a two-dimensional DCT rather than this kind of linearizing).

It might be that the values you read form a pure cosine wave, but that's not usually the case. Instead we need to sum up several waves to get a more accurate representation of the original image; the more waves we use, the better the image quality.

But doing a DCT over the whole image would lead to ugly results (and it would be slow), so JPEG divides the image into 8x8 pixel squares. The DCT is done separately for each of these squares, and squares with less detail can be saved in fewer bytes without affecting the rest of the image (e.g. a simple gradient can be represented with very few cosine waves).
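
To make the 8x8-block idea concrete, here is a minimal sketch (my own illustration, not code taken from any JPEG library; all names are made up) of the type-II DCT that a baseline JPEG encoder applies to each block. Feeding it a smooth gradient shows how the energy collects in a few low-frequency coefficients.

```cpp
#include <cmath>
#include <cstdio>

// Forward 2-D DCT for one 8x8 block (the transform JPEG applies per block).
// Input samples are assumed to be level-shifted to the range -128..127.
void forward_dct_8x8(const double in[8][8], double out[8][8])
{
    const double pi = 3.14159265358979323846;
    for (int u = 0; u < 8; ++u) {
        for (int v = 0; v < 8; ++v) {
            double cu = (u == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
            double sum = 0.0;
            for (int x = 0; x < 8; ++x)
                for (int y = 0; y < 8; ++y)
                    sum += in[x][y]
                         * std::cos((2 * x + 1) * u * pi / 16.0)
                         * std::cos((2 * y + 1) * v * pi / 16.0);
            out[u][v] = 0.25 * cu * cv * sum;
        }
    }
}

int main()
{
    // A smooth horizontal gradient: after the DCT almost all of the energy
    // sits in a handful of low-frequency coefficients, which is exactly why
    // blocks like this compress so well.
    double block[8][8], coeff[8][8];
    for (int x = 0; x < 8; ++x)
        for (int y = 0; y < 8; ++y)
            block[x][y] = y * 16.0 - 128.0;

    forward_dct_8x8(block, coeff);

    for (int u = 0; u < 8; ++u) {
        for (int v = 0; v < 8; ++v)
            std::printf("%8.1f ", coeff[u][v]);
        std::printf("\n");
    }
    return 0;
}
```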

The power of the DCT comes from the fact that natural images generally have very smooth forms, as do cosine waves. By summing several waves, finer details can also be represented.

But the fun doesn't end there. Normally you think of bitmaps in RGB format, but JPEG transforms the image into a colour format that has one channel for brightness and two channels for colour (YCbCr - luma plus two chroma channels). The trick is that human eyes don't notice colour differences very well, but they do see variations in brightness accurately. JPEG therefore stores the brightness channel at higher quality than the colour channels.
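
As a rough illustration of that colour-space step, here is a minimal sketch of the RGB to YCbCr conversion using the standard BT.601/JFIF weights that baseline JPEG is built around (the struct and function names are just for this example):

```cpp
#include <cstdio>

// One luma channel (Y) plus two chroma channels (Cb, Cr), as used by JPEG.
struct YCbCr { double y, cb, cr; };

// Convert one 8-bit-per-channel RGB pixel to YCbCr (BT.601 weights, as in
// the JFIF spec). Y carries brightness; Cb/Cr carry colour and can be
// stored more coarsely without the eye noticing much.
YCbCr rgb_to_ycbcr(double r, double g, double b)
{
    YCbCr out;
    out.y  =  0.299    * r + 0.587    * g + 0.114    * b;
    out.cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0;
    out.cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0;
    return out;
}

int main()
{
    YCbCr p = rgb_to_ycbcr(200.0, 100.0, 50.0); // an orange-ish pixel
    std::printf("Y=%.1f Cb=%.1f Cr=%.1f\n", p.y, p.cb, p.cr);
    return 0;
}
```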

Guest Anonymous Poster
You missed the actual compression part:

After the values for the block have been expressed in the frequency domain (by the forward DCT), you quantize the values in the block (i.e. divide them by another 8x8 block called the quantization table and round). Basically this produces a bunch of zeros in the 8x8 block (hopefully), and this is what determines the quality. Then the block is traversed in a zigzag pattern (defined by the standard) and run-length/Huffman encoded. This is where all the compression actually happens; the DCT is just a way of separating the block into frequency components so the unimportant ones can be thrown away.
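
Here is a minimal sketch of those quantization and zigzag steps (the flat quantization table in main() is made up purely for illustration; real encoders use the standard tables scaled by the chosen quality setting):

```cpp
#include <cstdio>

// Standard JPEG zigzag scan order: entry i gives the raster-order index
// of the i-th coefficient visited.
static const int kZigZag[64] = {
     0,  1,  8, 16,  9,  2,  3, 10,
    17, 24, 32, 25, 18, 11,  4,  5,
    12, 19, 26, 33, 40, 48, 41, 34,
    27, 20, 13,  6,  7, 14, 21, 28,
    35, 42, 49, 56, 57, 50, 43, 36,
    29, 22, 15, 23, 30, 37, 44, 51,
    58, 59, 52, 45, 38, 31, 39, 46,
    53, 60, 61, 54, 47, 55, 62, 63 };

// Divide each DCT coefficient by its table entry and round to the nearest
// integer. Small high-frequency values collapse to zero - this is where
// the loss (and most of the compression) comes from.
void quantize(const double dct[64], const int table[64], int out[64])
{
    for (int i = 0; i < 64; ++i) {
        double q = dct[i] / table[i];
        out[i] = (int)(q < 0.0 ? q - 0.5 : q + 0.5);
    }
}

// Reorder the block in zigzag order so the zeros bunch together at the
// end, ready for run-length + Huffman coding.
void zigzag_scan(const int block[64], int out[64])
{
    for (int i = 0; i < 64; ++i)
        out[i] = block[kZigZag[i]];
}

int main()
{
    int table[64];
    for (int i = 0; i < 64; ++i) table[i] = 16; // made-up flat table

    double dct[64] = { 480.0, -120.0, 31.0, 7.0 }; // rest default to 0.0
    int quantized[64], scanned[64];

    quantize(dct, table, quantized);
    zigzag_scan(quantized, scanned);

    for (int i = 0; i < 64; ++i) std::printf("%d ", scanned[i]);
    std::printf("\n"); // mostly zeros after the first few entries
    return 0;
}
```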

This happens for each 8x8 block and for each colour channel (YCbCr). Depending on the YCbCr mode you get more compression by subsampling the Cb/Cr values; in 4:2:0 format there is one Cb/Cr pair for every four Y values. An interesting thing about YCbCr is that you can drop the Cb/Cr values entirely and get a greyscale image, which is how a colour TV signal can be decoded by black-and-white sets.
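
For illustration, a minimal sketch of what 4:2:0 subsampling amounts to, assuming a chroma plane stored as a flat array with even width and height (the names and layout are my own, not any particular codec's):

```cpp
#include <cstdio>
#include <vector>

// Keep every luma (Y) sample, but store one chroma value per 2x2 block of
// pixels by averaging - a quarter of the original chroma data.
std::vector<double> subsample_420(const std::vector<double>& chroma,
                                  int width, int height)
{
    std::vector<double> out((width / 2) * (height / 2));
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            double sum = chroma[y * width + x]
                       + chroma[y * width + x + 1]
                       + chroma[(y + 1) * width + x]
                       + chroma[(y + 1) * width + x + 1];
            out[(y / 2) * (width / 2) + (x / 2)] = sum / 4.0;
        }
    }
    return out;
}

int main()
{
    // A 4x2 chroma plane becomes 2x1 after subsampling.
    std::vector<double> cb = { 100, 102, 110, 112,
                               104, 106, 114, 116 };
    for (double v : subsample_420(cb, 4, 2))
        std::printf("%.1f ", v); // prints 103.0 113.0
    std::printf("\n");
    return 0;
}
```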

BTW, MPEG is essentially the same but with motion compensation (and some protocol stuff, but that's orthogonal to the compression itself).

Guest Anonymous Poster
quote:
Original post by civguy
Hmm, I've used JPEGs as textures, and they looked fine (didn't notice any artifacts in game).



Looking fine and being fine are two separate issues here. Granted, JPEGs look wonderful, but in 3D we are often manipulating our textures on screen, be it through mip-mapping, anisotropic filtering, full-screen anti-aliasing, etc. We are constantly messing with these textures, and just about every video card approaches these operations differently. Using a lossy compression such as JPEG creates a problem in that the filtering implementation of a specific video card could adversely affect texture quality. Artifacting is a minor issue compared to some of the more severe image problems that can occur when mixing lossy compression with video hardware that does not do true filtering. Bottom line: things may look wonderful on your machine, but on someone else's machine your texture may look like an aerial view of the local dump. It's best to minimize problems like these by using lossless image formats such as BMP, TGA, or PNG.

quote:
Original post by Anonymous Poster
Looking fine and being fine are two separate issues here.
Oh come on, that's like superstition. My engine was tested on TNT2, Voodoos, ATI's old cards and a set of GeForces and it looked perfectly fine on each one of them. If I open the pictures and they look almost indistinguishable from their PNG counterparts, they aren't going to miraculously screw up when I throw them at the card. There's no magic going on behind JPEGs. If it looks fine, it is fine.

Like a view of the local dump?? I don't see how this is possible, unless of course the JPEG is in fact an image of a dump.

The JPEG will be stored as a bitmap in memory once it is loaded. Stretching, rotating, and otherwise editing the image will be no less effective on the image than if the original were a BMP.

If the original JPEG looks fine to your eye, then it will look no worse than a BMP would when mapped to a polygon. Of course, if your original JPEG is really ugly, then it will look just like a really ugly BMP.

Since your video card has problems displaying images that were formerly JPEGs, I think you should take it to see a head doctor; maybe it had a bad encounter with a JPEG while it was still in the factory.

Cheers,
Will

quote:
Original post by FrancoisSoft
JPEGs are for fools. Stick to PCX format. I don't think they lose quality.
PCX only uses RLE compression, so it doesn't lose quality, but it is mainly useful for 8-bit (palettised) images.
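
For illustration, a minimal sketch of byte-oriented run-length encoding. Note that the real PCX scheme marks a run with a byte whose top two bits are set and keeps the count in the lower six bits; this sketch just emits (count, value) pairs to show the idea, so it is not the actual on-disk format:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Collapse runs of identical bytes into (count, value) pairs. Lossless:
// the original scanline can be reconstructed exactly.
std::vector<std::uint8_t> rle_encode(const std::vector<std::uint8_t>& in)
{
    std::vector<std::uint8_t> out;
    std::size_t i = 0;
    while (i < in.size()) {
        std::uint8_t value = in[i];
        std::size_t run = 1;
        while (i + run < in.size() && in[i + run] == value && run < 255)
            ++run;
        out.push_back(static_cast<std::uint8_t>(run));
        out.push_back(value);
        i += run;
    }
    return out;
}

int main()
{
    std::vector<std::uint8_t> scanline = { 7, 7, 7, 7, 9, 9, 3 };
    std::vector<std::uint8_t> packed = rle_encode(scanline);
    for (std::size_t i = 0; i + 1 < packed.size(); i += 2)
        std::printf("%d x %d  ", (int)packed[i], (int)packed[i + 1]);
    std::printf("\n"); // prints "4 x 7  2 x 9  1 x 3"
    return 0;
}
```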

quote:
Original post by RPGeezus
Like a view of the local dump?? I don't see how this is possible, unless of course the JPEG is in fact an image of a dump.

The JPEG will be stored as a bitmap in memory once it is loaded. Stretching, rotating, and otherwise editing the image will be no less effective on the image than if the original were a BMP.

If the original JPEG looks fine to your eye, then it will look no worse than a BMP would when mapped to a polygon. Of course, if your original JPEG is really ugly, then it will look just like a really ugly BMP.

Since your video card has problems displaying images that were formerly JPEGs, I think you should take it to see a head doctor; maybe it had a bad encounter with a JPEG while it was still in the factory.

Cheers,
Will



You are making naive assumptions. The fact that a JPG is stored as a "BMP" when loaded into memory is irrelevant; the damage is done the second you compress the image in JPG format. JPEG is designed to do *lossy* compression based on humans' ability to perceive images. When viewed at its original size and by itself, the image is supposed to appear like the original to human eyes. However, this breaks down when you start using those images as textures, since textures can be enlarged beyond their original size and can be tiled.

For example, here are two images: one is a lossless screen capture, the other is a lossy (JPG) version of that screen capture.

They both look "fine to the eye", and so by your assumption they should be fine when manipulated. However, keep in mind that computers don't work like people: they see images as collections of pixels rather than areas of colour. Let's examine what those images look like as seen by a computer:

The original image:


The image stored as a JPG:


The above shows the kind of artifacts that become apparent to your end users when they see your textures up close, as resized by the computer. Another problem comes when you try to tile or otherwise fit lossy textures together. Because JPG exploits the weaknesses of human perception, it does a lot of damage at the edges of an image, so when placed side by side a texture that was originally seamless will now show seams where the JPG compression has modified the image.

It would be more interesting to see a second column showing the differences between the two textures used in a common 3D situation like textured onto a wall and linearly filtered. My *guess* is that the difference would not be quite as striking.

Author, "Real Time Rendering Tricks and Techniques in DirectX", "Focus on Curves and Surfaces"

Guest Anonymous Poster
True enough in theory for the picture example above - but by the time you've trilinear filtered it, applied a detail map, light map, bump map and gloss map, and have dynamic shadows covering half of it?

Then you run it at 60fps and hopefully have a bit more geometry, and maybe a few hundred monsters to shoot...

Then "ForumWars" texture maps aren't really going to be noticeably different whether stored as JPEG or BMP.

Brings a whole new meaning to "flame thrower"...

Users may complain, however, that your levels take up literally hundreds of megabytes and take a week to load off disk.

Anyway, the point is that with the approximation and smoothing you get from FSAA and trilinear filtering alone, I'm pretty sure you won't notice the difference.

Anyone got any 'real' screenshots?

Of course, for photorealistic non-realtime rendering...

quote:
Original post by Stoffel
Better yet, use lossless wavelets.
I tried one lossless wavelet implementation (Lurawave) and it consistently compressed worse than PNG.
quote:
Original post by Michalson

Your example picture is pretty silly, honestly. OK, I admit that I save very low-colour textures as PNGs because I know they compress better that way and can show noticeable artifacts as JPEGs. However, maybe 1-5% of any game's textures are like that (mostly just fonts).

The rest of them are grass, rocks, rusted pipes, slanted metal, etc. - real-life pictures, the kind JPEG was invented for. They contain natural noise by default, so any JPEG artifacts that occur are unnoticeable even when zoomed in really close. One can also use the highest quality setting in JPEG; I tried that with your artificial test image and the JPEG version became indistinguishable from the GIF version even in a close-up zoom.

Sure, your GIF was 4.5 kB and my JPEG was 40 kB, but that was an artificial test anyway, so I moved on to more realistic data. A texture I would consider normal was 69 kB as a PNG and 39 kB as a JPEG at 100% quality, and only the tone of some pixels varied slightly in a 10x close-up. I could easily reduce the JPEG size to 12 kB so that not even the creator himself could have told which one was the original picture (even with the help of a magnifying glass, of course).

So, what's next? Use WAV instead of MP3 because you can hear MP3 artifacts when the volume is loud?

quote:
Original post by Anonymous Poster
True enough in theory for the picture example above - but by the time you've trilinear filtered it, applied a detail map, light map, bump map and gloss map, and have dynamic shadows covering half of it?



That will only make it worse. While humans have a hard time seeing compression artifacts when viewing images under the right conditions (original size or smaller), the computer will have no trouble. Garbage in, garbage out.

quote:
Original post by Michalson
Garbage in, garbage out.


That doesn't really make sense. Under decent conditions, JPG will add a slight blur. So will filtering. Mixing the two might be slightly blurrier, but not in a horribly noticeable way. In more technical terms, if JPEG subsamples the chroma channels, that will not necessarily have any visible effect on your final 3D view.

Incidentally, I used your screenshots to view the textures under "usual" 3D conditions (I previewed them mapped onto a quad with the .dds plugin previewer). At a distance, the filtered versions were nearly identical. Up close, you obviously saw the artifacts, but this is to be expected. Which brings up another point: I created a jpg version of a portion of your original shot. The jpg was about 20% of the original size, but visually my jpg has far fewer artifacts than yours (more or less none). Are you using some very low quality setting? (Mine was 10 on the Photoshop scale.) If so, then of course you are going to get artifacts, but that doesn't make jpg bad. You can scale jpg quality so that it's essentially lossless: on some pictures, scale it down; on more sensitive pictures, scale it up...

Many settings will give you good compression and very little "garbage".


Author, "Real Time Rendering Tricks and Techniques in DirectX", "Focus on Curves and Surfaces"

Quite simply, the only way to know whether JPEG quality will be good enough for a given image is to do the following:

Save the BMP as a JPEG, then use your JPEG loader routine (the one the game will use at run time) to load that image again, then take a screenshot or otherwise save this in-memory version back to a BMP, and COMPARE IT TO THE ORIGINAL.
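
For example, here is a minimal sketch of the comparison step. It assumes both images have already been decoded into raw 24-bit RGB buffers by your own BMP and JPEG loaders (the function name and buffer layout are assumptions for the example, not part of any particular loader):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Compare two decoded images byte for byte (R, G, B per pixel) and report
// the worst and the average per-channel error introduced by the JPEG round
// trip. Zero everywhere means the JPEG was effectively lossless for this
// picture.
void compare_images(const std::vector<std::uint8_t>& original,
                    const std::vector<std::uint8_t>& roundtripped)
{
    if (original.size() != roundtripped.size() || original.empty()) {
        std::printf("images differ in size\n");
        return;
    }
    int maxError = 0;
    double sumError = 0.0;
    for (std::size_t i = 0; i < original.size(); ++i) {
        int diff = std::abs((int)original[i] - (int)roundtripped[i]);
        if (diff > maxError) maxError = diff;
        sumError += diff;
    }
    std::printf("max channel error: %d, mean error: %.2f\n",
                maxError, sumError / original.size());
}

int main()
{
    // Dummy 2x1 "images" standing in for real decoded BMP/JPEG pixel data.
    std::vector<std::uint8_t> bmp = { 10, 200, 30, 40, 50, 60 };
    std::vector<std::uint8_t> jpg = { 12, 198, 30, 41, 50, 58 };
    compare_images(bmp, jpg);
    return 0;
}
```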

As has been known since they were created, JPEGs are absolutely great for noisy, photorealistic data, but terrible for data with small colour palettes or high-contrast edges.

So, as stated, rusty pipes etc. will look fine as JPEG, BUT do NOT use JPEG for the textures you place on access keypads, nighttime skylines (sharp lighting edges), or anything with text or vivid geometric shapes on it.

It is hilarious to see people brag about a software product (like KDE or Mac OS X) being beautiful, and then put JPEG screenshots on the internet... which look AWFUL! The best general-purpose format (not terribly small, but lossless and with high colour depth support) is probably PNG, which is what I use for almost everything (except low-quality photos on the internet - a really sharp photo, like Ansel Adams' stuff, will look bad in JPEG too).

I agree with Xai's point about using the right tool. There are two variables:

1. What are you doing with it and/or what is the picture? If it is a case where JPEG is bad, don't use JPEG...

2. What quality setting are you using? Don't post the lowest quality JPEG as proof that JPEGs are universally bad.

Author, "Real Time Rendering Tricks and Techniques in DirectX", "Focus on Curves and Surfaces"



Seriously, Quake 3 uses JPEGs for all of its textures that don't have alpha. The texture quality looks good to me.


Mike

"The state is that great fiction by which everyone tries to live at the expense of everyone else." - Frederick Bastiat, The Law
