Illuminate

colours and bits



I want to know something about computer colours. In a computer everything is numbers, so with 256 colours each pixel is 1 byte (0-255) and every number is a different colour. 16-bit colours are made of 2 bytes (2 bytes = 16 bits) in this bit order: 1 unused bit, then 5 bits red, 5 bits green and 5 bits blue. 24-bit colour is the maximum for computer colours because it uses 1 byte for red, 1 byte for green and 1 byte for blue. The programs with the most painting precision use 24-bit because there are three values (0-255) to set the colour. What is 32-bit colour, and why don't I see any difference between 24-bit and 32-bit colours?
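To show what I mean, here is a small C sketch (my own illustration, not from any particular library) that packs 8-bit red, green and blue values into that 16-bit layout by keeping only the top 5 bits of each component:

#include <stdint.h>
#include <stdio.h>

/* Pack 8-bit R, G, B into 16-bit X1R5G5B5: drop the low 3 bits of
   each component and place the fields at bit positions 10, 5 and 0. */
static uint16_t pack_x1r5g5b5(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3));
}

int main(void)
{
    printf("0x%04X\n", pack_x1r5g5b5(255, 0, 0)); /* pure red -> 0x7C00 */
    return 0;
}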

There are many 'formats' available, both for 8-bit (which is not always palettized) and for other depths. 16-bit can be X1R5G5B5 (one unused bit), A1R5G5B5 (one transparency bit) or R5G6B5. 24-bit is R8G8B8 (used in bitmap files, for instance), and 32-bit can be X8R8G8B8 (the standard 32-bit/24-bit mode on your desktop) or A8R8G8B8 (an 8-bit alpha channel, used for transparency).
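As an aside, here is a sketch (my own example code, not any API's) of how a 16-bit R5G6B5 value can be unpacked back to 8-bit components; replicating the top bits into the low bits spreads each field over the full 0-255 range:

#include <stdint.h>

/* Unpack 16-bit R5G6B5 into 8-bit components. After shifting each
   field up, the top bits are replicated into the low bits so that
   31 (or 63 for green) maps to exactly 255. */
static void unpack_r5g6b5(uint16_t c, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (uint8_t)(((c >> 11) & 0x1F) << 3);
    *g = (uint8_t)(((c >>  5) & 0x3F) << 2);
    *b = (uint8_t)(( c        & 0x1F) << 3);
    *r |= *r >> 5;
    *g |= *g >> 6;
    *b |= *b >> 5;
}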

There is usually no difference between what are called 24-bit and 32-bit colors. A 24-bit (3-byte) value gets padded to 32 bits (4 bytes) simply because that is easier and faster to manipulate. When space is a concern, it is converted back to 24-bit (for storage in a 24-bit bitmap, say).
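In code terms the padding is trivial; a sketch (the function name is mine):

#include <stdint.h>

/* Pad three 8-bit components into a 32-bit X8R8G8B8 word. The top
   byte is unused, so the color information is identical to 24-bit,
   which is why the two modes look the same on screen. */
static uint32_t pack_x8r8g8b8(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}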

I also misread your topic as 'odours and bits'.

As bytecoder said, the only difference between 24-bit and 32-bit colors on normal computers is that 32-bit has an alpha channel, which is normally used to indicate the level of transparency.

You assume a lot of stuff that is not always correct. First, a byte is not always 8 bits (even though it is on most PCs). Second, the format of 16-bit colors varies a lot: the two most common representations are 5 bits for each color, and 5 bits red, 6 bits green (since the eye is more sensitive to green) and 5 bits blue. Also, you say that 24-bit is the maximum because 1 byte is used for each color, but why can't we use 2 bytes, or 128 bits for that matter? It is hard for the eye to see a difference between 24-bit color and higher, but it is possible to use many more bits per color.
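For instance, widening a component from 8 to 16 bits is easy; here is a sketch (my own helper, not a standard function):

#include <stdint.h>

/* Widen an 8-bit component to 16 bits. Multiplying by 257 (0x0101)
   replicates the byte into both halves, so the endpoints 0 and 255
   map exactly to 0 and 65535. */
static uint16_t widen_8_to_16(uint8_t c)
{
    return (uint16_t)(c * 257u);
}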

EDIT: To back up some of this (I got the data from this site):
DirectX has a color format called:
D3DFMT_A32B32G32R32F
which uses 32 bits for each color (they are floating-point, though), but it also has the format:
D3DFMT_A16B16G16R16
which uses normal 16-bit unsigned integers for each color. There is also a format where 10 bits are used per color (except alpha, which only gets 2):
D3DFMT_A2B10G10R10
As for the 16-bit colors, here are the most common formats:
D3DFMT_R5G6B5
D3DFMT_X1R5G5B5
D3DFMT_A1R5G5B5
D3DFMT_A4R4G4B4
D3DFMT_A8R3G3B2 - Not that common
D3DFMT_X4R4G4B4
X means unused, A alpha, R red, G green, B blue.
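Reading those names from the most significant bits down, decoding an A2B10G10R10 pixel would look something like this (a sketch assuming that convention; the function name is mine):

#include <stdint.h>

/* Decode an A2B10G10R10 pixel: 2 bits of alpha in the top bits,
   then 10 bits each of blue, green and red down to bit 0. */
static void unpack_a2b10g10r10(uint32_t c, unsigned *a, unsigned *b,
                               unsigned *g, unsigned *r)
{
    *a = (c >> 30) & 0x3;
    *b = (c >> 20) & 0x3FF;
    *g = (c >> 10) & 0x3FF;
    *r =  c        & 0x3FF;
}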

Quote:

Also, you say that 24-bit is the maximum because 1 byte is used for each color, but why can't we use 2 bytes, or 128 bits for that matter? It is hard for the eye to see a difference between 24-bit color and higher, but it is possible to use many more bits per color.

I think he was referring to the fact that graphics cards usually don't support anything higher than 8 bits per component.
