

16-bit Grayscale + Color Palette


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.


#1 Tim-Tim   Members   -  Reputation: 101

Posted 08 April 2014 - 01:05 AM

While searching for the best way to optimize the size of sprites for animation, I found out that there is a way to do this using a 16-bit grayscale image (PNG) + a custom color palette (I think it's 8-bit = 256 colors). So here are my questions:

 

1. What tools can be used to convert 24-bit RGB to 16-bit grayscale + color palette and vice versa?

 

2. What is the point of using 16-bit grayscale for "luminance"?
(Device displays can only produce 8-bit output, and the color palette holds only 256 colors.)

 

p.s.: I hope my English is understandable.




#2 haegarr   Crossbones+   -  Reputation: 4575


Posted 08 April 2014 - 02:23 AM

Chances are that you are referring to indexed colors: using a color palette (as opposed to a true-color picture) usually means that the image data are indices into the palette. For each pixel, the index is fetched from the image, the color is fetched from the palette using that index, and that color is displayed for the pixel. So the 16 bits of pixel data are probably not luminance but an index, and the palette then stores up to 65,536 colors.
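A minimal sketch of that lookup (the palette and image data here are illustrative, not taken from any real file format):

```python
# Indexed color: each pixel stores a small index; the palette maps
# indices to actual RGB colors.

palette = [
    (0, 0, 0),        # index 0 -> black
    (255, 0, 0),      # index 1 -> red
    (0, 255, 0),      # index 2 -> green
    (255, 255, 255),  # index 3 -> white
]

# The "image" is a grid of palette indices rather than RGB triples.
indexed_image = [
    [0, 1, 1, 0],
    [2, 3, 3, 2],
]

# Resolving indices to displayable colors:
rgb_image = [[palette[i] for i in row] for row in indexed_image]
print(rgb_image[1][1])  # (255, 255, 255)
```

With 16-bit indices the grid values could range from 0 to 65535, and the palette would need up to 65,536 entries.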

 

The usual way of computing a color palette is to count the distinct colors used in the original true-color picture, rank them by their importance to the image, and remap the less important colors to the nearest of the more important ones. This is repeated until the allowed number of colors is reached.
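A naive sketch of that idea, using raw frequency as the "importance" measure (real quantizers such as median-cut are considerably smarter; `reduce_palette` is a hypothetical helper, not a library function):

```python
from collections import Counter

def reduce_palette(pixels, max_colors):
    """Keep the most common colors; remap the rest to the
    nearest kept color (squared Euclidean distance in RGB)."""
    counts = Counter(pixels)
    kept = [c for c, _ in counts.most_common(max_colors)]

    def nearest(color):
        return min(kept, key=lambda k: sum((a - b) ** 2 for a, b in zip(color, k)))

    return [c if c in kept else nearest(c) for c in pixels]

# Five near-black pixels, three red pixels, one rare near-black outlier:
pixels = [(10, 10, 10)] * 5 + [(200, 0, 0)] * 3 + [(11, 11, 11)]
reduced = reduce_palette(pixels, max_colors=2)
print(len(set(reduced)))  # 2 -> the outlier was folded into near-black
```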

 

I don't know which current graphics programs still support color palettes (especially 16-bit ones). I know that Photoshop has some support, and others probably do as well.

 

(Note that extracting luminance gives you a fine grayscale image, but you cannot recover the original colors, because information is lost (of the original 3 dimensions, only 1 is left). You can, of course, colorize the grayscale image, but you have no information about the color distribution, so you could only colorize with a single color. Otherwise you would need masks that relate regions to colors; this technique is sometimes used for customizable team colors and the like. But from your description, I tend to assume you are speaking of indexed colors as described above.)

 

BTW: Perhaps you have a link to a page where the method you mentioned is described?


Edited by haegarr, 08 April 2014 - 02:24 AM.


#3 Tim-Tim   Members   -  Reputation: 101


Posted 08 April 2014 - 04:31 AM

You are right; I do mean indexed colors. So I think some kind of self-made converter was used to produce this output. And the sprite is the result of applying some general color table (not limited to 256 colors), shared by all the game art resources, to this 16-bit grayscale image?

 

Attached Thumbnails

  • 00.png


#4 frob   Moderators   -  Reputation: 22684


Posted 09 April 2014 - 01:24 AM

You would use a 16-bit value so you have more precision for later math.

 

If you adjust the values later, the extra precision helps reduce or eliminate banding.

 

For example, let's say you started with all 256 values in an 8-bit range. Then you ran it through some processing algorithm, perhaps a color curve filter. Since the results get rounded back to integers, you might (as an example) have only 163 distinct values after going through the filter. The filter cost you about 1/3 of the values.

 

Now let's say you started with all 65,536 values in your 16-bit range and ran it through the same filter. When you are done, you get 41,727 distinct values. Again the filter cost you about 1/3 of the values.

 

When you render the first image, those 163 values are sent to the screen. Since a smooth grayscale ramp would need 256 values rather than 163, the image shows visible bands. When you render the second image, those 41,727 values get reduced down to the 256 values that can be displayed, so all 256 values appear smoothly on screen.
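The effect above can be measured directly. This sketch applies a gamma curve (an exponent of 2.2 is an assumption here; frob's exact filter and counts would differ) once in 8-bit and once in 16-bit, then counts how many distinct 8-bit output values survive:

```python
GAMMA = 2.2

# 8-bit pipeline: apply the curve and round back to 8-bit immediately.
values_8 = {round(255 * (v / 255) ** GAMMA) for v in range(256)}

# 16-bit pipeline: apply the curve in 16-bit, then quantize to
# 8-bit only once, at display time.
values_16 = {round(255 * (round(65535 * (v / 65535) ** GAMMA) / 65535))
             for v in range(65536)}

print(len(values_8))   # far fewer than 256 -> gaps between values -> banding
print(len(values_16))  # 256 -> every displayable gray survives
```

The 8-bit set loses a large fraction of its values to rounding, while the 16-bit intermediate is fine-grained enough that every one of the 256 displayable grays is still reachable after the final quantization.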

 

Many algorithms cause this kind of issue. Lighting, fragment shading, gamma correction, and many other routines can introduce this type of quantization.

 

Since you posted in the Mobile forum, you should know that few mobile games use PNG as their image format. While the format is great for the web and some other uses, it isn't great for games, since it requires decompression (extra memory and processing power) and can't be used directly by the graphics chip. For your final game assets you should probably be using the ETC or PVRTC formats, depending on whether you are targeting Android or iOS.


Check out my book, Game Development with Unity, aimed at beginners who want to build fun games fast.

Also check out my personal website at bryanwagstaff.com, where I write about assorted stuff.







