Shading with a palette?


Hi, I'm currently trying to develop a software renderer. For now I'm using a 256-color palette and it's fine for rendering flat triangles. Now I'm adding Gouraud shading and I face a new problem: I have one color per vertex in my triangles, but as the color is just an index into the palette, how can I interpolate these colors across the triangle? Can I interpolate the indices? Do I need to set up my palette in a certain way? Or is it just impossible? I hope someone can enlighten me on this. I already asked Google but couldn't find anything really useful. Thanks. PS: If you know some good tutorials/resources about software rendering, I'm interested too.

Share on other sites
First of all, let me tell you that I'm not entirely sure whether it works or not. It seems hard, though: normal interpolation simply uses the separate components to create an "accurate" approximation, and that is hard to do here since your palette could consist of several different color values in different places.

I don't entirely understand why you would want to limit yourself to such a palette. Could it be because you want to create a cel-shading effect? If so, I would recommend using three components and then limiting the colors.

Another thing you should consider is whether or not you want to use Gouraud shading in a software renderer at all. I would personally recommend you use flat shading and divide the model up, while rendering, into triangles no larger than a pixel in the finished image.

Share on other sites
I agree with you that it's not trivial.
It has nothing to do with cel shading or anything like that; it's just that I'm developing the renderer for an embedded device.
I probably could use a 16-, 24- or 32-bit true-color format, but I don't really need that many colors in my application. And since the memory capacity of the hardware is pretty low, I'd be happy if I could work with 8-bit color palettes.

I basically see two potentially possible ways to do it:
1) Set up the palette in a certain way so that interpolation just works, maybe using an adapted interpolation function.
2) Use a lookup table linking the palette colors together, a bit like how blending with color palettes works.

Also, I would like to know whether it's possible or not, just for my personal knowledge.
It seems to me that shading was possible even back in the old DOS days, and in those days palette colors were frequently used. So there must be a way to do it.

About dividing the model at runtime and rendering flat polygons: it doesn't really solve my problem, because when I divide a polygon I still need to interpolate the color for each new sub-polygon created.

Thanks for the help.

Share on other sites
Well, my thought was that it was lighting-dependent, but if not, then I understand you. I'm still wondering whether it will look good, though; that's worth checking before looking for a solution. You could try making a triangle in Photoshop, interpolating correctly there, and then limiting the colors. I think you will find it looks very strange, but that is just a guess. I would probably stick with flat shading or some sort of texture if possible.

Share on other sites
Embedded devices should be able to handle at least 16-bit color nowadays; however, a LUT will work as you expect it to.

Back in the day (I remember talking about Dungeon Keeper, for instance), LUTs were set up to blend between two different colors in a palette. You could use multiple palettes for different discrete blend factors (such as 50%, 25%, etc.).
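A minimal sketch of such a blend LUT, with structure and helper names being my own assumptions rather than anything from Dungeon Keeper: for every pair of palette indices, precompute the palette index whose color best matches their 50% mix.

```c
typedef struct { unsigned char r, g, b; } Rgb;

/* Hypothetical helper: nearest palette index by squared RGB distance. */
static int nearest_index(const Rgb *pal, int n, int r, int g, int b)
{
    int best = 0;
    long best_err = 0x7FFFFFFF;
    for (int i = 0; i < n; i++) {
        long dr = pal[i].r - r, dg = pal[i].g - g, db = pal[i].b - b;
        long err = dr * dr + dg * dg + db * db;
        if (err < best_err) { best_err = err; best = i; }
    }
    return best;
}

/* Precompute a 50% blend LUT: blend50[a * n + b] is the palette index
   whose color best matches the average of palette entries a and b. */
void build_blend50(const Rgb *pal, int n, unsigned char *blend50)
{
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++)
            blend50[a * n + b] = (unsigned char)nearest_index(
                pal, n,
                (pal[a].r + pal[b].r) / 2,
                (pal[a].g + pal[b].g) / 2,
                (pal[a].b + pal[b].b) / 2);
}
```

For other blend factors (25%, etc.) you would build one more table per factor, as the post describes.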

Also, you could use dithering (OpenGL, for example, supports dithering in index mode), or color fitting, where the color from the palette closest to the blend of two shades is picked. Tons of info should be uncovered by Google :-).

Share on other sites
Given the poor quality of the LCD display I have, I think it will look good enough... if I find a way to do it, of course.

A trivial way to do it would be to treat the 8-bit-per-pixel colors as colors in a packed 323 format.
I'm going to try this right now, but I'm open to any other suggestions.
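A sketch of that idea (the exact 3:2:3 bit layout and helper names are my own assumptions): unpack the index into its red, green and blue fields, interpolate each component, and repack.

```c
/* Pack/unpack an 8-bit index treated as RGB 3:2:3 (3 bits red,
   2 bits green, 3 bits blue) -- the layout is an assumption. */
static unsigned char pack323(int r, int g, int b)  /* r,b in 0..7, g in 0..3 */
{
    return (unsigned char)((r << 5) | (g << 3) | b);
}

static void unpack323(unsigned char c, int *r, int *g, int *b)
{
    *r = (c >> 5) & 7;
    *g = (c >> 3) & 3;
    *b = c & 7;
}

/* Interpolate two packed colors component-wise; t runs from 0 to 256. */
unsigned char lerp323(unsigned char c0, unsigned char c1, int t)
{
    int r0, g0, b0, r1, g1, b1;
    unpack323(c0, &r0, &g0, &b0);
    unpack323(c1, &r1, &g1, &b1);
    return pack323(r0 + (r1 - r0) * t / 256,
                   g0 + (g1 - g0) * t / 256,
                   b0 + (b1 - b0) * t / 256);
}
```

In a real inner loop you would keep the three components in fixed point across the span and only repack per pixel, rather than unpacking both endpoints each time.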

Thanks.

Share on other sites
Thanks todo.

Color fitting won't be fast enough, imo.

I see how LUTs can be used to blend two colors,
but at the moment I fail to see how they can work for shading too.

Now I'm going to read about dithering. It would be cool if it could improve quality.

Thanks.

Share on other sites
Quote:
 Original post by Eddycharly
Hi, I'm currently trying to develop a software renderer. For now I'm using a 256 color palette and it's fine for rendering flat triangles. Now I'm adding Gouraud shading and I face a new problem: I have one color per vertex in my triangles. But as the color is just an index in the palette, how can I interpolate these colors across the triangle? Can I interpolate the indices? Do I need to set up my palette in a certain way? Or is it just impossible?

Off the top of my head...

If you had a flat-shaded triangle and then switched to Gouraud, it's likely that you just have flat-scaled values of the base colour on the vertices. Although that's a simple case, let's skip it and go to the generic cheap affine texture mapping and lighting case, as it allows you to do more advanced shading without a texture anyway.

So you have a 256-colour palette on the display you want to raster to. From now on let's call that DisplayPalette[ 256 ] and assume it has RGB values for each index.

You basically have a three-dimensional colour space: a 2D texture (think vertex colours) plus lighting. Let's assume you want to support 48 colours for the texture and 8 shades of light. The texture has its own palette; let's name it TexturePalette[ 48 ], which also has its own RGB values.

You 'could' create a LUT that's indexed like so:

TextureAndLightToPaletteColours[ NumberOfTextureColours = 48] [ NumberOfLightShades = 8 ]

and precalculate it like so:
for (TexelIndex = 0; TexelIndex < NumberOfTextureColours; TexelIndex++)
{
  for (LightIndex = 0; LightIndex < NumberOfLightShades; LightIndex++)
  {
    TextureAndLightToPaletteColours[ TexelIndex ][ LightIndex ] =
      GetIndexIntoDisplayPaletteForTextureAndLight( TexelIndex, TexturePalette, LightIndex );
  }
}

Where GetIndexIntoDisplayPaletteForTextureAndLight() might look just like so:
BestDisplayPaletteIndex GetIndexIntoDisplayPaletteForTextureAndLight( TexelIndex, TexturePalette[], LightScale )
{
  BestDisplayPaletteIndex = -1;
  BestDisplayPaletteError = 0x7fffff; ;; some large number that's > the colour error function
  ScaledTextureColour = LightTextureColour( TexturePalette[ TexelIndex ], LightScale );
  for (PaletteIndex = 0; PaletteIndex < NumberOfDisplayPaletteColours; PaletteIndex++)
  {
    PaletteIndexToTextureError = SquaredDifference( ScaledTextureColour, DisplayPalette[ PaletteIndex ] );
    if (PaletteIndexToTextureError < BestDisplayPaletteError)
    {
      BestDisplayPaletteIndex = PaletteIndex;
      BestDisplayPaletteError = PaletteIndexToTextureError;
    }
  }
  return BestDisplayPaletteIndex;
}

SquaredDifference( RGB1, RGB2 )
{
  return ((R1 - R2)^2) + ((G1 - G2)^2) + ((B1 - B2)^2);
}

i.e. each colour component's difference, squared. There are better colour-fit functions, btw; you can google them.

So now you can translate a texture colour from TexturePalette's colour space to DisplayPalette's colour space with minimal error, while transforming it with a light intensity value, like so:

SampledTexel = Texture[ GetTextureOrigin ];        ;; which is in TexturePalette colourspace
LightIntensity = CalculateLightIntensityForPixel;  ;; basically interpolation (linear or not) of the triangle's light values over the raster coordinates
DisplayPixelValue = TextureAndLightToPaletteColours[ SampledTexel ][ LightIntensity ];  ;; so look up texture + light in the LUT

Some thoughts: Gouraud shading is, in most cases, linear interpolation between two light intensity values on a solid surface (solid as in a constant colour or material), which can be represented as a one-dimensional texture. Instead of using a generic light intensity function to calculate the vertex colour indices, you could use the light intensity values to index into a one-dimensional texture for the material, which can be precalculated with higher precision and/or diffuse/ambient/reflection modifiers, and use that as a TexturePalette / Texture.
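That "one-dimensional texture" idea might be sketched like this (my own naming and a simple linear intensity scale, not anything from the post): precompute, for one material colour, the best-fit display palette index at each light shade.

```c
typedef struct { int r, g, b; } Rgb;

enum { NUM_SHADES = 8 };

/* Hypothetical best-fit search over the display palette,
   like the one sketched earlier in the thread. */
static int nearest(const Rgb *pal, int n, Rgb c)
{
    int best = 0;
    long best_err = 0x7FFFFFFF;
    for (int i = 0; i < n; i++) {
        long dr = pal[i].r - c.r, dg = pal[i].g - c.g, db = pal[i].b - c.b;
        long e = dr * dr + dg * dg + db * db;
        if (e < best_err) { best_err = e; best = i; }
    }
    return best;
}

/* Build a 1D "material ramp": the display-palette index of the material
   colour at each of NUM_SHADES light intensities. At raster time the
   interpolated light value indexes straight into this table. */
void build_ramp(const Rgb *pal, int n, Rgb material, unsigned char ramp[NUM_SHADES])
{
    for (int s = 0; s < NUM_SHADES; s++) {
        Rgb lit = { material.r * s / (NUM_SHADES - 1),
                    material.g * s / (NUM_SHADES - 1),
                    material.b * s / (NUM_SHADES - 1) };
        ramp[s] = (unsigned char)nearest(pal, n, lit);
    }
}
```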

And yes, coloured lights in software are very hard to do under memory and processing power constraints, so back to your vertex lighting. Each vertex has a value in the DisplayPalette. You could make a two-dimensional table which holds one-dimensional lookup tables to blend between two colours. Such a table would be kind of huge: 8 bit x 8 bit x L bit, where L would be the range of distinct colour entries. I suggest reducing the number of possible vertex colours and using a table that represents light shades on a given material. You also don't have to map everything into the DisplayPalette. Optimizing vertex colours down to 2 bits per colour component plus 3 bits for light intensity (i.e. 2:2:2-bit RGB and 8 shades) can yield good results on small LCD displays; such a table would need just a 6+6+3 = 15-bit index. You have to keep high precision on the components when rasterizing, though. Another possible solution would be to make tables that quantize the vertex colours in DisplayPalette to 5 bits, say a table that holds quantized DisplayPalette colours for vertex lighting.

Oh, and if your display supports 16-bit colour precision, you could skip the DisplayPalette translation and go from one TexturePalette to the screen directly by using a LUT that holds 16-bit values for each entry. From my experience, the display palette just needs to hold the most relevant colours for the given content, while the rest of the colour space conversion is better handled by tiny tables that fit into a row in the data cache.

Quote:
 I hope someone can enlighten me on this. I already asked Google but couldn't find anything really useful. Thanks. PS: If you know some good tutorials/resources about software rendering I'm interested too.

Articles by Chris Hecker and Michael Abrash come to mind. You might also want to check older demo-scene articles (start with a Google search for "Tumblin Bodies In Motion").

Share on other sites
Quote:
 Original post by Eddycharly
A trivial way to do it would be to treat the 8 bit per pixel colors as a color in packed 323 format.

Do not use just 2 bits for green; the eye is more sensitive to green than to red and blue. You could use 332 maybe.

Another way to encode colors would be to create a color cube of 6x6x6 = 216 colors, so there are still 40 entries in the palette available for special colors (menus/dialogs, for instance).

When setting up the rasterizer, just decode the color into separate values for red, green and blue, and interpolate those separately. Since it is not possible to create a nice gradient with so few colors, apply some noise to the color (add -1/0/+1 to each component). To prevent the image from flickering during an animation, make sure that the noise pattern looks the same in every frame (maybe just encode it in a LUT).

Before storing the pixel, convert the color back into the indexed representation. With a properly set up palette this should be simple.
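The encode/decode for such a 6x6x6 cube can be sketched as follows (assuming, as is common, that the cube occupies palette entries 0..215, with the 40 special colors after it):

```c
/* Encode r,g,b in 0..5 to a palette index 0..215, and decode back.
   The cube is assumed to occupy palette entries 0..215. */
static unsigned char cube_encode(int r, int g, int b)
{
    return (unsigned char)(r * 36 + g * 6 + b);
}

static void cube_decode(unsigned char i, int *r, int *g, int *b)
{
    *r = i / 36;
    *g = (i / 6) % 6;
    *b = i % 6;
}

/* Quantize an 8-bit channel value (0..255) to the nearest of 6 levels. */
static int quantize6(int c)
{
    return (c * 5 + 127) / 255;
}
```

The divisions and modulo in cube_decode are what makes this more expensive per pixel than a power-of-two format like 332, where the same job is a shift and a mask.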

Share on other sites
Hi nmi.
I implemented the 233, 323, and 332 color formats.
I can't really decide which one is better yet.
The 323 format seems to produce a lot of "patterning".
233 and 332 seem to be smoother, but they look less comfortable to the eye.

I'm going to keep all three formats and will decide when I have a real scene.
At the moment I just render a spinning cube with random colors or the color cube's colors.

For the color cube's colors the result is not bad, but it's a bit biased because it's the best case for all faces.
With random colors it's quite a bit worse.

Here are three images, one for each format :
RGB 233 : http://img511.imageshack.us/my.php?image=rgb233ux3.png
RGB 323 : http://img291.imageshack.us/my.php?image=rgb323dx8.png
RGB 332 : http://img142.imageshack.us/my.php?image=rgb332oa2.png

I'm going to try the 6*6*6 color cube you suggested. Maybe it will help because of the better distribution across the RGB channels.

I'm not sure about the noise thing you mention. I have seen in old movies like TRON that there is some sort of noise in the colors. I'm not sure whether it was intended by the creators of the film or a side effect of their rasterizer implementation.

For now, I propagate the color conversion error along the horizontal spans from left to right. It prevents big blocks of the same color along a span and sort of looks like generated noise, but it doesn't look like what I saw in TRON.
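That left-to-right error propagation is a simple 1D error diffusion; a minimal sketch for one 8-bit channel quantized to 6 levels (my own assumption for the target format, not the poster's exact code):

```c
/* Propagate quantization error along a span, left to right: each pixel's
   rounding error is added to the next pixel before quantizing.
   in[]: 8-bit channel values (0..255); out[]: quantized levels (0..5). */
void quantize_span(const int *in, int *out, int n)
{
    int err = 0;
    for (int x = 0; x < n; x++) {
        int want = in[x] + err;
        if (want < 0)   want = 0;
        if (want > 255) want = 255;
        int q    = (want * 5 + 127) / 255;  /* nearest of 6 levels */
        int back = q * 255 / 5;             /* value that level represents */
        err = want - back;                  /* carry the error rightward */
        out[x] = q;
    }
}
```

On a flat run of an in-between value this alternates between the two neighboring levels instead of producing one big block, which matches the "sort of looks like noise" effect described.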

Thanks.

Share on other sites
H3llBl4u,
In fact my problem is not directly related to lighting.
I need to interpolate from any color to any color,
so a solution working with light intensities and supporting only white lights won't fit.

Generating a big LUT containing transitions for each pair of colors was my first idea. It's big (1 meg for 16 entries per color pair), but I think I can live with that.
The main problem I see with such a method is that I can't do dithering anymore,
and the quality loss will surely be terrible.
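The size works out as described: 256 x 256 pairs x 16 transition steps x 1 byte = 1 MiB. A sketch of the indexing (layout is my own choice; filling the table needs a nearest-color search like the one posted above):

```c
enum { NUM_COLORS = 256, NUM_STEPS = 16 };

/* transitions[transition_offset(a, b, s)] would hold the palette index
   for color a blended toward color b at step s of NUM_STEPS.
   Total table size: 256 * 256 * 16 = 1 MiB. */
static unsigned long transition_offset(int a, int b, int step)
{
    return ((unsigned long)a * NUM_COLORS + b) * NUM_STEPS + step;
}
```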

Anyway, you made some pretty good points in your post, and I'll keep it close for when I try to implement the big-LUT method.

Share on other sites
A 6*6*6 color cube gives good results with dithering.
I can't see a difference from the other palettes.
It produces more "patterning" than a 332 palette but less than a 323 one.

The good: I have 40 colors to use freely.
The bad: the code for extracting RGB components from a color index, and for converting RGB back into a color index, is more expensive than for a classic format.

Here's a picture using the 6x6x6 palette :
http://img518.imageshack.us/my.php?image=rgb6x6x6vd2.png

Share on other sites
The renderers in Doom, Quake and Quake 2 used lookup tables. They were actually stored as images; you could extract them from the data files and fiddle with them. There were no colored lights, so the lookup table was just a 256x64 pixel block (for 64 different light values). Quake 2 added a 256x256 block for transparency, cleverly set up at 33% transparency so you could swap the coordinates and get 66% with the same table. Since these games used textures with no filtering, there was no need to interpolate from one color to another.
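Building such a 256x64 table could be sketched like this (a guess at the construction, scaling each palette color by the light level and best-fitting it back into the palette; names are mine):

```c
enum { NUM_LIGHT = 64, NUM_COLORS = 256 };

typedef struct { int r, g, b; } Rgb;

/* Hypothetical nearest-color search, as elsewhere in the thread. */
static int nearest(const Rgb *pal, int n, int r, int g, int b)
{
    int best = 0;
    long best_err = 0x7FFFFFFF;
    for (int i = 0; i < n; i++) {
        long dr = pal[i].r - r, dg = pal[i].g - g, db = pal[i].b - b;
        long e = dr * dr + dg * dg + db * db;
        if (e < best_err) { best_err = e; best = i; }
    }
    return best;
}

/* colormap[light][color] = palette index of that color darkened to the
   given light level, in the spirit of Doom's 256x64 COLORMAP. At raster
   time shading is then one table lookup per pixel. */
void build_colormap(const Rgb *pal, unsigned char colormap[NUM_LIGHT][NUM_COLORS])
{
    for (int l = 0; l < NUM_LIGHT; l++)
        for (int c = 0; c < NUM_COLORS; c++)
            colormap[l][c] = (unsigned char)nearest(pal, NUM_COLORS,
                pal[c].r * l / (NUM_LIGHT - 1),
                pal[c].g * l / (NUM_LIGHT - 1),
                pal[c].b * l / (NUM_LIGHT - 1));
}
```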

It might be worth a try to render internally in 15-bit (555) and only convert to 256 colors for the final output. That way you just need a lookup table with a color index for each of the 32768 possible 15-bit RGB colors.
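In outline (my own sketch of that suggestion): interpolate in 555, then run the finished scanline through one 32 KB table to get palette indices.

```c
/* Pack 8-bit channels into 15-bit 5:5:5. */
static unsigned pack555(int r, int g, int b)
{
    return ((unsigned)(r >> 3) << 10) | ((unsigned)(g >> 3) << 5) | (unsigned)(b >> 3);
}

/* Final conversion: one lookup per pixel through a 32768-entry table
   mapping each 15-bit color to a palette index (building the table
   needs a one-time nearest-color search, omitted here). */
void convert_scanline(const unsigned short *src, unsigned char *dst, int n,
                      const unsigned char *lut15)
{
    for (int x = 0; x < n; x++)
        dst[x] = lut15[src[x] & 0x7FFF];
}
```

The appeal is that all the per-pixel shading math happens in a uniform color space, and the palette only matters once, at output time.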

Share on other sites
Quote:
 Original post by Eddycharly
A 6*6*6 color cube gives good results with dithering. I can't see a difference with the other palettes. It produces more "patterning" than a 332 palette but less than a 323 one. The good is that I have 40 colors to use freely. The bad is that the code for extracting RGBs from the color index and for converting RGBs into a color index is more expensive than for a classic format. Here's a picture using the 6x6x6 palette:
http://img518.imageshack.us/my.php?image=rgb6x6x6vd2.png

If you choose a direct colour model, i.e. a 6x6x6 or 8x8x4 cube, I suggest scaling the RGB channels directly, even if the components are not bit-aligned. Remember to just unpack when setting up the rasterizer; merging this back into a palette index in the scanline render function is kinda simple too when using a direct-mapping palette.

For a range of 6 shades, like Red (0-5)*36, Green (0-5)*6, Blue 0-5:

PackedRGB = Blue + (Green * 6) + (Red * 36)
or
PackedRGB = Blue + (Green << 1) + (Green << 2) + (Red << 2) + (Red << 5)
or, scaled to values of 0-7:
PackedRGB = LUT[Blue + (Green << 3) + (Red << 6)]

Comes down to memory traffic vs. cpu cycles.
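The two pack formulas above are equivalent, since 6 = 2 + 4 and 36 = 4 + 32; in runnable form:

```c
/* Pack three 0..5 shade values into a 6x6x6 cube index:
   once with multiplies, once with the shift decomposition from the post
   (Green * 6 = (Green << 1) + (Green << 2), Red * 36 = (Red << 2) + (Red << 5)). */
static int pack_mul(int r, int g, int b)
{
    return b + g * 6 + r * 36;
}

static int pack_shift(int r, int g, int b)
{
    return b + (g << 1) + (g << 2) + (r << 2) + (r << 5);
}
```

Whether the shift form is actually faster than the multiply on the target CPU is exactly the "memory traffic vs. cpu cycles" trade-off mentioned above.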

It's hard to give precise advice when the platform and use case aren't known. From what I could gather, you might be developing something for a Gamepark32-like device... but that's just a guess.

PS: low-bitcount direct colour models look like crap; they make artists smack developers upside the head.
