
# eliminating color banding in a sphere


## Recommended Posts

I need some advice on how to eliminate "banding" when drawing a sphere. I think that I need to dither, or add some noise, but that is about all I know.

I am one of the principal developers of Jmol, an open source molecular viewer at http://www.jmol.org. Small atoms look fine, but when one zooms in and the spheres get big, I am getting some banding: http://www.jmol.org/banding/bigsphere.gif

Any advice/pointers on what algorithm I can apply to eliminate this would be greatly appreciated. Performance is not that important because the shading of the spheres is cached.

Thanks in advance.
Miguel

##### Share on other sites
I can think of 2 options here:
1) Increase your color depth; it may help.
2) Add a "fuzzy," very uniform noise texture to it, thus reducing the appearance of the bands (unless you want it to be perfectly glossy).

##### Share on other sites
Adding noise to your rendering will simply make it look noisy: there will still be banding, except it will be noisy banding. Your problem comes from at least one of two things: either the API you're using is limited to, say, 8 bits per channel, or the display device is.

Let's assume that your API supports floating-point color. That means that you have an effectively infinite range of colors, and you shouldn't have any banding in your frame buffer. If the frame buffer is viewed on a device with floating-point color support, there'll still be no banding. But if it's viewed on the current standard of 8 bits per channel (or fewer), then there'll be lots of banding.

If your API does, in fact, support floating-point color, then you have a choice to make. You can either use it, hope that your applet is viewed on a floating-point device, and suffer banding on anything else; or you can target the current standard and implement a real-time dithering algorithm. Now, if your API does not support floating-point color, then the banding starts at the frame buffer and will never improve. Again, a real-time dithering algorithm will help with this. Sadly, this is non-trivial.

##### Share on other sites
Thanks for your responses.

merlin9x9 said:
> But if it's viewed on the current standard of 8 bits per channel (or fewer) then there'll be lots of banding.

I am limited to 24-bit color with 8 bits per channel.
The bands are adjacent color values within the gray scale:

#787878
#777777
#767676
#757575

I have searched around for algorithms and have seen this problem referred to as a side-effect of 'color quantization'.

> Or, you can target the current standard and implement a real-time dithering algorithm. ... Again, a real-time dithering algorithm will help with this. Sadly, this is non-trivial.

I need to dither. Can you give me some URLs to dithering algorithms?

Thanks,
Miguel

[edited by - miguel on March 5, 2004 10:39:25 AM]

##### Share on other sites
I just set my monitor to 24-bit (32 total) color and no longer see banding in your original image. I had it set to 16-bit color without realizing it...

Do you find it really that bad at 24-bit color?

##### Share on other sites
> Do you find it really that bad at 24-bit color?

Well, it is not really offensive, but it is bad enough that I am worried about it :-)

Gray was probably not the best color for me to post. On my display the banding stands out much more when the color is red.

But you have raised a *very* good point. Several people have "questioned" me about it ... I don't know if their display was in 16-bit or 24-bit color mode.

And at least one of them has an LCD display. So in that case the display is not capable of handling it.

In any case, if I could dither (or apply a little texture?) to mask it a little bit, then it would address the issues of 16-bit video settings and TFT/LCD displays.

Miguel

##### Share on other sites
Even at 24 or 32-bit, banding will still occur; the notion that 16.8 million colors is "enough" is nonsense. If it weren't, computer-rendered movies such as those by Pixar wouldn't be rendered at significantly higher precision.

Consider a monochromatic gradient from one side of a 1024x768 screen to the other. If we have 8 bits per channel, that means 256 values per channel. Let's say our gradient is red. On one side, we start at value 0 and end up on the other side with value 255. With 256 values covering 1024 pixels, we'll have bands 4 pixels wide. The human eye is excellent at finding edges, and it will most certainly detect the edges of bands that large. Now, let's imagine that the gradient has less contrast, so we'll go from value 119 to value 135. This means that we'll have a range of 16 covering 1024 pixels, giving us bands 64 pixels wide! Ouch!
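The arithmetic above can be checked with a quick sketch. This is illustrative only (the class and method names are mine, not from the thread); it just divides the screen width by the number of distinct channel values the gradient spans:

```java
public class BandWidth {
    // Width in pixels of each band when a gradient spanning `levels`
    // distinct 8-bit values is stretched across `screenWidth` pixels.
    static int bandWidth(int screenWidth, int levels) {
        return screenWidth / levels;
    }

    public static void main(String[] args) {
        System.out.println(bandWidth(1024, 256)); // full-contrast gradient: 4-pixel bands
        System.out.println(bandWidth(1024, 16));  // low-contrast gradient (119 to 135): 64-pixel bands
    }
}
```

Note how lowering the contrast makes the bands wider, and therefore easier for the eye to find.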

So, going back to your molecules, the scenario that would cause banding on 24 or 32-bit displays is that of relatively low-contrast gradients. Without extended integral or (better yet, at least for convenience) floating-point precision in both the frame buffer and display, there's no way to avoid banding without resorting to dithering.

Someone suggested applying a texture to help obscure the bands. I could not disagree more: as I said before, applying a texture to your bands will simply give you textured bands. So, dithering is the only practical way to eliminate your bands.

Off the top of my head, there's a very simple dithering algorithm you can use, although it requires that you use an intermediate frame buffer. Note that since this is Java, your program may slow down considerably after implementing this algorithm. So, if you decide to try it, I strongly recommend doing it with a copy of your source.

There are many ways you could organize the memory for this buffer, which will need to hold 9 bits per channel (we'll assume 3 channels) per pixel. One allocation technique would be to create an int array width * height items long. Since each element can hold 32 bits, we could draw the 9 * 3 bits we need for each pixel from a single element. To extract 3 sets of 9 bits from a 32-bit value, though, we'd need some bitwise manipulation, and the program might slow down a little too much as a result. So, the faster solution is to waste some memory and allocate the array to be width * height * 3 items long. Once we have this, the element that stores the first channel of the pixel at (x, y) is at index = (y * width + x) * 3. That index holds, traditionally, the red element of pixel (x, y); green and blue will be at index + 1 and index + 2, respectively. So, now we have our intermediate frame buffer and we know how to access individual pixels in it. Next comes rendering.

For convenience, you may want to make some functions that will make rendering into this buffer a little easier. It may look something like this:
```java
int pointToIndex(int x, int y)
{
    return (y * width + x) * 3;
}

// red, green, and blue are [0, 511]
void setPixel(int x, int y, int red, int green, int blue)
{
    int index = pointToIndex(x, y);
    _buffer[index + 0] = red;
    _buffer[index + 1] = green;
    _buffer[index + 2] = blue;
}
```

Instead of rendering your colors with the assumption of a 0-255 range, assume a 0-511 range. In the above code, that means that red, green, and blue must all be in the interval [0, 511]. Once we've rendered everything we want to see, it's time to display the buffer's contents on the screen. Doing this requires converting our 9-bits-per-channel pixels to 8-bits-per-channel pixels. For each channel value of each pixel we must consider this: if the value is even, divide it by 2. If it's odd, divide it by 2, then subtract 0 or 1, depending upon "where we are" in the dither. The result will be in the range [0, 255], so it can then be used to draw to the screen. As for "where we are" in the dither, this is a value that toggles between 0 and 1 as we move from pixel to pixel (not from channel to channel within a pixel). Perhaps this will become clearer in code:
```java
// red, green, and blue are [0, 255]
void drawPixel(int x, int y, int red, int green, int blue)
{
    _graphics.setColor(new Color(red, green, blue));
    _graphics.fillRect(x, y, 1, 1);
}

// value is [0, 511], dither is 0 or 1
int convertComponent(int value, int dither)
{
    if (value % 2 == 0)
        return value / 2;
    else
        return Math.max(0, value / 2 - dither); // clamp so an odd value of 1 can't dip below 0
}

void showFrame()
{
    for (int y = 0; y < height; ++y)
    {
        // starting each row on the opposite bit, combined with the
        // per-pixel toggling below, gives the dither a checkerboard pattern
        int dither = y % 2;
        for (int x = 0; x < width; ++x)
        {
            int index = pointToIndex(x, y);
            int red = convertComponent(_buffer[index + 0], dither);
            int green = convertComponent(_buffer[index + 1], dither);
            int blue = convertComponent(_buffer[index + 2], dither);
            drawPixel(x, y, red, green, blue);
            dither ^= 1; // invert the dither bit
        }
    }
}
```

That's all there is to it! So, this algorithm will give you a virtual 2^27 (134,217,728) colors, but at the expense of speed. Other algorithms would, most likely, cost even more speed. Anyway, I hope some of this helped!

[edited by - merlin9x9 on March 6, 2004 1:27:50 PM]

##### Share on other sites
merlin9x9,

Thank you *very* much. This is *exactly* what I was looking for.

I have not implemented it yet, but will post a message when I do so.

* I will pack all three 9-bit channel values into a single 32-bit int. I am good at bit shifting, and modern Java compilers do as good a job as C compilers at arithmetic operations. (Making the buffer 3 times as large would actually result in poorer performance ... 3 times as much data means polluting your cache ... bit shifting takes place on-CPU and doesn't require any additional memory accesses ... memory bandwidth == performance)

* Because of the implementation, this should not have any performance impact at runtime. I calculate gray-scale "intensities" in advance for spheres of different sizes. The dithering will take place in this step. Later, when the frame buffer is being built, the "intensities" get mapped to rgb values.
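The packing Miguel describes might look something like the following. This is a minimal sketch under my own naming, not Jmol's actual code: three 9-bit channels occupy 27 of an int's 32 bits, at shifts of 18, 9, and 0.

```java
public class PackedColor {
    // Pack three 9-bit channel values (each in [0, 511]) into one 32-bit int.
    // Illustrative layout only; Jmol's actual representation may differ.
    static int pack(int red, int green, int blue) {
        return (red << 18) | (green << 9) | blue;
    }

    // 0x1FF masks off the low 9 bits of each field.
    static int red(int packed)   { return (packed >> 18) & 0x1FF; }
    static int green(int packed) { return (packed >> 9) & 0x1FF; }
    static int blue(int packed)  { return packed & 0x1FF; }

    public static void main(String[] args) {
        int packed = pack(511, 256, 3);
        System.out.println(red(packed) + " " + green(packed) + " " + blue(packed));
    }
}
```

As Miguel notes, the shifts and masks stay on-CPU, so this trades a few ALU operations for a threefold reduction in buffer traffic.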

Thanks once again for your very detailed data.

Miguel

##### Share on other sites
I ended up doing it slightly differently. It turns out that I needed more than one extra bit, so I went ahead and used floating point. The random number generator combined with the floating point gives me the dither.

In addition, since I was doing stuff with the random number generator, I went ahead and put in a little random noise.
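The float-plus-random-number approach described above amounts to rounding a floating-point intensity up or down at random, weighted by its fractional part. A minimal sketch of that idea, with names of my own choosing rather than Jmol's:

```java
import java.util.Random;

public class RandomDither {
    static final Random random = new Random();

    // Convert a floating-point intensity in [0.0, 1.0] to an 8-bit value,
    // rounding up with probability equal to the fractional part so the
    // quantization error becomes uncorrelated noise instead of bands.
    static int ditherTo8Bit(double intensity) {
        double scaled = intensity * 255.0;
        int floor = (int) scaled;
        double fraction = scaled - floor;
        if (random.nextDouble() < fraction)
            ++floor;
        return Math.min(255, floor);
    }

    public static void main(String[] args) {
        // averaged over many pixels, the output tracks the true intensity
        double sum = 0;
        int n = 100000;
        for (int i = 0; i < n; ++i)
            sum += ditherTo8Bit(0.47);
        System.out.println(sum / n); // close to 0.47 * 255 = 119.85
    }
}
```

Because the expected value of the output equals the scaled intensity, neighboring pixels no longer snap to the same quantized level in wide runs, which is exactly what breaks up the bands.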

The results are quite good:

http://jmol.sf.net/banding

Thanks *very* much for your help.

Miguel