Deliverance

Fractals: Coloring iterated function systems


I'm struggling to obtain nice images using iterated function systems. I implemented the algorithm on the GPU. I read the paper (http://www.flam3.com/flame.pdf) but I can't seem to obtain the results they showcase.
The algorithm's output is a floating-point HDR texture. To display it on an LDR monitor I need to tonemap it. The referenced paper uses log-density display, a form of tone mapping. Besides the R, G, B values, I also store in the alpha channel the number of times a point hit pixel (i, j). I then use these equations:



float d = log(maxolog.w); // log of the maximum number of times a point hit any pixel
float gamma = 1.0 / 2.2;
color.x = pow(log(color.x) / d, gamma);
color.y = pow(log(color.y) / d, gamma);
color.z = pow(log(color.z) / d, gamma);
color.w = pow(log(color.w) / d, gamma);
color = clamp(color, 0.0, 1.0); // componentwise clamp to the displayable range
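For reference, the same mapping can be sketched on the CPU for a single channel (a minimal sketch; `maxHits` is assumed to be the image's maximum hit count, i.e. what `maxolog.w` holds in the shader):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Minimal sketch of the log-density mapping above for one channel:
// brightness = log(value) / log(maxHits), then gamma, then clamp to [0, 1].
// Values below 1 are mapped to 0, since log() would go negative and pow()
// of a negative base with a fractional exponent produces NaN.
float logDensity(float value, float maxHits, float gamma = 1.0f / 2.2f)
{
    if (value <= 1.0f)
        return 0.0f;
    float d = std::log(maxHits);              // log of the maximum hit count
    float v = std::pow(std::log(value) / d, gamma);
    return std::min(1.0f, std::max(0.0f, v)); // clamp to displayable range
}
```

The guard against values below 1 is one thing the shader version above does not do; without it, sparsely hit pixels produce NaNs that the final clamp does not repair.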



Here's an image obtained after 10 seconds of calculation:

[media]http://img8.imageshack.us/i/asdtb.jpg/[/media]


As you can see, the colors look washed out. The paper suggests using a quantity named vibrancy, but I can't really understand what this value is, how it is calculated, or how it affects the formulas above.

Does anyone have any idea what I can do to improve this? I tried different tonemapping algorithms (such as Reinhard) but did not get "nice", colorful images. Maybe I'm doing something wrong? I think the log-density display solution and vibrancy would best suit my needs, but how do I do it?

You can certainly afford to compute a histogram of the hit counts of no more than a few million pixels, and start from that data to compute an adaptive mapping that produces balanced amounts of light and dark, well-saturated LDR colours, without pointless logarithms (you are not working with photographs) or meaningless gamma corrections (if needed, they belong in a later step).

Sort the hit-count values, divide them into bins that contain about the same number of elements (except for the large bin that includes 0, and for keeping all occurrences of the same value in the same bin), and map a single colour, or a linearly interpolated colour range, to the range of hit counts covered by each bin. Obviously, the greater the number of bins, the tighter your control of contrast.


You can certainly afford to compute a histogram of the hit counts of no more than a few million pixels, and start from that data to compute an adaptive mapping that produces balanced amounts of light and dark, well-saturated LDR colours, without pointless logarithms (you are not working with photographs) or meaningless gamma corrections (if needed, they belong in a later step).

Sort the hit-count values, divide them into bins that contain about the same number of elements (except for the large bin that includes 0, and for keeping all occurrences of the same value in the same bin), and map a single colour, or a linearly interpolated colour range, to the range of hit counts covered by each bin. Obviously, the greater the number of bins, the tighter your control of contrast.


Thanks for the info! I tried what you suggested, but I don't seem to get close to these results:
http://img694.images....us/g/asd13.jpg


These images were generated by my CPU-side IFS generator. The final output was a grayish image that I adjusted with this formula:


newColor = (color-someValue) * someOtherValue; // formula 1



just after I performed these computations:



image[j].x = powf(log(image[j].x) / log(maxFrec.w), 1.0 / gamma);
image[j].y = powf(log(image[j].y) / log(maxFrec.w), 1.0 / gamma);
image[j].z = powf(log(image[j].z) / log(maxFrec.w), 1.0 / gamma);



An input image for formula 1 could be this:
http://img690.images...s/i/asd11u.jpg/ (after almost 40 minutes of computation)


Now here are the images I'm obtaining with different palettes and the suggested solution:

http://img37.imageshack.us/g/asduh.jpg/


As you can see, these are not nearly as nice as the ones above. What might I be doing wrong? Clearly there is a difference between the CPU grayish texture and the GPU grayish texture (from my first post), but that may be because I render a lot more particles on the GPU than on the CPU. I also tried applying the CPU-side algorithm to the GPU-rendered image, and the results are ALMOST right (the CPU-side image produced by log-density display and gamma correction contains more color than its GPU texture equivalent):

http://img36.imageshack.us/i/asd14u.jpg/

http://img834.imageshack.us/i/asd16.jpg/

Now here's some code that implements the suggested solution:



glActiveTexture(GL_TEXTURE0);
texture->bind();

int sz = texture->getWidth() * texture->getWidth(); // the texture is square

glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, &points[0].x);

// store each pixel's original index in z so it survives the sort
for (int i = 0; i < sz; i++)
{
    points[i].z = i;
}

// sort pixels by hit count; vector4fCompare compares the scalar .w fields
qsort(&points[0], sz, sizeof(Vector4<float>), vector4fCompare);

int numBins = 10;
int binMargin = sz / numBins; // pixels per bin

for (int i = 0; i < sz; i++)
{
    // bin index of pixel i, scaled to the palette width
    float numBin = floor(float(i) / float(binMargin)) / float(numBins) * image->getWidth(); // image->getWidth() is always 256
    numBin = clamp(numBin, 0.0f, 255.0f);

    Vector4<unsigned char> c = image->getPixel((int)numBin, 0);
    points2[(int)points[i].z] = Vector4f(float(c.x) / 255.0f, float(c.y) / 255.0f, float(c.z) / 255.0f, 1.0f);
}

colorTextureFinal2->bind();
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, texture->getWidth(), texture->getWidth(), GL_RGBA, GL_FLOAT, &points2[0].x);



Does everything look okay there?

It looks completely foreign, starting with the pointless logarithms and gamma, culminating in comparing 4D vectors (how?) instead of scalar hit counts when sorting, and ending with a complete misunderstanding of what is meant by histogram bins and how they can be translated to colours. How can image size have anything to do with a histogram?

Let's work out an example: 10 pixels, with hit counts (already sorted) of 0 0 0 1 1 2 4 5 12 15, and a colormap that interpolates between colours A (lowest hit count), B, C, D (maximum hit count). Let's make 3 bins: colours between A and B both inclusive, between B and C inclusive, between C and D inclusive.
Every bin should ideally contain around 10/3 pixels (3 or 4), but we'll make an exception for the first one to include some nonzero value. Then bins can be (0 0 0 1 1) (2 4 5) (12 15), grouping 5 with the closest value (4 rather than 12).

Then the pure colours correspond to specific hit counts: 0->A, 1->B, 5->C, 15->D.

Other pixels can be interpolated by rank within their bin (or better, by how many pixels are between them and pure-colour hit counts):
12 -> (C+D)/2,
4 -> (2C+B)/3,
2 -> (C+2B)/3

or by value:

12 -> ((12-5)D + (15-12)C)/(15-5),
4 -> ((4-1)C + (5-4)B)/(5-1),
2 -> ((2-1)C + (5-2)B)/(5-1)

In a real application, of course, there would be many more pixels and many more bins, corresponding to larger colormaps.
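The by-value interpolation above can be sketched as follows (colours reduced to scalars for brevity; a real implementation would interpolate RGB triples; `Anchor` and `exampleAnchors` are names invented for this sketch):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// A "pure colour" anchor: a specific hit count mapped to a specific colour.
struct Anchor { int hits; float colour; };

// By-value interpolation from the example: find the two anchors surrounding
// a pixel's hit count and interpolate linearly between their colours.
float colourByValue(int hits, const std::vector<Anchor>& anchors)
{
    if (hits <= anchors.front().hits) return anchors.front().colour;
    for (std::size_t i = 1; i < anchors.size(); ++i) {
        if (hits <= anchors[i].hits) {
            const Anchor& lo = anchors[i - 1];
            const Anchor& hi = anchors[i];
            float t = float(hits - lo.hits) / float(hi.hits - lo.hits);
            return lo.colour + t * (hi.colour - lo.colour);
        }
    }
    return anchors.back().colour;
}

// Anchors from the worked example: 0->A, 1->B, 5->C, 15->D,
// with A=0, B=1, C=2, D=3 standing in for the actual colours.
std::vector<Anchor> exampleAnchors()
{
    return { {0, 0.0f}, {1, 1.0f}, {5, 2.0f}, {15, 3.0f} };
}
```

With these anchors, a hit count of 12 comes out as ((12-5)D + (15-12)C)/10, matching the example above.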
