# logarithmic image transformation - e.g. Droste Effect


## Recommended Posts

I recently came across the fascinating 'Droste' image effect, whereby you have an image within an image. I've implemented the effect by porting various code samples found on the net, but the core effect has been added to over time. As it's so interesting, I decided I wanted to try and write the code from scratch using the original article that outlined the logarithmic image transformation. Article link

Unfortunately I've stumbled at the first stage, 'transformation by log(z)'. My maths skills aren't the greatest, and despite spending several hours reading up on polar coords, complex planes etc. I'm lacking the knowledge needed to translate the maths into code (C++).

My aim is to implement the first stage in that article: to have a function that takes an x,y co-ordinate in the destination image and returns a new co-ord from which to read a pixel from the source image, the function being to transform an image of a circle into a rectangle (as shown in the article). It says that the specific transform is log(z/r1), where r1 is the inner radius, whilst log(z) = log(r) + i*theta. My assumptions here are that z is a complex number, i is the imaginary unit, theta is the polar angle of (x,y) and r is the polar distance.

I'm just at a loss how to proceed from here. I don't know how to break down the equation into code that takes an x,y coordinate and generates a new coord. If anyone would care to describe the process I'd be very grateful; even some pseudo code would be helpful as a starting point.

So let's assume the pole would be in the center of the circle and thus the center of the image, which is 240x240:

dx = x - 120
dy = y - 120
r = sqrt(dx*dx + dy*dy)
theta = atan(dy/dx)
log(z) = log(r) + i*theta

and that's where I'm stuck.

[Edited by - noisecrime on December 5, 2009 9:27:59 AM]

##### Share on other sites
Ok, I don't know how much detail to go into here in order to skip the bits you already know.

You have made the right assumptions, they are treating the image plane as a subset of the complex plane so a coordinate (x,y) on the image gets written notationally as x+iy.

So lets go through it from the beginning.

z = x + iy = re^(i * theta)
- This is just writing a complex number in polar coordinates. r and theta are as you have calculated in your post.

log(z) = log(re^(i * theta)) = log(r) + i * theta
- This is replacing z with its polar coordinate representation and then expanding using logarithm rules.

Now, you have been given a transformation of an annulus on the complex plane. This transformation takes a coordinate z and returns a coordinate log(z/r1).

What is log(z/r1) though?

log(z/r1) = log(r * e^(i*theta) / r1) = log(r/r1) + i * theta

Giving x' = log(r/r1), y' = theta.

And that is why you get a rectangle: if you go around the circle along a line of constant radius then you get back a constant x value.

So in your code you would have a function which takes z and returns a new coordinate. I hope C code with pseudocode inside is OK for you. Obviously, though, you only have x and y. But this is fine, because you are only pretending to be working in complex coordinates, so just pass those in.

```c
void transform(double x, double y, double *x_out, double *y_out) {
    double theta = atan2(y, x);  /* Or however you want to get at theta. */
    double r = sqrt(x*x + y*y);
    *x_out = log(r / r1);        /* r1 = inner radius, assumed in scope */
    *y_out = theta;
}
```

EDIT: I am pretty interested in the article and in whatever you produce so I'm more than happy to help out further. The branch of maths is complex analysis if you are interested in pursuing it further.

##### Share on other sites
Really cool article.. I should try this one day.

##### Share on other sites
Thank you for taking the time to reply. Your explanation appears straightforward and looked very similar to, if not the same as, some of my own attempts, which I was too embarrassed to post.

Quote:
 Original post by davetyler: Ok, I don't know how much detail to go into here in order to skip the bits you already know.

Probably best to assume I know nothing. Despite studying maths up to A level I was never really any good. So I remember bits and pieces, enough to 'play' around and be dangerous, occasionally even getting it right ;)

log(z/r1) = log(r * e^(i*theta) / r1) = log(r/r1) + i * theta

I can follow the substitution of z with its polar coord, but I don't understand how (i*theta) ends up outside the log(). Also, in this context what does ^ signify? I noticed it in one of your earlier code lines, but coming from a more 3D coding background I've seen it used as the cross-product, which isn't making a lot of sense for me here.

The code you posted looks sound and matches my expectations of how it would appear, but sadly it doesn't seem to achieve the correct results. Again, I think I'm missing some important information or a final stage in the transform.

As mentioned in my previous post, I'm hoping to use this transform to change an image with a circle/ring into an image with a rectangle, exactly as presented in figures 2 & 3 of that article. Obviously the grid will not be the same as it gets transformed too, but that's not important.

At least I assume that is what the transform should be able to do? My reason for doing this is that the article suggests a transform to take a ring and turn it into a rectangle, which is fascinating, and I hope once I get this working it will give me a greater understanding of using such functions for general image manipulation. At least it's a starting point for exploration.

Anyway here is the code i'm trying and an image of current results.

As you can see, none of the results are close to forming a rectangle; indeed they seem to form more circles, and there is a dead zone in the center. It almost looks as though I need to apply another transform to the results, but then why bother with the first transform? Oh so confusing.

As you can see, I've added offsetting (x,y) to the center of the source image and multiplied the output coordinates by r1, since they seemed to be in a range of -1.0 to 1.0. I've then tried various combinations as shown in the image I linked to.

```
input: x, y

r1 = 40.0
xc = 120.0
yc = 120.0

repeat with y = 0 to tHeight
  repeat with x = 0 to tWidth
    dx = x - xc
    dy = y - yc
    r = sqrt(dx*dx + dy*dy)
    a = atan(y, x)

    zx = log(r/r1)
    zy = a

    col = m_SrcImage.getpixel(zx, zy)
    m_DstImage.setpixel(x, y, col)
  end repeat
end repeat
```

Quote:
 EDIT: I am pretty interested in the article and in whatever you produce so I'm more than happy to help out further. The branch of maths is complex analysis if you are interested in pursuing it further.

Well, I wouldn't wait for me to 'produce' anything; that might take some time ;)
At least not in terms of the Droste effect, as there are already several sources, with this page being a good start as it comes with source for Adobe Pixel Bender. The page has some nice images, a great Droste video, and if you go to the blog page it links to information about its original development (first appeared on Flickr).

The code is easily ported to other languages, but for performance you really need to be using the GPU/shaders. In PB you can tweak the effect endlessly and easily waste a few hours playing. Oh, it's also available for something called mathmap and for GIMP.

I've got it running (slowly) and have started to get to grips with the various modifications various authors have made to it, which are a whole other aspect to investigate some time. But I really want to try and understand the core algorithm and not just copy & paste.

[Edited by - noisecrime on December 5, 2009 9:26:04 AM]

##### Share on other sites
Quote:
 Original post by noisecrime: I can follow the substitution of z with its polar coord, but I don't understand how (i*theta) ends up outside the log(). Also, in this context what does ^ signify? I noticed it in one of your earlier code lines, but coming from a more 3D coding background I've seen it used as the cross-product, which isn't making a lot of sense for me here.

The logarithmic function used is the "natural logarithm", i.e. the one with base e, which has a value of approx. 2.71828. This function is often written as ln() or log_e() to better distinguish it from the general log().

Now ln() is the inverse function of e^x (i.e. e to the power of x, as written in davetyler's post). That means that
ln( e^x ) == x

In total the transformation is

ln( z / r1 )
= ln( r * e^(i*theta) / r1 )
= ln( (r / r1) * e^(i*theta) )
= ln( r / r1 ) + ln( e^(i*theta) )
= ln( r / r1 ) + i*theta

which is a complex number with the real coefficient being ln( r / r1 ) and the imaginary coefficient being theta.

##### Share on other sites
Hopefully haegarr's post has made the manipulation I did a little more obvious.

One problem that you are going to be having there is that you are transforming the wrong domain. So what do I mean by that...

Well, when you are doing transformation on the complex plane it is very important to understand what set of points you are transforming. In this case, the only set of points that you are interested in is the annulus. So actually the simplest way to do the transform is by iterating over the range of the radius and the angle.

```
for r = 1 to 2 step 0.1
  for theta = 0 to PI * 2 step 0.1
    x = log(r / 1) ' In this case r1 = 1 but I left it in anyway
    y = theta
  next theta
next r
```

This is hopefully more transparent. I hacked something together in Python and it gave me the results I was expecting to see, so I know it works :).

##### Share on other sites
Thanks haegarr that does make more sense now.

Thanks for the code, davetyler. Although it took a bit of tweaking and guesswork, this is what I've ended up with, with these results

```
xc = 120.0
yc = 120.0

rMin = 25
rMax = 98

repeat with r = rMin to rMax
  repeat with theta = 0 to integer(PI*2*100)
    zx = log(r/float(rMax-rMin))*25
    zy = (theta/100.0)*(120/pi)

    x = r * cos(theta/100.0)
    y = r * sin(theta/100.0)

    col = m_SrcImage.getpixel(xc+x, yc+y)
    m_DstImage.setpixel(xc+zx, zy, col)
  end repeat
end repeat
```

It's not elegant, but I wanted to keep it simple. Took a while to work out how to incorporate x,y, but using the source polar coord is quite obvious I guess.

The rMin and rMax radii define the area to be mapped. Setting rMin to 0 would include the center area; setting rMax to 120 would include up to the edge of the image. r being divided by (rMax-rMin) maps the value to the 0.0 to 1.0 range, the log of which will always be negative, and I guess must be (partly) mapping the x coord to turn the curve of the circle into a straight line?

Edit:
Oops, that's wrong. If it were (r - rMin)/(rMax - rMin) then it would be correct. So actually I'm not sure of the exact range it maps into or quite what it is doing now.

Now here is my one problem, the 'magic number' 25. I figured this needed to be related to the image dimension (i.e. half-width), or, by chance, I noticed using rMin gave some decent results. However, since rMin can be very small or even zero, it's clearly not correct. Yet using 120 (half-width) or even just 60 produced missing vertical lines.

Actually, thinking about it as I type, I wonder if half-width is correct and I'm simply running into precision issues, since r is a range of whole numbers (pixels if you like). If I were to double the resolution that might fill in the gaps, and using half-width makes sense.

However, this all seems backwards. Using transforms is usually done in a manner that avoids subsampling; plus, of course, I'm defining the x,y by looping through polar coords, whereas I want the input to be the dst x,y and the output the src x,y.

I would have thought that passing in x,y, converting to polar, then finding log(z) should work, but isn't that what I tried before? I'll have to play around a bit further.

Anyway, thanks for the help. Looks like I'm slowly getting somewhere.

[Edited by - noisecrime on December 5, 2009 8:42:46 PM]

##### Share on other sites
Hmm. All image transformation methods I'm aware of work by iterating over the integer pixel positions (x,y) of the target image and transforming them into the source image domain, where usually non-integer positions (x',y') will occur. There an interpolation with the surrounding pixels is done w.r.t. the sub-pixel position (x',y'), of course. The result is then set as the pixel at position (x,y) in the target image.

With the prescription
ln( r/r1 ) + i*theta
where
r := sqrt( x^2 + y^2 )
theta := atan2( y , x )
one can see that a circle in the target domain will be transformed into a vertical line (assuming that the imaginary axis is aligned vertically), and a straight line emanating from the origin will be transformed into a horizontal line (with decreasing speed). That said, if the source image shows a vertical line then the target image will show a circle (or a segment of one).

But that doesn't match the expectation:
Quote:
 Original post by noisecrime: I'm hoping to use this transform to change an image with a circle/ring into an image with a rectangle, exactly as presented in figures 2 & 3 of that article.
If I understood the article correctly then you've missed that figure 2 shows the target image and figure 3 shows the source image. If you want the other way around, you'll have to use the inverse transformation.

##### Share on other sites
Quote:
 Original post by haegarr: Hmm. All image transformation methods I'm aware of work by iterating over the integer pixel positions (x,y) of the target image and transforming them into the source image domain, where usually non-integer positions (x',y') will occur. There an interpolation with the surrounding pixels is done w.r.t. the sub-pixel position (x',y'), of course. The result is then set as the pixel at position (x,y) in the target image.

Sorry it was late and I didn't really explain myself very well.

What I meant was: take the case of the classic rotation of a bitmap. If you took each sample point in the source, transformed the position (rotated it) and placed it into the destination, you can end up with holes. However, if you work backwards, taking each sample point in the dest, applying the inverse rotation and looking up the value in the source, you won't. (I think I've got that right.)

Of course for smoothness you'd probably want to filter the lookup color.

Quote:
 If I understood the article correctly then you've missed that figure 2 shows the target image and figure 3 shows the source image. If you want the other way around, you'll have to use the inverse transformation.

Pretty sure it goes from 2 to 3 since it says 'Circles transformed by log(z)' under figure 3, the rectangle output.

##### Share on other sites
Quote:
Original post by noisecrime
Quote:
 Original post by haegarr: Hmm. All image transformation methods I'm aware of work by iterating over the integer pixel positions (x,y) of the target image and transforming them into the source image domain, where usually non-integer positions (x',y') will occur. There an interpolation with the surrounding pixels is done w.r.t. the sub-pixel position (x',y'), of course. The result is then set as the pixel at position (x,y) in the target image.

...
What I meant was: take the case of the classic rotation of a bitmap. If you took each sample point in the source, transformed the position (rotated it) and placed it into the destination, you can end up with holes. However, if you work backwards, taking each sample point in the dest, applying the inverse rotation and looking up the value in the source, you won't. (I think I've got that right.)

Exactly what I've written above. However, davetyler suggested iterating over r and theta. Iterating r and theta is fine to investigate what happens, but the implementation should iterate over the pixels' orthogonal raster as usual.

Quote:
 Original post by noisecrime: Pretty sure it goes from 2 to 3, since it says 'Circles transformed by log(z)' under figure 3, the rectangle output.

Maybe we mean the same thing. However: the given transformation goes from the destination image to the original image. Hence the circle is in the destination image, and the straight lines are in the original image. If the original image has a rectangle then you'll get a circle/disc in the destination image.

An example: Let us iterate over a circle, that means using (x,y) so that a constant r and a varying theta is given. Then in
ln( r/r1 ) + i*theta
the real part (i.e. the horizontal co-ordinate in the original image) is constant, and the imaginary part (i.e. the vertical co-ordinate in the original image) is varying. Hence you get the co-ordinates of a straight vertical line in the original image. If the original image shows black pixels (i.e. a black line) at those co-ordinates, then those pixels are copied to the destination image. Since the co-ordinates in the destination image are circularly iterated, you get a circle there.

But you seem to want it the other way around: you want a circle in the original image to be mapped to a rectangle in the destination image. Hence you need to apply the inverse transformation. Since the transformation isn't symmetric in this sense, the inverse isn't the same as the forward transform, so you cannot use the given transformation to yield what you want (AFAIK).

