raytraced DoF effect: is mine correct?

Here I've put the output of a test scene for the DoF of my thin lens camera. It seems reasonably correct, but it is very noisy, despite the fact that I'm using 50 samples per pixel (JPG compression unfortunately blurred the noise, but it is still visible). This makes me wonder whether my code is really correct. The following code generates a ray by randomly sampling the lens:

    const ray<vector4<real>> GenerateRay(const vector2<real> &sample, real time = real(0)) const
    {
        // Film-to-lens distance (frameDistance is presumably in millimeters, hence the /1000)
        real fr = (cameraDescription.frameDistance / 1000.0) + cameraDescription.focalLength;

        // Sample the lens: a uniform point on the unit disk, scaled by the aperture
        vector4<real> lensSample(vector4<real>::POINT);
        sampler.SampleUniformCircle(lensSample.x, lensSample.y);
        lensSample.x *= cameraDescription.aperture;
        lensSample.y *= cameraDescription.aperture;
        lensSample.z = -fr;

        // Map the film sample into camera space, centered on the optical axis
        const real x = sample.x * cameraDescription.frame.bottom_right.x - cameraDescription.frame.bottom_right.x / real(2);
        const real y = sample.y * cameraDescription.frame.top_left.y - cameraDescription.frame.top_left.y / real(2);

        // Intersect the focal plane with the unperturbed ray (the ray starting at the
        // image point and passing through the center of the lens), using similar triangles
        const vector4<real> unperturbed = vector4<real>(real(0), real(0), lensSample.z, vector4<real>::POINT) +
            vector4<real>((x * focalPlane) / fr, (y * focalPlane) / fr,
            -focalPlane + fr, vector4<real>::VECTOR);

        // Build the ray from the lens sample towards the point on the focal plane,
        // transformed with the world matrix
        return ray<vector4<real>>(lensSample, cameraToWorld * (unperturbed - lensSample).Direction(), Q_INF, real(1), time);
    }



Basically, what I do is calculate where the focal plane is, given the focal length and the distance between the film and the lens (this happens outside this function). Then I compute the intersection between the focal plane and the ray originating on the film and passing through the lens center. Finally, I randomly sample the lens and create a ray starting there and directed towards the previously calculated intersection point. Can anybody spot errors?

EDIT: I use a Mersenne twister generator. In addition, it seems to me that there are two other issues:
- The farthest sphere is 60 meters away and has a radius of 2 meters. I'm using a standard 35mm film (36mm x 24mm) and a 50mm focal length.
- The blurring seems (noise apart) of low quality (patterns?).
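For reference, the focal plane distance (computed elsewhere) comes from the Gaussian thin-lens equation; here is a minimal sketch of the idea, assuming the film-to-lens distance and the focal length are in the same unit (plain double instead of my real type, purely illustrative):

    // Gaussian thin-lens equation: 1/f = 1/d_film + 1/d_focus, where d_film
    // is the film-to-lens distance and d_focus is the distance of the plane
    // in perfect focus. Requires d_film > f, otherwise nothing is in focus.
    double FocalPlaneDistance(double focalLength, double filmDistance)
    {
        return 1.0 / (1.0 / focalLength - 1.0 / filmDistance);
    }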
Quote: Original post by cignox1
Here I've put the output of a test scene for the DoF of my thin lens camera. It seems reasonably correct, but it is very noisy, despite the fact that I'm using 50 samples per pixel (JPG compression unfortunately blurred the noise, but it is still visible). This makes me wonder whether my code is really correct.


You know you can store images as PNG (lossless), or as JPG with a higher quality setting (most software lets you specify the quality level when saving a JPEG file).

How are you computing the accumulation of your blur? If you're accumulating it in an 8-bit array (256 values per channel), then increasing the number of samples might not have the expected effect.
You could work with floats internally, so that each small contribution is taken into account, or you could accumulate all samples for a pixel in a wide integer (make sure you don't overflow, though) and divide by the number of samples per pixel at the end. Either measure preserves precision that would be destroyed if you did all your calculations in an 8-bit-per-channel representation. But that might not be all; see below.
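As a minimal sketch of the float accumulation approach (the types and names here are mine, purely illustrative):

    // Accumulate each pixel in floats; quantize to 8 bits only at display time.
    struct PixelAccum { float r = 0.0f, g = 0.0f, b = 0.0f; };

    void AddSample(PixelAccum &p, float r, float g, float b)
    {
        p.r += r; p.g += g; p.b += b;
    }

    unsigned char ToByte(float sum, int sampleCount)
    {
        float v = sum / sampleCount;                  // average over all samples
        v = v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);  // clamp to [0, 1]
        return (unsigned char)(v * 255.0f + 0.5f);    // single quantization step
    }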

Now, on the theoretical side:

Let's imagine a worst-case scenario where a pixel and its associated blur kernel are centered on a zone where half of the brightness values are white and half are black. The exact distribution doesn't matter as long as the samples themselves follow a uniform distribution (in the ideal case, the sample distribution and the brightness distribution are totally decorrelated).

Solving it analytically, from what we said, the value of the pixel should be exactly 0.5 * 255 = 127.5.

Now, how many samples do we need to take so that the computed value falls within some arbitrary range, say, 10 units wide around that ideal value? That means a pixel value between 122.5 and 132.5.

Let's see what the maths say.

The probability that all n samples taken are white (0.5 is the probability that one sample is white, and also the probability that one sample is black, which simplifies the computations below):
0.5^n

The probability that all samples taken are black:
0.5^n

The probability that all samples are white except one:
n * 0.5^n

The probability that all samples are black except one:
n * 0.5^n

The probability that k samples are white and n-k are black:
C(n,k) * 0.5^n

C(n,k) is the binomial coefficient, or choose function (the number of ways to choose k elements from a set of n).

So the total probability of landing between 122.5 and 132.5 is the sum of C(n,k) * 0.5^n over all k with 122.5/255 * n < k < 132.5/255 * n.

That probability of course varies with the number n of samples you decide to throw at the problem, and it follows a nonlinear progression; see the chosen values of n below.

For n = 50:
k must be exactly 25
probability = C(50,25) * 0.5^50 ≈ 0.11
That means that out of 100 pixels, 11 will be within your range and 89 outside.

For n = 100:
k must be 49, 50 or 51
probability = (C(100,49) + C(100,50) + C(100,51)) * 0.5^100 ≈ 0.24
So there is still a large number of pixels (76 out of 100) that would fall outside your conservative range of 10/255.

You can see that the probability that each value is within range follows the integral of a bell curve drawn from those binomial coefficients, and of course the more samples you draw, the narrower the curve gets around the central value. Even though we eventually get there (the probability of an out-of-range value reaching 0), convergence may be too slow for your taste.
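If you want to check those numbers yourself, here is a quick sketch (a direct computation of the binomial sum above; the names and structure are mine, just for illustration):

    #include <cmath>
    #include <cstdio>

    // Probability that the average of n fair black/white samples lands in
    // [lo, hi] (pixel values out of 255): sum of C(n,k) * 0.5^n over all k
    // such that lo <= 255 * k / n <= hi.
    double ProbabilityInRange(int n, double lo, double hi)
    {
        double p = 0.0;
        for (int k = 0; k <= n; ++k) {
            double value = 255.0 * k / n;
            if (value < lo || value > hi)
                continue;
            // lgamma keeps the binomial coefficient from overflowing
            double logTerm = std::lgamma(n + 1.0) - std::lgamma(k + 1.0)
                           - std::lgamma(n - k + 1.0) + n * std::log(0.5);
            p += std::exp(logTerm);
        }
        return p;
    }

    int main()
    {
        std::printf("n = 50 : %.2f\n", ProbabilityInRange(50, 122.5, 132.5));  // ~0.11
        std::printf("n = 100: %.2f\n", ProbabilityInRange(100, 122.5, 132.5)); // ~0.24
    }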

The following is the result of 100 purely random samples of a checkerboard pattern:

[Image: blur on 20x20 pixels with 100 purely random samples]

Now, it doesn't have to be truly random. For example, imagine that your image has a lot of coherency (if a sample has a certain color, a neighboring sample will have the same or a close enough color), which is the case for our checkerboard pattern. Then it does not make sense to draw the next sample in the immediate vicinity of a previous sample (and after a few draws there is a large number of previous samples). If you're using the pure random scheme above, there's an ever-increasing chance that a new sample will fall too close to another one, contributing less "new information" overall than expected.

There's one pattern that maximizes coverage while keeping the distance between samples at its maximum: the ordered grid pattern (samples at equal distances along x and y).

Now, people tend to stay away from the ordered grid because, although it gives predictable results, it can easily interfere with ordered geometry on the screen, as well as with the ordered pixel grid of the screen itself (making visible patterns).

So can we still take advantage of the coherence of the image while avoiding the ordered-grid interference? Yes, by dividing your kernel range into smaller kernels, each of which receives an individual random sample.
That's called jittered sampling, and you don't even need true randomness; pseudo-randomness is enough (for instance by cycling between m fixed offsets within your n samples). The interesting thing is that for most coherent images (and images usually are coherent, unless they are pure noise), the more samples you draw, the less randomness you need per sample (for a very, very large number of samples you won't see the difference between a true ordered grid and a jittered grid).

So, for example, for 100 samples,
you divide the sampling grid into a matrix of 10x10 squares. Then, within each square, you decide where exactly you're going to sample. Not at the center: by dividing the small squares themselves into a matrix of 10x10 sub-squares, you choose a pattern where sample (i,j) occupies one sub-square (ii,jj) and no other sample in the same column i occupies the same sub-column ii (and likewise for rows j and sub-rows jj).

To better visualize it, here's an example with 4 samples (each sample occupies its own 2x2 quadrant, and the position within the quadrant is chosen so that no two samples in the pixel are on the same row or the same column):

[Image: structured 4-sample blur pattern]
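Here is a minimal sketch of that construction (essentially what the literature calls multi-jittered sampling; the canonical arrangement below puts sample (i,j) into sub-square (j,i), which satisfies the row/column constraint, and the function name and use of rand() are mine, for brevity):

    #include <cstdlib>

    // Generate n*n multi-jittered samples in [0,1)^2: one sample per coarse
    // square (i,j), jittered inside sub-square (j,i), so that no two samples
    // of the same coarse column share a sub-column (and likewise for rows).
    void MultiJitter(float *x, float *y, int n)
    {
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i) {
                float jx = std::rand() / (RAND_MAX + 1.0f); // jitter inside the sub-square
                float jy = std::rand() / (RAND_MAX + 1.0f);
                x[j * n + i] = (i + (j + jx) / n) / n;
                y[j * n + i] = (j + (i + jy) / n) / n;
            }
    }

Reusing the same offsets for every pixel gives the "structured" variant shown below; drawing fresh jitter per pixel gives the more random variant further down.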

We now apply the algorithm with 100 samples jittered within the square, but with a "structured" pattern (no pseudo-randomness between pixels here):

[Image: blur on 20x20 pixels with 100 jittered samples, structured pattern]

It's realistically close to the ideal; of course, if you zoom in a little or are in a worst-case scenario, you might still see some visible patterns.

Here's an example with 100 samples jittered within the square, this time with a more random pattern (the same pseudo-random draw as in the "pure random" case, except that each sample occupies a sub-square of the 10x10 grid):

[Image: blur on 20x20 pixels with 100 jittered samples, random pattern]

Some people may prefer the more random version, as it "diffuses" the error over several pixels, at the expense of additional noise. Of course, nothing prevents you from having a really large structured pattern that covers an nxn pixel zone (dithering). The more samples you draw, the smaller the difference between the approaches, but for smallish sample counts (< 100) that preference can be taken into account.

Hope that helps.

One unrelated note: the example images in my post are gamma corrected (I converted them to sRGB space before writing them to disk).
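For reference, here is a minimal sketch of that linear-to-sRGB encoding step (the standard sRGB transfer function; the naming is mine):

    #include <cmath>

    // Encode a linear-light value in [0,1] to the sRGB transfer curve.
    float LinearToSrgb(float linear)
    {
        if (linear <= 0.0031308f)
            return 12.92f * linear;
        return 1.055f * std::pow(linear, 1.0f / 2.4f) - 0.055f;
    }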

Look at the following image and compare it to the blurred versions: if a blurred version looks significantly darker than the non-blurred one, there's probably a gamma correction issue.

[Image: original checkerboard]

LeGreg
Thank you for answering...
Quote:
You know you can store images as PNG (lossless), or as JPG with a higher quality setting (most software lets you specify the quality level when saving a JPEG file).

Yes, obviously I know that, but I didn't notice it until it was uploaded to the server. Since the effect I'm referring to is still visible, though, I didn't consider it necessary to save and upload it again as PNG.

Quote:
How are you computing the accumulation of your blur? If you're accumulating it in an 8-bit array (256 values per channel), then increasing the number of samples might not have the expected effect.


I'm using floats for the whole pipeline, and I only convert to one byte per channel when I display the image.

Quote:
That's called jittered sampling ...

I understood almost nothing until this line, because I already know this approach... :-) You know, math is not my best friend :-(
I hadn't thought of using jittering; I will try it, perhaps it gives better results. Thank you!

rating++ for the quality of the answer!

