Question about ray tracing (anti-aliasing)


I'm making a ray tracer and I want to do some anti-aliasing. I use this method: I shoot 8 more rays through the adjacent pixels around my center pixel and take the average of them. I think this method is too slow and I can improve it, because some of the adjacent pixels can be reused (that is, I can store them and not re-shoot the anti-aliasing ray the next time I need it). BUT the image it produces is blurred. (I will upload the image and source soon; I am not at my computer now, sorry for that.) Am I doing the anti-aliasing wrong? Thanks!

You should shoot those 8 rays not through adjacent pixels, but through one pixel, and then average the result. Then you will not get a blurry image.
Look at this image: http://developer.nvidia.com/docs/IO/37516/sample_coverage.gif
Imagine that the blue square is one pixel. The hex numbers 0..F show locations at which to shoot rays through that one pixel. Of course, you can use a different pattern and a different number of rays.
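For example, here is a minimal sketch of the idea (not code from this thread; Color, TracePixelSample and the pixel coordinates px/py are placeholder names), using a fixed 4-sample pattern inside one pixel:

// Sketch: 4 sample positions inside a single pixel (rotated-grid style),
// given in [0,1) pixel coordinates. TracePixelSample() stands for whatever
// routine builds and traces the ray for a given film position.
const float offset[4][2] = {
    {0.375f, 0.125f},
    {0.875f, 0.375f},
    {0.125f, 0.625f},
    {0.625f, 0.875f}
};

Color sum(0, 0, 0);
for (int s = 0; s < 4; ++s) {
    // (px + offset, py + offset) always stays inside pixel (px, py),
    // so no sample is shared with a neighbouring pixel
    sum += TracePixelSample(px + offset[s][0], py + offset[s][1]);
}
Color result = sum / 4;   // average the samples of this one pixel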

Instead of shooting 8 rays around the center, let your current center be the top-left pixel.
The problem is probably that your 8 rays overlap with the 8 rays of your next "center" ray, as in the following example:

123
4c5
678
123
4c5
678

Both c's are adjacent to each other (a bit hard to show in notepad art, so the two 3x3 blocks are drawn stacked): the 4 in the bottom block is the c of the top block.
If that is the case, the top c is part of the color value of the bottom c, which isn't what you want.
Even if you place the rays halfway between the center pixels, there will still be rays that overlap, which probably causes the blurry effect.
The way I did it was:
2x2

c1c1
2323

3x3

c12
345
678


Could you post some of the code that shoots the rays?

--edit--
Probably worth a read:
http://en.wikipedia.org/wiki/Supersampling (which is basically what you are doing while shooting the extra rays), then look at the patterns. The example I gave above was the grid approach, since it's easy.
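As a rough illustration of the grid approach (a sketch only; N, ShadePoint and Color are assumed placeholder names, not code from this thread):

// Sketch: an N x N grid of sub-pixel offsets, all inside the current pixel (x, y)
const int N = 3;                          // 3x3 = 9 samples per pixel
Color sum(0, 0, 0);
for (int sy = 0; sy < N; ++sy) {
    for (int sx = 0; sx < N; ++sx) {
        // sample the centre of each grid cell, e.g. 1/6, 3/6, 5/6 for N = 3
        float ox = (sx + 0.5f) / N;
        float oy = (sy + 0.5f) / N;
        sum += ShadePoint(x + ox, y + oy);
    }
}
Color pixel = sum / (N * N);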

This is the image where I shoot rays through the adjacent pixels:
There is no blur in this image.

http://img403.imageshack.us/img403/176/imageqvr.png

I will find the blurred image as quickly as I can.

First I generate the offsets for the 8 adjacent pixels on the film:

int d[8][2] = {
    { 1,  1},
    { 1,  0},
    { 1, -1},
    { 0, -1},
    {-1, -1},
    {-1,  0},
    {-1,  1},
    { 0,  1}
};


and then I generate the ray from the eye position to the pixels on the film:

// inside a loop over the 8 offsets, k = 0..7
vector ray(center +
    (Real)d[k][0] / f.w * c.v +
    (Real)d[k][1] / f.h * c.u);
ray.Normalize();


f is the film structure; f.w and f.h are the film's width and height (e.g. 800 and 600).
c is the camera structure; c.v and c.u are the right and up directions (normalized).
center is the eye position.

After that I shoot the rays in my ray tracer and average them (i, j is the position on the film):

for (int k = 0; k < 8; ++k) {
    buf[k] = RayTrace(c.p, sample.s[k], 0);
}
f.buf[i * f.w + j] = ClampColor(std::accumulate(buf, buf + 8, vector()) / 8);



This does not cause the problem.

{x,y}

{1,1} + {1,0} = {2,1} + {0,0} = {3,1} + {-1,0}

so you have 3 definitions for the same pixel {2,1}
try it like this:

Real d[8][2] = {
    {0.33f, 0    }, {0.66f, 0    },
    {0,     0.33f}, {0.33f, 0.33f}, {0.66f, 0.33f},
    {0,     0.66f}, {0.33f, 0.66f}, {0.66f, 0.66f}
};

Now you have no overlapping pixels (and you lose the int-to-float conversion ;) for each ray).
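A rough sketch of how those offsets could plug into the ray setup from the earlier post (f, c, center, Real, RayTrace and ClampColor are taken from that post; whether "center" already accounts for the current pixel position is assumed to be handled as in the original code):

for (int k = 0; k < 8; ++k) {
    // d[k] now holds fractional offsets inside the current pixel
    vector ray(center +
               d[k][0] / f.w * c.v +
               d[k][1] / f.h * c.u);
    ray.Normalize();
    buf[k] = RayTrace(c.p, ray, 0);
}
f.buf[i * f.w + j] = ClampColor(std::accumulate(buf, buf + 8, vector()) / 8);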

As described above, to perform antialiasing you do subpixel-precision sampling. Each pixel in the output image corresponds to a really small frustum in the 3D scene. Of course, when you only trace one ray through this frustum (pixel center, or upper-left, or however your computation is set up), you are just approximating the total light that comes in from this whole frustum area.

With subpixel-precision sampling, you shoot several rays through this frustum defined at each pixel, offsetting them so that they intersect the near plane at different positions, but still inside the area of the current pixel. That is, you don't offset these rays so much that they will intersect the near plane inside the rectangle of another pixel (that causes the blurring you described).

Antialiasing like this is quite costly, and to optimize it you can perform adaptive subpixel-precision sampling. Instead of blindly shooting N rays through each pixel frustum, after shooting each ray you check whether shooting any more rays through the current pixel is likely to improve the result. You can do this in two ways (a rough code sketch follows after the two options):

1) Measure the variance of the incoming light samples. If it is really close to zero, you can expect that shooting any more rays through this pixel will also return very similar light values, so stop.

2) Usually we're interested in sub-pixel sampling only along object boundaries, since that is where you notice the jaggedness the most. To exploit this directly, return the object ID of each ray that you shoot through the current pixel. If after shooting a few rays these IDs are always the same, you can be pretty confident that every ray shot through this pixel will hit the same object, and stop. If different rays hit different objects, however, you want to shoot a few more rays to get smooth edges. To speed things up a bit, these IDs can be gathered from raycasts at adjacent pixels as well.
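Here is the promised sketch of the object-ID variant from 2) (placeholder names only; TraceSample() is assumed to return the shaded colour and write out the ID of the hit object, or -1 for a miss):

const int kMinSamples = 4;
const int kMaxSamples = 16;

Color sum(0, 0, 0);
int   count   = 0;
int   firstId = -2;       // sentinel: no sample taken yet
bool  edge    = false;

while (count < kMaxSamples) {
    int   hitId;
    Color c = TraceSample(x, y, count, &hitId);   // count selects the sub-pixel offset
    sum += c;
    ++count;

    if (firstId == -2)          firstId = hitId;
    else if (hitId != firstId)  edge = true;      // rays hit different objects

    // after a minimum number of samples, stop early if every ray hit the same object
    if (count >= kMinSamples && !edge)
        break;
}
Color pixel = sum / count;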

You can also try out different sub-pixel sampling patterns to see how they affect the result, like corners+center or uniformly random, and see what gives a nice quality/performance tradeoff.

Thanks DoXicK! I understand you, and I will give it a try.

But I still want to know what makes the two methods different:

123
4k6a
789b
_cde


There are two centers, k and 9, and the pixels around them are
1,2,3,4,6,7,8,9 and
k,6,a,8,b,c,d,e
respectively.

Method 1 (figure 1): for the center k, I shoot the rays 1,2,3,4,6,7,8,9 and average them. Then for the center 9, even though the pixels 8 and 6 were traced last time, I re-shoot them and average them again.

Method 2 (figure 2): the pixels 8 and 6 are the same for the centers k and 9, so I don't re-shoot them.
I just make a 1x1 point-sampled image first, and then average the image.
This causes the blur problem.

figure 1:
not blurred

figure 2:
blurred

The difference is that blurring averages a pixel with all the surrounding pixels, while supersampling/anti-aliasing works by taking extra samples from rays that otherwise would not exist.

In the method that gives the blurred picture, you can just shoot all the center rays without the 8 extra rays, and after that loop over them:

// 3x3 box filter over the point-sampled image; the indices are clamped at the
// borders so we never read outside the buffer (needs <algorithm> for std::min/max)
for(x = 0; x < width; x++)
{
    for(y = 0; y < height; y++)
    {
        newpixel[x + y * width] = 0;
        for(nx = x - 1; nx <= x + 1; nx++)
        {
            for(ny = y - 1; ny <= y + 1; ny++)
            {
                int cx = std::min(std::max(nx, 0), width - 1);
                int cy = std::min(std::max(ny, 0), height - 1);
                newpixel[x + y * width] += pixel[cx + cy * width];
            }
        }
        newpixel[x + y * width] /= 9;
    }
}

Run that over the non-blurred image and you will get the blurred one. I'll edit this post later on with an example picture.

Yep it looks like blur because what you're doing (averaging neighboring pixels) IS blur.

Of course, some anti-aliasing methods can be considered a kind of blur with a trade-off, but the best-looking methods usually come with sub-pixel accuracy.

Here's an implementation of a raytracer with basic supersampling. There's also a blurb about gamma correction on that same page and an alternative method - more blur, less spatial grid artifacts - towards the end of the articles.
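On the gamma point: if the samples are stored gamma-encoded, averaging them directly skews the result, so a common approach (a sketch only, not taken from the linked article; all names are placeholders) is to average in linear space and gamma-encode once at the end:

// Sketch: average supersamples in linear space, then gamma-encode once for
// display (simple 2.2 gamma, no sRGB piecewise curve); needs <cmath> for std::pow
float linearSum = 0.0f;
for (int s = 0; s < numSamples; ++s)
    linearSum += samples[s];                       // samples[] assumed to hold linear radiance

float linearAvg    = linearSum / numSamples;
float displayValue = std::pow(linearAvg, 1.0f / 2.2f);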

LeGreg
