Firing many rays at one pixel?

11 comments, last by BadEggGames 8 years, 8 months ago

Hi all

I am trying to implement the simplest antialiasing on my basic ray tracer by averaging the color of 5 rays per pixel.

My problem is that I can't fire rays at different positions within a pixel.

I am using libgdx and java and cast my rays like: Ray ray = cam.getPickRay(x,y);

The pickRay method takes floats, so I figured I could fire rays at x + 0.99f and y + 0.99f, but it doesn't work that way.

Is there a mathematical way I can compute a new ray direction within a pixel?

Thanks


The pickRay method takes floats so I figured I could fire rays at x+0.99f and y+0.99f but it doesn't work that way.


Actually, it works exactly like that (well, not 0.99f, but you do want to average rays in a neighbourhood of (x, y)). If your camera object doesn't let you fire rays over a continuous image plane, as opposed to a discrete grid of pixels, then it's broken.
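Once sub-pixel coordinates work, the whole AA loop is just an average. A minimal sketch; trace() here is a hypothetical stand-in for casting and shading one ray (in the real code, that's where cam.getPickRay would go), not an actual libgdx call:

```java
import java.util.Random;

public class SubpixelAA {
    // Hypothetical stand-in for tracing one ray at a continuous image-plane
    // coordinate; replace with the real pick-ray + shading code.
    static float trace(float x, float y) {
        return (x + y) % 1.0f; // arbitrary pattern, just for the sketch
    }

    // Average N rays jittered uniformly over pixel (px, py),
    // assuming the pixel covers [px, px + 1) x [py, py + 1).
    static float shadePixel(int px, int py, int samples, Random rng) {
        float sum = 0f;
        for (int i = 0; i < samples; i++) {
            sum += trace(px + rng.nextFloat(), py + rng.nextFloat());
        }
        return sum / samples;
    }

    public static void main(String[] args) {
        System.out.println(shadePixel(3, 4, 5, new Random(42)));
    }
}
```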


Thanks Bacterius

It appears I have found a bug with Libgdx.

Will find a workaround.

Cheers

Figured it out.

I just made your tracer 5 times slower with minimal antialiasing :P

Actually, 5 samples are already pretty good. Sure, 16x AA is better than 4x AA, but 4x AA looks pretty decent, and 5 > 4.

You might need to play with your sample locations: for example, try choosing much smaller offsets, and either place more of them closer to the center (preferably!), or weight them according to distance if they're uniformly distributed (which needs a lot more samples to look good).

Blending with samples as far away as 0.9 will blur the image, and not necessarily improve quality a lot if you ask me. You're basically doing an average with the neighbouring pixels, which is... a box blur.

Imagine you render an image at twice its size (in both dimensions) and then scale it down 50%. This is 4x FSAA. Where are the samples for each pixel? Surely not at integer and +0.99 coordinates but at integer and +0.5 coords!

So, try arranging your samples much closer, so you're around +0.5 rather than +0.99 (or even closer, if you want it sharper).
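One simple way to do that is to jitter around the pixel centre and scale the jitter down. A sketch; the spread parameter is a made-up tuning knob (1.0f covers the whole pixel):

```java
import java.util.Random;

public class CenteredJitter {
    // Jitter around the pixel centre (+0.5f) with a tunable spread:
    // spread = 1.0f covers the whole pixel, smaller values sharpen the image.
    static float[] sampleOffset(Random rng, float spread) {
        float dx = 0.5f + (rng.nextFloat() - 0.5f) * spread;
        float dy = 0.5f + (rng.nextFloat() - 0.5f) * spread;
        return new float[] { dx, dy };
    }

    public static void main(String[] args) {
        float[] o = sampleOffset(new Random(1), 0.5f);
        System.out.println(o[0] + ", " + o[1]);
    }
}
```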

A gaussian random generator should do fine (just add 3-4 uniform random values to get a bell-shaped distribution, if you will). A spiral shape, precomputed and possibly with a randomized rotation, should be nice too, since a spiral guarantees that more samples are closer to the center, and they go "around" the center fairly regularly (but not so regularly that they produce a visible pattern), with importance decreasing as you add more samples.
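Both ideas are a few lines each. A sketch under the assumptions above (sum of four uniforms for the bell shape; a golden-angle spiral, which is one common choice, so inner samples are denser):

```java
import java.util.Random;

public class SamplePatterns {
    // Bell-shaped jitter around the pixel centre: the sum of four uniforms
    // approximates a gaussian (central limit theorem). The sum has mean 2.0,
    // so rescale it back onto [0, 1], centred at 0.5.
    static float gaussianOffset(Random rng) {
        float sum = 0f;
        for (int i = 0; i < 4; i++) sum += rng.nextFloat();
        return (sum - 2.0f) * 0.25f + 0.5f;
    }

    // Precomputed spiral: radius grows linearly with sample index, so earlier
    // (inner) samples are denser per unit area; the golden-angle step keeps
    // consecutive samples from lining up into a visible pattern.
    static float[][] spiral(int n, float maxRadius) {
        final float GOLDEN_ANGLE = 2.39996323f; // radians
        float[][] pts = new float[n][2];
        for (int i = 0; i < n; i++) {
            float r = maxRadius * (i + 1) / (float) n;
            float a = i * GOLDEN_ANGLE;
            pts[i][0] = 0.5f + r * (float) Math.cos(a);
            pts[i][1] = 0.5f + r * (float) Math.sin(a);
        }
        return pts;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.deepToString(spiral(8, 0.5f)));
    }
}
```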

You could probably even do a form of adaptive multisampling rather easily: add samples from the spiral until either some maximum (say, 20 or 50) is reached, or the change to the pixel's accumulated output color falls below some threshold (at that point, it's pointless to waste time on more samples!). Such an adaptive scheme might give you vastly superior image quality, since you can afford to do a lot of samples where it matters and only a few where it doesn't.
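The adaptive loop might look like this. A sketch only: the Shade interface is a hypothetical stand-in for tracing one ray, and the cap and threshold values are made up:

```java
public class AdaptiveAA {
    // Hypothetical stand-in for tracing one ray at a continuous coordinate.
    interface Shade { float at(float x, float y); }

    // Keep adding samples from a precomputed pattern until either the cap is
    // hit or the running average changes by less than 'threshold'.
    static float adaptivePixel(float px, float py, float[][] pattern,
                               int maxSamples, float threshold, Shade shade) {
        float sum = 0f, avg = 0f;
        int n = 0;
        for (float[] s : pattern) {
            if (n >= maxSamples) break;
            sum += shade.at(px + s[0], py + s[1]);
            n++;
            float newAvg = sum / n;
            if (n > 1 && Math.abs(newAvg - avg) < threshold) return newAvg;
            avg = newAvg;
        }
        return avg;
    }

    public static void main(String[] args) {
        float[][] pattern = { {0.5f, 0.5f}, {0.3f, 0.7f}, {0.7f, 0.3f} };
        // A flat region converges after the second sample.
        System.out.println(adaptivePixel(0f, 0f, pattern, 50, 0.001f, (x, y) -> 0.25f));
    }
}
```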

Try some common patterns:

(image: common AA sample patterns, including Quincunx, 2x2 RGSS, and the 8x8 checker)

Stolen from MJP's blog, stolen from Real-Time Rendering, 3rd Edition, A K Peters 2008 :D

A nice thing with Quincunx is that neighbouring pixels can actually share those samples -- so it's a 5x filter, but you only have to do 2x as many samples! However, it does have worse quality than a real 4x sample scheme with better placements.

You can do the same trick with some other patterns by placing some samples on pixel edges. e.g. with the 2x2 RGSS (RG=rotated grid), you can move those samples over just a tad so they're on the pixel edges, and add one extra sample in the middle. Again, this makes it like Quincunx -- a 5x filter that only requires you to perform 2x as many samples thanks to the sharing between pixels.
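For reference, the commonly quoted 2x2 RGSS ("four rooks") offsets, each sample in its own row and column of a 4x4 sub-grid (worth double-checking against the diagram; averaging with a fixed pattern is the same loop as with random offsets):

```java
public class RGSS {
    // The classic 2x2 rotated-grid offsets within a pixel, as fractions
    // of the pixel's width and height.
    static final float[][] OFFSETS = {
        { 0.375f, 0.125f },
        { 0.875f, 0.375f },
        { 0.625f, 0.875f },
        { 0.125f, 0.625f },
    };

    // Average the four samples; 'trace' stands in for tracing one ray.
    static float average(float px, float py,
                         java.util.function.DoubleBinaryOperator trace) {
        float sum = 0f;
        for (float[] o : OFFSETS) {
            sum += (float) trace.applyAsDouble(px + o[0], py + o[1]);
        }
        return sum / OFFSETS.length;
    }

    public static void main(String[] args) {
        // With a constant shade, the average is the constant.
        System.out.println(average(0f, 0f, (x, y) -> 0.5));
    }
}
```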

Go for the full 8x8 checker if you want awesome quality :)

I think I remember reading somewhere that you have to perform n^2 samples to get an n-times reduction in image noise (Monte Carlo error shrinks like 1/sqrt(N)), meaning that anti-aliasing is by necessity an expensive operation... but I can't remember the source or the context of that claim. To get around this, there are also all of the post-process AA techniques, such as FXAA/MLAA/etc.
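The square-law is easy to see empirically. A small experiment (not from the thread): model a worst-case edge pixel with true coverage 0.5 and measure the RMS error of random sampling; quadrupling the samples should roughly halve the noise:

```java
import java.util.Random;

public class NoiseVsSamples {
    // Estimate a pixel whose true value is 0.5 from N random binary samples
    // (a worst-case edge pixel), and return the RMS error over many trials.
    // Expect the error to shrink roughly like 1/sqrt(N).
    static double rmsError(int samplesPerPixel, int trials, long seed) {
        Random rng = new Random(seed);
        double sqErr = 0;
        for (int t = 0; t < trials; t++) {
            int hits = 0;
            for (int s = 0; s < samplesPerPixel; s++) {
                if (rng.nextFloat() < 0.5f) hits++;
            }
            double err = hits / (double) samplesPerPixel - 0.5;
            sqErr += err * err;
        }
        return Math.sqrt(sqErr / trials);
    }

    public static void main(String[] args) {
        System.out.println("4 spp:  " + rmsError(4, 4000, 1L));
        System.out.println("16 spp: " + rmsError(16, 4000, 1L));
    }
}
```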

Thanks for the info guys

Really appreciate it.

Been playing around with it all day; I hand-coded 21 sample locations, but the image quality is only a little better than with 5, so it's good to see better patterns.

This is on android phones so 21 samples really slowed things down. I was wondering if post processing would be the better way to go.

Note that nVidia holds a patent on Quincunx, so I would be careful (read: reluctant) about using that one. The patent is pretty well known, since it was the big new thing in advertising the GeForce 3 back then, so if they sue you (which isn't unlikely), it will be hard to claim you didn't know about it.

For path tracers, it's pretty common to use random or pseudo-random sampling patterns for this purpose. In contrast to regular sample patterns (i.e. sample patterns that are the same per-pixel, like in Hodgman's example), they will hide aliasing better, but replace it with noise. I would strongly suggest reading through Physically Based Rendering if you haven't already, since it has a great overview of some of the more popular sampling patterns (stratified, Hammersley, Latin hypercube, etc.).
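Stratified ("jittered") sampling is the easiest of those to drop in: split the pixel into an n x n grid and place one random sample in each cell, which keeps the randomness that hides aliasing while avoiding the clumps and gaps of purely random points. A sketch of the idea, not PBRT's actual code:

```java
import java.util.Random;

public class StratifiedSampling {
    // One random sample per cell of an n x n grid over the unit pixel,
    // returned as n*n (x, y) offsets in [0, 1).
    static float[][] jittered(int n, Random rng) {
        float[][] pts = new float[n * n][2];
        for (int j = 0; j < n; j++) {
            for (int i = 0; i < n; i++) {
                pts[j * n + i][0] = (i + rng.nextFloat()) / n;
                pts[j * n + i][1] = (j + rng.nextFloat()) / n;
            }
        }
        return pts;
    }

    public static void main(String[] args) {
        for (float[] p : jittered(2, new Random(7))) {
            System.out.println(p[0] + ", " + p[1]);
        }
    }
}
```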

Thanks guys, will have to read that book.

As a test, I implemented random numbers and it is waaaaay better.

What's good is I can change the amount of samples at runtime. 10 samples looks great, 100 looks awesome and 1000 took a while but is perfect :P

Bacterius Original (1 Ray Per Pixel)

(image: render at 1 ray per pixel)

// Jitter each ray by a random sub-pixel offset; the results get averaged.
Ray[] rays = new Ray[100];
for (int i = 0; i < rays.length; i++) {
    rays[i] = new Ray().set(cam.getPickRay(
            x + MathUtils.random(0f, 0.9999f),
            y + MathUtils.random(0f, 0.9999f)));
}

(100 Random Samples)

(image: render at 100 random samples per pixel)

Thanks all :)

