I'm building a photon-mapping lightmapper


Hey guys, I wanted to do some reporting on a tech I'm trying to achieve here.

I took my old (2002) engine off its dusty shelf and tried to incorporate raytracing to render light on maps.

I wrote a little article about it here:

https://motsd1inge.wordpress.com/2015/07/13/light-rendering-on-maps/

First I built an ad-hoc parameterizer, a packer (with triangle pairing and re-orientation using hull detection), a conservative rasterizer, and a second-UV-set generator;

And finally, it uses Embree to cast rays as fast as possible.
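For the curious, the shape of a single ray cast through Embree is roughly the following. This is only a sketch against Embree 3's rtcIntersect1 API (the version in use back then had a slightly different interface); the castRay name and the assumption of an already-committed RTCScene are mine:

#include <embree3/rtcore.h>
#include <limits>

// Sketch only: casts one ray against an already-built and committed RTCScene
// and returns true on a hit, writing out the hit distance. Scene construction
// (rtcNewDevice / rtcNewScene / rtcAttachGeometry / rtcCommitScene) is assumed
// to happen elsewhere in the lightmapper.
bool castRay(RTCScene scene, const float org[3], const float dir[3], float& hitDistance)
{
    RTCIntersectContext context;
    rtcInitIntersectContext(&context);

    RTCRayHit rayhit;
    rayhit.ray.org_x = org[0]; rayhit.ray.org_y = org[1]; rayhit.ray.org_z = org[2];
    rayhit.ray.dir_x = dir[0]; rayhit.ray.dir_y = dir[1]; rayhit.ray.dir_z = dir[2];
    rayhit.ray.tnear = 0.0f;
    rayhit.ray.tfar  = std::numeric_limits<float>::infinity();
    rayhit.ray.time  = 0.0f;
    rayhit.ray.mask  = 0xFFFFFFFFu;
    rayhit.ray.flags = 0;
    rayhit.hit.geomID = RTC_INVALID_GEOMETRY_ID;
    rayhit.hit.instID[0] = RTC_INVALID_GEOMETRY_ID;

    rtcIntersect1(scene, &context, &rayhit);

    if (rayhit.hit.geomID == RTC_INVALID_GEOMETRY_ID)
        return false;              // the ray escaped the scene
    hitDistance = rayhit.ray.tfar; // tfar is clipped to the hit distance on a hit
    return true;
}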

I have a nice ambient evaluation by now, like this:

(image: 5ATKDtq.jpg)

but the second step is to get final gathering working (as in Henrik Wann Jensen style).

I'm writing a report about the interim results here:

https://motsd1inge.wordpress.com/2016/03/29/light-rendering-2/

As you can see it's not in working order yet.

I'd like to ask: if someone here has already implemented this kind of thing, did you use a kd-tree to store the photons?

I'm using spatial hashing for the moment.
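For illustration, by spatial hashing I mean something along these lines: quantize the photon position to a grid cell and hash the integer cell coordinates into a bucket (the cell size and the prime multipliers here are placeholder choices, not my actual values):

#include <cstdint>
#include <cmath>

// Sketch of a spatial-hash cell key for photon lookup: positions are quantized
// to a grid cell, and the integer cell coordinates are hashed into a bucket
// index. Cell size and primes are illustrative, not the engine's real values.
inline uint32_t photonCellKey(float x, float y, float z, float cellSize, uint32_t tableSize)
{
    const int32_t ix = (int32_t)std::floor(x / cellSize);
    const int32_t iy = (int32_t)std::floor(y / cellSize);
    const int32_t iz = (int32_t)std::floor(z / cellSize);
    // Hash of three integers with large primes (Teschner et al. style).
    const uint32_t h = (uint32_t)ix * 73856093u ^ (uint32_t)iy * 19349663u ^ (uint32_t)iz * 83492791u;
    return h % tableSize;
}
// A radius query then visits every cell overlapping the search sphere and
// tests the photons stored in those buckets against the actual radius.

One design note: a kd-tree naturally gives exact k-nearest-neighbor queries, whereas the hash is naturally a fixed-radius query, which ties into the normalization question below.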

Also, I have a bunch of issues I'd like to discuss:

One is about light attenuation with distance. In the classic 1/r² formula, r depends on the (world) units you chose, which I find very strange.
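One common way to make the falloff unit-agnostic is to define the light's intensity at a reference distance and attenuate by (r0/r)², so that rescaling the world together with r0 leaves the result unchanged. A minimal sketch, with refDist and minDist as made-up parameters:

#include <algorithm>

// Inverse-square falloff expressed against a reference distance: the light's
// intensity is "the value measured at refDist", so the answer does not change
// if the world scale and refDist are scaled together. refDist and minDist are
// illustrative parameters, not values from the engine.
inline float attenuate(float distance, float refDist, float minDist = 0.01f)
{
    const float r = std::max(distance, minDist); // clamp to avoid blow-up near r = 0
    const float ratio = refDist / r;
    return ratio * ratio;                        // (r0 / r)^2
}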

The second is about the factor you divide by to normalize the energy, given some number of gather rays.

My tech fixes the number of gather rays via the command line (indirectly), but each gather ray is really only a starting point for a whole stream of actual rays that are spawned from the photons found in the vicinity of the primary hit.

The result is that I get "cloud * sample" rays in total, but the cloud arity is very difficult to predict because it depends on the search radius at the primary hit.

I should draw a picture for this, but I must sleep now; I'll do it tomorrow for sure. For now, the point is that it's fuzzy how many rays are going to get gathered, so I can't simply divide by sampleCount, nor can I divide by "cloud arity * sampleCount", because the arity depends on occlusion trimming. (Always dividing exactly by the number of rays would make for a stupid evaluator that just says everything is grey.)
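In the meantime, the shape of the loop I'm describing is roughly this (illustrative names and declaration-only stubs, not the engine's actual code):

#include <vector>

// Placeholder types and routines, declared only to show the structure.
struct Photon   { float pos[3]; float color[3]; };
struct HitPoint { float pos[3]; float radius;   };

HitPoint castPrimaryGatherRay(int sampleIndex);                  // placeholder
std::vector<const Photon*> photonsInRadius(const HitPoint& hit); // placeholder
bool  visibleFromOrigin(const Photon& p);                        // occlusion trimming, placeholder
float lambertTerm(const Photon& p);                              // placeholder

// A fixed number of primary gather rays, each spawning one secondary ray per
// photon found around its hit point; the total ray count is the sum of the
// per-sample cloud arities, which is the unpredictable part.
float gatherLumel(int sampleCount) // sampleCount is the command-line figure
{
    float energy = 0.0f;
    int   secondaryRays = 0;
    for (int s = 0; s < sampleCount; ++s)
    {
        const HitPoint hit = castPrimaryGatherRay(s);
        for (const Photon* p : photonsInRadius(hit))
        {
            ++secondaryRays;           // cloud arity varies per primary ray
            if (visibleFromOrigin(*p)) // some secondary rays are occluded
                energy += lambertTerm(*p);
        }
    }
    // Open question: divide by sampleCount, by secondaryRays, by the
    // un-occluded count, or by the photon count inside the radius?
    return energy; // normalization deliberately left open here
}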

...

I promise, a drawing, lol ;) In the meantime any thoughts are welcome.


Alright, there we go, the drawing:

(image: 1o70w1K.png)

OK, so with this it's much clearer, I hope. The problem is: by what figure should I be dividing the energy gathered at one point?

The algorithm works this way:

The first pass is direct lighting plus sky ambient irradiance.

The second pass creates photons out of the lumels from the first pass.

The third pass scans the lightmap lumels and does the final gather.

The gather works by sending 12 primary rays, finding the k nearest photons around each hit position, tracing back to the origin, and summing the Lambertian terms.
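Put together, the pass structure is roughly the following (structure only; the type and function names are placeholders, not engine code):

#include <vector>

struct Scene;                                            // opaque here
struct Lumel     { float direct[3]; float indirect[3]; };
struct PhotonMap { /* spatially hashed photons */ };

void      shadeDirectAndSky(Lumel&, const Scene&);                               // pass 1
PhotonMap buildPhotonsFromLumels(const std::vector<Lumel>&, const Scene&);       // pass 2
void      finalGather(Lumel&, const PhotonMap&, const Scene&, int primaryRays);  // pass 3

void bakeLightmap(std::vector<Lumel>& lumels, const Scene& scene)
{
    for (Lumel& l : lumels)               // pass 1: direct light + sky ambient irradiance
        shadeDirectAndSky(l, scene);

    const PhotonMap photons =             // pass 2: photons created out of the lit lumels
        buildPhotonsFromLumels(lumels, scene);

    for (Lumel& l : lumels)               // pass 3: final gather, 12 primary rays per lumel
        finalGather(l, photons, scene, 12);
}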

Normally one would think we have to divide by the number of samples, but the number of samples varies with the photon density, and the density is not meaningful here because I store a color in the photons (i.e. their distribution is not the means of carrying the flux, unlike in some implementations).

Also, the radius depends on the primary ray length, which means more or fewer photons will be intercepted in the neighborhood depending on whether the ray hits close or far. And finally, the secondary rays can encounter occluders, so it's not as if N rays get gathered and we can just divide by N. If we divide by the number of rays that actually arrive at the origin, we get an unfair energy boost.

I tend to think I should divide by the number of intercepted photons inside the radius?
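For comparison, in Jensen's formulation the photons store flux and the radiance estimate divides by the area of the search disc, pi * r², rather than by the photon count; something like the sketch below (placeholder types, not my photon format). With photons that already carry a color instead of a flux, averaging over the photon count does look like the closer analogue.

#include <vector>

// Classic Jensen-style radiance estimate, for comparison only: photons carry
// flux, and the sum is divided by the area of the search disc (pi * r^2),
// not by the number of photons found. Placeholder types and parameters.
struct FluxPhoton { float flux[3]; /* plus incoming direction, etc. */ };

void radianceEstimate(const std::vector<FluxPhoton>& found, // photons inside the radius
                      float searchRadius,
                      float albedoOverPi,                   // Lambertian BRDF: rho / pi
                      float outRadiance[3])
{
    const float kPi = 3.14159265f;
    float sum[3] = { 0.0f, 0.0f, 0.0f };
    for (const FluxPhoton& p : found)
        for (int c = 0; c < 3; ++c)
            sum[c] += albedoOverPi * p.flux[c];

    const float invArea = 1.0f / (kPi * searchRadius * searchRadius);
    for (int c = 0; c < 3; ++c)
        outRadiance[c] = sum[c] * invArea;  // density estimation over the disc
}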

Edit: that's the photon map visualizer. I made it so that colors = clusters.

(image: photocloud.png)

OK, a little report from the front.

I figured out that one big problem I have is a precision problem in the reconstruction of the 3D position from the lumel position.

I have bad spatial quantization; using some bias helped remove some artefacts, but the biggest bugs aren't gone.
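To be concrete about the bias: the kind of thing I mean is mapping the lumel center back to world space through its triangle and nudging it along the geometric normal before tracing, roughly like this sketch (names and the bias value are illustrative, not the engine's code):

// Sketch of the bias mentioned above: the lumel center is mapped back to
// world space via barycentric interpolation over its triangle, then pushed
// slightly along the geometric normal before any rays are traced, to escape
// the self-intersections caused by the quantization. Illustrative code only.
struct Vec3 { float x, y, z; };

inline Vec3 add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
inline Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// p0, p1, p2: triangle vertices; (u, v): barycentric coordinates of the lumel
// center inside that triangle; n: unit geometric normal; bias: small offset.
Vec3 lumelToWorld(Vec3 p0, Vec3 p1, Vec3 p2, float u, float v, Vec3 n, float bias)
{
    const float w = 1.0f - u - v;
    Vec3 pos = add(add(mul(p0, w), mul(p1, u)), mul(p2, v));
    return add(pos, mul(n, bias)); // nudge off the surface before tracing
}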

Anyway, some results applied to real-world maps show that the indirect lighting shows up and makes some amount of difference:

(imgur album: coBmvaW.jpg, hfcyC7f.jpg, B8km42g.jpg, ySgy6ms.jpg, WrCx70m.jpg, pURIJaH.jpg)

