interpolate between multiple vectors

Started by
3 comments, last by belfegor 9 years, 8 months ago

I was reading this article, where they describe the implementation of "light probes" in the Unity game engine.

Quote from that article:


The useful property of spherical harmonics encoded probes is that a linear interpolation between two probes would just be a linear interpolation of their coefficients.

I don't know how to do this with more than 2 probes.

If I use the distance from object to probe as a weight, how can I do this with (let's say) the 4 nearest probes?

How do I sum up these weighted vectors into one final vector?


Assuming your probes are laid out in a grid, then interpolating between the four nearest probes would be just like doing bilinear interpolation during a texture sample. For example, you have probes A, B, C and D in a grid. A is the top left, B is the top right, C is the bottom left, D is the bottom right. The interpolated probe would be calculated like so:


AB_interp = (1 - t)A + tB, where t is the normalized distance (0 to 1) from the left edge of the grid cell
CD_interp = (1 - t)C + tD

Final = (1 - s)AB_interp + sCD_interp, where s is the normalized distance (0 to 1) from the top edge of the grid cell

Basically, you have to interpolate between two interpolated values.
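As a minimal sketch of that two-step interpolation (assuming each probe is reduced to a small array of SH coefficients; the `Probe`, `lerpProbe`, and `bilerpProbe` names are mine, not from the article):

```cpp
#include <array>
#include <cstddef>

// For illustration only: a probe reduced to 4 SH coefficients.
using Probe = std::array<float, 4>;

// Linearly interpolate every coefficient independently.
Probe lerpProbe(const Probe& a, const Probe& b, float t)
{
    Probe out{};
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = (1.0f - t) * a[i] + t * b[i];
    return out;
}

// Bilinear interpolation over one grid cell:
// A = top left, B = top right, C = bottom left, D = bottom right.
Probe bilerpProbe(const Probe& A, const Probe& B,
                  const Probe& C, const Probe& D,
                  float t, float s)
{
    Probe ab = lerpProbe(A, B, t); // blend along the top edge
    Probe cd = lerpProbe(C, D, t); // blend along the bottom edge
    return lerpProbe(ab, cd, s);   // blend vertically between the two
}
```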

If your probes aren't aligned on a grid, I would probably triangulate the network of probes and get a set of barycentric coordinates of the current position in the triangle you're in, and use those coordinates as interpolation values.

The probes are not in a grid; I want to place them manually at specific places, so I want to use the "K nearest probes" approach described in the article.

I was thinking I should use the distance from the object as a weight, though I would not include too-distant probes in the calculation, so the number of probes could vary.

If I use the "triangulate" approach, do I always need to interpolate between 3 probes?
Barycentric coordinates give me 2 values; how can I interpret this when only one is needed for the lerp function?


My bad, I didn't actually read the article.

But now I've taken a glance at the article. The "tetrahedralization" method is the 3D variant of what I suggested with triangles. I'll keep talking about triangles, because it's not fundamentally different, just simpler to talk about. Barycentric coordinates on a triangle give you two coordinates, and the third coordinate is 1 - coord1 - coord2. With barycentric coordinates you wouldn't be using a lerp function directly, though you would implement something essentially equivalent: a linear combination of the three triangle vertices (i.e. the three probes) using the barycentric coords as coefficients. So, in effect:


InterpolatedProbe = Probe1*coord1 + Probe2*coord2 + Probe3*(1 - coord1 - coord2);

In the above code InterpolatedProbe is a linear combination of Probe1, Probe2 and Probe3 using coord1, coord2 and (1 - coord1 - coord2) as the coefficients. The lerp function is actually a special case of a linear combination (and in even more specific terms, lerp is actually interpolating using barycentric coordinates in one dimension).
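A small sketch of the 2D case (the `Vec2` struct and `barycentric`/`interpolateTriangle` helpers are names of mine, and probes are reduced to single floats just to show the combination):

```cpp
struct Vec2 { float x, y; };

// Barycentric coordinates (u, v) of point p in triangle (a, b, c);
// the third coordinate is w = 1 - u - v. Standard determinant form.
void barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c, float& u, float& v)
{
    float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
    v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
}

// Linear combination of three "probes" (one coefficient each here).
float interpolateTriangle(float probe1, float probe2, float probe3,
                          float u, float v)
{
    return probe1 * u + probe2 * v + probe3 * (1.0f - u - v);
}
```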

But anyway. Triangulating or tetrahedralizing your probe network might be somewhat difficult, so maybe the K nearest probes method would be a better first pass. I'm not entirely sure what the author is imagining, but I suspect he's figuring people will calculate their interpolated probe as a linear combination of the K closest probes, possibly using distance as a weight and normalizing by the total weight of all K probes. In code:


float weight(Vector3 position, Probe p)
{
    float maxDist = 10.0f; // probes farther away than this get zero weight
    return max(0.0f, maxDist - length(p.position - position));
}

InterpolatedProbe = weight(pos, probe[0])*probe[0] + 
                    weight(pos, probe[1])*probe[1] +
                    ...
                    weight(pos, probe[k])*probe[k];

float totalWeight = weight(pos, probe[0]) + ... + weight(pos, probe[k]);
InterpolatedProbe /= totalWeight;

That's just off the top of my head. It's a linear combination of K probes, though I obviously have no idea how it would look. And you can write the weighting function however you want.
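Putting that pseudocode into a runnable form might look like the sketch below (the `Vec3`/`Probe` structs are simplifications of mine, and each probe carries a single coefficient for brevity where a real probe would carry a full SH coefficient set):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// One coefficient per probe for brevity.
struct Probe { Vec3 position; float coeff; };

// Cutoff weighting: probes beyond maxDist contribute nothing.
float weight(Vec3 pos, const Probe& p, float maxDist = 10.0f)
{
    return std::max(0.0f, maxDist - length(p.position - pos));
}

// Normalized linear combination of every in-range probe.
float interpolate(Vec3 pos, const std::vector<Probe>& probes)
{
    float sum = 0.0f, totalWeight = 0.0f;
    for (const Probe& p : probes) {
        float w = weight(pos, p);
        sum += w * p.coeff;
        totalWeight += w;
    }
    return totalWeight > 0.0f ? sum / totalWeight : 0.0f;
}
```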


I'm not entirely sure what the author is imagining, but I suspect he's figuring people will calculate their interpolated probe as a linear combination of the K closest probes...

They don't have to do anything except place probes in the scene; all the hard work is done "behind the scenes".

I am just trying to implement something similar into my framework/engine.

Thank you for pointing me in the right direction.

I tried your weight function, but it gives me unexpected results:

[screenshot: max.jpg]

There are 2 light probes, at the spheres' positions in the screenshot (3 frames taken while moving my dynamic object around).

So I changed it to this:


float weight(float totalDist, Vector3 position, Probe p)
{
    float dist = length(p.position - position);
    return 1.0f - (dist / totalDist); // totalDist is sum of all pos->probe distances
}

[screenshot: total.jpg]

and it gives me the expected results, unless I am mistaken somehow; I need to test this more.
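One caveat worth checking, and this is only my observation, not something from the article: with the `1 - dist/totalDist` weighting, the k weights always sum to k - 1 (for exactly 2 probes that is 1, which is why this looks right with two probes), so with 3 or more probes the result would still need dividing by the weight total. A tiny sketch of that sum:

```cpp
#include <vector>

// Sum of the weights w_i = 1 - d_i / totalDist for a set of distances.
// Algebraically this is k - (d_1 + ... + d_k) / totalDist = k - 1.
float weightSum(const std::vector<float>& dists)
{
    float total = 0.0f;
    for (float d : dists) total += d;
    float sum = 0.0f;
    for (float d : dists) sum += 1.0f - d / total;
    return sum;
}
```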
