Three come to mind:
* Kriging: This is a complicated technique, but it gives very good results. I don't know if there is a readily available library implementation you can use.
* Natural neighbor interpolation.
* Constrained optimization using a quadratic penalty for difference between neighbors. I am sure this is not an original idea of mine, but I can't find a reference to it. You try to find the 4096x4096 array that agrees with your data at all the given points and that minimizes the sum of squares of differences between neighboring entries.
You can find a longer list of methods on Wikipedia.
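The third method can be sketched as a sparse linear system: minimising the sum of squared neighbour differences subject to the known values is the stationarity condition of the discrete Laplace equation with Dirichlet conditions at the data points. A minimal sketch with SciPy on a small grid (the grid size, point locations, and values here are made up for illustration; the real 4096x4096 case would need an iterative solver rather than a direct one):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

n = 64  # small grid for illustration; the real problem is 4096x4096
known = {(10, 10): 1.0, (50, 40): -1.0, (30, 55): 0.5}  # (row, col) -> value

# Setting the gradient of the quadratic penalty to zero gives, for each free
# cell, (number of neighbours) * cell - sum(neighbours) = 0, i.e. a discrete
# Laplace equation; known cells are pinned to their data values.
A = lil_matrix((n * n, n * n))
b = np.zeros(n * n)

def idx(r, c):
    return r * n + c

for r in range(n):
    for c in range(n):
        i = idx(r, c)
        if (r, c) in known:
            A[i, i] = 1.0          # hard constraint: cell = data value
            b[i] = known[(r, c)]
            continue
        nbrs = [(r + dr, c + dc)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < n and 0 <= c + dc < n]
        A[i, i] = len(nbrs)
        for rr, cc in nbrs:
            A[i, idx(rr, cc)] = -1.0

grid = spsolve(A.tocsr(), b).reshape(n, n)
```

By the discrete maximum principle, the solution stays within the range of the data values, which is often exactly the behaviour you want from an interpolant.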
Kriging: I think I got lost with the maths (reading maths is not my strong point, lol).
The natural neighbour approach is a good idea; I might give that a whirl at some point. I think I have already written code for doing Voronoi / Dirichlet tessellation stuff.
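If you'd rather not dust off your own tessellation code, SciPy ships a Voronoi implementation that could serve as the starting point; natural neighbour interpolation then weights each site by the area a query point would "steal" from its region if inserted into the diagram. A minimal sketch (the scattered points here are made up):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
pts = rng.uniform(0, 4096, size=(20, 2))  # made-up scattered data sites

# Build the Voronoi tessellation of the data sites; each input point owns
# exactly one region, listed via vor.point_region.
vor = Voronoi(pts)
print(len(vor.point_region))  # one region index per input point
```

The area-stealing step itself is the fiddly part and is not provided by SciPy; this just shows that the tessellation machinery is readily available.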
The third idea about minimizing differences is something I've dabbled with for this too, in various guises.
OK, enough pretending to be unbiased; some of the things I can remember trying:
- Weighted averages, weighting the influence of each data point by distance, either taking into account all the data points or applying some kind of cutoff distance. I think I ended up using this the last time I faced the problem.
- Triangulating the points. Ugly.
- Cellular automata type approaches. I think these might be ultimately the best but I had real problems coming up with good rules.
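The distance-weighted average in the first bullet is classic inverse-distance weighting, optionally with a cutoff radius. A minimal sketch (the sample points, values, and power parameter are made up):

```python
import numpy as np

def idw(points, values, query, power=2.0, cutoff=None):
    """Inverse-distance-weighted average of `values` at a (x, y) `query`."""
    d = np.linalg.norm(points - query, axis=1)
    exact = d < 1e-12
    if exact.any():               # query coincides with a data point
        return float(values[exact][0])
    w = 1.0 / d ** power
    if cutoff is not None:        # ignore points beyond the cutoff radius
        w[d > cutoff] = 0.0
        if w.sum() == 0.0:
            return float("nan")   # no data within range
    return float(np.sum(w * values) / np.sum(w))

pts = np.array([[0.0, 0.0], [10.0, 0.0]])
vals = np.array([0.0, 1.0])
print(idw(pts, vals, np.array([5.0, 0.0])))  # midway between the points -> 0.5
```

With a cutoff you trade smoothness for speed: distant points stop contributing, but cells with no data in range need a fallback.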
The cellular automata methods I tried were things like having a discrete amount in each cell, then redistributing according to the neighbouring amounts (like water flow). I also tried something similar but with a spring system, like FEM physics models (I think?). This is probably similar to your third method.
I do fancy the iterative cellular technique best, if I could get it working well. It also has the advantage that it has no significant extra cost for each data point, as you only need to 'seed' the grid, so the cost would just be related to the size of the grid.
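The iterative cellular idea can be sketched as Jacobi relaxation: seed the grid with the data points, then repeatedly replace every free cell by the average of its neighbours while clamping the seeded cells. A minimal sketch (grid size, seed locations, and iteration count are made up):

```python
import numpy as np

n = 64
grid = np.zeros((n, n))
mask = np.zeros((n, n), dtype=bool)   # True where a data point is seeded
for (r, c), v in {(10, 10): 1.0, (50, 40): -1.0}.items():
    grid[r, c] = v
    mask[r, c] = True

for _ in range(2000):  # iterate until roughly converged
    padded = np.pad(grid, 1, mode="edge")   # replicate edges at the border
    avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    grid = np.where(mask, grid, avg)        # clamp the seeded cells
```

As noted above, each sweep costs the same regardless of how many data points there are; the downside is that plain Jacobi converges slowly on large grids, so a multigrid-style coarse-to-fine pass is usually needed for 4096x4096.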