I don't have a lot of graphics programming experience, but I'm trying to do something that will expand my horizons.
Suppose you had a list of events at various world positions, with an influence radius, and suppose you had a 3d model of an environment.
I'm trying to figure out the best way to take this world event data and render it in 3d with the usual heat map behavior: a radial falloff of influence around each event, accumulation of stacked influences where events overlap, and ultimately a cool-to-warm color mapping across the resulting weight range.
I was thinking of possibly treating the events as 'point lights' in an OpenGL render loop and iteratively 'rendering' them into the 3d scene, with their radius and falloff represented the way you normally would for a light. I suppose I could do them 8 at a time, or whatever the maximum OpenGL supports, in such a way that their effects are additively blended. Is that a reasonable way to approach this? If so, the part I'm not sure about is how to take the resulting render and normalize the values back into a cool->warm color gradient. Might that be a screen-space post-process of some kind?
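To make the additive idea concrete, here's a little CPU sketch of what I imagine the accumulation doing. The grid stands in for a float framebuffer, and the linear falloff is just a placeholder for whatever attenuation the lights would use:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Event { float x, y, radius; };

// Accumulate radial falloffs additively into a grid, the way additive
// light blending would accumulate into a float render target.
std::vector<float> accumulate(const std::vector<Event>& events, int w, int h) {
    std::vector<float> grid(w * h, 0.0f);
    for (const Event& e : events) {
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                float d = std::hypot(x - e.x, y - e.y);
                // linear falloff to zero at the influence radius
                float influence = std::max(0.0f, 1.0f - d / e.radius);
                grid[y * w + x] += influence;  // additive blend
            }
        }
    }
    return grid;
}
```

In the GL version I assume the falloff would live in the fragment shader and the `+=` would be `glBlendFunc(GL_ONE, GL_ONE)` into a floating-point render target, so overlapping events stack automatically.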
Most heatmap examples I can find are 2d based: they essentially rasterize the events additively into a 2d image with some falloff. I'm looking to do it in 3d. Not as volumetric representations of the events, but more likely as color gradients on the 'floor' of a 3d level that may have a good amount of verticality and overlap. This is why treating them as additive lights comes to mind as a possible starting point. I'm just not sure how to get the min/max weight range out of the resulting rendered data, or how to then colorize the image according to the normalized influences within that range.
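For the normalization step, the CPU equivalent of what I'm imagining would be something like this: scan the accumulated buffer for its min/max, then remap each weight to a cool-to-warm lerp (the blue and red endpoints here are just placeholders):

```cpp
#include <algorithm>
#include <vector>

struct Color { float r, g, b; };

// Remap accumulated weights into a cool (blue) -> warm (red) gradient,
// normalized against the buffer's own min/max weight range.
std::vector<Color> colorize(const std::vector<float>& weights) {
    auto [lo, hi] = std::minmax_element(weights.begin(), weights.end());
    float range = std::max(*hi - *lo, 1e-6f);  // avoid divide by zero
    std::vector<Color> out;
    out.reserve(weights.size());
    for (float w : weights) {
        float t = (w - *lo) / range;   // normalized weight, 0..1
        Color cool{0.0f, 0.0f, 1.0f};  // placeholder blue
        Color warm{1.0f, 0.0f, 0.0f};  // placeholder red
        out.push_back({cool.r + t * (warm.r - cool.r),
                       cool.g + t * (warm.g - cool.g),
                       cool.b + t * (warm.b - cool.b)});
    }
    return out;
}
```

On the GPU I'd guess the min/max would need a reduction pass over the weight texture (or a readback for a non-realtime version), and the lerp would then be a full-screen post-process sampling that texture. But that's exactly the part I'm unsure about.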
Real time is preferred, so something shader based seems like it would be the way to go, but I'm interested to hear all potential solutions.
Thanks