Rendering a heatmap on 3D model surfaces

Started by
2 comments, last by DrEvil 10 years, 5 months ago

I don't have a lot of graphics programming experience, but I'm trying to do something that will expand my horizons.

Suppose you had a list of events at various world positions, with an influence radius, and suppose you had a 3d model of an environment.

I'm trying to figure out how to best take this world event data and render it in 3d in a form that illustrates the usual heat map functionality, such as a radial falloff of influence around the events, and an accumulation of stacked influences for overlapping events, that ultimately result in a cool to warm color mapping based on the weight range.

I was thinking of possibly treating the events as 'point lights' in an OpenGL render loop and iteratively 'rendering' them into the 3d scene with their radius and falloff represented the way you normally would with a light. I suppose I could do them 8 at a time, or whatever the max OpenGL supports, in such a way that their effects are additively blended. Is that a reasonable way to approach this? If so, the part I'm not sure about is how to take the resulting rendering and normalize the values back into a cool->warm color gradient. Might that be a screen space post process in some way?

Most heatmap examples I can find are 2d based, and involve essentially rasterizing the events into a 2d image additively with some falloff. I'm looking to do one in 3d. Not as volumetric event representations, but more likely as color gradients on the 'floor' of a 3d level that may have a good amount of verticality and overlap. This is why treating them as additive lights comes to mind as a possible starting point. I'm just not sure how to get the min/max weight range out of the resulting rendered data, and how then to colorize the image according to normalized influences within that weight range.

Real time is preferred, so something shader based seems like it would be the way to go, but I'm interested to hear all potential solutions.

Thanks


There are two ways I'd approach it:

1) If the data is going to be completely dynamic every frame, with up to maybe a thousand of these events.

2) If the data is static, or new events are added over time but old ones aren't removed.

For #1, I'd do what you've started to describe. Deferred rendering would probably be better, but forward rendering (e.g. 8 at a time) would also work.

Create a floating point frame buffer, and for each "light" calculate the attenuation/falloff, and add all the light values together (e.g. additively blend them into the frame buffer).
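The accumulation step above can be sketched on the CPU to make the math concrete. This is a hypothetical illustration of what the fragment shader would compute per pixel when additively blending into a float framebuffer; the falloff function and event format are my own assumptions, not anything prescribed by the thread:

```python
import math

def falloff(distance, radius):
    # Simple smooth radial falloff: 1.0 at the event centre, 0.0 at the radius edge.
    t = min(distance / radius, 1.0)
    return (1.0 - t) ** 2

def accumulate(events, positions):
    """events: list of (x, y, z, weight, radius) tuples.
    positions: list of fragment world positions to evaluate.
    Returns the accumulated heat value per position (the additive blend)."""
    heat = []
    for px, py, pz in positions:
        total = 0.0
        for ex, ey, ez, weight, radius in events:
            d = math.dist((px, py, pz), (ex, ey, ez))
            total += weight * falloff(d, radius)  # additive blending of each "light"
        heat.append(total)
    return heat
```

In the real GPU version, each event would be drawn as light geometry with `glBlendFunc(GL_ONE, GL_ONE)` so the hardware does the summation.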

Then in a post-processing step, you can make a shader that reads each pixel of that float texture, remaps it to a colour, and writes it out to a regular RGBA/8888 texture for display.
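The remap in that post-process step is just a normalize-and-lerp. A minimal sketch, assuming a straight blue-to-red gradient (a real tool might use a multi-stop colour ramp instead):

```python
def heat_to_colour(value, lo, hi):
    """Normalise a heat value into [0, 1] over the [lo, hi] range,
    then lerp from cool (blue) to warm (red). Returns an RGB tuple."""
    if hi <= lo:
        t = 0.0
    else:
        t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    cool = (0.0, 0.0, 1.0)  # blue
    warm = (1.0, 0.0, 0.0)  # red
    return tuple(c + (w - c) * t for c, w in zip(cool, warm))
```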

If the float->colour gradient step needs to know the min/max/average value, then you can create a set of floating point frame-buffers, each half the resolution of the previous, until you get to 1x1 pixel (or create a single one with a full mipmap chain -- same thing). After rendering to the full-res float buffer, you can make a post-process shader that reads 4 pixels from it and writes one pixel to the next smallest float buffer. It can take the min/max/average/etc of those 4 values and output them to the next buffer. Repeat, and you end up with a 1x1 texture with the value you're looking for.
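That reduction pyramid can be sketched on the CPU like this — each pass halves the resolution, carrying a (min, max) pair per texel, until a single texel holds the global range (assumes a square, power-of-two buffer, as the mipmap analogy implies):

```python
def reduce_minmax(buffer):
    """buffer: square 2D list of floats with power-of-two side length.
    Mimics the GPU reduction: repeatedly halve resolution, each output
    texel taking the min/max of its 2x2 input quad, down to 1x1.
    Returns (global_min, global_max)."""
    # Start each texel as its own (min, max) pair.
    level = [[(v, v) for v in row] for row in buffer]
    while len(level) > 1:
        half = len(level) // 2
        nxt = []
        for y in range(half):
            row = []
            for x in range(half):
                quad = (level[2*y][2*x],     level[2*y][2*x + 1],
                        level[2*y + 1][2*x], level[2*y + 1][2*x + 1])
                row.append((min(q[0] for q in quad),
                            max(q[1] for q in quad)))
            nxt.append(row)
        level = nxt
    return level[0][0]
```

An average reduction works the same way, summing the quad and dividing by 4 at each level.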

Your float -> colour post-processing shader can then read from this 1x1 texture in order to know what the avg/min/etc value is.

For #2, I'd work in texture-space instead of screen space. This requires that your model has a unique UV layout -- if the model has "lightmap texture coords" as well as regular ones, then for this, you'd want to be using the "lightmap" ones. These are just an arbitrary mapping to 2D texture space, where each triangle gets a non-overlapping region of a 2D texture to store its "lightmap"/etc data.

First, make a float texture at a high enough resolution to give good detail on the surface of the model. Then render the model into this float texture, except instead of outputting the vertex position transformed by the WVP matrix, you instead output the "lightmap" texture coordinates (but scaled to the -1 to 1 range, instead of 0 to 1) as if they were the positions. This will allow you to "bake" data into the "lightmap" texture.
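The coordinate remap in that vertex step is just a scale-and-bias from UV space to clip space. A sketch of what the vertex shader would output instead of a transformed position (the function name is my own):

```python
def uv_to_clip(u, v):
    """Remap a lightmap UV in [0, 1] to clip-space XY in [-1, 1],
    so the rasteriser covers the lightmap texture instead of the screen."""
    return (u * 2.0 - 1.0, v * 2.0 - 1.0)
```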

Then in the pixel shader, you compute the attenuation/etc as usual, and additively output all the light values.

Then once you've generated your "lightmap", you can perform the same steps as above to reduce it to a 1x1 texture if you want to find an average value, etc.

You can then render the model as usual in 3D (outputting positions to the screen), and you can texture it using the pre-generated lightmap.

If new events arrive, you can additively blend them into the existing lightmap texture.

Thanks for the input. That confirms some of the stuff I was thinking and adds some interesting new things as well. Sounds like it's going to be rather complex.

One question with regards to automatically calculating the min/max range with which to map the heat gradient. That rendering would have to cover the entire mesh in order to generate the global range for the color adjustments, so that step would need to be done with the camera positioned to see everything, correct? i.e., looking down on the world, possibly with an orthographic projection so that there are no occluded pixels whose values would be lost. This may be a stretch goal; at first I will probably map the min/max values to a slider that can be manipulated to adjust the color mapping. That may be useful in other ways too, by allowing you to adjust the sliders to visualize detail that might be lost or very subtle with a fixed 'real' color mapping.

Basically I'm trying to make a general purpose heatmap renderer that I wish to use with a game analytics library I'm using to collect game events and submit them to gameanalytics.com, which went free a while back. They don't have any heatmap rendering capabilities on the site unless you are using Unity. Ultimately I want to release this library open source for users of game analytics, or of any custom event logging, who want to visualize data in this way. Something like: provide a .obj of your map geometry and an event log file, and it will render a heatmap with them.

Ok, I don't really understand the #1 parts about attenuation and falloff. I'm not familiar with how to calculate them. I'd also like to avoid the lightmapping approach due to the extra cost and bookkeeping of lightmapping. My mesh doesn't have UVs either, so that's not terribly useful. In general the mesh is a colored flat shaded mesh pulled from a navigation mesh, in which I don't have need for UVs.

I've got a basic app going where I'm passing the events into the shader as uniform blocks of vec4, representing the world position and event weight in the w. I'm having trouble figuring out how to calculate the world position of the fragment though.

My initial implementation intent is to loop through the events in the shader and accumulate the cost weighting per fragment by distance from fragment to each event. I'm just not sure how to calculate the world position of the fragment. If I can get it working it will be fully dynamic and can respond in real time to events and weighting changes better than most techniques.
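The per-fragment loop described above can be sketched like this — a hypothetical CPU version of what the shader would do with the uniform block of vec4s (position in xyz, weight in w). The linear falloff and the fixed radius are my own assumptions; in GLSL the fragment's world position would typically come from a varying written by the vertex shader before the projection transform:

```python
import math

def fragment_heat(frag_world_pos, events, radius=5.0):
    """frag_world_pos: (x, y, z) world position of the fragment.
    events: list of (x, y, z, weight) tuples, mirroring a uniform
    block of vec4s with the event weight packed into w.
    Accumulates weight with a linear falloff by distance."""
    total = 0.0
    for ex, ey, ez, weight in events:
        d = math.dist(frag_world_pos, (ex, ey, ez))
        if d < radius:
            total += weight * (1.0 - d / radius)
    return total
```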

