OpenGL: Rendering a heatmap on 3D model surfaces

DrEvil

I don't have a lot of graphics programming experience, but I'm trying to do something that will expand my horizons.


Suppose you had a list of events at various world positions, each with an influence radius, and suppose you had a 3D model of an environment.


I'm trying to figure out how best to take this world event data and render it in 3D in a form that provides the usual heatmap functionality: a radial falloff of influence around each event, accumulation of stacked influences where events overlap, and ultimately a cool-to-warm color mapping based on the resulting weight range.



I was thinking of treating the events as 'point lights' in an OpenGL render loop and iteratively 'rendering' them into the 3D scene, with their radius and falloff represented the way you normally would with a light. I suppose I could do them 8 at a time, or whatever the maximum OpenGL supports, in such a way that their effects are additively blended. Is that a reasonable way to approach this? If so, the part I'm not sure about is how to take the resulting render and normalize the values back into a cool-to-warm color gradient. Might that be a screen-space post-process of some kind?


Most heatmap examples I can find are 2D-based, and essentially rasterize the events additively into a 2D image with some falloff. I'm looking to do one in 3D: not as volumetric event representations, but as color gradients on the 'floor' of a 3D level that may have a good amount of verticality and overlap. This is why treating them as additive lights comes to mind as a possible starting point. I'm just not sure how to get the min/max weight range out of the resulting rendered data, and how to then colorize the image according to the normalized influences within that range.


Real time is preferred, so something shader-based seems like the way to go, but I'm interested to hear all potential solutions.



Hodgman

There are two ways I'd approach it:

1) If the data is going to be completely dynamic every frame, with up to maybe a thousand of these events.

2) If the data is static, or new events are added over time but old ones aren't removed.


For #1, I'd do what you've started to describe. Deferred rendering would probably be better, but forward rendering (e.g. 8 at a time) would also work.

Create a floating-point frame buffer, and for each "light" calculate the attenuation/falloff and add all the light values together (i.e. additively blend them into the frame buffer).
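A minimal sketch of what that accumulation pass might look like, assuming the events arrive as a uniform array of vec4s (xyz = position, w = radius). The names and the smoothstep falloff are placeholder assumptions, not a prescribed implementation:

```glsl
// Fragment shader for the accumulation pass. Draw with additive blending
// enabled (glBlendFunc(GL_ONE, GL_ONE)) into a floating-point target.
#version 330 core

#define MAX_EVENTS 8             // forward rendering, one batch at a time

uniform vec4 events[MAX_EVENTS]; // xyz = world position, w = influence radius
uniform int  eventCount;

in  vec3  worldPos;              // interpolated from the vertex shader
out float accumulated;           // single-channel float render target

void main()
{
    float sum = 0.0;
    for (int i = 0; i < eventCount; ++i)
    {
        float dist = distance(worldPos, events[i].xyz);
        // Smooth falloff: 1 at the event position, 0 at the radius.
        sum += 1.0 - smoothstep(0.0, events[i].w, dist);
    }
    accumulated = sum; // blended on top of previous batches
}
```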

Then in a post-processing step, you can make a shader that reads each pixel of that float texture, remaps it to a colour, and writes it out to a regular RGBA/8888 texture for display.
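A sketch of that remap shader, using a simple blue-to-red gradient; the minWeight/maxWeight uniforms are assumptions that could be driven by sliders, or by the reduction described next:

```glsl
// Post-process: remap accumulated float values to a cool -> warm gradient.
#version 330 core

uniform sampler2D heatTex;   // the float accumulation buffer
uniform float     minWeight; // range to normalize against
uniform float     maxWeight;

in  vec2 uv;
out vec4 fragColor;

void main()
{
    float w = texture(heatTex, uv).r;
    float t = clamp((w - minWeight) / max(maxWeight - minWeight, 1e-6), 0.0, 1.0);
    vec3 cool = vec3(0.0, 0.0, 1.0); // blue
    vec3 warm = vec3(1.0, 0.0, 0.0); // red
    fragColor = vec4(mix(cool, warm, t), 1.0);
}
```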


If the float->colour gradient step needs to know the min/max/average value, then you can create a set of floating point frame-buffers, each half the resolution of the previous until you get to 1x1 pixel (or create a single one with a full mipmap chain -- same thing). After rendering to the full-res float buffer, you can make a post-process shader that reads 4 pixels from it and writes one pixel to the next smallest float buffer. It can take the min/max/average/etc of those 4 values and output it to the next buffer. Repeat, and you end up with a 1x1 texture containing the value you're looking for.
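One step of that reduction could look like the sketch below, packing min, max, and sum into the channels of an RGBA float target (sample with nearest filtering; the firstStep uniform is an assumption to handle reading from the single-channel accumulation buffer on the first pass):

```glsl
// One 2:1 downsample step of the min/max/sum reduction chain.
#version 330 core

uniform sampler2D srcTex;    // previous (larger) level, nearest-filtered
uniform bool      firstStep; // true when srcTex is the raw accumulation buffer

in  vec2 uv;
out vec4 result; // r = min, g = max, b = sum (divide by pixel count for the average)

vec3 fetch(vec2 offset)
{
    vec2 texel = 1.0 / vec2(textureSize(srcTex, 0));
    vec4 s = texture(srcTex, uv + texel * offset);
    // The raw accumulation buffer only has one meaningful channel.
    return firstStep ? vec3(s.r) : s.rgb;
}

void main()
{
    // The 2x2 block of source pixels this destination pixel covers.
    vec3 a = fetch(vec2(-0.5, -0.5));
    vec3 b = fetch(vec2( 0.5, -0.5));
    vec3 c = fetch(vec2(-0.5,  0.5));
    vec3 d = fetch(vec2( 0.5,  0.5));

    result = vec4(min(min(a.r, b.r), min(c.r, d.r)),  // running min
                  max(max(a.g, b.g), max(c.g, d.g)),  // running max
                  a.b + b.b + c.b + d.b,              // running sum
                  1.0);
}
```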


Your float -> colour post-processing shader can then read from this 1x1 texture in order to know what the avg/min/etc value is.
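In the remap shader sketched earlier, the minWeight/maxWeight uniforms would then be replaced by a fetch from that 1x1 texture, e.g. (rangeTex is a placeholder name):

```glsl
uniform sampler2D rangeTex; // the 1x1 result of the reduction chain

// ...inside main(), replacing the minWeight/maxWeight uniforms:
vec4  range     = texelFetch(rangeTex, ivec2(0, 0), 0);
float minWeight = range.r; // global minimum
float maxWeight = range.g; // global maximum
```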



For #2, I'd work in texture-space instead of screen space. This requires that your model has a unique UV layout -- if the model has "lightmap texture coords" as well as regular ones, then for this you'd want to be using the "lightmap" ones. These are just an arbitrary mapping to 2D texture space, where each triangle gets a non-overlapping region of a 2D texture to store its "lightmap"/etc data.

First, make a float texture at a high enough resolution to give good detail on the surface of the model. Then render the model into this float texture, except instead of outputting the vertex position transformed by the WVP matrix, you output the "lightmap" texture coordinates (but scaled to the -1 to 1 range, instead of 0 to 1) as if they were the positions. This will allow you to "bake" data into the "lightmap" texture.

Then in the pixel shader, you compute the attenuation/etc as usual, and additively output all the light values.
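A sketch of that baking vertex shader, assuming the lightmap UVs come in as a second vertex attribute; the fragment shader can be the same accumulation shader as in #1, so the world position is still passed through:

```glsl
// Vertex shader for baking into texture space: the rasterizer is driven by
// the lightmap UVs instead of the projected position, so fragments land in
// the model's unique texture-space layout.
#version 330 core

layout(location = 0) in vec3 inPosition;   // model-space position
layout(location = 1) in vec2 inLightmapUV; // unique, non-overlapping UVs

uniform mat4 worldMatrix;

out vec3 worldPos; // consumed by the accumulation fragment shader

void main()
{
    worldPos = (worldMatrix * vec4(inPosition, 1.0)).xyz;
    // Remap UVs from [0,1] to [-1,1] clip space; depth is irrelevant here.
    gl_Position = vec4(inLightmapUV * 2.0 - 1.0, 0.0, 1.0);
}
```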


Then once you've generated your "lightmap", you can perform the same steps as above to reduce it to a 1x1 texture if you want to find an average value, etc.

You can then render the model as usual in 3D (outputting positions to the screen), and you can texture it using the pre-generated lightmap.


If new events arrive, you can additively blend them into the existing lightmap texture.

DrEvil

Thanks for the input. That confirms some of the stuff I was thinking and adds some interesting new things as well. Sounds like it's going to be rather complex.


One question with regard to automatically calculating the min/max range used to map the heat gradient: that render would have to cover the entire mesh in order to generate the global range for the color adjustments, so that step would need to be done with the camera positioned to see everything, correct? i.e. looking down on the world, possibly with an orthographic projection, so that no pixels are lost to perspective or occlusion. This may be a stretch goal; at first I will probably map the min/max values to a slider that can be manipulated to adjust the color mapping. That may be useful in other ways too, by allowing you to adjust the sliders to visualize detail that might be lost or very subtle with a fixed 'real' color mapping.


Basically, I'm trying to make a general-purpose heatmap renderer that I want to use with a game analytics library I'm using to collect game events and submit them; the service went free a while back. They don't have any heatmap rendering capabilities on the site unless you are using Unity. Ultimately I want to release this library open source for users of game analytics, or of any custom event logging, who want to visualize data in this way. Something like: provide a .obj of your map geometry and an event log file, and it will render a heatmap from them.

DrEvil

OK, I don't really understand the parts of #1 about attenuation and falloff; I'm not familiar with how to calculate them. I'd also like to avoid the lightmapping approach, due to the extra cost and bookkeeping of lightmapping. My mesh doesn't have UVs either, so that's not terribly useful; it's a flat-shaded, colored mesh pulled from a navigation mesh, for which I have no need for UVs.


I've got a basic app going where I'm passing the events into the shader as uniform blocks of vec4s, representing the world position, with the event weight in the w component. I'm having trouble figuring out how to calculate the world position of the fragment, though.


My initial plan is to loop through the events in the shader and accumulate the weighting per fragment based on the distance from the fragment to each event. I'm just not sure how to calculate the world position of the fragment. If I can get it working, it will be fully dynamic and can respond in real time to event and weighting changes better than most techniques.
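The usual way to get a per-fragment world position is to compute it in the vertex shader and let the rasterizer interpolate it. A minimal sketch under that approach (matrix and block names are placeholders, and the shared eventRadius uniform is an assumption; it could just as well be per event):

```glsl
// --- Vertex shader: pass the world-space position through. ---
#version 330 core

layout(location = 0) in vec3 inPosition; // model-space position

uniform mat4 worldMatrix;    // model -> world
uniform mat4 viewProjMatrix; // world -> clip

out vec3 worldPos;

void main()
{
    vec4 wp     = worldMatrix * vec4(inPosition, 1.0);
    worldPos    = wp.xyz;                // interpolated per fragment
    gl_Position = viewProjMatrix * wp;
}
```

```glsl
// --- Fragment shader: accumulate event influence per fragment. ---
#version 330 core

#define MAX_EVENTS 256

layout(std140) uniform Events
{
    vec4 events[MAX_EVENTS]; // xyz = world position, w = event weight
};
uniform int   eventCount;
uniform float eventRadius;  // shared influence radius

in  vec3 worldPos;
out vec4 fragColor;

void main()
{
    float sum = 0.0;
    for (int i = 0; i < eventCount; ++i)
    {
        float dist = distance(worldPos, events[i].xyz);
        // Linear falloff to zero at the radius, scaled by the event weight.
        sum += events[i].w * max(1.0 - dist / eventRadius, 0.0);
    }
    // Grayscale for debugging; normalize/colorize as discussed above.
    fragColor = vec4(vec3(sum), 1.0);
}
```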

