
DrEvil

Member Since 13 Jul 2003
Offline Last Active Oct 14 2014 07:09 AM

Topics I've Started

Rendering a heatmap on 3d model surfaces

28 October 2013 - 09:18 PM

I don't have a lot of graphics programming experience, but I'm trying to do something that will expand my horizons.

 

Suppose you had a list of events at various world positions, with an influence radius, and suppose you had a 3d model of an environment.

 

I'm trying to figure out the best way to take this world event data and render it in 3d with the usual heat map behavior: a radial falloff of influence around each event, accumulation of stacked influences where events overlap, and ultimately a cool-to-warm color mapping based on the resulting weight range.

 

 

I was thinking of treating the events as 'point lights' in an OpenGL render loop and iteratively 'rendering' them into the 3d scene, with their radius and falloff represented the way you normally would with a light. I suppose I could do them 8 at a time, or whatever the OpenGL maximum is, with their effects additively blended. Is that a reasonable way to approach this? If so, the part I'm not sure about is how to take the resulting rendering and normalize the values back into a cool->warm color gradient. Might that be a screen space post process of some kind?
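
To make that concrete, here's roughly the accumulation loop I have in mind (completely untested; accumFbo would be an offscreen FBO with a floating point color attachment so overlapping events can sum past 1.0, and Event / DrawSceneWithEventWeights are just placeholders for my event data and for drawing the level with a batch of events bound as lights):

#include <algorithm>
#include <vector>

struct Event { float x, y, z, radius; };

// placeholder: draw the level geometry with events [first, first + count) bound as 'lights'
void DrawSceneWithEventWeights( const std::vector<Event> & events, size_t first, size_t count );

void AccumulateEventInfluence( GLuint accumFbo, const std::vector<Event> & events )
{
    const size_t MAX_EVENT_LIGHTS = 8; // or whatever GL_MAX_LIGHTS reports

    glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, accumFbo );
    glClearColor( 0.0f, 0.0f, 0.0f, 0.0f );
    glClear( GL_COLOR_BUFFER_BIT );

    glEnable( GL_BLEND );
    glBlendFunc( GL_ONE, GL_ONE ); // additive, so overlapping events stack up
    glDepthMask( GL_FALSE );       // accumulate weights only, leave depth alone

    for ( size_t first = 0; first < events.size(); first += MAX_EVENT_LIGHTS )
        DrawSceneWithEventWeights( events, first, std::min( MAX_EVENT_LIGHTS, events.size() - first ) );

    glDepthMask( GL_TRUE );
    glDisable( GL_BLEND );
    glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );
}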

 

Most heatmap examples I can find are 2d based, and essentially rasterize the events additively into a 2d image with some falloff. I'm looking to do this in 3d: not as volumetric event representations, but as color gradients on the 'floor' of a 3d level that may have a good amount of verticality and overlap. This is why treating the events as additive lights comes to mind as a starting point. I'm just not sure how to get the min/max weight range out of the resulting rendered data, or how to then colorize the image according to the normalized influences within that range.
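
And just to pin down what I mean by normalizing, here's the sort of mapping I'm picturing, written on the CPU purely for illustration (the real version would presumably live in a post process shader, with minWeight/maxWeight pulled out of the accumulated data somehow):

#include <algorithm>

struct Color { float r, g, b; };

// Linear falloff of a single event's influence with distance from the event.
float EventFalloff( float distToEvent, float eventRadius )
{
    if ( distToEvent >= eventRadius )
        return 0.0f;
    return 1.0f - ( distToEvent / eventRadius );
}

// Map an accumulated weight into a cool (blue) to warm (red) gradient,
// normalized against the min/max weight over the whole data set.
Color WeightToColor( float weight, float minWeight, float maxWeight )
{
    const float range = std::max( maxWeight - minWeight, 0.0001f );
    const float t = std::min( std::max( ( weight - minWeight ) / range, 0.0f ), 1.0f );

    Color c;
    c.r = t;          // warm end grows with the weight
    c.g = 0.0f;
    c.b = 1.0f - t;   // cool end fades out
    return c;
}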

 

Real time is preferred, so something shader based seems like it would be the way to go, but I'm interested to hear all potential solutions.

 

Thanks


A.I. Path Following

14 September 2013 - 11:31 AM

So I have a nav mesh, the implementation and usage of which is pretty easy. The part of navigation systems I typically struggle with is the logic for following a path, which in principle doesn't sound difficult, but it has a number of non-trivial elements.

 

Suppose you have a series of points representing a path to a destination. You often simply 'seek' to the next point in the path, but the tricky part is the logic for 'advancing' the 'active' point to the next one. If you use some sort of 'in radius' check, agents can abandon a point early and seek to the next point before they have cleared a corner, and then get caught on that corner. With nav meshes in particular, paths around corners tend to hug the wall as closely as possible, whether due to string pulling or whatever, so incrementing the seek point needs to happen in a way that can reliably recover from getting hung up on a ledge.
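
To be clearer about what I mean by the advancement logic, the kind of check I've been considering instead of a pure radius test looks something like this (Vec3, Sub and Dot are just stand-ins for whatever math types are around); it advances once the agent has crossed the plane through the current point facing the next one, rather than when it merely gets close:

struct Vec3 { float x, y, z; };

static Vec3 Sub( const Vec3 & a, const Vec3 & b )
{
    Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

static float Dot( const Vec3 & a, const Vec3 & b )
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Advance once the agent has crossed the plane through the current point,
// oriented toward the next point, instead of using an 'in radius' check.
bool ShouldAdvance( const Vec3 & agentPos, const Vec3 & currentPt, const Vec3 & nextPt )
{
    const Vec3 planeNormal = Sub( nextPt, currentPt );
    const Vec3 toAgent = Sub( agentPos, currentPt );
    return Dot( toAgent, planeNormal ) > 0.0f;
}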

 

In the past I would increment the current point index and/or remove points from the head of the list, but what I don't like about that is that it is destructive to the path points, throwing away information that is useful if the A.I. needs to get back on the path, maybe after being blown out of a navmesh region or getting hung up on something. Because the alteration is destructive, errors often manifest as agents running into walls while seeking a point, because they never fully got around the last corner vertex.

 

I'm curious how others avoid this issue and/or structure their path following so that it can recover and reliably follow the path. In many ways the path following has proved more difficult for me than implementing the navmesh itself.
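
For reference, the non destructive bookkeeping I'm leaning toward keeps the whole point list plus an index, and recovery re-scans the path for the nearest segment instead of popping points off the front. A rough sketch, reusing the Vec3/Sub/Dot helpers from the snippet above:

#include <cfloat>
#include <vector>

static float DistSq( const Vec3 & a, const Vec3 & b )
{
    const Vec3 d = Sub( a, b );
    return Dot( d, d );
}

static Vec3 ClosestPointOnSegment( const Vec3 & p, const Vec3 & a, const Vec3 & b )
{
    const Vec3 ab = Sub( b, a );
    const float lenSq = Dot( ab, ab );
    float t = lenSq > 0.0f ? Dot( Sub( p, a ), ab ) / lenSq : 0.0f;
    t = t < 0.0f ? 0.0f : ( t > 1.0f ? 1.0f : t );
    Vec3 r = { a.x + ab.x * t, a.y + ab.y * t, a.z + ab.z * t };
    return r;
}

struct PathFollower
{
    std::vector<Vec3> mPoints;  // full path, never modified while following
    size_t            mCurrent; // index of the point currently being sought

    // After getting knocked off the path, find the segment closest to the agent
    // and resume seeking toward the far end of that segment.
    void Reacquire( const Vec3 & agentPos )
    {
        float bestDistSq = FLT_MAX;
        for ( size_t i = 0; i + 1 < mPoints.size(); ++i )
        {
            const Vec3 closest = ClosestPointOnSegment( agentPos, mPoints[ i ], mPoints[ i + 1 ] );
            const float distSq = DistSq( agentPos, closest );
            if ( distSq < bestDistSq )
            {
                bestDistSq = distSq;
                mCurrent = i + 1;
            }
        }
    }
};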


Navmesh without agent radius offsets

05 May 2013 - 07:31 PM

I'm wondering if anyone has used a navigation mesh for A.I. without the agent radius offsetting that is typically done. I know that baking the agent radius into the navmesh simplifies a lot, but I'm experimenting with a navmesh that represents the entire floor space without the offsetting. As you can probably imagine, this complicates certain aspects, so I'm looking for ideas if anyone has tried something similar.

 

Here are some screens to give an idea of where I'm at with my tinkering. This is from one of many games my bot supports, but it's a great testing ground for various A.I. stuff.

 

http://omni-bot.com/tmp/navmesh/shot0001.jpg

http://omni-bot.com/tmp/navmesh/shot0002.jpg

http://omni-bot.com/tmp/navmesh/shot0003.jpg

http://omni-bot.com/tmp/navmesh/shot0004.jpg

http://omni-bot.com/tmp/navmesh/shot0006.jpg

http://omni-bot.com/tmp/navmesh/shot0007.jpg

http://omni-bot.com/tmp/navmesh/shot0008.jpg

http://omni-bot.com/tmp/navmesh/shot0013.jpg

http://omni-bot.com/tmp/navmesh/shot0014.jpg

http://omni-bot.com/tmp/navmesh/shot0015.jpg

http://omni-bot.com/tmp/navmesh/shot0016.jpg

http://omni-bot.com/tmp/navmesh/shot0017.jpg

http://omni-bot.com/tmp/navmesh/navmesh.jpg

 

 

The last picture shows the most obvious issue with not offsetting the navmesh by the agent radius. Basically I will need to modify the paths found through the navmesh to account for the agent radius as part of building the result path, and I'm wondering if anyone has ideas about how to address this. I'm not yet funnel pulling the path, though I plan to. Right now I'm tinkering with offsetting the path along the edge portal normal by the agent radius to compensate, but there are situations where even that distance can overshoot the size of an adjacent sector, so I'm looking for ideas on how/when to perform the fixup in a more selective and robust manner.
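
One variant of that fixup I've been sketching is to clamp how far the corner points can be pushed: shrink each portal edge inward by the agent radius, but never by more than half the edge length, so a small adjacent sector can't be overshot. Roughly (Vec3 is a minimal stand-in for my math types, and this is untested):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Pull both ends of a portal edge [left, right] inward by 'radius', clamping the
// inset to half the edge length so the two ends never cross on short edges.
void ShrinkPortal( Vec3 & left, Vec3 & right, float radius )
{
    Vec3 dir = { right.x - left.x, right.y - left.y, right.z - left.z };
    const float len = sqrtf( dir.x * dir.x + dir.y * dir.y + dir.z * dir.z );
    if ( len <= 0.0001f )
        return;

    dir.x /= len; dir.y /= len; dir.z /= len;

    const float inset = std::min( radius, len * 0.5f ); // at worst the ends meet in the middle

    left.x  += dir.x * inset;  left.y  += dir.y * inset;  left.z  += dir.z * inset;
    right.x -= dir.x * inset;  right.y -= dir.y * inset;  right.z -= dir.z * inset;
}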

 

Thanks in advance.


OpenGL VBO Problems

02 March 2013 - 02:27 AM

I'm having trouble rendering a fairly large VBO of quads. This code appears to work fine on small meshes, yet when I try to render pretty large ones (58997 quads) from a single VBO, it crashes specifically in

 

glDrawArrays( it->second.mType, 0, it->second.mNumPrimitives * it->second.mNumVertsPerPrimitive );

 

 

There's no call stack, just a single random address in the call stack window, and the output window shows

 

 

First-chance exception at 0x003ff829 in ET.exe: 0xC0000005: Access violation reading location 0x01e27000.
Unhandled exception at 0x003ff829 in ET.exe: 0xC0000005: Access violation reading location 0x01e27000.

 

If I remove, for example, the * it->second.mNumVertsPerPrimitive from the glDrawArrays call, I don't see the whole mesh.

 

Here's my construction and rendering code. Since I always get confused about whether various parameters mean the number of geometric primitives (quad, triangle, etc.) or the number of floats, I'm trying to wrap the VBO creation and rendering in some helper functions to hide it.

 


bool RenderBuffer::StaticBufferCreate( obuint32 & bufferId, const QuadList & primitives )
{
    static int nextBufferId = 0;
 
    StaticBufferDelete( bufferId );
 
    if ( bufferId == 0 )
        bufferId = ++nextBufferId;
 
    glPushClientAttrib( GL_CLIENT_ALL_ATTRIB_BITS );
 
    VBO v;
    v.mType = GL_QUADS;
    v.mNumPrimitives = primitives.size();
    v.mNumVertsPerPrimitive = 4;
 
    glGenBuffersARB( 1, &v.mBufferVertId );
    glGenBuffersARB( 1, &v.mBufferColorId );
 
    std::vector<float> coords;
    std::vector<float> colors;
    coords.reserve( v.mNumPrimitives * v.mNumVertsPerPrimitive * 3 /* floats per vert */ );
    colors.reserve( v.mNumPrimitives * v.mNumVertsPerPrimitive * 4 /* floats per color */ );
 
    for ( size_t i = 0; i < primitives.size(); ++i )
    {
        const Quad & q = primitives[ i ];
        for ( size_t vert = 0; vert < 4; ++vert ) // 4 verts per quad
        {
            coords.push_back( q.v[ vert ].x );
            coords.push_back( q.v[ vert ].y );
            coords.push_back( q.v[ vert ].z );

            colors.push_back( q.c.rF() );
            colors.push_back( q.c.gF() );
            colors.push_back( q.c.bF() );
            colors.push_back( q.c.aF() );
        }
    }
 
    GLsizeiptr vertexBytes = sizeof(float) * coords.size();
    GLsizeiptr colorBytes = sizeof(float) * colors.size();
 
    glBindBufferARB( GL_ARRAY_BUFFER_ARB, v.mBufferVertId );
    glBufferDataARB( GL_ARRAY_BUFFER_ARB, vertexBytes, &coords[ 0 ], GL_STATIC_DRAW_ARB );
 
    glBindBufferARB( GL_ARRAY_BUFFER_ARB, v.mBufferColorId );
    glBufferDataARB( GL_ARRAY_BUFFER_ARB, colorBytes, &colors[ 0 ], GL_STATIC_DRAW_ARB );
    
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
 
    vbos.insert( std::make_pair( bufferId, v ) );
 
    glPopClientAttrib();
    return true;
}

 

And the drawing code:


for ( size_t i = 0; i < mVBOList.size(); ++i )
{
    VBOMap::const_iterator it = vbos.find( mVBOList[ i ] );
    if ( it != vbos.end() )
    {
        glBindBufferARB( GL_ARRAY_BUFFER_ARB, it->second.mBufferVertId );
        glEnableClientState( GL_VERTEX_ARRAY );
        glVertexPointer( 3, GL_FLOAT, 0, NULL /*start of buffer, no offset*/ );
 
        glBindBufferARB( GL_ARRAY_BUFFER_ARB, it->second.mBufferColorId );                
        glEnableClientState( GL_COLOR_ARRAY );
        glColorPointer( 4, GL_FLOAT, 0, NULL /*start of buffer, no offset*/ );
                
        glDrawArrays( it->second.mType, 0, it->second.mNumPrimitives * it->second.mNumVertsPerPrimitive );
 
        glBindBufferARB( GL_ARRAY_BUFFER_ARB, 0 );
 
        glDisableClientState( GL_VERTEX_ARRAY );                
        glDisableClientState( GL_COLOR_ARRAY );
    }
}
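
For what it's worth, one debug check I've been considering dropping in right before the glDrawArrays call is comparing the vertex count I pass in against the sizes the driver reports for the two bound buffers (assuming I'm reading the VBO extension right about GL_BUFFER_SIZE_ARB; assert is from <cassert>):

// Debug only: confirm the count handed to glDrawArrays actually fits inside
// both buffers (3 floats per vert for positions, 4 floats per vert for colors).
GLint vertBufBytes = 0, colorBufBytes = 0;
glBindBufferARB( GL_ARRAY_BUFFER_ARB, it->second.mBufferVertId );
glGetBufferParameterivARB( GL_ARRAY_BUFFER_ARB, GL_BUFFER_SIZE_ARB, &vertBufBytes );
glBindBufferARB( GL_ARRAY_BUFFER_ARB, it->second.mBufferColorId );
glGetBufferParameterivARB( GL_ARRAY_BUFFER_ARB, GL_BUFFER_SIZE_ARB, &colorBufBytes );

const GLint vertCount = it->second.mNumPrimitives * it->second.mNumVertsPerPrimitive;
assert( vertCount * 3 * (GLint)sizeof( float ) <= vertBufBytes );
assert( vertCount * 4 * (GLint)sizeof( float ) <= colorBufBytes );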

 

 


Mouse Cursor and debugger

13 June 2011 - 02:47 PM

I'm working on an FPS game that hides the mouse cursor, as most of them do. However, there's an annoying quirk: when an assert or breakpoint is hit, you have to ctrl-alt-del and click around to un-hide the mouse cursor before you can do anything useful in the debugger. Surely there is a better way to manage the mouse cursor?
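
One idea I've been toying with, but haven't fully vetted, is Win32 specific: route asserts through a macro that releases the cursor clip/capture and shows the cursor again before actually breaking, roughly like this (GAME_ASSERT standing in for whatever the project's assert macro is):

#include <windows.h>
#include <intrin.h>
#include <cstdio>

// Make the cursor usable again before handing control to the debugger.
#define GAME_ASSERT( cond )                                        \
    do {                                                           \
        if ( !( cond ) ) {                                         \
            ClipCursor( NULL );                                    \
            ReleaseCapture();                                      \
            while ( ShowCursor( TRUE ) < 0 ) {}                    \
            fprintf( stderr, "Assert failed: %s\n", #cond );       \
            __debugbreak();                                        \
        }                                                          \
    } while ( 0 )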
