
Ender1618

Member Since 30 Apr 2004
Offline Last Active Nov 17 2014 03:38 PM

Topics I've Started

View Frustum Culling Corner Cases

03 January 2014 - 03:05 PM

I was reviewing the view frustum culling code for a new OpenGL project I am working on and noticed quite a few corner cases with large bounding volumes (e.g. spheres) and smaller frustums: cases where the bounding volume does not intersect my frustum volume at all, yet is accepted as visible. I am using the Lighthouse3d method (http://www.lighthouse3d.com/tutorials/view-frustum-culling/) for extracting the planes and testing them against bounding volumes (the geometric frustum-plane approach, not the radar approach).

 

Here is an example (all frustum plane normals face inward, shown as blue lines):

http://img10.imageshack.us/img10/3970/70bm.jpg

 

The image above is a top-down view, but neither the top nor the bottom frustum plane rejects the sphere either.

 

This family of intersection methods relies on at least one of the frustum planes rejecting the volume as outside. But there are corner cases, like the one in the image above, where the volumes do not intersect yet none of the frustum planes rejects the volume.

 

How does one typically deal with such cases (while still using world-space frustum-plane culling techniques, if possible)?
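
For reference, here is a minimal sketch of the test I am describing, plus one cross-check I have seen suggested (testing the frustum's corner points against the sphere's axis-aligned bounds). The struct names are just illustrative, and the cross-check only reduces the false positives rather than eliminating them entirely:

// Illustrative sketch: conservative plane test plus a frustum-corner cross-check.
struct Vec3    { float x, y, z; };
struct Plane   { Vec3 n; float d; };      // unit normal pointing inward; dot(n, p) + d = 0 on the plane
struct Sphere  { Vec3 c; float r; };
struct Frustum { Plane planes[6]; Vec3 corners[8]; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool SphereVisible(const Frustum& f, const Sphere& s)
{
  // 1) Standard plane test: reject if the sphere lies entirely behind any plane.
  for (const Plane& p : f.planes)
    if (Dot(p.n, s.c) + p.d < -s.r)
      return false;

  // 2) Cross-check: if all eight frustum corners lie beyond one face of the
  //    sphere's axis-aligned bounding box, the two volumes cannot intersect.
  int out;
  out = 0; for (const Vec3& c : f.corners) out += (c.x > s.c.x + s.r); if (out == 8) return false;
  out = 0; for (const Vec3& c : f.corners) out += (c.x < s.c.x - s.r); if (out == 8) return false;
  out = 0; for (const Vec3& c : f.corners) out += (c.y > s.c.y + s.r); if (out == 8) return false;
  out = 0; for (const Vec3& c : f.corners) out += (c.y < s.c.y - s.r); if (out == 8) return false;
  out = 0; for (const Vec3& c : f.corners) out += (c.z > s.c.z + s.r); if (out == 8) return false;
  out = 0; for (const Vec3& c : f.corners) out += (c.z < s.c.z - s.r); if (out == 8) return false;

  return true; // still conservative, but with far fewer corner-case acceptances
}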

 


Normal-oriented elliptical shapes (surfels) using point sprites

10 December 2013 - 02:45 PM

I am trying to reproduce this effect with point sprites (given vertices with a position and normal). 

 

http://imageshack.com/a/img36/7057/5t7b.jpg

 

Essentially, fragments of a point sprite are discarded, depending on the normal of that point, to produce an elliptical shape tangent to the point normal (roughly the orthographic projection of a 3D circle onto a 2D ellipse).

 

From the equation I found, d = -(n.x/n.z)*x - (n.y/n.z)*y (which is just the tangent-plane equation n.x*x + n.y*y + n.z*d = 0, through the point center, solved for d), a fragment is discarded if the world-space distance from the point center to the point (x, y, d) is greater than the disk radius (as indicated by the text above the image).

 

I am trying to figure out the right way of doing this in my GLSL vertex and fragment shaders, using point sprites.

 

In my shaders I am doing something like this, which isn't working:

 

//vertex shader
#version 400


layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

out vec3 Color;
flat out vec3 PtPosition;
flat out vec3 PtNormal;
out vec3 FragPosition;

uniform mat4 MVP;
uniform float heightMin;
uniform float heightMax;
uniform bool invertGrad = false;


uniform mat4 MV;
uniform float pointSize;
uniform float viewportDim;


float perspPtSize(vec3 ptPos, mat4 mv, float ptWorldSize, float viewportDim)
{
  vec3 posEye = vec3(mv * vec4(ptPos, 1.0));
  return ptWorldSize * (viewportDim / length(posEye));
}

void main()
{
  Color = vec3(1.0,1.0,1.0);
  PtPosition = vec3(MV * vec4(VertexPosition,1.0));
  FragPosition = PtPosition; 
  PtNormal = normalize(vec3(MV * vec4(VertexNormal, 0.0))); // treat the normal as a direction (w = 0) so the view translation is not applied
  gl_Position = MVP * vec4(VertexPosition,1.0);


  gl_PointSize = perspPtSize(VertexPosition.xyz,MV,pointSize,viewportDim);
}


//fragment shader
#version 400

layout( location = 0 ) out vec4 FragColor;

in vec3 Color;
flat in vec3 PtPosition;
flat in vec3 PtNormal;

void main() 
{
  vec2 ptC = gl_PointCoord- vec2(0.5);
  float depth = -PtNormal.x/PtNormal.z*ptC.x - 
                 PtNormal.y/PtNormal.z*ptC.y; 
  float sqrMag = ptC.x*ptC.x + ptC.y*ptC.y + depth*depth; 

  if(sqrMag > 0.25) 
  { discard; } 
  else 
  { FragColor = vec4(Color, 1.0); }
}

Has anyone successfully implemented an effect like this? I tried doing this in world space as well but got incorrect results; I figured that if I kept everything in the point sprite's own space it might be easier.
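
For completeness, here is the host-side point sprite state I am assuming (a sketch; surfelProgram, pointVAO, and pointCount are hypothetical names). As far as I understand, in a core profile gl_PointSize is only honored when GL_PROGRAM_POINT_SIZE is enabled, and the gl_PointCoord origin follows GL_POINT_SPRITE_COORD_ORIGIN:

// Host-side state sketch for the shaders above (GL 4.0 core profile assumed).
glEnable(GL_PROGRAM_POINT_SIZE);                                 // honor gl_PointSize written in the vertex shader
glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_UPPER_LEFT);  // gl_PointCoord origin convention

glUseProgram(surfelProgram);                                     // hypothetical program built from the shaders above
glBindVertexArray(pointVAO);                                     // hypothetical VAO: position + normal per vertex
glDrawArrays(GL_POINTS, 0, pointCount);                          // hypothetical point count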

 

I think I am missing some basic concept for doing this. Any suggestions?


Choosing specific GPU for OpenGL context?

23 October 2013 - 03:06 PM

I have an issue with my application (Win7 64-bit, OpenGL 4.0) picking the wrong GPU for OpenGL acceleration on some people's machines, e.g. the integrated Intel HD3000 instead of the Nvidia or ATI GPU. The HD3000 does not support OpenGL 4.0 (AFAIK), which is my minimum requirement, so the app fails to run.

 

BTW, my app is intended to be cross-platform (but right now Windows 7 is most important, then Linux, then Mac).

 

I am currently creating my OpenGL 4.x context with the aid of SDL 1.2 (I started this code base a while back) and glew. With SDL 1.2 there is no way to enumerate the available devices (GPUs) and select one. I remember that back in my DX days, device enumeration and selection were supported.

 

Does anyone know whether other cross-platform OpenGL context creation libraries, such as SDL 2.0, SFML, or GLFW, support device enumeration and device-specific GL context creation (with glew support)?

 

My only workaround right now is forcing the app to use the Nvidia card in the Nvidia control panel (or the ATI equivalent), or turning off Intel Optimus at the BIOS level, neither of which (I think) can be automated. That is a lot to ask of a user, and it is a horrid kludge.
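
The only programmatic alternative I have found so far is Windows-only and driver-dependent: the NVIDIA Optimus and AMD switchable-graphics drivers are supposed to check for the exported symbols below and, if present, run the process on the discrete GPU. I have not verified this on every driver, so treat it as a sketch rather than a guaranteed fix:

// Windows-only hint for switchable graphics (Optimus / PowerXpress); exported
// from the executable, checked by the NVIDIA and AMD drivers respectively.
extern "C"
{
  __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
  __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}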

 

Thanks for any guidance.


Proper shutdown for SDL 1.2 with OpenGL

11 October 2013 - 10:03 PM

I am using SDL 1.2 in a minimal fashion to create a cross-platform OpenGL context in C++ (this is on Win7 64-bit). I also use glew so that my context supports OpenGL 4.2 (which my driver supports).
 
Things work correctly at run time, but lately I have been noticing a random crash on shutdown when calling SDL_Quit.
 
What is the proper sequence for SDL (1.2) with OpenGL start up and shutdown?
 
Here is what I do currently:
 
    int MyObj::Initialize(int width, int height, bool vsync, bool fullscreen)
    {
      if(SDL_Init( SDL_INIT_EVERYTHING ) < 0) 
      {
        printf("SDL_Init failed: %s\n", SDL_GetError());
        return 0;
      }

      SDL_GL_SetAttribute(SDL_GL_RED_SIZE,         8);
      SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE,       8);
      SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,        8);
      SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE,       8);

      SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE,       24);
      SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE,       8);
      SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE,     24);

      SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS,  0);
      SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL,  vsync ? 1 : 0);

      if((m_SurfDisplay = SDL_SetVideoMode(width, height, 24, 
                                           SDL_HWSURFACE | 
                                           SDL_GL_DOUBLEBUFFER | 
                                           (fullscreen ? SDL_FULLSCREEN : 0) |
                                           SDL_OPENGL)) == NULL)
      {
        printf("SDL_SetVideoMode failed: %s\n", SDL_GetError());
        return 0;
      }

      GLenum err = glewInit();
      if (GLEW_OK != err) 
        return 0;
      
      m_Running = true;
      return 1;
    }

    int MyObj::Shutdown()
    {   
      SDL_FreeSurface(m_SurfDisplay);
      SDL_Quit();


      return 1;
    }
In between the init and shutdown calls I create a number of GL resources (e.g. textures, VBOs, VAOs, shaders) and render my scene each frame, with an SDL_GL_SwapBuffers() at the end of each frame (pretty typical). Like so:
 
    int MyObject::Run()
    {
      SDL_Event Event;
    
      while(m_Running) 
      {
        while(SDL_PollEvent(&Event))
        { OnEvent(&Event); } //this eventually causes m_Running to be set to false on "esc"
        
        ProcessFrame();
        SDL_GL_SwapBuffers();
      }
      return 1;
    }
MyObject::Shutdown() is called within ~MyObject, and just recently SDL_Quit has started crashing the app there. I have also tried calling Shutdown outside of the destructor, after my render loop returns, to the same effect.
 
One thing that I do not do (and didn't think I needed to do) is call the glDelete* functions for all my allocated GL resources before calling Shutdown; I thought they would automatically be cleaned up by the destruction of the context, which I assumed was happening during SDL_FreeSurface or SDL_Quit(). I do, of course, call the glDelete* functions in the destructors of their wrapper objects, which eventually get called at the tail end of ~MyObject, since the wrapper objects are members of other objects that are members of MyObject.
 
As an experiment I tried forcing all the appropriate glDelete* calls to occur before Shutdown(), and the crash no longer seems to occur. The funny thing is that I did not need to do this a week ago, and nothing has really changed according to Git (though I may be wrong about that).
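
For reference, here is the shutdown order that seems consistent with that experiment, plus one other thing I noticed while rereading the SDL 1.2 docs: the surface returned by SDL_SetVideoMode is owned by SDL and freed by SDL_Quit, so passing it to SDL_FreeSurface as I do above looks like a potential double free. This is only a sketch; ReleaseGLResources is a hypothetical helper, not part of SDL:

    int MyObj::Shutdown()
    {
      // Delete GL objects (textures, VBOs, VAOs, shaders, ...) while the
      // context still exists. ReleaseGLResources is a hypothetical helper.
      ReleaseGLResources();

      // Per the SDL 1.2 docs, the display surface from SDL_SetVideoMode is
      // freed by SDL_Quit and must not be passed to SDL_FreeSurface.
      m_SurfDisplay = NULL;
      SDL_Quit();

      return 1;
    }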
 
Is it really necessary to make sure all GL resources are freed before calling MyObject::Shutdown with SDL? Does it look like I might be doing something else wrong?
 
Thanks for any insights.

Flowgraph AI?

27 August 2013 - 08:32 PM

Has anyone had any experience using a flow-graph-based AI specification? Like this:

http://seithcg.com/wordpress/?page_id=1605

How does it compare with Behavior Trees? Any non-obvious (or obvious) pros or cons? Or just apples and oranges?

From the samples I have seen, it looks like a behavior-tree-ish style of specification, but with parameter fields (such as target, speed, etc.) coming from source nodes, and with different activation semantics. The samples I have seen are also missing things like fallbacks and concurrency that I get with my BT implementation, but it doesn't seem like a big jump to add these.

It may be how I have been writing my example BTs or the nodes I have implemented (it probably is), but users attempting to create new trees are having a hard time expressing what they want without my help.

I also have a visual tree editor similar to Brainiac Designer, and the usual nodes. Leaves: Condition, Wait, Action, TreeCall. Branches: Selector, Sequence, Parallel, ActiveSelector, SwitchSelector, WeightedRandomSelector. Decorators: Always, Negate, Loop, Monitor, ConditionDec, Limit, Periodic, Toggle, etc.
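
(For anyone not familiar with those terms, the core these nodes sit on is roughly the following. This is an illustrative sketch, not my actual implementation.)

// Illustrative behavior-tree core: each node ticks to one of three results,
// and a Selector scans its children in priority order, stopping at the first
// child that does not fail.
#include <vector>

enum class Status { Success, Failure, Running };

struct Node
{
  virtual ~Node() {}
  virtual Status Tick(float dt) = 0;
};

struct Selector : Node
{
  std::vector<Node*> children;   // ordered by priority, highest first
  Status Tick(float dt) override
  {
    for (Node* child : children)
    {
      Status s = child->Tick(dt);
      if (s != Status::Failure)
        return s;                // Success or Running short-circuits the scan
    }
    return Status::Failure;      // every child failed
  }
};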

Maybe the problem is too many node types, not enough higher-level node types, or Actions that are too granular or not granular enough?

I often puzzle over how to express a particular behavior in the system, thinking to myself "oh, if only I had this imperative-like ability from Lua", but I manage. Then again, I am a developer (who wrote the system), while the users are at best very junior developers or not developers at all.

I am also starting to really appreciate the benefits I am getting from my BT implementation, like simulated concurrency, latent functions, subgraph aborts, etc., that would be REALLY hard via imperative scripting (I've been there, done that).

What I have is working, but I end up building too many of the trees myself or helping create them too often. I am trying to reduce that.

I have also been looking into adding utility-based selectors into the mix. So far it has been a bit of a can of worms: extremely useful, but formulating appropriate factors and considerations and picking proper response curves is an art unto itself. I don't think I will ever get a user to the point of building those (I planned to hide the utility methods in trees they can simply call through the TreeCall leaf, in things like the perception system, or in custom actions like findBestEnemyTarget).

I wonder if a flow graph representation might be more intuitive. I have to keep reminding my users: priority order, priority order...
They keep missing or misunderstanding how the BT activation flow works.

Maybe a mix of flow graphs and strict BTs (like Crysis)?

I'm asking for the impossible, I know: user-level intuitiveness yet powerful expression. I am just trying to inch closer.

Any thoughts?

BTW, my previous system was visual-HFSM based, and with that one the users were TRULY lost; BTs are going over relatively better.

P.S. Maybe I should be looking into some automatic (partial) BT construction through combinations of higher-level rules (e.g. ogres hate grues, ogres like spiders, get nervous when it rains: not these rules per se, but the concept), or partial planning?

