Ender1618

Member Since 30 Apr 2004
Offline Last Active Jun 16 2015 03:33 PM

Topics I've Started

Unity 5.1 and Rune Locomotion System?

16 June 2015 - 03:36 PM

I am currently using Unity 4.6, but I plan on upgrading my commercial license to Unity 5.1 soon.

Part of my project still uses the legacy animation system for character animation (will be porting that to Mecanim in the future), and it currently makes extensive use of the Rune Locomotion System for foot planting over uneven terrain during character locomotion.

I just read a user comment on the Rune Locomotion System asset store page saying that the system does not work at all in Unity 5+ (even if you are still using the legacy animation system for character animation), for undisclosed reasons, though it does still work in Unity 4.6.

Does anyone know if this is true?

Does anyone know of a good replacement that covers the abilities of the Rune Locomotion System but works with Mecanim?

I came across "Mecanim - Basic Foot Placement" in the asset store, but it doesn't claim to support much of what the Rune system can do.

Thx,
-Ryan


View Frustum Culling Corner Cases

03 January 2014 - 03:05 PM

I was reviewing my view frustum culling code for a new OpenGL project I am working on, and noticed quite a few corner cases with large bounding volumes (e.g. spheres) and smaller frustums: cases where the bounding volume in no way intersects my frustum volume, yet is accepted as visible. I am using the Lighthouse3d method (http://www.lighthouse3d.com/tutorials/view-frustum-culling/) for extracting planes and testing against bounding volumes (the geometric frustum plane method, not the radar method).

 

Here is an example (all frustum plane normals face inward, shown as blue lines):

http://img10.imageshack.us/img10/3970/70bm.jpg

 

The above image is top-down, but neither the top nor the bottom frustum plane rejects the sphere either.

 

This family of intersection methods relies on at least one of the frustum planes rejecting the volume as outside. But there are corner cases, like the one in the image above, where the volume does not intersect the frustum, yet no single frustum plane rejects it.
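
For reference, here is a minimal sketch of the kind of test I am using (hypothetical Vec3 and Plane types; planes stored normalized with inward-facing normals). It shows why the result is conservative: a sphere is culled only when it lies entirely behind a single plane, so a sphere sitting off an edge or corner of the frustum can pass all six plane tests without actually intersecting the frustum volume.

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };  // dot(n, p) + d = 0, n unit length, facing inward

float dot(const Vec3& a, const Vec3& b)
{
  return a.x*b.x + a.y*b.y + a.z*b.z;
}

bool sphereInFrustum(const Plane planes[6], const Vec3& center, float radius)
{
  for (int i = 0; i < 6; ++i)
  {
    // Signed distance from the sphere center to this plane.
    float dist = dot(planes[i].n, center) + planes[i].d;
    if (dist < -radius)
      return false;  // entirely behind one plane: definitely outside
  }
  // Not rejected by any single plane: reported visible, even in the
  // corner cases above where the sphere is actually outside.
  return true;
}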

 

How does one typically deal with such cases (while still using world-space frustum plane culling techniques, if possible)?

 


Normal oriented elliptical shapes (surfels) using point sprites.

10 December 2013 - 02:45 PM

I am trying to reproduce this effect with point sprites (given vertices with a position and normal). 

 

http://imageshack.com/a/img36/7057/5t7b.jpg

 

Essentially, I am discarding fragments of a point sprite, dependent on the normal of that point, to produce an elliptical shape tangent to the point normal (an approximate orthographic projection of a 3D circle to a 2D ellipse).

 

Since a point (x, y, d) on the disk's plane satisfies n.x*x + n.y*y + n.z*d = 0, the depth works out to d = -(n.x/n.z)*x - (n.y/n.z)*y. A fragment is then discarded if the distance from the point center to (x, y, d) is greater than the disk radius (as indicated by the text above the image).

 

I am trying to figure out the right way of doing this in my GLSL vertex and fragment shaders, using point sprites.

 

In my shaders I am doing something like this, which isn't working:

 

//vertex shader
#version 400


layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

out vec3 Color;
flat out vec3 PtPosition;
flat out vec3 PtNormal;
out vec3 FragPosition;

uniform mat4 MVP;
uniform float heightMin;
uniform float heightMax;
uniform bool invertGrad = false;


uniform mat4 MV;
uniform float pointSize;
uniform float viewportDim;


// Approximate on-screen point size: scales the world-space size by the
// viewport dimension over the eye-space distance to the point.
float perspPtSize(vec3 ptPos, mat4 mv, float ptWorldSize, float viewportDim)
{
  vec3 posEye = vec3(mv * vec4(ptPos, 1.0));
  return ptWorldSize * (viewportDim / length(posEye));
}

void main()
{
  Color = vec3(1.0,1.0,1.0);
  PtPosition = vec3(MV * vec4(VertexPosition,1.0));
  FragPosition = PtPosition;
  // Transform the normal as a direction (w = 0), not a position; this assumes
  // MV has no non-uniform scale, otherwise a normal matrix is needed.
  PtNormal = vec3(MV * vec4(VertexNormal,0.0));
  gl_Position = MVP * vec4(VertexPosition,1.0);

  gl_PointSize = perspPtSize(VertexPosition.xyz,MV,pointSize,viewportDim);
}


//fragment shader
#version 400

layout( location = 0 ) out vec4 FragColor;

in vec3 Color;
flat in vec3 PtPosition;
flat in vec3 PtNormal;

void main()
{
  // Center the sprite coordinate: gl_PointCoord is in [0,1], origin top-left.
  vec2 ptC = gl_PointCoord - vec2(0.5);

  // Depth of the tangent disk at (x, y), from dot(PtNormal, (x, y, depth)) = 0.
  float depth = -PtNormal.x/PtNormal.z*ptC.x -
                 PtNormal.y/PtNormal.z*ptC.y;
  float sqrMag = ptC.x*ptC.x + ptC.y*ptC.y + depth*depth;

  // Discard fragments farther than the disk radius (0.5 in sprite space).
  if(sqrMag > 0.25)
  { discard; }
  else
  { FragColor = vec4(Color, 1.0); }
}

Has anyone successfully implemented an effect like this? I tried doing this in world space as well but got incorrect results; I figured that if I left everything in point sprite space it might be easier.

 

I think I am missing some basic concept here. Any suggestions?


Choosing specific GPU for OpenGL context?

23 October 2013 - 03:06 PM

I have an issue with my application (Win7 64-bit, OpenGL 4.0) picking the wrong GPU for OpenGL acceleration on some people's machines, e.g. the integrated Intel HD3000 instead of the Nvidia or ATI GPU. The HD3000 does not support OpenGL 4.0 (AFAIK), which is my minimum requirement, so the app fails to run.

 

BTW, my app is intended to be cross-platform (but right now Windows 7 is most important, then Linux, then Mac).

 

I am currently creating my OpenGL 4.x context with the aid of SDL 1.2 (I started this code base a while back) and glew. With SDL 1.2 there is no way to enumerate the available devices (GPUs) and select one. I remember that back in my DX days device enumeration and selection were supported.

 

Does anyone know whether any of the other cross-platform OpenGL context creation libraries, such as SDL 2.0, SFML, or GLFW, support device enumeration and device-specific GL context creation (with glew support)?

 

My only workaround right now is forcing the app to use the Nvidia card in the Nvidia control panel (or the ATI equivalent), or turning off Intel Optimus at the BIOS level, neither of which (I think) can be automated. This is a lot to ask of a user, and is a horrid kludge.
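
One Windows-specific hint I have come across (an assumption on my part that it fits my setup, and a driver hint rather than real device enumeration): exporting the symbols below from the executable asks the Nvidia Optimus and AMD switchable-graphics drivers to pick the high-performance GPU for the process.

#ifdef _WIN32
extern "C"
{
  // Hint to the Nvidia Optimus driver to prefer the discrete GPU.
  __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
  // The AMD switchable-graphics equivalent.
  __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}
#endif

This does not help with enumerating devices or creating a context on a specific GPU, but it could remove the control-panel step on laptops with switchable graphics.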

 

Thanks for any guidance.


Proper shutdown for SDL 1.2 with OpenGL

11 October 2013 - 10:03 PM

I am using SDL 1.2 in a minimal fashion to create a cross-platform OpenGL context (this is on Win7 64-bit) in C++. I also use glew so that my context supports OpenGL 4.2 (which my driver supports).
 
Things work correctly at run time, but lately I have been noticing a random crash on calling SDL_Quit during shutdown.
 
What is the proper sequence for SDL (1.2) with OpenGL start up and shutdown?
 
Here is what I do currently:
 
    int MyObj::Initialize(int width, int height, bool vsync, bool fullscreen)
    {
      if(SDL_Init( SDL_INIT_EVERYTHING ) < 0) 
      {
        printf("SDL_Init failed: %s\n", SDL_GetError());
        return 0;
      }

      SDL_GL_SetAttribute(SDL_GL_RED_SIZE,         8);
      SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE,       8);
      SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,        8);
      SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE,       8);

      SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE,       24);
      SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE,       8);
      SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE,     24);

      SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS,  0);
      SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL,  vsync ? 1 : 0);

      if((m_SurfDisplay = SDL_SetVideoMode(width, height, 24, 
                                           SDL_HWSURFACE | 
                                           SDL_GL_DOUBLEBUFFER | 
                                           (fullscreen ? SDL_FULLSCREEN : 0) |
                                           SDL_OPENGL)) == NULL)
      {
        printf("SDL_SetVideoMode failed: %s\n", SDL_GetError());
        return 0;
      }

      GLenum err = glewInit();
      if (GLEW_OK != err) 
        return 0;
      
      m_Running = true;
      return 1;
    }

    int MyObj::Shutdown()
    {   
      SDL_FreeSurface(m_SurfDisplay);
      SDL_Quit();


      return 1;
    }
In between the init and shutdown calls I create a number of GL resources (e.g. textures, VBOs, VAOs, shaders) and render my scene each frame, with an SDL_GL_SwapBuffers() call at the end of each frame (pretty typical), like so:
 
    int MyObject::Run()
    {
      SDL_Event Event;
    
      while(m_Running) 
      {
        while(SDL_PollEvent(&Event))
        { OnEvent(&Event); } //this eventually causes m_Running to be set to false on "esc"
        
        ProcessFrame();
        SDL_GL_SwapBuffers();
      }
      return 1;
    }
MyObject::Shutdown() is called within ~MyObject, and just recently SDL_Quit started crashing the app there. I have also tried calling Shutdown outside of the destructor, after my render loop returns, to the same effect.
 
One thing that I do not do (and didn't think I needed to do) is call the glDelete* functions for all my allocated GL resources before calling Shutdown; I thought they would automatically be cleaned up by the destruction of the context, which I assumed was happening during SDL_FreeSurface or SDL_Quit(). I do of course call the glDelete* functions in the dtors of their wrapping objects, which eventually get called at the tail of ~MyObject, since the wrapper objects are parts of other objects that are members of MyObject.
 
As an experiment I tried forcing all the appropriate glDelete* calls to occur before Shutdown(), and the crash never seems to occur. The funny thing is that I did not need to do this a week ago, and really nothing has changed according to Git (I may be wrong though).
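 
In other words, the ordering from that experiment looks like this (a sketch; ReleaseGLResources() is a hypothetical stand-in for the glDelete* calls my wrapper dtors make):
 
    int MyObj::Shutdown()
    {
      // Free GL objects while the context still exists:
      // glDeleteTextures, glDeleteBuffers, glDeleteVertexArrays, glDeleteProgram, ...
      ReleaseGLResources();

      // The SDL 1.2 docs say the surface returned by SDL_SetVideoMode is freed
      // by SDL itself, so it should not be passed to SDL_FreeSurface here.
      SDL_Quit();
      return 1;
    }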
 
Is it really necessary to make sure all GL resources are freed before calling MyObject::Shutdown with SDL? Does it look like I might be doing something else wrong?
 
Thanks for any insights.
