Community Reputation

254 Neutral

About Ender1618

  1. I am having trouble using glGetTexImage with the depth texture of an FBO. My FBO is set up like so:

[source]
GLfloat border[] = { 1.0f, 0.0f, 0.0f, 0.0f };

// depth texture setup
m_DepthTexture.Create(dbFormat, m_Width, m_Height);
m_TextureID = m_DepthTexture.GetTextureID();
m_DepthTexture.SetFilterMode(GL_NEAREST, GL_NEAREST);
m_DepthTexture.SetWrapMode(GL_CLAMP_TO_BORDER, GL_CLAMP_TO_BORDER);

glBindTexture(GL_TEXTURE_2D, m_TextureID);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LESS);

// Assign the depth map to texture unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_TextureID);

// Create and set up the FBO
glGenFramebuffers(1, &m_FBOID);
glBindFramebuffer(GL_FRAMEBUFFER, m_FBOID);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, m_TextureID, 0);

GLenum drawBuffers[] = { GL_NONE };
glDrawBuffers(1, drawBuffers);

GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);

if(status != GL_FRAMEBUFFER_COMPLETE)
{
  return false;
}
[/source]

I draw to it like so:

[source]
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_FBOID);
// draw some things
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
[/source]

And I attempt to copy the depth texture to system memory like so:

[source]
glBindTexture(GL_TEXTURE_2D, m_TextureID);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, m_DepthImage.data);
glCheckErrorRetB();
glBindTexture(GL_TEXTURE_2D, 0);
[/source]

The glGetTexImage call fails with GL_INVALID_OPERATION. This same code works in another project I have, but not in this project, so I am sure it has to do with some internal state being set incorrectly. Any ideas?
  2. 3D printer for hobby robotics

    I am getting into hobby robotics, and I am finding that I increasingly need the ability to create custom parts. What would be a good 3D printer to buy (a couple grand max cost) for creating parts such as chassis, wheels, servo armatures and holders, gears, etc.? For me, one of the most important things to be able to make would be my own custom gear trains. What would be a good 3D printer, precise enough and using tough enough materials to create gears for robots on the size scale of a cat, such as differential wheeled, hexapod, and quadruped platforms? Thanks, -Ryan
  3. So I am working in Unity (5.x) Pro and have my AI agents moving about via calls to Unity's built-in pathfinding and movement system (NavMesh based). In the past I also did a lot of work with Reynolds-style steering behaviors, and had a lot of nice (useful) emergent behaviors come about. But when I used them there was a low density of static obstacles, so a true global pathfinding solution was unnecessary. For this new Unity project, my maps have a high density of obstacles (with some areas of low density, e.g. large courtyards), and a global pathfinder is quite necessary (some places can be maze-like). So what I want to do is properly mix NavMesh-based path planning with steering behaviors (Evade, Chase, Formations, etc.), and it is not quite clear to me how that would work. The NavMesh does give me all walkable areas, which I can query with raycasts. I can also compute paths from any location on the NavMesh to any other location without having to traverse them (I've used that to know whether a path even exists), but this is an expensive operation and cannot be performed too often (once per update frame would not work). The number of AI agents I am dealing with right now is anywhere from 1 to 32. I was just wondering if others have tried to mix these two movement systems, and how they went about it? Thanks, -Ryan
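One common way to combine the two, sketched below under assumptions of my own (Vec2, seek, separation, and blend are illustrative names, not Unity API): let the NavMesh path supply the primary velocity toward the next path corner, and let steering behaviors contribute a bounded local correction, so the global route stays valid while steering only perturbs motion locally.

```cpp
#include <cassert>
#include <cmath>

// Minimal 2D vector for the sketch.
struct Vec2 {
    float x = 0, y = 0;
    Vec2 operator+(Vec2 o) const { return {x + o.x, y + o.y}; }
    Vec2 operator-(Vec2 o) const { return {x - o.x, y - o.y}; }
    Vec2 operator*(float s) const { return {x * s, y * s}; }
};
float length(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }
Vec2 normalized(Vec2 v) { float l = length(v); return l > 0 ? v * (1.0f / l) : v; }

// Global term: seek toward the next corner of the NavMesh path.
Vec2 seek(Vec2 pos, Vec2 corner, float maxSpeed) {
    return normalized(corner - pos) * maxSpeed;
}

// Local term: push away from nearby agents within a separation radius.
Vec2 separation(Vec2 pos, const Vec2* neighbors, int count, float radius) {
    Vec2 push;
    for (int i = 0; i < count; ++i) {
        Vec2 away = pos - neighbors[i];
        float d = length(away);
        if (d > 0 && d < radius)
            push = push + normalized(away) * ((radius - d) / radius);
    }
    return push;
}

// Weighted blend of the two, clamped to maxSpeed.
Vec2 blend(Vec2 pathTerm, Vec2 steerTerm, float steerWeight, float maxSpeed) {
    Vec2 v = pathTerm + steerTerm * steerWeight;
    float l = length(v);
    return l > maxSpeed ? v * (maxSpeed / l) : v;
}
```

In Unity terms the equivalent would be feeding the blended vector to NavMeshAgent.Move (or velocity) each frame instead of letting the agent steer itself, and keeping the steering weight small enough that agents are not pushed off the walkable surface.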
  4. I am trying to have character motion controlled by a mix of root motion for translation and code for rotation, and I am having difficulty. Like so:

[source]
void OnAnimatorMove()
{
  if(m_CurDeltaAngle != 0)
  {
    transform.rotation = Quaternion.Euler(Vector3.up * m_CurDeltaAngle);
  }
  if(m_CurTranslSpeed != 0)
  {
    transform.position = m_Animator.rootPosition;
  }
}
[/source]

Now, as long as transform.rotation is never set (in the function), m_Animator.rootPosition is updated with the correct value. But once m_CurDeltaAngle != 0 and transform.rotation is assigned to, from then on Animator.rootPosition is never updated (neither is Animator.deltaPosition). The character now turns in place correctly but does not translate. Why would this be? What could I be doing wrong? Can anyone suggest another way of achieving this mix of translational motion from root motion, but rotation from code? I need to do this because I have a rather large animation set with plenty of translational movement for locomotion animations, but no turning animations. Thx, -Ryan
  5. I am currently using Unity 4.6, but I plan on upgrading my Unity 4.6 commercial license to Unity 5.1 soon. Part of my project still uses the legacy animation system for character animation (I will be porting that to Mecanim in the future), and it currently makes extensive use of the Rune Locomotion System for foot planting over uneven terrain during character locomotion. I just read a user comment on the Rune Locomotion System asset store page that said the system does not work at all in Unity 5+ (even if you are still using the legacy animation system for character animation), for undisclosed reasons, though it does still work in Unity 4.6. Does anyone know if this is true? Does anyone know of a good replacement for the abilities of the Rune Locomotion System that works with Mecanim? I came across "Mecanim - Basic Foot Placement" in the asset store, but it doesn't claim to support much of what the Rune system could do. Thx, -Ryan
  6. Thanks for that article. Ack, that does add quite a bit more processing! So I guess checking all frustum corners to see if they are within my sphere bounds would be the equivalent? I also do AABB culling; I'm using the AABB pn-vertex optimization (mentioned in the Lighthouse article), but I'm not sure how it would work in this case (not sure how to get the p-vertex of a frustum as I can with an AABB). Has anyone attempted something like the pn-vertex optimization for properly handling large AABBs against a frustum? Any semi-optimized (pn-vertex style) implementations out there?
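For what it's worth, here is a hedged sketch (my own, not from the Lighthouse article) of one way to extend the p-vertex plane test with a cheap second pass that catches many of the large-volume false positives: after the usual six-plane test, check whether all eight frustum corners lie beyond a single face slab of the AABB. That second pass is a partial separating-axis test using the box's own axes, so it is still conservative, just much less often wrong. The inward-facing plane convention and all names are assumptions:

```cpp
#include <cassert>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // inward-facing: n.p + d >= 0 means inside
struct AABB  { Vec3 mn, mx; };

// Standard p-vertex test: if even the box corner farthest along the plane
// normal is behind the plane, the box is entirely outside that plane.
bool boxOutsidePlane(const Plane& pl, const AABB& b) {
    Vec3 p { pl.n.x >= 0 ? b.mx.x : b.mn.x,
             pl.n.y >= 0 ? b.mx.y : b.mn.y,
             pl.n.z >= 0 ? b.mx.z : b.mn.z };
    return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d < 0;
}

// Pass 1: the usual conservative plane test.
// Pass 2: if all 8 frustum corners lie beyond one face slab of the box,
// the volumes cannot overlap (separating axis = a box axis).
bool boxIntersectsFrustum(const Plane planes[6], const Vec3 corners[8], const AABB& b) {
    for (int i = 0; i < 6; ++i)
        if (boxOutsidePlane(planes[i], b)) return false;

    int out;
    out = 0; for (int i = 0; i < 8; ++i) out += corners[i].x > b.mx.x; if (out == 8) return false;
    out = 0; for (int i = 0; i < 8; ++i) out += corners[i].x < b.mn.x; if (out == 8) return false;
    out = 0; for (int i = 0; i < 8; ++i) out += corners[i].y > b.mx.y; if (out == 8) return false;
    out = 0; for (int i = 0; i < 8; ++i) out += corners[i].y < b.mn.y; if (out == 8) return false;
    out = 0; for (int i = 0; i < 8; ++i) out += corners[i].z > b.mx.z; if (out == 8) return false;
    out = 0; for (int i = 0; i < 8; ++i) out += corners[i].z < b.mn.z; if (out == 8) return false;
    return true; // still conservative, but far fewer false positives
}
```

The second pass needs the eight frustum corners, which you already have if you build the frustum from its matrix each frame; an exact test would need the remaining separating-axis candidates (edge cross products), which is usually not worth it for culling.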
  7. I was reviewing my view frustum culling code for a new OpenGL project I am working on, and was noticing a few too many corner cases with large bounding volumes (e.g. spheres) and smaller frustums: cases where the bounding volume is in no way intersecting my frustum volume, yet gets accepted as visible. I am using the Lighthouse3d (http://www.lighthouse3d.com/tutorials/view-frustum-culling/) method (the geometric frustum plane method, not radar) for extracting planes and testing against bounding volumes. Here is an example (all frustum plane normals face inward (blue lines)): http://img10.imageshack.us/img10/3970/70bm.jpg The image is top-down, but neither the top nor the bottom frustum plane rejects the sphere either. This family of intersection methods relies on at least one of the frustum planes rejecting the volume as outside. But there are corner cases where the volumes do not intersect, yet none of the frustum planes rejects the volume, as in the image I posted. How does one typically deal with such cases (while still using world-space frustum plane culling techniques, if possible)?
  8. I am trying to reproduce this effect with point sprites (given vertices with a position and normal): http://imageshack.com/a/img36/7057/5t7b.jpg Essentially, I am discarding fragments of a point sprite dependent on the normal of that point, to produce an elliptical shape tangent to the point normal (essentially an approximate orthographic projection of a 3D circle to a 2D ellipse). From the equation I found, d = -(n.x/n.z)*x - (n.y/n.z)*y, where a fragment is discarded if the world-space distance from the point center to the point (x,y,d) is greater than the disk radius (as indicated by the text above the image). I am trying to figure out the right way of doing this in my GLSL vertex and fragment shaders, using point sprites. In my shaders I am doing something like this, which isn't working:

[source]
//vertex shader
#version 400
layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;
out vec3 Color;
flat out vec3 PtPosition;
flat out vec3 PtNormal;
out vec3 FragPosition;
uniform mat4 MVP;
uniform float heightMin;
uniform float heightMax;
uniform bool invertGrad = false;
uniform mat4 MV;
uniform float pointSize;
uniform float viewportDim;

float perspPtSize(vec3 ptPos, mat4 mv, float ptWorldSize, float viewportDim)
{
  vec3 posEye = vec3(mv * vec4(ptPos, 1.0));
  return ptWorldSize * (viewportDim / length(posEye));
}

void main()
{
  Color = vec3(1.0, 1.0, 1.0);
  PtPosition = vec3(MV * vec4(VertexPosition, 1.0));
  FragPosition = PtPosition;
  PtNormal = vec3(MV * vec4(VertexNormal, 1.0));
  gl_Position = MVP * vec4(VertexPosition, 1.0);
  gl_PointSize = perspPtSize(VertexPosition.xyz, MV, pointSize, viewportDim);
}

//fragment shader
#version 400
layout (location = 0) out vec4 FragColor;
in vec3 Color;
flat in vec3 PtPosition;
flat in vec3 PtNormal;

void main()
{
  vec2 ptC = gl_PointCoord - vec2(0.5);
  float depth = -PtNormal.x/PtNormal.z*ptC.x - PtNormal.y/PtNormal.z*ptC.y;
  float sqrMag = ptC.x*ptC.x + ptC.y*ptC.y + depth*depth;
  if(sqrMag > 0.25)
  {
    discard;
  }
  else
  {
    FragColor = vec4(Color, 1.0);
  }
}
[/source]

Has anyone successfully implemented an effect like this? I tried doing this in world space as well but ended up getting incorrect results; I figured if I left it all in point-sprite space it might be easier. I think I am missing some basic concept for doing this. Any suggestions?
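For reference, the discard predicate itself can be checked on the CPU, independent of the shader plumbing. A minimal C++ sketch of the same math (the function name onDisk is illustrative; it assumes sprite-local coordinates in [-0.5, 0.5], a disk radius of 0.5 in those units, and a unit normal):

```cpp
#include <cassert>
#include <cmath>

// Returns true when the point-sprite fragment at local coords (x, y)
// lies on the oriented disk with unit normal (nx, ny, nz).
// d is the depth of the tilted plane at (x, y): the plane through the
// sprite center with normal n satisfies nx*x + ny*y + nz*d = 0.
bool onDisk(float x, float y, float nx, float ny, float nz) {
    if (std::fabs(nz) < 1e-6f) return false;  // edge-on plane: degenerate, discard
    float d = -(nx / nz) * x - (ny / nz) * y; // same formula as in the shader
    return x * x + y * y + d * d <= 0.25f;    // inside the 3D circle of radius 0.5
}
```

With a face-on normal (0,0,1) this accepts a full circle; tilting the normal shrinks the accepted region along the tilt axis into an ellipse, which is the expected behavior. If the CPU check behaves but the shader does not, the discrepancy is in what space PtNormal ends up in (note that multiplying the normal by MV with w = 1.0 also adds the translation column).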
  9. Other than the Optimus GPU switching issue (say, for example, we just shut it off in the BIOS), do any of the cross-platform OpenGL context creation libraries out there, such as SDL2, SFML, or GLFW, support device enumeration and device-specific GL context creation (with GLEW support, since I would like to use OpenGL 4.0 minimum, or 4.3 if available)? From their online documentation, I haven't found it obvious whether they do or do not support this (I haven't gone into much depth). Or is it just fundamentally beyond their control?
  10. I have an issue with my application (Win7 64-bit, OpenGL 4.0) picking the wrong GPU on some people's machines for OpenGL acceleration, such as the embedded Intel HD3000 GPU instead of the Nvidia or ATI GPU. The HD3000 does not support OpenGL 4.0 (AFAIK), which is my minimum requirement, so the app fails to run. BTW, my app is intended to be cross-platform (but for right now Windows 7 is most important, then Linux, then Mac). I am currently creating my OpenGL 4.x context with the aid of SDL 1.2 (I started this code base a while back) and GLEW. With SDL 1.2 there is no way to enumerate the available devices (GPUs) and select one. I remember back in my DX days, device enumeration and selection was supported. Does anyone know if any other cross-platform OpenGL context creation libraries, such as SDL 2.0, SFML, or GLFW, support device enumeration and device-specific GL context creation (with GLEW support)? My only workaround right now is forcing the app to use the Nvidia card in the Nvidia control panel (or the ATI equivalent), and turning off Intel Optimus at the BIOS level, neither of which (I think) can be automated. This is a lot to ask of a user, and is a horrid kludge. Thanks for any guidance.
  11. Proper shutdown for SDL 1.2 with OpenGL

    BTW, I was told that the call to SDL_FreeSurface I was doing in shutdown was incorrect and that SDL would handle the destruction of the surface during quit, so I am not calling SDL_FreeSurface anymore. But I still get the crash, and it's random. I understand calling glDelete* after SDL_Quit could be a problem, but the crash seems to occur when calling SDL_Quit, not during the destructors of my GL object wrappers that call glDelete*. Why would that be?
  12. I am using SDL 1.2 in a minimal fashion to create a cross-platform OpenGL context (this is on Win7 64-bit) in C++. I also use GLEW to have my context support OpenGL 4.2 (which my driver supports). Things work correctly at run time, but lately I have been noticing a random crash on shutdown when calling SDL_Quit. What is the proper sequence for SDL (1.2) with OpenGL startup and shutdown? Here is what I do currently:

[source]
int MyObj::Initialize(int width, int height, bool vsync, bool fullscreen)
{
  if(SDL_Init(SDL_INIT_EVERYTHING) < 0)
  {
    printf("SDL_Init failed: %s\n", SDL_GetError());
    return 0;
  }

  SDL_GL_SetAttribute(SDL_GL_RED_SIZE,           8);
  SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE,         8);
  SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,          8);
  SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE,         8);
  SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE,         24);
  SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE,       8);
  SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE,        24);
  SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
  SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL,       vsync ? 1 : 0);

  if((m_SurfDisplay = SDL_SetVideoMode(width, height, 24,
                                       SDL_HWSURFACE |
                                       SDL_GL_DOUBLEBUFFER |
                                       (fullscreen ? SDL_FULLSCREEN : 0) |
                                       SDL_OPENGL)) == NULL)
  {
    printf("SDL_SetVideoMode failed: %s\n", SDL_GetError());
    return 0;
  }

  GLenum err = glewInit();
  if(GLEW_OK != err)
    return 0;

  m_Running = true;
  return 1;
}

int MyObj::Shutdown()
{
  SDL_FreeSurface(m_SurfDisplay);
  SDL_Quit();
  return 1;
}
[/source]

In between the init and shutdown calls I create a number of GL resources (e.g. textures, VBOs, VAOs, shaders, etc.) and render my scene each frame, with SDL_GL_SwapBuffers() at the end of each frame (pretty typical). Like so:

[source]
int MyObject::Run()
{
  SDL_Event Event;
  while(m_Running)
  {
    while(SDL_PollEvent(&Event))
    {
      OnEvent(&Event); // this eventually causes m_Running to be set to false on "esc"
    }
    ProcessFrame();
    SDL_GL_SwapBuffers();
  }
  return 1;
}
[/source]

Within ~MyObject, MyObject::Shutdown() is called, where just recently SDL_Quit crashes the app. I have also tried calling Shutdown outside of the destructor, after my render loop returns, to the same effect. One thing that I do not do (that I didn't think I needed to do) is call the glDelete* functions for all my allocated GL resources before calling Shutdown (I thought they would automatically be cleaned up by the destruction of the context, which I assumed was happening during SDL_FreeSurface or SDL_Quit()). I do of course call the glDelete* functions in the dtors of their wrapping objects, which eventually get called at the tail end of ~MyObject, since the wrapper objects are members of other objects that are members of MyObject. As an experiment I tried forcing all the appropriate glDelete* calls to occur before Shutdown(), and the crash never seems to occur. Funny thing is, I did not need to do this a week ago, and really nothing has changed according to Git (I may be wrong though). Is it really necessary to make sure all GL resources are freed before calling MyObject::Shutdown with SDL? Does it look like I might be doing something else wrong? Thanks for any insights.
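The experiment described above points at an ordering constraint: the GL wrappers must release their GL names while the context still exists, i.e. before SDL_Quit tears it down, rather than in destructors that run afterwards. A stubbed C++ sketch of that teardown order (the GL and SDL calls are replaced by log entries; App, GLTexture, and g_log are illustrative names, not the poster's classes):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Records the order in which the stubbed GL/SDL teardown calls happen.
std::vector<std::string> g_log;

// Stand-in for a wrapper whose destructor calls glDelete*.
struct GLTexture {
    ~GLTexture() { g_log.push_back("glDeleteTextures"); }
};

struct App {
    GLTexture* m_Tex = new GLTexture();

    // Explicitly release GL objects *before* the stubbed SDL_Quit, instead
    // of leaving them to member destructors that would run after it.
    void Shutdown() {
        delete m_Tex;
        m_Tex = nullptr;
        g_log.push_back("SDL_Quit");   // stub for SDL_Quit()
    }

    ~App() { delete m_Tex; }           // no-op if Shutdown already ran
};
```

The same ordering can also be achieved without an explicit release step by making the context-owning object the last member destroyed, but with SDL 1.2 owning the context internally, an explicit Shutdown that drains GL resources first is the simpler fix.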
  13. Flowgraph AI?

    So would you recommend hiding a BT system beneath the visual guise of a flowgraph for the game designers to use, while still letting an experienced designer build trees directly? Does that even make sense?
  14. Flowgraph AI?

    What I mean by flowgraph AI is the declarative flow of action activation, with minimal or no imperative-style side effects. Kind of like what a BT does: a flexible rule set (plan) to determine which actions get activated when.
  15. Flowgraph AI?

    Has anyone had any experience using a flow-graph-based AI specification? Like so: http://seithcg.com/wordpress/?page_id=1605 How does it compare with behavior trees? Any non-obvious (or obvious) pros or cons? Or just apples and oranges? From the samples I have seen, it looks sort of like a behavior-tree-ish style of specification, but with parameter fields (such as target, speed, etc.) coming from source nodes, and other activation semantics. The samples I have seen are also missing things like fallbacks and concurrency, which I get with my BT implementation, but it doesn't seem like a big jump to add them.

    It may be how I have been writing my example BTs or the nodes I have implemented (it probably is), but users attempting to create new trees are having a hard time expressing what they want without my help. I also have a visual tree editor similar to Brainiac Designer, with the usual nodes: Leaves (Condition, Wait, Action, TreeCall); Branches (Selector, Sequence, Parallel, ActiveSelector, SwitchSelector, WeightedRandomSelector); Decorators (Always, Negate, Loop, Monitor, ConditionDec, Limit, Periodic, Toggle, etc.). Maybe the problem is too many node types, not enough higher-level node types, or Actions that are too granular or not granular enough? I often puzzle over how to express a particular behavior in the system, thinking to myself "oh, if only I had this imperative-like ability from Lua", but I manage. But I am a developer (who wrote the system), and the users are maybe very junior developers or not developers at all. I am also starting to really appreciate the benefits I get from my BT implementation, like simulated concurrency, latent functions, and subgraph aborts, that would be REALLY hard via imperative scripting (I've been there, done that). What I have is working, but I end up building too many of the trees, or aiding in creating them too often, and I am trying to reduce this.

    I have also been looking into adding utility-based selectors into the mix, but so far it has been a bit of a can of worms: extremely useful, but formulating appropriate factors and considerations, and picking proper response curves, is an art unto itself. I don't think I will ever get a user to comprehend building those (I planned to hide the utility methods in trees they can call through the TreeCall leaf, in things like the perception system, or in custom actions like findBestEnemyTarget). I wonder if a flow graph representation might be more intuitive? I often have to keep reminding my users: priority order, priority order... They keep missing or misunderstanding how the BT activation flow works. Maybe a mix of flow graphs and strict BTs (like Crysis)? I'm asking for the impossible, I know (user-level intuitiveness yet powerful expression), but I'm just trying to inch closer. Any thoughts? BTW, my previous system was visual-HFSM based, and in that the users were TRULY lost; BTs are going over relatively better. P.S. Maybe I should be looking into some automatic (partial) BT construction through combinations of higher-level rules (e.g. ogres hate grues, ogres like spiders, when raining get nervous; not these rules per se, but the concept), or partial planning?
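The response-curve part can at least be made concrete. A minimal sketch, my own and not from any particular framework (function names are illustrative; the multiplicative scoring follows the common utility-AI pattern): each consideration is normalized to [0,1], shaped by a response curve, and the shaped scores are multiplied, so any near-zero consideration vetoes the action.

```cpp
#include <cassert>
#include <cmath>

// A few standard response curves mapping a normalized input in [0,1]
// to a utility score in [0,1].
float linearCurve(float x)    { return x; }
float quadraticCurve(float x) { return x * x; }
float logisticCurve(float x, float k) {   // steepness k, centered at 0.5
    return 1.0f / (1.0f + std::exp(-k * (x - 0.5f)));
}

// Combine shaped considerations by multiplication: a consideration near
// zero drives the whole score to zero, acting as a veto.
float scoreAction(const float* considerations, int count) {
    float s = 1.0f;
    for (int i = 0; i < count; ++i)
        s *= considerations[i];
    return s;
}
```

The selector then picks the highest-scoring action (or samples among the top few). Most of the authoring difficulty in practice is choosing the normalization and the curve per consideration, which is exactly the part that is hard to hand to non-developers.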