
mikev

Member
  • Content Count: 31
  • Joined
  • Last visited

Community Reputation: 247 Neutral

About mikev
  • Rank: Member

Personal Information
  • Role: Programmer
  • Interests: Programming


  1. Hello, I have a circumstance where I have two very thin cylinders, and I want to determine the (reasonably) precise moment that they collide as they move toward each other. They will not be moving or rotating extremely fast, but because they are so thin, they may still pass through each other in a single time step. I am using the Bullet physics library (is anyone else still using Bullet, btw?) and it has a handy function that lets you perform a sweep test of a convex object as it moves from transformation A1 to transformation A2. The return value includes the fraction of the distance from A1 to A2 at which contact occurs, which is all I need. Cool! I could use this instead of discrete, once-per-frame collision checking.

    My issue is that BOTH objects are moving. Object A is moving from transform A1 to A2, and object B is moving from transform B1 to B2. Would it be reasonable to check for this collision by computing only the motion of object B relative to A (in other words, letting A be our reference frame), and then doing a sweep test of B along that relative motion? More precisely, the initial position of object B relative to object A would be D1 = A1^(-1) B1, and the final position would be D2 = A2^(-1) B2. I could then fix object A at the origin and perform a sweep test of B moving from D1 to D2 (see the sketch below). Does this seem like a reasonable approach, or how else should I do this? Any advice would be appreciated. Thanks!
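    A minimal sketch of that relative-motion idea, assuming Bullet's btCollisionWorld::convexSweepTest and a collision world that contains only object A placed at the identity transform (the function name and that setup are hypothetical placeholders):

        #include <btBulletCollisionCommon.h>

        // Sweep B through A's reference frame. A is assumed to sit at the
        // identity transform in 'world'; B moves along the relative poses
        // D1 -> D2 computed from both objects' start/end transforms.
        btScalar relativeSweep(btCollisionWorld* world,
                               const btConvexShape* shapeB,
                               const btTransform& A1, const btTransform& A2,
                               const btTransform& B1, const btTransform& B2)
        {
            // Relative pose of B as seen from A: D = A^-1 * B
            btTransform D1 = A1.inverse() * B1;
            btTransform D2 = A2.inverse() * B2;

            btCollisionWorld::ClosestConvexResultCallback cb(D1.getOrigin(),
                                                             D2.getOrigin());
            world->convexSweepTest(shapeB, D1, D2, cb);

            // 1.0 means no hit; otherwise the fraction along D1 -> D2
            // at which contact first occurs.
            return cb.m_closestHitFraction;
        }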
  2. I seem to be having an issue with making a quaternion interpolate through the shortest path. I am making use of Bullet's math/physics package and the quaternion slerp function it provides. My research tells me that when the dot product of two quaternions is negative you should negate one of them. Is my approach correct? It doesn't seem to be producing the correct result.

        void vSlerp(const btQuaternion& start, const btQuaternion& end,
                    float t, btQuaternion& result)
        {
            btQuaternion a = start.normalized();
            btQuaternion b = end.normalized();

            float dot = a.dot(b);
            if (dot < 0.0f) {
                // Negate one input so we interpolate along the shorter arc.
                b = -b;
                dot = -dot;
            }

            if (dot > 0.999995f) {
                // Nearly identical rotations: slerp is numerically unstable
                // here, so fall back to a normalized lerp.
                result = (a * (1.0f - t) + b * t).normalized();
            } else {
                result = slerp(a, b, t); // Bullet's free quaternion slerp
            }
        }

    Any idea or suggestion is greatly appreciated! Thanks!
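    As a quick sanity check for the routine above (the axis-angle btQuaternion constructor and btRadians are Bullet's; the expected result is my own reasoning):

        #include <LinearMath/btQuaternion.h>
        #include <cstdio>

        int main()
        {
            // Rotations about Z of 10 and 350 degrees. The short arc
            // between them is 20 degrees, so the midpoint should land
            // near 0/360 degrees, not near 180.
            btQuaternion a(btVector3(0, 0, 1), btRadians(10.0f));
            btQuaternion b(btVector3(0, 0, 1), btRadians(350.0f));

            btQuaternion mid;
            vSlerp(a, b, 0.5f, mid);

            // Should print a value near 0 (or 2*pi).
            printf("midpoint angle: %f rad\n", mid.getAngle());
            return 0;
        }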
  3. Thank you all for your input! Yes! Ideas like this are what I'm looking for. In my engine, I have been lazy about memory management on both the GPU and the CPU (hence a similar post on the general programming forum about C++ cleanup) and I'm paying the price for it now. I'm trying to make sure I have all my cleanup in place, and looking for a way to check that I have been thorough.
  4. It looks like, in theory, I could use:

        SDL_GL_DeleteContext(mainGLContext);
        mainGLContext = SDL_GL_CreateContext(mainWindow);

    between levels... not sure how common this approach would be...
  5. I am working with an older game engine code base that uses regular raw pointers (not smart pointers). I want to verify that the destructor for my GameLevel objects is cleaning up all the memory they allocated. My first idea was:

      1. measure the memory used by the process
      2. new up, then delete, a GameLevel object
      3. measure the memory again and make sure it's the same

    I don't know if functions exist to accurately measure the process memory. I'm using the SDL2 framework. I could refactor the code and try to find every single place it uses a raw pointer, but that could be quite a task... Ideas and advice would be appreciated (one possible approach is sketched below).
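    One way to check this on MSVC (an assumption; the post doesn't name a compiler) is the CRT debug heap, which can snapshot and diff heap state around the allocate/free cycle. GameLevel here stands in for the post's own engine class:

        #define _CRTDBG_MAP_ALLOC
        #include <crtdbg.h>

        void checkLevelCleanup()
        {
            _CrtMemState before, after, diff;
            _CrtMemCheckpoint(&before);

            GameLevel* level = new GameLevel(); // hypothetical engine class
            delete level;

            _CrtMemCheckpoint(&after);
            if (_CrtMemDifference(&diff, &before, &after))
                _CrtMemDumpStatistics(&diff); // reports anything not freed
        }

    In a debug build this catches leaks at the allocation level rather than the process level, so it isn't fooled by the heap holding on to freed pages.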
  6. I am wondering what some good strategies are to make sure I am definitely cleaning up all my GPU-allocated memory when I exit a level. I plan to allocate/de-allocate on a per-level basis. I am using the OpenGL 4.x C++ API. My thought was maybe:

      1. measure GPU memory
      2. instantiate a level, then delete it
      3. measure GPU memory again and verify that it's the same amount

    I am not sure if functions even exist to accurately measure the GPU memory used by a process. I see there are some functions I could use on my NVidia card, such as the one mentioned here: http://www.geeks3d.com/20100531/programming-tips-how-to-know-the-graphics-memory-size-and-usage-in-opengl/ but I think that only operates at the kilobyte level (see the sketch below).

    Alternatively, I believe all the GPU memory allocated for my level will be local to my GL context. So maybe I could delete and re-create the context, but I'd have to do this through SDL, and I wonder if this would be practical, or good practice, between levels. Suggestions and advice are appreciated!
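    For reference, a sketch of the NVIDIA-only query the linked article describes, via the GL_NVX_gpu_memory_info extension (availability and exact semantics are driver-dependent, so treat it as a coarse check only):

        #include <GL/glew.h> // or whichever GL loader is in use

        #ifndef GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX
        #define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
        #endif

        // Currently available dedicated video memory, in KB, or -1 if the
        // extension is unsupported (the failed query leaves kb untouched).
        GLint availableVideoMemoryKB()
        {
            GLint kb = -1;
            glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &kb);
            return kb;
        }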
  7. I am using my own custom game engine for my project. I just finished writing a graphics engine, complete with a robust FBX file loader, and I am now integrating Bullet for my physics. I am trying to choose a good approach/set of tools to allow me to quickly design static collision geometry to match my level geometry - a simplistic level editor, if you will.

    I have experimented with an approach of using Maya entirely for my level editing - essentially I can draw meshes on a 'Collision' display layer. Maya's automatic mesh naming conventions make it fairly straightforward to distinguish and parse mesh objects on this layer into the various collision shapes available in Bullet. You can adopt additional mesh/node naming and attribute-tagging conventions and write them into the FBX parser, code in grouping and redundancy optimizations, etc. (a sketch of this kind of name-based filtering follows below).

    Alternatively, while reading through Bullet's documentation I saw references to both Maya and COLLADA plugins. Maya 2015 features Bullet integration for simulation, and exporting to the "Alembic" file format. I'm researching, but I'm still not clear on whether this feature is useful to me and how quickly I could get it up and running - I am using FBX SDK 2013, so I would probably need to upgrade my loader (and maybe write an Alembic loader?), and that's time consuming. Still, if this interface allows me to create/parse collision shapes directly through Maya and the FBX SDK, as opposed to the ad-hoc approach described above, it may be worth the time investment.

    Then there's the COLLADA file format, which apparently has some conventions for collision/physics objects. I am not yet clear on whether Maya (or perhaps another editor) is useful for creating graphics and collision geometry in tandem and exporting to COLLADA files. I suppose I could use ASSIMP for loading COLLADA files, but I'm not sure if it's faster or better to put in place than my Maya-only approach. I am trying to avoid super-high-level tools. I kinda like my Maya-only approach, since it requires me to get my hands dirty with scene construction and working with Bullet, and this project is largely a learning experience for me.

    Advice, opinions and perspectives would be appreciated!
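    For what the name-based filtering might look like with the FBX SDK (the "collision_" prefix is a made-up convention; the traversal calls are standard FBX SDK):

        #include <fbxsdk.h>
        #include <vector>
        #include <cstring>

        // Recursively collect meshes whose node names mark them as
        // collision geometry under a hypothetical naming convention.
        void collectCollisionMeshes(FbxNode* node, std::vector<FbxMesh*>& out)
        {
            FbxNodeAttribute* attr = node->GetNodeAttribute();
            if (attr && attr->GetAttributeType() == FbxNodeAttribute::eMesh &&
                std::strncmp(node->GetName(), "collision_", 10) == 0)
            {
                out.push_back(static_cast<FbxMesh*>(attr));
            }
            for (int i = 0; i < node->GetChildCount(); ++i)
                collectCollisionMeshes(node->GetChild(i), out);
        }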
  8. Well done, Erik! It seems my viewport size somehow got shrunk! Now to figure out what caused that... Thank you! :)
  9. Hello! I have a G-buffer where I populate a world-position texture. I project my geometry using the World/View/Projection matrix of my camera, and the world position corresponding to each pixel is written to the texture (xyz values as floats). That appears to be working just fine.

    I am rendering a window-sized quad, using vertices (-1,-1,0), (1,-1,0), (1,1,0), (-1,1,0) directly, with no projection or transformation (just NDC coordinates). I have the world-position texture from my G-buffer available to sample. I was observing a strange bug in my fragment shader, so I coded in the following test:

        // Pick a texture coordinate
        vec2 SampTex = vec2(0.5, 0.5);
        // Get the corresponding world position from the G-buffer texture
        vec3 SampWorld = texture(gWorldPos, SampTex).xyz;
        // Project using the same camera transform used in the G-buffer pass
        vec4 SampProj = gVP * vec4(SampWorld, 1.0);
        // Perspective divide
        SampProj /= SampProj.w;
        // Map from NDC coordinates (-1..1) to texture coordinates (0..1)
        SampProj.xy = 0.5 * SampProj.xy + vec2(0.5, 0.5);

    At this point, SampProj.xy and SampTex should be approximately equal, and they are! However, if I increase my window height to nearly my screen height (after restarting my application; I don't do it dynamically), after a certain threshold there is suddenly significant vertical disagreement between SampProj.y and SampTex.y. My screen height is 768. Somewhere around 760 the disagreement abruptly begins, and the difference in texture y coordinates seems to increase linearly from there as I increase the window height (i.e. if I set a window height of 770, the coordinates disagree by about 10 pixels; if 780, they disagree by 20 pixels; etc.).

    I am wondering if this is somehow due to my windowing system having small borders and a title bar (I'm on Windows 7 and using SDL). It appears to be occurring roughly when the vertical window borders start to cross the screen size. But I tried running it in full-screen mode (I would think SDL doesn't include the borders in this instance) and observed the same issue. Does anyone have any idea what might be going on?
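    (Per item 8 above, this turned out to be the viewport size. The usual guard against that class of bug is to re-sync the GL viewport with the window's actual drawable size whenever it changes - a sketch, assuming SDL2 and a window with a current GL context:)

        #include <SDL.h>
        #include <SDL_opengl.h>

        // Re-sync the GL viewport with the window's real drawable size.
        void syncViewport(SDL_Window* window)
        {
            int w = 0, h = 0;
            // May differ from SDL_GetWindowSize on high-DPI displays.
            SDL_GL_GetDrawableSize(window, &w, &h);
            glViewport(0, 0, w, h);
        }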
  10. Thank you for your response, Matias. Well, that clears up 1. for me, but I think I didn't articulate 2. well enough. I'm implementing cube map shadows, and I need to first render the scene into the cube map and then sample it. Sampling is easy, in that I just need a direction vector from the light source to the pixel I am shading, and my texture holds the distance to the light source. Clean and simple.

    My problem is the orientation of the camera when rendering from the light source's point of view into the cube map. Consider the diagram. In this snapshot I am rendering a torus into the Negative_Y cube map face. The direction of the camera is clear, but which way is "up" from the camera's point of view is not. From the camera's point of view, up could be along the z or x axis, in the positive or negative direction, but only one of the four will put the texture in the right spot. That's why I need to know how the rendered image is mapped to the cube map for each of the 6 faces (+/-, and x/y/z). (The conventional answer is sketched below.)
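    For reference, the conventional direction/up pairs for the six cube map faces - these match GL's cube map face orientation, where each face has a top-left origin, which is why most of the up vectors point along -Y (GLM is assumed here for the look-at math):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Look-at directions and up vectors, in face order
        // +X, -X, +Y, -Y, +Z, -Z.
        struct CubeFace { glm::vec3 dir, up; };

        static const CubeFace kCubeFaces[6] = {
            { { 1, 0, 0}, {0,-1, 0} },  // GL_TEXTURE_CUBE_MAP_POSITIVE_X
            { {-1, 0, 0}, {0,-1, 0} },  // GL_TEXTURE_CUBE_MAP_NEGATIVE_X
            { { 0, 1, 0}, {0, 0, 1} },  // GL_TEXTURE_CUBE_MAP_POSITIVE_Y
            { { 0,-1, 0}, {0, 0,-1} },  // GL_TEXTURE_CUBE_MAP_NEGATIVE_Y
            { { 0, 0, 1}, {0,-1, 0} },  // GL_TEXTURE_CUBE_MAP_POSITIVE_Z
            { { 0, 0,-1}, {0,-1, 0} },  // GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
        };

        // View matrix for rendering face 'i' from a light at lightPos.
        glm::mat4 faceView(int i, const glm::vec3& lightPos)
        {
            return glm::lookAt(lightPos, lightPos + kCubeFaces[i].dir,
                               kCubeFaces[i].up);
        }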
  11. 1. In some tutorials I see the following parameters for setting up cube maps:

        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    From this, S, T and R seem to be standardized texture coordinate names. I also see that within a shader, a 'samplerCube' can be sampled using a 3D coordinate vector. Do the x/y/z coordinates of this vector correspond to the s, t, r coordinates?

    2. There are six faces I can render to when writing to a cube map: GL_TEXTURE_CUBE_MAP_{POSITIVE/NEGATIVE}_{X/Y/Z}. If I render to a particular face, what do the x and y coordinates of the render target correspond to in the cube's texture coordinates? (By x I mean 'right' and by y I mean 'up'.) In other words, I don't know how the texture is oriented when writing to the face of a cube map, and I don't know how it's oriented when reading from the face of a cube map. :\ Just trying to get my bearings!
  12. Just a tip... In many tutorials I've seen, rather than deriving the world coordinates of your pixel from NDC, you can just output them from your vertex shader and use them as input to your fragment shader. That's what I used when implementing cascaded shadow maps.

    Vertex shader:

        #version 420 // or whatever

        layout (location = 0) in vec4 position;
        layout (location = 0) out vec3 worldPos;

        uniform mat4 gMVP;
        uniform mat4 gModel;

        void main()
        {
            gl_Position = gMVP * position;
            worldPos = (gModel * position).xyz;
        }

    Fragment shader:

        #version 420

        layout (location = 0) in vec3 worldPos;
        layout (location = 0) out vec4 fragColor;

        void main()
        {
            // do whatever
        }

    Note that in your fragment shader, worldPos will be an interpolated vector corresponding to the location of the pixel on the given primitive (exactly what you want). And of course, if you want eye coordinates in your fragment shader, you'd transform by the view * model matrix in the vertex shader instead.
  13. *emerges from dark cavern, tattered clothing, singed hair, scarred, and bleeding* I AM VICTORIOUS! *holds up disembodied head of Optimus*

    So it looks like Optimus refuses to be disabled on my laptop screen - if I disable my Intel chip, everything goes black. HOWEVER! If I then connect an external monitor, it connects entirely with the NVidia GPU and I can run NSight's debugger without issue! Finally I can mark this thread as resolved. Man, what a bitch this was!
  14. "Btw, you should know how to activate NVIDIA's GPU in your application."

    Well, I know my application definitely runs on the NVidia GPU (at least partly; I'm not sure now that I know about this Optimus stuff) - I had problems long ago when I was getting started and didn't realize I was running off the Intel chip. (The usual way to request the discrete GPU is sketched below.)

    So Aks, you think I should conclude that using NSight on this machine is a hopeless case?

    -Michael
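    For reference, the documented way to ask an Optimus driver to prefer the discrete GPU is to export this symbol from the executable (from NVIDIA's "Optimus Rendering Policies" guide; Windows-only):

        // The Optimus driver checks for this exported global at load time
        // and routes the process to the NVIDIA GPU when it is set to 1.
        extern "C" {
            __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
        }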
  15. Well, now I am on the trail of trying to disable Optimus... and it looks like it might not be possible on my Dell XPS 15Z. :\ I tried disabling my Intel chip in my device manager, and that pretty much breaks everything. Resolution is dropped to 800x600 - and I'm guessing it's not GPU accelerated. Going to follow up with NVidia... man, it's really going to suck if this is a dead end for my machine, after all this work! :(