mikev

Member

31

247 Neutral

• Rank
Member

• Role
Programmer
• Interests
Programming


1. sweep test with two moving bodies

Hello, I have a situation where I have two very thin cylinders, and I want to determine the (reasonably) precise moment at which they collide as they move toward each other. They will not be moving or rotating extremely fast, but because they are so thin, they may still pass completely through each other in a single time step. I am using the Bullet physics library (is anyone else still using Bullet, btw?), and it has a handy function that performs a sweep test of a convex object as it moves from transformation A1 to transformation A2. The return value includes the fraction of the motion from A1 to A2 at which contact occurs, which is all I need. Cool! I could use this instead of discrete, once-per-frame collision checking.

My issue is that BOTH objects are moving: object A moves from transform A1 to A2, and object B moves from transform B1 to B2. Would it be reasonable to check for this collision by computing only the motion of B relative to A (in other words, letting A be our reference frame), and then performing a sweep test of B through that relative motion? More precisely, the initial position of object B relative to object A would be D1 = A1^(-1) B1, and the final position would be D2 = A2^(-1) B2. I could then fix object A at the origin and perform a sweep test of B moving from D1 to D2.

Does this seem like a reasonable approach, or how else should I do this? Any advice would be appreciated. Thanks!
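The relative-frame idea can be sketched with the transform algebra alone. The struct below is a minimal 2D stand-in for btTransform (the names Xform2D and relativeTo are mine, not Bullet's); with Bullet itself the same computation would just be btTransform D1 = A1.inverse() * B1, after which B could be swept from D1 to D2 via btCollisionWorld::convexSweepTest while A sits at the origin.

```cpp
#include <cassert>
#include <cmath>

// Minimal 2D rigid transform (rotation + translation), standing in for
// btTransform just to illustrate the algebra of D = A^(-1) * B.
struct Xform2D {
    double c, s;   // cos/sin of the rotation angle
    double tx, ty; // translation

    static Xform2D make(double angle, double tx, double ty) {
        return { std::cos(angle), std::sin(angle), tx, ty };
    }
    // Compose: apply 'o' first, then this transform.
    Xform2D operator*(const Xform2D& o) const {
        return { c * o.c - s * o.s, s * o.c + c * o.s,
                 c * o.tx - s * o.ty + tx, s * o.tx + c * o.ty + ty };
    }
    // Inverse of a rigid transform: R^T rotation, t' = -R^T t.
    Xform2D inverse() const {
        return { c, -s, -(c * tx + s * ty), s * tx - c * ty };
    }
};

// Pose of B expressed in A's reference frame: D = A^(-1) * B.
Xform2D relativeTo(const Xform2D& A, const Xform2D& B) {
    return A.inverse() * B;
}
```

One caveat worth noting: a sweep from D1 to D2 linearly interpolates the relative transform, which is only an approximation of the true combined motion when both bodies rotate, so the contact fraction it returns is approximate in exactly the same sense as the single-body sweep already is.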
2. Quaternion Slerp - Shorter way around?

I seem to be having an issue with making a quaternion interpolate through the shortest path. I am making use of Bullet's math/physics package and the quaternion slerp function it provides. My research tells me that when the dot product of two quaternions is negative, you should negate one of them. Is my approach correct? It doesn't seem to be producing the correct result.

    void vSlerp(btQuaternion& start, btQuaternion& end, float t, btQuaternion& result)
    {
        start.normalize();
        end.normalize();
        float dot = start.dot(end);
        if (dot > 0.999995f) {
            result = start;                 // inputs nearly identical
        } else if (dot >= 0.0f) {
            result = slerp(start, end, t);  // Bullet quaternion slerp
        } else {
            result = slerp(start, -end, t); // avoid going the longer way around?
        }
        result.normalize();
    }

Any idea or suggestion is greatly appreciated! Thanks!
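For reference, here is a self-contained version of shortest-path slerp using plain floats instead of btQuaternion (the struct and function names are mine). The standard recipe is: negate one endpoint when the dot product is negative, and fall back to a normalized lerp when the inputs are nearly parallel. One subtlety worth checking against the code above: the negation should happen before the near-parallel test, so that nearly opposite inputs (dot close to -1) also take the numerically stable path.

```cpp
#include <cassert>
#include <cmath>

struct Quat { float x, y, z, w; };

float dot(const Quat& a, const Quat& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

Quat normalize(Quat q) {
    float n = std::sqrt(dot(q, q));
    return { q.x / n, q.y / n, q.z / n, q.w / n };
}

// Slerp that always takes the shorter arc: if the endpoints lie in
// opposite hemispheres (dot < 0), negate one of them first.
Quat slerpShortest(Quat a, Quat b, float t) {
    a = normalize(a);
    b = normalize(b);
    float d = dot(a, b);
    if (d < 0.0f) {                 // opposite hemisphere: flip b
        b = { -b.x, -b.y, -b.z, -b.w };
        d = -d;
    }
    if (d > 0.9995f) {              // nearly parallel: lerp, avoid sin(0) division
        Quat r = { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y),
                   a.z + t * (b.z - a.z), a.w + t * (b.w - a.w) };
        return normalize(r);
    }
    float theta = std::acos(d);
    float sa = std::sin((1.0f - t) * theta) / std::sin(theta);
    float sb = std::sin(t * theta) / std::sin(theta);
    return { sa * a.x + sb * b.x, sa * a.y + sb * b.y,
             sa * a.z + sb * b.z, sa * a.w + sb * b.w };
}
```

Note that I cannot say for certain whether Bullet's own slerp() already performs the hemisphere flip internally; if it does, negating the endpoint yourself would flip it back and produce the long way around, which could explain the wrong-looking result.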
3. Ensuring proper cleanup of GPU memory in OpenGL

Thank you all for your input! Yes! Ideas like this are what I'm looking for. In my engine, I have been lazy about memory management on both the GPU and the CPU (hence a similar post on the general programming forum about C++ cleanup), and I'm paying the price for it now. I'm trying to make sure I have all my cleanup in place, and looking for a way to check that I have been thorough.
4. Ensuring proper cleanup of GPU memory in OpenGL

It looks like, in theory, I could use:

    SDL_GL_DeleteContext(mainGLContext);
    mainGLContext = SDL_GL_CreateContext(mainWindow);

between levels... not sure how common this approach would be...
5. Verifying C++ Destructor Cleanup for regular pointers

I am working with an older game-engine code base that uses regular raw pointers (not smart pointers). I want to verify that the destructor for a GameLevel object is cleaning up all the memory it allocated. My first idea was:

1. measure the memory used by the process,
2. new up, then delete, a GameLevel object,
3. measure the memory again and make sure it's the same.

I don't know if functions exist to accurately measure the process's memory. I'm using the SDL2 framework. I could refactor the code and try to find every single place it uses a raw pointer, but that could be quite a task... Ideas and advice would be appreciated.
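One self-contained way to check new/delete balance, without trying to measure process memory at all, is to replace the global allocation operators and count live allocations. This is a minimal sketch: GameLevel here is a tiny stand-in for the real engine class (an assumption, not the actual code), and the counter is not thread-safe.

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Count of allocations that have not yet been freed.
static long g_liveAllocs = 0;

void* operator new(std::size_t size) {
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    ++g_liveAllocs;
    return p;
}

void operator delete(void* p) noexcept {
    if (p) { --g_liveAllocs; std::free(p); }
}

// C++14 sized delete forwards to the counting version above.
void operator delete(void* p, std::size_t) noexcept { operator delete(p); }

// Stand-in for the engine's GameLevel; the real class is far larger,
// but the leak check works the same way.
struct GameLevel {
    int* tiles;
    GameLevel()  : tiles(new int[1024]) {}
    ~GameLevel() { delete[] tiles; }
};
```

Dedicated tools (Valgrind on Linux, or the Visual C++ CRT debug heap with _CrtDumpMemoryLeaks on Windows) do a more thorough job and report call stacks, but a counter like this is easy to drop into an existing build for a quick create/destroy balance check.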
6. OpenGL Ensuring proper cleanup of GPU memory in OpenGL

I am wondering what some good strategies are to make sure I am definitely cleaning up all my GPU-allocated memory when I exit a level. I plan to allocate/deallocate on a per-level basis. I am using the OpenGL 4.x C++ API. My thought was maybe:

1. measure GPU memory,
2. instantiate a level, then delete it,
3. measure GPU memory again and verify that it's the same amount.

I am not sure if functions even exist to accurately measure the GPU memory used by a process. I see there are some functions I could use on my NVidia card, such as the one mentioned here: http://www.geeks3d.com/20100531/programming-tips-how-to-know-the-graphics-memory-size-and-usage-in-opengl/ but I think that only operates at the kilobyte level...

Alternatively, I believe all the GPU memory allocated for my level will be local to my GL context. So maybe I could delete and re-create the context, but I'd have to do this through SDL, and I wonder whether this would be practical, or good practice, to do between levels. Suggestions and advice are appreciated!
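Core OpenGL has no portable "memory in use" query (the linked article relies on the vendor-specific GL_NVX_gpu_memory_info extension), but leaks can still be made visible at the object level: route every glGen*/glDelete* through a small ledger and assert that the counts return to zero at level teardown. The sketch below shows only the bookkeeping; the class and method names are mine, and the actual GL calls are left as comments since they need a live context.

```cpp
#include <cassert>
#include <map>
#include <string>

// Per-level ledger of outstanding GPU objects. Real code would make the
// GL call at each marked point; only the counting is shown here.
class GpuObjectTracker {
    std::map<std::string, long> live_; // e.g. "texture" -> outstanding count
public:
    void onCreate(const std::string& kind, int count = 1) {
        live_[kind] += count;          // call right after glGen*/glCreate*
    }
    void onDelete(const std::string& kind, int count = 1) {
        live_[kind] -= count;          // call right after glDelete*
    }
    long outstanding(const std::string& kind) const {
        auto it = live_.find(kind);
        return it == live_.end() ? 0 : it->second;
    }
    // True only when every create has been matched by a delete.
    bool allReleased() const {
        for (const auto& kv : live_)
            if (kv.second != 0) return false;
        return true;
    }
};
```

At the end of a level's destructor, an assert(tracker.allReleased()) immediately names the object type that leaked, which is more actionable than a raw byte count even where one is available.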

8. Window-sized texture problems when nearing/exceeding screen size

Well done, Erik! It seems my viewport size somehow got shrunk!   Now to figure out what caused that...   Thank you! :)
9. Window-sized texture problems when nearing/exceeding screen size

Hello! I have a G-Buffer where I populate a world-position texture. I project my geometry using the World/View/Projection matrix of my camera, and the world position corresponding to each pixel is written to the texture (xyz values as floats). That appears to be working just fine.

I am rendering a window-sized quad, using the vertices (-1,-1,0), (1,-1,0), (1,1,0), (-1,1,0) directly, with no projection or transformation (just NDC coordinates). I have the world-position texture from my G-Buffer available to sample. I was observing a strange bug in my fragment shader, so I coded in the following test:

    // Pick a texture coordinate
    vec2 SampTex = vec2(0.5, 0.5);
    // Get the corresponding world position from the world-position texture
    vec3 SampWorld = texture(gWorldPos, SampTex.xy).xyz;
    // Project using the same camera transform used in the G-Buffer
    vec4 SampProj = gVP * vec4(SampWorld.xyz, 1);
    // Normalize by w
    SampProj /= SampProj.w;
    // Move from NDC coordinates (-1,1) to texture coordinates (0,1)
    SampProj.xy = 0.5 * SampProj.xy + vec2(0.5, 0.5);

At this point, SampProj.xy and SampTex should be approximately equal, and they are! However, if I increase my window height to nearly my screen height (after restarting my application; I don't resize it dynamically), then past a certain threshold there is suddenly significant vertical disagreement between SampProj.y and SampTex.y.

My screen height is 768. Somewhere around 760 the disagreement abruptly begins, and the difference in texture y coordinates seems to increase linearly from there as I increase the window height (i.e. if I set a window height of 770, the coordinates disagree by about 10 pixels; if 780, they disagree by 20 pixels; etc.).

I am wondering if this is somehow due to my windowing system having small borders and a title bar (I'm on Windows 7 and using SDL). It appears to occur roughly when the vertical window borders start to cross the screen size. But I tried running it in full-screen mode (I would think SDL doesn't include the borders in that case) and observed the same issue.

Does anyone have any idea what might be going on?
10. cube map coordinates orientation

Thank you for your response, Matias. Well, that clears up 1. for me, but I think I didn't articulate 2. well enough.

I'm implementing cube-map shadows, and I need to first render the scene into the cube map and then sample it. Sampling is easy, in that I just need a direction vector from the light source to the pixel I am shading, and my texture holds the distance to the light source. Clean and simple.

My problem is the orientation of the camera when rendering from the light source's point of view into the cube map. Consider the diagram. In this snapshot I am rendering a torus into the NEGATIVE_Y cube-map face. The direction of the camera is clear, but which way is "up" from the camera's point of view is not. From the camera's point of view, up could be along the z or x axis, in either the positive or negative direction, but only one of the four will put the texture in the right spot. That's why I need to know how the rendered image is mapped to the cube map for each of the 6 faces (+/-, and x/y/z).
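For what it's worth, the per-face camera orientations most renderers use when filling a cube map (i.e. exactly the "up" choice being asked about) follow a fixed convention. The sketch below records the commonly used forward/up pairs for the faces in +X, -X, +Y, -Y, +Z, -Z order; the type names are mine, and the exact signs should be verified against your own renderer's handedness, but note that the Y faces are the odd ones out, with "up" along the z axis.

```cpp
#include <array>
#include <cassert>

struct Vec3 { float x, y, z; };
struct FaceCamera { Vec3 forward, up; };

// Conventional forward/up vectors for rendering into each cube-map face,
// in GL_TEXTURE_CUBE_MAP_POSITIVE_X .. _NEGATIVE_Z order (an assumption
// to verify against your renderer, not something GL enforces).
const std::array<FaceCamera, 6> kCubeFaceCameras = {{
    { { 1, 0, 0 }, { 0, -1,  0 } },  // +X
    { { -1, 0, 0 }, { 0, -1,  0 } }, // -X
    { { 0, 1, 0 }, { 0,  0,  1 } },  // +Y
    { { 0, -1, 0 }, { 0,  0, -1 } }, // -Y
    { { 0, 0, 1 }, { 0, -1,  0 } },  // +Z
    { { 0, 0, -1 }, { 0, -1,  0 } }, // -Z
}};
```

Each pair feeds a lookAt-style view matrix (eye at the light, target eye + forward, with the listed up vector); using a different but consistent set of up vectors also "works", it just writes each face with a different rotation than samplers expect.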
11. cube map coordinates orientation

1. In some tutorials I see the following parameters for setting up cube maps:

    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

From this, S, T, and R seem to be standardized texture-coordinate names. I also see that within a shader, a 'samplerCube' can be sampled using a 3D coordinate vector. Do the x/y/z components of this vector correspond to the s, t, r coordinates?

2. There are six faces I can render to when writing to a cube map: GL_TEXTURE_CUBE_MAP_{POSITIVE/NEGATIVE}_{X/Y/Z}. If I render to a particular face, what do the x and y coordinates of the render target correspond to in the cube's texture coordinates? (By x I mean 'right' and by y I mean 'up'.)

In other words, I don't know how the texture is oriented when writing to a face of a cube map, and I don't know how it's oriented when reading from a face of a cube map. :\ Just trying to get my bearings!
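On question 1: in the GL spec's terms, the three components of the direction vector given to a samplerCube are the (s, t, r) texture coordinates, and the face is chosen by whichever component has the largest magnitude. That selection rule is easy to state in code; this is an illustrative sketch (the function name is mine), with faces numbered 0..5 in +X, -X, +Y, -Y, +Z, -Z order:

```cpp
#include <cassert>
#include <cmath>

// Which cube-map face a direction vector selects: the face of the
// largest-magnitude component, signed by that component (the rule the
// GL spec uses for cube-map sampling). Ties favor X, then Y.
int selectCubeFace(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x >= 0.0f ? 0 : 1; // +/-X
    if (ay >= az)             return y >= 0.0f ? 2 : 3; // +/-Y
    return                           z >= 0.0f ? 4 : 5; // +/-Z
}
```

The remaining two components then become the in-face 2D coordinates, with per-face sign flips that the spec lays out in a table; that table is also the answer to question 2, since it defines how the rendered image is oriented within each face.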

13. Knowing my REAL OpenGL version - RESOLVED

*emerges from dark cavern, tattered clothing, singed hair, scarred, and bleeding*  I AM VICTORIOUS! *holds up disembodied head of Optimus*   So it looks like Optimus refuses to be disabled on my laptop screen - if I disable my Intel chip, everything goes black. HOWEVER! If I then connect an external monitor, it connects entirely with the NVidia GPU and I can run NSight's debugger without issue!   Finally I can mark this thread as resolved. Man, what a bitch this was!
14. Knowing my REAL OpenGL version - RESOLVED

"Btw, you should know how to activate NVIDIA's GPU in your application." Well, I know my application definitely runs on the NVidia GPU (at least partly; I'm not so sure now that I know about this Optimus stuff). I had problems long ago when I was getting started, and I didn't realize I was running off the Intel chip.

So Aks, do you think I should conclude that using NSight on this machine is a hopeless case?

-Michael
15. Knowing my REAL OpenGL version - RESOLVED

Well, now I am on the trail of trying to disable Optimus... and it looks like it might not be possible on my Dell XPS 15Z. :\ I tried disabling my Intel chip in Device Manager, and that pretty much breaks everything. The resolution drops to 800x600, and I'm guessing it's not GPU-accelerated. Going to follow up with NVidia... man, it's really going to suck if this is a dead end for my machine, after all this work! :(