Conoktra

Member

  • Content count: 79
  • Community Reputation: 144 Neutral
  • Rank: Member


  1. Thank you to everyone for the help!  It turns out it was the "sliver" problem: marching cubes was generating some triangles that were simply too small and narrow in certain places.  Tweaking the scale increased the stability.  Thanks again.  (One way to filter such slivers out of the collision mesh is sketched below.)
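     For reference, a minimal sketch of filtering such slivers out while building the collision mesh.  It assumes the vertex data is available as btVector3 and uses btTriangleMesh rather than the btTriangleIndexVertexArray from the original post; isSliver() and its area threshold are made up for illustration, not taken from the thread:

     #include <btBulletDynamicsCommon.h>

     // Hypothetical helper: true when a triangle is too small or narrow to be
     // a reliable collision face.  The threshold is an assumed tuning value.
     static bool isSliver(const btVector3 &a, const btVector3 &b, const btVector3 &c,
                          btScalar areaThreshold = btScalar(1e-4))
     {
         // The cross product's length is twice the triangle's area.
         return (b - a).cross(c - a).length() < btScalar(2) * areaThreshold;
     }

     // Feed only non-sliver triangles to Bullet.
     btTriangleMesh *buildCollisionMesh(const btVector3 *vertices,
                                        const s32 *indices, size_t indexCount)
     {
         btTriangleMesh *triMesh = new btTriangleMesh();
         for (size_t i = 0; i + 2 < indexCount; i += 3)
         {
             const btVector3 &a = vertices[indices[i]];
             const btVector3 &b = vertices[indices[i + 1]];
             const btVector3 &c = vertices[indices[i + 2]];
             if (!isSliver(a, b, c))
                 triMesh->addTriangle(a, b, c);
         }
         return triMesh; // wrap in a btBvhTriangleMeshShape as before
     }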
  2. Hello.  I am using Bullet Physics 2.82 and am getting some weird results.  I build some voxel terrain and pass the mesh information to Bullet Physics (via btTriangleIndexVertexArray with btBvhTriangleMeshShape).  It works fantastically, except that in certain spots basic shapes can fall through the terrain.  Here is a video of it:

     I have tried quite a bit to fix it, and post here as a "last resort" of sorts.  Help is very much appreciated!  I have tried:

     • Rebuilding Bullet Physics with double precision.
     • Increasing the simulation rate to 120 and even 240 steps/second.
     • Increasing all objects' collision margins (btCollisionShape::setMargin()) to larger values (0.1, 0.5, 1.0).
     • Reversing the vertex winding order (in case it was a backward-facing polygon allowing the fall-through).

     And still objects fall through the terrain!  There is no visual deformity: the terrain is built and displayed correctly.  The debug drawer also shows the terrain is built in that area.  The physics shape is built from the same data that is used for the rendering.  Any idea on what could cause this?

     Terrain initialization (performed on each chunk of the terrain's octree):

     // pre-filled, just here to show you the declaration
     s32 * indices;
     btScalar * vertices;

     physicsMesh = new btTriangleIndexVertexArray(
         mesh->indices.size() / 3, indices, sizeof(s32) * 3,
         mesh->vertices.size(), vertices, sizeof(btScalar) * 3);
     physicsShape = new btBvhTriangleMeshShape(physicsMesh, true, true);
     physicsShape->setMargin(0.04); // tried changing to 0.1/0.5/1.0

     btVector3 inertia(0, 0, 0);
     physicsShape->calculateLocalInertia(0.0, inertia);

     physicsMotionState = new btDefaultMotionState(
         btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 0, 0)));
     btRigidBody::btRigidBodyConstructionInfo rigidBodyCI(
         0,                  // mass (static terrain)
         physicsMotionState, // initial position
         physicsShape,       // collision shape of body
         inertia             // local inertia
     );
     physicsRigidBody = new btRigidBody(rigidBodyCI);
     physicsRigidBody->setLinearVelocity(btVector3(0, 0, 0));
     physicsRigidBody->setAngularVelocity(btVector3(0, 0, 0));
     physicsRigidBody->setFriction(1.0);

     // This is done in a separate thread.  Making it single-threaded made no difference.
     bulletManager.lock();
     bulletManager.addRigidBody(physicsRigidBody);
     bulletManager.unlock();

     Basic shape initialization:

     physicsShape = new btSphereShape(minGroupRange);
     physicsShape->setMargin(0.04); // tried 0.1/0.5/1.0

     btVector3 inertia(0, 0, 0);
     physicsShape->calculateLocalInertia(1.0, inertia);

     btRigidBody::btRigidBodyConstructionInfo rigidBodyCI(
         1.0,          // mass
         this,         // initial position (this object is the motion state)
         physicsShape, // collision shape of body
         inertia       // local inertia
     );
     physicsRigidBody = new btRigidBody(rigidBodyCI);
     physicsRigidBody->clearForces();
     physicsRigidBody->setLinearVelocity(btVector3(0, 0, 0));
     physicsRigidBody->setAngularVelocity(btVector3(0, 0, 0));
     //physicsRigidBody->setActivationState(DISABLE_DEACTIVATION);

     BulletManager::get().addRigidBody(physicsRigidBody);

     And updating Bullet Physics:

     dynamicsWorld->stepSimulation(timer.getDeltaTime_Seconds());
     //dynamicsWorld->stepSimulation(timer.getDeltaTime_Seconds(), 10, 1.0 / 240);
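     One mitigation the list above doesn't cover is Bullet's continuous collision detection (CCD), which sweeps fast-moving bodies between steps instead of teleporting them, so thin or sliver triangles can't be skipped over.  A minimal sketch, applied to the dynamic sphere bodies; the threshold and radius values are assumptions that would need tuning:

     // Enable CCD on the dynamic body (sketch; values are assumed, not tested).
     // CCD engages once the body moves farther than the threshold in one step.
     physicsRigidBody->setCcdMotionThreshold(0.5);
     // Radius of the swept sphere; keep it no larger than the collision shape.
     physicsRigidBody->setCcdSweptSphereRadius(0.2);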
  3. Thanks for the help!  It's being rendered as two triangles.  I've read that OpenGL uses the "Barycentric coordinate system", which might cause the issue as seen here.  But that individual was rendering in 2D, so he was able to bypass the problem by generating his texture coordinates from screen space (rather than interpolating them from the vertices).

     I am drawing two quads (one red and one green) that overlay and blend together to create a linear fade from red to green (as is seen in image B).  Somehow the interpolation of the color channel is non-linear, resulting in the background bleeding through as is seen in image A.  Code:

     // Draw the red quad
     glBegin(GL_TRIANGLES);
     glColor4f(1, 0, 0, 0); glVertex3f(0, 1, 0);
     glColor4f(1, 0, 0, 0); glVertex3f(1, 1, 1);
     glColor4f(1, 0, 0, 1); glVertex3f(1, 1, 0);
     glColor4f(1, 0, 0, 1); glVertex3f(0, 1, 1);
     glColor4f(1, 0, 0, 0); glVertex3f(1, 1, 1);
     glColor4f(1, 0, 0, 0); glVertex3f(0, 1, 0);
     glEnd();

     // Draw the green quad
     glBegin(GL_TRIANGLES);
     glColor4f(0, 1, 0, 1); glVertex3f(0, 1, 0);
     glColor4f(0, 1, 0, 1); glVertex3f(1, 1, 1);
     glColor4f(0, 1, 0, 0); glVertex3f(1, 1, 0);
     glColor4f(0, 1, 0, 0); glVertex3f(0, 1, 1);
     glColor4f(0, 1, 0, 1); glVertex3f(1, 1, 1);
     glColor4f(0, 1, 0, 1); glVertex3f(0, 1, 0);
     glEnd();

     See the above for vertex colors.  The blend mode is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).  I am using sRGB.  I am using Ogre3D, so I can't verify for sure that gamma correction isn't happening.

     I got Googling and added "fragColour.a = pow(outColour.a, 1.0 / 5.0);" to my shader rather than "fragColour.a = outColour.a;" and it seems to have done the trick!  Reading the OpenGL registry, though, says gamma correction shouldn't happen on alpha values for textures.  What about vertex colors?  That, and a gamma factor of 5 is a lot bigger than the standard 2.2.
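     If the sRGB framebuffer is the suspect, one quick experiment (a sketch assuming an OpenGL 3.0+ context with direct GL access, which Ogre3D may not expose; drawFadingQuads() is a hypothetical helper standing in for the two quads above) is to toggle sRGB conversion and compare:

     // If the fade changes, the non-linearity is the sRGB conversion applied
     // to the colour channels during blending; per the spec, alpha itself is
     // never sRGB-converted.
     glDisable(GL_FRAMEBUFFER_SRGB); // blend in raw, non-sRGB space
     drawFadingQuads();              // hypothetical helper
     glEnable(GL_FRAMEBUFFER_SRGB);  // restore sRGB conversion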
  4. I have a rather simple problem that I am positive has a simple solution.  I am rendering a quad twice as two layers, each with a color.  The first is green, the second is red.  I am using the alpha channel of the color component for the quad's vertices to blend the two colors together across the quads.

     Attached below are two images.  Image A is how it looks when rendered with OpenGL, image B is how it should look.  The black color in image A is the background bleeding through.  If OpenGL interpolated the color values linearly there would be no bleeding; it would be a nice transition from red to green and no black would show.  But that's obviously not the case.

     I've tried using a GLSL program and setting the color variable to noperspective, but it makes no difference.  Is there a way to force OpenGL to do plain linear interpolation of my vertex colors, so that the red and green blend evenly across the quad like in image B?

     Additional information:

     Red layer alpha values:
     1.0 -------- 0.0
     |              |
     |              |
     |              |
     0.0 -------- 1.0

     Green layer alpha values:
     0.0 -------- 1.0
     |              |
     |              |
     |              |
     1.0 -------- 0.0

     Thanks!
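     Not an answer from the thread, but one workaround that sidesteps blending entirely: draw a single quad whose vertex colours already encode the red-to-green fade at full alpha, so the background can never bleed through.  A sketch using the same corner layout as the alpha diagrams above:

     // Single-pass alternative (sketch): bake the fade into opaque colours.
     // Corners follow the diagrams: where red's alpha is 1.0 the vertex is
     // pure red, and where green's alpha is 1.0 it is pure green.
     glBegin(GL_TRIANGLES);
     glColor4f(0, 1, 0, 1); glVertex3f(0, 1, 0); // green corner
     glColor4f(0, 1, 0, 1); glVertex3f(1, 1, 1); // green corner
     glColor4f(1, 0, 0, 1); glVertex3f(1, 1, 0); // red corner
     glColor4f(1, 0, 0, 1); glVertex3f(0, 1, 1); // red corner
     glColor4f(0, 1, 0, 1); glVertex3f(1, 1, 1); // green corner
     glColor4f(0, 1, 0, 1); glVertex3f(0, 1, 0); // green corner
     glEnd();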
  5. Conoktra

    Template specialization and class references

    Even the most sound code can be broken when you throw scenarios at it that are specifically designed to break it.  My compiler doesn't mangle the class names, nor do I use namespaces, so neither is an issue.
  6. Conoktra

    Template specialization and class references

      tolua++ needs to know the class's typename, or else it's treated as a void* inside Lua.  The "typeid(T).name() + 6" skips the "class " prefix and passes only the class's unique type name ("class MyClass" -> "MyClass"), which is what tolua++ expects.  luaPushArg() has specialized template instances for all non-class types (thus avoiding the issue of instantiating the template with, say, an int); one such instance is sketched below.  That way I can pass any class to Lua without having to define a specialized instance of luaPushArg() for each class type.  With hundreds of classes exported to Lua, this saves a lot of time and code.

     Portability isn't an issue.  If it were, fixing it would be a matter of a couple of #ifdefs.  Whoopdeedoo.

     It's not possible to have two classes with the same typename.  Period.  Try to compile this:

     class A { };
     class A { }; // error C2011: 'A' : 'class' type redefinition
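     A minimal sketch of what one of those specialized instances might look like; lua_pushinteger() is the standard Lua C API call, but the overload itself is a guess at the shape of the poster's code, not quoted from it:

     // Hypothetical specialization for a non-class type: ints go straight
     // through the plain Lua stack API instead of tolua_pushusertype().
     template<>
     inline void luaPushArg<int>(lua_State * const LuaState, int arg)
     {
         lua_pushinteger(LuaState, arg);
     }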
  7. Consider the following:

     class Vector3
     {
     public:
         f64 x, y, z;
     };

     // pass an unknown userdata type to lua
     template<typename T>
     inline void luaPushArg(lua_State * const LuaState, T arg)
     {
         tolua_pushusertype(LuaState, (void*)&arg, typeid(T).name() + 6);
     }

     void testFunction(const Vector3 &v)
     {
         luaPushArg(LuaState, v);
         luaCallFunction("TestFunction"); // CRASH (only sometimes though!)
         luaPop();
     }

     What went wrong here?  Can you spot the bug?  This one was a real pain in the tush.  luaPushArg() would work with all my specialized types (int, float, etc.) that I was passing to Lua, but when I passed classes it would sometimes crash.  Turns out that luaPushArg() is taking a T arg instead of a T &arg.  This means that a new copy of 'v' is created when testFunction() calls luaPushArg().  luaPushArg() then pushes the newly created object onto the Lua stack.  Upon luaPushArg()'s return, the pointer to the class object that was just pushed onto the Lua stack is invalidated.  Sometimes it would crash, sometimes it wouldn't.  This one was a real nightmare.

     Hehehe.  I can't wait for C11 support in GCC; that way bugs like this can be avoided using type-generic expression macros.
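     The fix described above, spelled out as a sketch: take the parameter by reference so the pointer handed to Lua refers to the caller's object instead of a copy that dies when luaPushArg() returns (the caller's object must still outlive Lua's use of the userdata):

     // Fixed version (sketch): pass by const reference so no copy is made.
     template<typename T>
     inline void luaPushArg(lua_State * const LuaState, const T &arg)
     {
         // &arg now points at the caller's object, which must stay alive for
         // as long as Lua can reach this userdata.
         tolua_pushusertype(LuaState, (void*)&arg, typeid(T).name() + 6);
     }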