Nagle

Member

  1. There's tons of IMU code online from drone people. http://x-io.co.uk/open-source-imu-and-ahrs-algorithms/
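
    Not from that library - just to illustrate the basic sensor-fusion idea behind that kind of AHRS code, here's a rough complementary-filter sketch for pitch and roll. The function name, blend factor, and axis conventions are mine and depend on how the sensor is mounted.

        import math

        def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
            # One update step of a basic complementary filter (illustration only).
            # pitch, roll: current angle estimates in radians
            # gyro:  (gx, gy, gz) angular rates in rad/s
            # accel: (ax, ay, az) accelerometer reading, any consistent unit
            # alpha: blend factor; closer to 1.0 trusts the gyro more
            gx, gy, gz = gyro          # gz (yaw rate) unused; yaw needs a magnetometer
            ax, ay, az = accel
            # Integrate gyro rates: accurate over short intervals, drifts over time.
            pitch_g = pitch + gy * dt
            roll_g = roll + gx * dt
            # Tilt from the gravity direction: noisy, but drift-free.
            pitch_a = math.atan2(-ax, math.sqrt(ay * ay + az * az))
            roll_a = math.atan2(ay, az)
            # Blend: gyro for fast motion, accelerometer to pull drift back to zero.
            return (alpha * pitch_g + (1.0 - alpha) * pitch_a,
                    alpha * roll_g + (1.0 - alpha) * roll_a)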
  2. Nagle

    Mesh reduction in reverse

    Yes. I'd allow cutting objects with up to 6 faces, like a cube. For an L-shaped object, you'd cut with a rectangular solid and get close to an L with one cut. Doing the cuts is a CSG problem, of course. Deciding where to cut is the hard part. The metric is: make the cut that removes the most unwanted material. Think of this as a way to make impostor objects. Start with a chair model and enclose it in a bounding box. Make one cut with a cube and you're pretty much done. Then texture map onto the impostor. I've never seen this approach used, and I've done some literature searches. It's kind of obvious, and you'd think someone would have tried it.
  3. Nagle

    Mesh reduction in reverse

    You don't let it go for that many iterations. The idea is to make only a few cuts and get a rough approximation to the object.
  4. Mesh reduction is well known. You have some algorithm for removing faces and vertices while trying to maintain something close to the original shape. This usually doesn't work well if you push it too hard and try for an extreme low-poly version of the mesh. Has anyone ever looked at this from the other direction?

    Suppose you take a mesh and enclose it in a bounding box. That's the ultimate "low poly" version. Then you use a simple cutting object, such as a cube or a tetrahedron or a plane, to cut off part of the bounding box. Pick the cut that will remove the most excess volume. Repeat until some error criterion is reached.

    Here's the general idea, in 2D. We take the black design and surround it with a red bounding box. Then we clip off sections of the red area, as shown by the blue outlines. The next cut should be the one that removes the most red without removing any black. Here, two cuts would get us roughly the house shape. Enough cuts would get you the original object, but you stop long before that point.

    Somebody must have tried this; it's kind of obvious. But I've never seen it done.
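
    Here's a rough sketch of that greedy loop on a 2D occupancy grid, just to make the idea concrete. All names and the restriction to axis-aligned rectangular cuts are mine; a real version would do CSG cuts against the mesh and use a volume-based error metric.

        import numpy as np

        def greedy_cuts(occupied, max_cuts=4):
            # occupied: 2D bool array, True where the original shape is (the "black").
            # Start from the full bounding box and repeatedly remove the axis-aligned
            # rectangle of empty cells (the "red") that eliminates the most excess
            # area without touching the shape.  Returns the remaining approximation.
            kept = np.ones_like(occupied, dtype=bool)   # start: whole bounding box
            h, w = occupied.shape
            for _ in range(max_cuts):
                best_score, best_rect = 0, None
                for y0 in range(h):
                    for y1 in range(y0 + 1, h + 1):
                        for x0 in range(w):
                            for x1 in range(x0 + 1, w + 1):
                                if occupied[y0:y1, x0:x1].any():
                                    continue            # cut would remove some shape
                                score = kept[y0:y1, x0:x1].sum()  # excess removed
                                if score > best_score:
                                    best_score, best_rect = score, (y0, y1, x0, x1)
                if best_rect is None:                   # nothing left worth trimming
                    break
                y0, y1, x0, x1 = best_rect
                kept[y0:y1, x0:x1] = False              # apply the cut
            return kept

        # Tiny "house" example: square body plus a rough roof.
        house = np.zeros((8, 8), dtype=bool)
        house[3:8, 1:7] = True                          # body
        house[2, 2:6] = True                            # lower roof row
        house[1, 3:5] = True                            # roof peak
        print(greedy_cuts(house, max_cuts=2).astype(int))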
  5. Probably right. I glanced at the source, and it's mostly glue code. No code that actually does much. But at least the API is open. What are the alternatives to SpatialOS for big shared-world back ends? For something like Second Life or OpenSim. Those scale, but with difficulty. They have problems with region crossings and choke if too many players visit a single region.

    Now that somebody has done this, it should go like physics engines - first it was super hard and took an army of PhDs, and the startups that figured it out had grand ambitions and were way overvalued. Then the startups (MathEngine, Havok) went bust or had a down round and downsized. Now there are several good open source physics engines and it's no big deal.

    How good is SpatialOS's performance, anyway? If 1000 players get into a melee, will it hold up? Has anybody tried?
  6. Improbable said they were going to open-source part of their system under the MIT license: "Improbable says it has set up an emergency fund to assist partners facing financial challenges as a result of Unity's enforcement action and has pledged to make its GDK open-source under an MIT license." Now that's a big deal. They removed the old "Improbable" license and switched to the MIT license. Someone can now make an open-source compatible back end and not use the Improbable/Google servers at all. Like OpenSim did to Second Life.

    A simple single-server compatible back end would be useful; it wouldn't scale, but you could develop against it. Then a multithreaded back end, to run on a big AWS instance. That would probably get you to a few thousand users.
  7. Nagle

    Collision Detection - why GJK?

    Very similar. Here's the comparable figure from my 1998 patent on spring-damper physics for ragdolls, which expired this year. The paper from Valve is from GDC 2015. It's a "skin" and "bone" contact model. The rectangular collision objects 86-88 are rigid and are not allowed to interpenetrate. The visual geometry is out at the curved boundaries. With this approach, the closest-points vector 78-79 moves smoothly as two objects come close near a corner.

    If you use interpenetration and compute the collision as "shortest move to get out of collision", the contact vector snaps sharply as a contact reaches a corner. In a system where the time step is variable (simulation and animation rather than real-time games), the integrator keeps cutting the time step to reduce error, trying to zero in on where things suddenly changed. Timesteps can drop into the nanoseconds chasing that discontinuity. You don't want that.

    That's step one to smooth collisions. Step two is multiple contact points. That takes care of what's called static indeterminacy, which is why a table with four legs not exactly the same length wobbles on a hard surface. You need some minimal springiness or error tolerance to get anything with more than three contact points to settle. With both of these, contacts become differentiable: a tiny move will not produce a big change in forces. Now you can integrate stably.

    This was for a spring-damper system. Those are more accurate, but slower. Games use impulse-constraint systems, which can be solved in constant time but always look a little off, because there are big velocity changes within one frame time. Real-world large objects don't do that. With enough compute power, spring-damper physics might come back. It's like ray tracing - better, but expensive.
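
    A toy illustration of the multiple-contact-point part (my own sketch, not the patented system): a planar body with three legs of slightly different lengths settling onto the ground under spring-damper penalty forces. The slight springiness is what lets a statically indeterminate contact settle instead of wobbling forever.

        import math

        # A planar rigid body with three "legs" of slightly different lengths
        # settles onto the ground under spring-damper (penalty) contact forces.
        MASS, INERTIA, GRAV = 10.0, 2.0, 9.8
        K, C = 2.0e4, 4.0e2                    # contact stiffness and damping
        LEGS = [(-1.0, 0.500), (0.0, 0.501), (1.0, 0.499)]   # (x offset, leg length)

        y, theta = 0.6, 0.0                    # height of body center, tilt angle
        vy, omega = 0.0, 0.0
        dt = 1.0e-4

        for step in range(50000):              # 5 simulated seconds
            fy, torque = -MASS * GRAV, 0.0
            for x_off, length in LEGS:
                tip = y + x_off * math.sin(theta) - length   # leg tip height
                if tip < 0.0:                  # leg penetrates the ground plane
                    tip_vel = vy + x_off * omega * math.cos(theta)
                    f = max(-K * tip - C * tip_vel, 0.0)     # push only, never pull
                    fy += f
                    torque += x_off * f
            vy += (fy / MASS) * dt             # semi-implicit Euler step
            omega += (torque / INERTIA) * dt
            y += vy * dt
            theta += omega * dt

        print("settled height %.4f m, tilt %.5f rad" % (y, theta))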
  8. Nagle

    Collision Detection - why GJK?

    That must be some dumbed-down version of GJK. See Cameron's paper, section 5, for how to compute interpenetration. Here's the code. When I used this approach, I generated a collision geometry slightly inside the visual geometry. So I'd get a closest-points vector between the collision geometries and use that to keep them apart. You can compute a closest-points vector from the final simplex of GJK. The closest points can be vertex-vertex, vertex-edge, edge-edge, vertex-face, or edge-face. Not face-face, though; that's reported as vertex-face.

    I wanted a more accurate simulation than games usually use. So, from that I'd compute the two most parallel faces and the plane through the midpoint of the closest-points vector and normal to it. Then the two closest faces were projected onto that plane to get two polygons. The polygons were intersected. That's the contact polygon. Then the distance from the vertices of the contact polygon to each of the closest faces was computed, to get the distance to contact at each corner of the polygon.

    The closest-points distance was used to compute the spring energy for a spring-damper system. That energy was then distributed among the contact points in inverse proportion to their distance to contact. This made face-face contact stable. Single-point contact systems tend to have the contact jump around as objects settle, resulting in jitter. This approach doesn't have that problem. Friction was computed at the multiple contact points, so spinning-object friction worked right. Here's a video from the 1990s. Even a rattleback worked.

    This is slower than impulse-constraint physics, but it doesn't have the "boink" problem. In impulse-constraint physics, objects change direction in zero time, which looks all wrong for big objects. This is the main reason game physics doesn't look like real physics. Here's a big mecha falling, from 1997.
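
    Two pieces of that pipeline, sketched in simplified form (my own reconstruction, not the original code): intersecting the two projected faces in the contact plane with Sutherland-Hodgman clipping, and splitting the contact force over the resulting polygon's corners in inverse proportion to each corner's gap.

        def clip_polygon(subject, clip):
            # Sutherland-Hodgman: intersect a polygon with a convex clip polygon.
            # Both are lists of (x, y) points in counterclockwise order.
            # (Degenerate parallel-edge cases are ignored in this sketch.)
            def inside(p, a, b):               # left of, or on, the edge a->b
                return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0.0

            def intersect(p, q, a, b):         # segment p-q with the line through a-b
                dx, dy = q[0] - p[0], q[1] - p[1]
                ex, ey = b[0] - a[0], b[1] - a[1]
                t = ((a[0] - p[0]) * ey - (a[1] - p[1]) * ex) / (dx * ey - dy * ex)
                return (p[0] + t * dx, p[1] + t * dy)

            output = list(subject)
            for i in range(len(clip)):
                a, b = clip[i], clip[(i + 1) % len(clip)]
                pts, output = output, []
                for j in range(len(pts)):
                    p, q = pts[j], pts[(j + 1) % len(pts)]
                    if inside(q, a, b):
                        if not inside(p, a, b):
                            output.append(intersect(p, q, a, b))
                        output.append(q)
                    elif inside(p, a, b):
                        output.append(intersect(p, q, a, b))
                if not output:                 # polygons don't overlap at all
                    break
            return output

        def distribute_force(total_force, gaps, eps=1.0e-6):
            # Split a contact force over the polygon corners, inversely
            # proportional to each corner's distance to contact.
            weights = [1.0 / (g + eps) for g in gaps]
            s = sum(weights)
            return [total_force * w / s for w in weights]

        # Two overlapping unit squares -> a quarter-unit contact patch,
        # with made-up corner gaps for the force split.
        patch = clip_polygon([(0, 0), (1, 0), (1, 1), (0, 1)],
                             [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)])
        print(patch)
        print(distribute_force(100.0, [0.001, 0.002, 0.001, 0.002]))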
  9. Nagle

    Collision Detection - why GJK?

    Oh, this is still a problem? This was figured out in the 1990s, but the technology may have been lost. Back in the 1990s, I developed Falling Bodies, a 3D ragdoll system, and probably the first one that worked. Not real-time; it was for animation. Internally, it had axis-aligned bounding boxes as the outer phase, and GJK for the actual collision. I started with the implementation from Prof. Stephen Cameron at Oxford.

    I discovered that, due to floating-point roundoff error, the algorithm may not terminate, but can cycle between a few vertices. This usually happens in a physics engine as two bodies settle into contact and two faces become closer and closer to perfectly parallel. That creates a numerical situation where the small difference between two large numbers matters. That's why it won't work with 32-bit floats. Cameron's code had been tested with random polyhedra, which didn't force the failure. I could run for hours with no failures on random orientations, but when two bodies settled into face-parallel contact, it failed within seconds. I came up with a programmer-type solution which stored the last few states to detect that the algorithm was in a loop. That worked in practice. Prof. Cameron worked on a mathematical solution, and that's in his current code. It took months to get that right. He makes it available online without comments; commercial use requires a fee.

    Incremental GJK, where you use the previous state to start the closest-points search on the next frame, is near constant time after the first time a body pair gets computed. The first time, it's O(sqrt(N)), where N is the number of vertices. The solver walks across the edges of the polyhedron towards the closest point. That's the big win with GJK. High-poly convex mesh collisions work well. I was doing animation, not games, so it didn't have to be real-time and the models tended to be high-poly.

    Geometry for GJK has to be strictly convex. This means you must not have coplanar triangles. Coplanar faces will make the algorithm hang or stick, because there's no direction that leads the search for a closer point towards a solution. Your meshes for GJK thus can't be all triangles; they have to be arbitrary flat polygons. You need a minimum break angle of a few degrees at each edge for best performance. Your convex hull algorithm has to handle all this. I used to use QHull. When you do everything right, it works fine. But the details of how to get this working right may have been lost over the years.

    I sold the technology and patent to Havok back in the 1990s. It worked out quite well. The patent and NDA expired long ago, so I can talk about it now.
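
    The loop guard looked roughly like this (a reconstruction; names and structure are mine, and Cameron's current release uses a proper mathematical fix instead): remember the last few simplex states by their support-vertex indices and bail out when one repeats.

        from collections import deque

        class CycleGuard:
            # Remember the last few simplex states; report when one repeats.
            def __init__(self, history=8):
                self.recent = deque(maxlen=history)

            def looping(self, vertex_ids):
                # vertex_ids: the support-vertex indices of the current simplex.
                key = frozenset(vertex_ids)
                if key in self.recent:
                    return True                # roundoff has us revisiting a state
                self.recent.append(key)
                return False

        # Usage sketch: inside the GJK loop, check the guard each iteration and
        # return the best distance found so far if it reports a cycle.
        guard = CycleGuard(history=4)
        for ids in [(1, 2, 3), (2, 3, 4), (3, 4, 1), (2, 3, 4)]:   # last state repeats
            if guard.looping(ids):
                print("cycle detected at simplex", ids)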
  10. Nagle

    Whatever happened to Havok?

    Right, you get PhysX with Unreal now. Unity has their own physics engine. Bullet is available free. There's not much room left for Havok's once-high prices.
  11. Nagle

    Whatever happened to Havok?

    Because I want to hear from the customer side, not the marketing side.
  12. Nagle

    Whatever happened to Havok?

    Thanks. My concern is that Second Life still uses Havok, possibly some ancient version. Does Microsoft still support Havok cross-platform? Anyone know?
  13. Whatever happened to Havok, the physics engine? They used to be an independent company, offered a cross-platform physics engine, and had lots of publicly available documentation and demos. Then they were acquired by Intel. Intel didn't know what to do with them, so they sold Havok to Microsoft in 2015. The public documentation disappeared. There's still a Havok web site, but it's barely been updated since 2015; the last press release was from November 9, 2015, about an Xbox game using it. Games are still using Havok, but is it still available cross-platform, or are only Microsoft platforms supported now? Is it still under active development? What's the pricing like?