
coderchris

Member
  • Content count

    307
  • Joined

  • Last visited

Community Reputation

305 Neutral

1 Follower

About coderchris

  • Rank
    Member

Personal Information

  • Industry Role
    Programmer
  • Interests
    Programming

  1. coderchris

    sweep test with two moving bodies

    Spot on. That's typically how you would handle the case of two moving rigid objects: fix object A and subtract object A's transform from object B. I'm not sure how Bullet's swept function works internally, but if you want to implement it yourself, you could look into conservative advancement.
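    If it helps, here is a minimal sketch of a conservative advancement loop for that relative-motion setup. This is not Bullet's API; Shape, closestDistance(), maxRadius(), and length() are placeholder names for your own geometry code:

        // Conservative advancement sketch: object A is held fixed, object B moves with
        // the relative linear velocity v_rel and relative angular speed w_rel (radians/s).
        // closestDistance() and maxRadius() are hypothetical helpers for your own shapes.
        float conservativeAdvancement(const Shape& A, const Shape& B,
                                      const float2& v_rel, float w_rel,
                                      float dt, int maxIters = 32)
        {
            float t = 0.0f;
            for (int i = 0; i < maxIters; ++i)
            {
                float d = closestDistance(A, B, t); // closest distance with B advanced to time t
                if (d < 1e-4f)
                    return t;                       // time of impact found

                // Upper bound on how fast the closest points can approach each other
                float bound = length(v_rel) + w_rel * maxRadius(B);
                if (bound < 1e-8f)
                    break;                          // not approaching, no impact this step

                t += d / bound;                     // safe step: cannot tunnel past the contact
                if (t >= dt)
                    break;                          // no impact within this time step
            }
            return dt;                              // treat as no collision this step
        }

    The loop terminates either at the time of impact or at the end of the step; the iteration cap keeps degenerate cases bounded.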
  2. Yeah, that sounds right. I emailed the author and asked for clarification on this. He confirmed that the final K matrix is indeed the linear combination of Hessians. I've implemented it this way in my physics engine and it seems to be working as described in the paper.

     I found MATLAB to be of great help when deriving K. For anyone else who gets stuck, here is an example of computing K for a simple length constraint between points "A" and "B" with rest length "l":

         syms A1 A2 B1 B2 lambda l
         A = [A1; A2];
         B = [B1; B2];
         phi = norm(A - B) - l;
         J = jacobian(phi, [A1, A2, B1, B2]);
         K_til = jacobian(transpose(J), [A1, A2, B1, B2]) * lambda;
         pretty(K_til)

     The same logic should work for any type of constraint.
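     Written out (my notation, not copied from the paper), the assembly the author confirmed is just a weighted sum of per-constraint Hessians:

         % K assembled from per-constraint Hessians (notation mine):
         % x in R^{3n}, constraints \phi_c(x), multipliers \lambda_c, c = 1..C
         K(x, \lambda)
           \;=\; \frac{\partial\, J^{T}\lambda}{\partial x}
           \;=\; \sum_{c=1}^{C} \lambda_c \,
                 \frac{\partial^{2} \phi_c}{\partial x\, \partial x}(x)
           \;\in\; \mathbb{R}^{3n \times 3n}

     So each constraint contributes its own (3n x 3n) Hessian scaled by its multiplier, and the contributions are summed (scattered into the sparse system in practice).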
  3. This is in reference to "Stable Constrained Dynamics": https://hal.inria.fr/hal-01157835/document, equations (18) and (21).

     I'm having trouble understanding how to build this K "geometric stiffness" term:

         K = (∂J^T / ∂x) λ

     where J is the constraint Jacobian and λ is the vector of constraint force magnitudes.

     What I do know: based on its usage in (21), K should be a (3n x 3n) matrix in 3D, where n is the number of particles, say.

     What I'm confused about: the Jacobian J is a (C x 3n) matrix, where C is the number of constraints, and λ is (C x 1). This doesn't seem to work out in terms of the matrix dimensions... What am I missing here?

     If I consider only a single constraint, then it does appear to work out: λ is a scalar and K ends up being (3n x 3n). However, that leads to the question of how to build K such that it contains all of the individual constraint K's (one K for each constraint, I guess)?
  4. coderchris

    Divide and conquer broadphase

    Thanks for the feedback, Aressera. Yeah, it's a tricky comparison. I guess it comes down to whether or not you are tracking pairs between frames. I have a benchmark setup in which about half my scenes have everything moving. When I am comparing methods, I use the total time required to simulate all scenes for some number of steps. So it's not a very scientific comparison, to be sure.

    For my BVH, I refit each frame and apply internal tree rotations based on the SAH cost (using the method presented here: https://www.cs.utah.edu/~aek/research/tree.pdf). It's very likely that I have a sub-optimal BVH implementation though (no SIMD, poor cache locality).

    Indeed, if I track pairs between frames and do not query objects which haven't left their bounds, it costs very little. That requires me to use a fattened AABB so that I don't re-test an object unless it has moved out of its bounds. However, the fattened AABB generates a bunch of extra pairs that my narrowphase has to check, which is particularly expensive in my case. I think that's why I'm seeing lower than normal overall performance. One day I will do a more scientific comparison with some existing BVH implementations. Overall, the divide and conquer method is just a novelty due to the memory savings and simplicity - I don't foresee using it in my own engine, because I do want to optimize for the case where nothing is moving.

    That's very interesting. The octree is one I haven't tried yet; I will have to give it a go! I'm still trying to beat my uniform hash grid. For the octree, do you only insert an object into the node in which it entirely fits, or into all nodes it overlaps? I'm guessing just the one it fits in, since the object has a pointer to the node it's in. What metric did you use for determining when to subdivide a node? I suppose when testing for collisions, you need to check the node an object is in, as well as traverse down any intersecting children, right?
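    In case it clarifies the pair-tracking setup I'm describing: the fattened-AABB bookkeeping can be as simple as the sketch below. The aabb type and its contains() / expanded() helpers are assumed to come from your own math library; MARGIN is just a tuning knob, not a value from any particular engine.

        // Fat-AABB update policy sketch: only re-query the broadphase for an object
        // once its tight bounds escape the fattened bounds stored at the last insertion.
        struct Proxy
        {
            aabb fatBounds;   // tight bounds inflated by MARGIN at (re)insertion time
            int  leaf;        // handle of the object's leaf node in the BVH
        };

        const float MARGIN = 0.1f;  // bigger = fewer re-inserts, but more false pairs for the narrowphase

        // Returns true if the object moved out of its fat bounds and must be re-inserted / re-queried.
        bool updateProxy(Proxy& p, const aabb& tightBounds)
        {
            if (p.fatBounds.contains(tightBounds))
                return false;                        // still inside: keep the cached pairs

            p.fatBounds = tightBounds.expanded(MARGIN);
            return true;                             // caller refits the leaf and re-queries for pairs
        }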
  5. coderchris

    Divide and conquer broadphase

    Ah yes, that should be std::partition (I "cleaned up" my implementation for the post but missed that :P). Yeah, it's sort of like a top-down BVH build with pair checking at the same time. The main difference from a BVH is that an object can potentially appear in both subtrees, hence the need to partition twice. And yeah, that makes sense: you could copy the index array and then run the left and right halves in parallel, roughly as sketched below. It wouldn't be fixed memory usage anymore, but we've got plenty of memory these days, so no biggie.
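    Something like this, as a sketch only - it assumes the dac_bp function from my original post (further down on this page) and gives each branch its own pair set so no locking is needed:

        // Parallel variant sketch: the partitioned index ranges are copied so the left
        // and right halves can recurse concurrently; results are merged afterwards.
        #include <future>

        void dac_bp_parallel(const vector<aabb>& bounds,
                             vector<int> leftIndices,    // copy of [start, mid_left)
                             vector<int> rightIndices,   // copy of [start, mid_right)
                             set<pair>& pairs)
        {
            set<pair> leftPairs, rightPairs;

            auto leftTask = std::async(std::launch::async, [&]() {
                dac_bp(bounds, leftIndices.begin(), leftIndices.end(), leftPairs);
            });
            dac_bp(bounds, rightIndices.begin(), rightIndices.end(), rightPairs);
            leftTask.wait();

            pairs.insert(leftPairs.begin(), leftPairs.end());
            pairs.insert(rightPairs.begin(), rightPairs.end());
        }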
  6. coderchris

    Divide and conquer broadphase

    Hello everyone, I just wanted to share what I believe to be a novel broadphase collision detection technique that I stumbled on while working on my physics engine. If anyone knows of related work, please refer me! The code will be at the end of this post, but first some discussion.

    Briefly, the goal of a broadphase is to determine potentially colliding pairs of objects. There exist many well known techniques, such as uniform / hierarchical grids, bounding volume hierarchies, sweep and prune, or quadtrees. They all have their respective pros and cons, which I won't go into here. One thing they all have in common, however, is that they require building and maintaining a data structure which facilitates the process of finding pairs.

    Divide and conquer broadphase is based on the relatively new "divide and conquer ray tracing" technique. The idea is to recursively subdivide space, partitioning objects into these spaces, until there are sufficiently few objects in a space. When there are few enough objects, use a brute force method to find the potential pairs. The great thing about this technique is that it requires no data structures to be built and it uses a fixed amount of additional memory. In addition, the code is surprisingly simple. The code shown works in 2D. It is trivially extended to 3D or beyond by changing only the partitioning method to consider the additional dimensions.

    Performance analysis: Subjectively speaking, I have tested this method against many others in my own physics engine on a variety of scenes. It performs very nicely, surpassing the dBVT in many cases (especially when many objects are in motion). It rivals the speed of my current best, the uniform grid. From an analytical point of view, the best case is O(n log n) when the objects are not overlapping, and the worst case is O(n^2) when all objects occupy the same space. However, in typical physics simulation scenes, objects are nicely distributed and do not overlap much (assuming the constraint solver is doing its job). O(n log n) doesn't sound that great considering BVHs achieve the same on paper, but unlike BVHs, no structure needs to be maintained or updated, which lets it perform better in practice. Memory usage, as you will see, is minimal. The algorithm is run from scratch at the beginning of each time step.

    Potential optimizations: There are a few avenues for optimization. The first and most obvious is to unroll the recursion into a loop with an explicit stack, in order to reduce the overhead of the recursive calls. Another is to use a more intricate partitioning method, rather than the simple one shown here. Note that I use a set to track pairs, since this method can produce duplicate pairs. The set is actually rather slow; you can easily swap it for a vector, then sort and remove duplicates at the end for a nice performance boost (a sketch of that variant follows the implementation below). There exists a parallel version of "divide and conquer ray tracing"; I'm sure the ideas presented there could be used to parallelize this technique as well, though I have not looked into it. Lastly, one could potentially cache the partition locations and use those as a starting point for computing the partition locations in the next frame. This exploits the temporal coherence of physics simulations, and could theoretically halve the cost of the whole process, since computing the partition location is half of the work in each call.
    Here is my implementation (feel free to use this however you wish). Note that aabb, float2, pair, min, and max are types / helpers from my own math library; you'll need equivalents of your own:

        #include <vector>
        #include <set>
        #include <algorithm>
        #include <cfloat>

        using namespace std;

        // Divide and conquer broadphase
        void dac_bp(const vector<aabb>& bounds, const vector<int>::iterator start,
                    const vector<int>::iterator end, set<pair>& pairs)
        {
            const int BRUTE_FORCE_THRESH = 32;

            if ((end - start) < BRUTE_FORCE_THRESH)
            {
                // Brute force check the remaining pairs in this interval
                for (auto i = start; i != end; i++)
                {
                    for (auto j = i + 1; j != end; j++)
                    {
                        if (bounds[*i].intersects(bounds[*j]))
                            pairs.insert(pair(*i, *j));
                    }
                }
            }
            else
            {
                // Compute bounds of all boxes in this interval
                float2 pmin(FLT_MAX, FLT_MAX), pmax(-FLT_MAX, -FLT_MAX);
                for (auto i = start; i != end; i++)
                {
                    pmin = min(pmin, bounds[*i].min);
                    pmax = max(pmax, bounds[*i].max);
                }

                // Choose the partition axis (the longest one) and the partition location (the middle)
                float2 size = pmax - pmin;
                int axis = (size[1] > size[0]);
                float split = (pmin[axis] + pmax[axis]) * 0.5f;

                // Partition boxes left and recurse (capture bounds by reference to avoid copying the vector)
                auto mid_left = partition(start, end, [split, axis, &bounds](int i) {
                    return bounds[i].min[axis] <= split;
                });
                dac_bp(bounds, start, mid_left, pairs);

                // Partition boxes right and recurse
                auto mid_right = partition(start, end, [split, axis, &bounds](int i) {
                    return bounds[i].max[axis] >= split;
                });
                dac_bp(bounds, start, mid_right, pairs);
            }
        }

        // USAGE
        // Build a vector containing the bounds of all objects
        vector<aabb> bounds = { /* fill with bounds of objects */ };

        // Initialize indices
        vector<int> indices;
        for (size_t i = 0; i < bounds.size(); i++)
            indices.push_back((int)i);

        // Get all colliding pairs!
        set<pair> pairs;
        dac_bp(bounds, indices.begin(), indices.end(), pairs);

    Thanks for reading, and I hope someone finds this of use! If you have any further ideas for extending this, please share!
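    And since a few people will probably ask about the set: here is roughly what the vector + sort + unique variant mentioned above could look like (a sketch, not what I currently ship). The pairs need to be stored in a canonical (min, max) order so duplicates compare equal:

        // Collect raw (possibly duplicated) pairs into a vector instead of a set.
        // Inside dac_bp, replace pairs.insert(pair(*i, *j)) with something like:
        //     rawPairs.push_back(pair(std::min(*i, *j), std::max(*i, *j)));
        vector<pair> rawPairs;

        // ... run dac_bp, pushing into rawPairs ...

        // Deduplicate once at the end (requires operator< and operator== on the pair type)
        sort(rawPairs.begin(), rawPairs.end());
        rawPairs.erase(unique(rawPairs.begin(), rawPairs.end()), rawPairs.end());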
  7. If you are looking for extreme robustness in a swept test, check out the method outlined in this paper: http://gamma.cs.unc.edu/BSC/BSC.pdf

     It covers two variants. There is a fast floating point version which, while not exact, will be correct in the vast majority of cases; I use this variant in my own project and it works great. The paper also covers an exact version which uses extended precision arithmetic, and it should return the exact answer 100% of the time.

     This method says nothing about resolving the collision. You could use a swept test, and then resolve the collision by moving the vertex and triangle such that they lie on the same plane. The issue with this is that on the next iteration / step, it would be impossible to know which side of the triangle the vertex should be on. I've used two approaches to deal with this issue.

     1) Track collisions between steps, remembering which side of the triangle the vertex should be on. This way, you can even skip the expensive swept test if a vertex is known to be colliding with a triangle. This is what I would call a corrective approach: you are correcting penetration issues after finding them, potentially over the course of many steps. There are a few issues with this - the biggest being that special handling is required when the vertex reaches the edge of the triangle while still in the "colliding" state. You essentially have to transfer the collision to adjacent triangles to avoid having different saved sides for adjacent triangles.

     2) Have a margin on all triangles. This requires two tests - the swept test, as well as a discrete test which simply pushes the vertex away from triangles if it is closer than the margin (see the sketch below). This is what I would call a preventative approach: it prevents any collisions from persisting between iterations / steps. This method is less sensitive to the precision of the swept test than #1. The major issue here is that all collisions must be resolved before advancing to the next step, otherwise you will see missed collisions.

     #1 is probably more visually pleasing since you don't have an artificial margin separating things, but I have had more success with #2 since there are fewer strange cases to deal with.
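     The discrete part of approach #2 can be as simple as this sketch (float3, length(), and closestPointOnTriangle() are assumed helpers from your own math code, not something from the BSC paper):

         // Margin test sketch: if a vertex sits closer to a triangle than the margin,
         // push it back out to the margin shell along its current offset direction.
         void enforceMargin(float3& p, const float3& a, const float3& b, const float3& c,
                            float margin)
         {
             float3 q    = closestPointOnTriangle(p, a, b, c);
             float3 d    = p - q;
             float  dist = length(d);
             if (dist < margin && dist > 1e-8f)
                 p = q + d * (margin / dist);
         }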
  8. coderchris

    When contact resolution fails

    As an absolute last resort, you could look into using "rigid impact zones". It was originally devised for cloth simulation, but in my experience it works surprisingly well for complex rigid body collision scenarios. It will not resolve overlap that existed at the beginning of the time step, but it can guarantee that no new overlap will exist by the end of the step. The basic idea is to take a group of colliding bodies for which normal collision resolution failed to find a solution, and treat that group as a single rigid body. This is done by computing, for that group, an average linear and angular velocity, then applying that to all objects in the group (see the sketch below). This should all be done as the very last step in your solver. The method is detailed here (section 7.5): http://physbam.stanford.edu/~fedkiw/papers/stanford2002-01.pdf
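    For the particle case, the group's rigid motion can be computed roughly like this (a sketch; float3, float3x3, dot(), cross(), outerProduct(), and inverse() are assumed math helpers, and extending it to full rigid bodies means also folding in each body's own inertia):

        // Rigid impact zone sketch for a group of particles with positions x,
        // velocities v, and masses m. Computes the single rigid-body motion that
        // conserves the group's linear and angular momentum, then assigns it to
        // every particle in the group.
        void applyRigidImpactZone(vector<float3>& v, const vector<float3>& x,
                                  const vector<float>& m, const vector<int>& group)
        {
            float  M = 0.0f;
            float3 xcm(0.0f), vcm(0.0f);
            for (int i : group) { M += m[i]; xcm += m[i] * x[i]; vcm += m[i] * v[i]; }
            xcm /= M;
            vcm /= M;

            // Angular momentum and inertia tensor of the group about its center of mass
            float3   L(0.0f);
            float3x3 I = float3x3::zero();
            for (int i : group)
            {
                float3 r = x[i] - xcm;
                L += m[i] * cross(r, v[i] - vcm);
                I += m[i] * (dot(r, r) * float3x3::identity() - outerProduct(r, r));
            }
            float3 omega = inverse(I) * L;

            // Replace each particle's velocity with the rigid velocity field of the group
            for (int i : group)
                v[i] = vcm + cross(omega, x[i] - xcm);
        }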
  9. Ah interesting, hadn't seen those slides. In the slides they reference this paper (https://www.cs.ubc.ca/~rbridson/docs/english-siggraph08-cloth.pdf) which talks about performing an explicit velocity correction to counteract the effect. I'm guessing this correction would have to go after the velocity is computed. I will experiment with it some.
  10. Hello,

      In reference to the position based dynamics survey, section 4.2.5: http://matthias-mueller-fischer.ch/talks/2017-EG-CourseNotes.pdf

      The second order integration method they derive works very well for conserving energy. The original PBD integration loses a lot of energy, and it is especially apparent during rotation of groups of constrained particles. With the second order method, it looks almost perfect.

      However, I'm seeing bouncing and generally inconsistent reactions when this method is paired with any inequality constraints, but most importantly, collisions. They mention that no changes are required to the inner solve loop, but it seems as though inequality constraints will have to be treated specially for this type of integration to work. I don't understand the math well enough to know how or why it isn't working.

      Does anyone know how to treat inequality constraints (collisions) properly with second order integration in position based dynamics?

      My current attempt to fix it is to change collisions to equality constraints, and essentially pull the colliding object back towards the collision point (like a distance constraint). This works and no longer bounces. However, now the problem is how to determine when to destroy the constraint...

      Thanks!
      -Chris
  11. Nah, I don't think normalized constraints are any better, from what I've seen. I was playing with the normalized distance constraint because in his other paper (strain based dynamics), all the constraints are normalized. That said, I ran into an issue getting the damping to work with strain based constraints - the damping was having little to no effect. This was due to those constraints being normalized. It turns out that the damping mechanism as described only works when the constraints are not normalized, so I'm having to unnormalize / scale the strain based constraints for the damping to work on them. I haven't figured out the exactly correct scaling though; still trying to wrap my head around the math :blink:
  12. I think you are correct, that seems to work nicely! Thanks for the input. Everything seems to behave as described in the paper, definitely an improvement over regular PBD.

      The stiffness parameter is a bit tricky though, for two reasons.

      By their definition (if I'm interpreting correctly), alpha = 1 / stiffness. I found this to not completely solve a constraint in one step when stiffness = 1. So, I use a slightly modified version: alpha = (1 / stiffness) - 1. This will give alpha = 0 when stiffness is 1, which will fully solve a constraint in one step.

      The second thing about the stiffness is that, although the valid range is [0, 1], how stiff it makes the constraint is highly dependent on the constraint's scaling. For example, with the usual distance constraint C(a,b) = |a - b| - restLength, with a stiffness of 0.5 and a timestep of 1, the constraint will be reasonably stiff using XPBD. However, using the normalized version C(a,b) = (|a - b| / restLength) - 1 will apply almost zero correction, so you need a much higher stiffness value to achieve the same effect.

      Intuitively, this is due to the alpha in the denominator of the delta lambda "overpowering" the constraint. This is not a huge issue, since you can simply scale the stiffness value based on your choice of constraint function.
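      To make that concrete, here is roughly what the per-constraint XPBD update looks like for a distance constraint, with the modified stiffness-to-alpha mapping from above (a sketch with my own names; float3 and length() are assumed math helpers, and lambda must persist and accumulate across the solver iterations of a step):

          // XPBD-style solve of one distance constraint between particles a and b.
          // invMassA / invMassB are 1/m; stiffness is the [0, 1] parameter discussed above.
          void solveDistanceXPBD(float3& pa, float3& pb, float invMassA, float invMassB,
                                 float restLength, float stiffness, float dt, float& lambda)
          {
              float3 d   = pa - pb;
              float  len = length(d);
              if (len < 1e-8f) return;

              float  C        = len - restLength;            // constraint value
              float3 gradA    = d / len;                     // dC/dpa (dC/dpb is -gradA)
              float  alpha    = (1.0f / stiffness) - 1.0f;   // modified mapping from the post
              float  alphaTil = alpha / (dt * dt);           // time-step scaled compliance

              // grad C * M^-1 * grad C^T reduces to invMassA + invMassB for unit-length gradients
              float dLambda = (-C - alphaTil * lambda) / (invMassA + invMassB + alphaTil);

              lambda += dLambda;
              pa += invMassA * dLambda * gradA;
              pb -= invMassB * dLambda * gradA;
          }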
  13. The paper in question is XPBD (http://mmacklin.com/xpbd.pdf). Equation (26) in the paper relates to adding damping to the constraint.

      It contains a term that I do not understand how to evaluate - the grad C(xi - x^n) in the numerator. For example, with a distance constraint we have grad C(a, b) = (a - b) / |a - b|. This takes two positions and is itself a vector function, but the equation in question (26) evaluates to a scalar.

      What might they mean by this term as it relates to a distance constraint?
  14. coderchris

    Cubic interpolation over a triangular surface

    This guy gives a good explanation for solving the system in 1D: http://www.paulinternet.nl/?page=bicubic

    All of which makes sense to me. I would like to apply this to my case, but I'm not sure what my system of equations actually is...
  15. coderchris

    stable constraint solver with no warmstarting

      In PBD (position based dynamics), we essentially move the predicted positions of each particle / object / fluid / whatever directly in order to solve any constraint (including friction).

      [attachment=27691:PBD.png]

      Pardon the crude drawing and probably poor explanation -

      Consider this example of a static edge and a moving particle that collides with that edge. The black square is the initial position and the red square is its unconstrained predicted position, where predicted position = current position + current velocity * dt.

      To solve this collision constraint without friction, we project the predicted (red) position onto the edge, which results in the (purple) point. We simply set our predicted position to this purple point - that's the constraint solve.

      To add friction, we can compute the (blue) intersection point between the edge and the particle's trajectory. We then compute a modified (green) position that lies some amount between the (blue) intersection point and the (purple) projected point, based on how much friction we want. For example, for 100% friction we would set the predicted position to the blue point directly (see the sketch below).

      I guess it's not the most accurate way to model friction, but I did it this way because it let me have a normalized friction parameter [0, 1].

      I have not implemented motors, but in the PBD framework you do have a velocity to play with, so I'm fairly sure it could be done without too much hassle.
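      In code, that blend is just a lerp between the two points (a sketch; float2, projectOntoEdge(), and segmentIntersect() are placeholder helpers, and friction is in [0, 1]):

          // PBD friction sketch: blend between the trajectory/edge intersection point
          // (full friction) and the plain projection onto the edge (no friction).
          float2 resolveEdgeCollision(const float2& current, const float2& predicted,
                                      const float2& edgeA, const float2& edgeB,
                                      float friction)
          {
              float2 projected = projectOntoEdge(predicted, edgeA, edgeB);            // purple point
              float2 entry     = segmentIntersect(current, predicted, edgeA, edgeB);  // blue point
              return entry + (projected - entry) * (1.0f - friction);                 // green point
          }

      With friction = 1 this returns the blue entry point, and with friction = 0 it returns the plain projection onto the edge.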