Finding usable normals from complex embedded sphere-tri situations. Try to poke holes in my idea.

11 comments, last by Defend 8 years, 2 months ago

I plan to write a new guide to sphere-mesh collision handling, since I haven't been able to find anything that doesn't come with a few problems. (The next best thing is Fauerby's guide at http://www.peroxide.dk/papers/collision/collision.pdf ).

Part of the approach I'm taking is to support one-way collisions. That is, to ignore back-facing collisions not just for a performance gain, but for practical application as well. The desired result is for the sphere to be able to find itself embedded inside a triangle and still produce "expected" motion and collisions. But not just one triangle; many triangles, at the same time, in perhaps complicated ways.

By "expected" motion, I mean that it slides only in ways that make sense, it doesn't fall through, and if required to push away from the collision, it pushes away along 1 overall normal that also makes sense. (For example, the user might simply press jump.)

Some intersections might be in front of a tri's face, while some are off to the side, while some triangles register the sphere as closest to one edge, while others find it closest to a vertex shared by earlier triangles, etc etc etc., all at once.

Much words. Such confuse.

I spent entire minutes putting together this example of 3 relatively simple embedded situations. On the left is an embedded situation; on the right is a similar realistic situation that my brain thinks matches the embedded one. The goal is for the sphere in the embedded situation to behave as if it were in the realistic situation. These 3 examples are okay demonstrations of this idea, I believe.

[attachment=30539:Embedded situations 2.png]

Bear in mind these are just 2D; 3D situations must be handled as well.

I believe I have found a process to get a correct set of normals from any situation at all. But mathematical proofs are tough, and I wouldn't know how to begin unit testing this. So instead, I'll describe my solution/algorithm and challenge any readers who might be really keen to dream up an embedded situation that breaks it.

Hopefully someone will find holes in someone else's ideas. You can use as many triangles as you want. If it's not obvious, specify which side is the front of a tri.

My solution:

Things we know:

* Sphere's centre

* Sphere's radius

* Position of every vertex of every triangle.

* Face intersection == the sphere's centre projects onto the face of a tri that it intersects.

* Edge intersection == the sphere's centre projects onto an edge of a tri that it intersects, but doesn't project onto the face of that tri.

* Vertex intersection == the sphere's centre overlaps a vertex of a tri that it intersects, but doesn't project onto the face of that tri, nor either of that vertex's two edges.

* Intersection point == closest point on any given intersected triangle to the sphere's centre

* Collision normal (per intersection type) == Direction from intersection point to the sphere's centre
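To make the face/edge/vertex classification concrete, here is a sketch of the standard Voronoi-region closest-point-on-triangle test (a minimal `Vec3` type; structure follows Ericson's Real-Time Collision Detection, 5.1.5 — illustrative only, not the guide's final code):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float Dot(const Vec3& o) const { return x*o.x + y*o.y + z*o.z; }
};

enum Feature { FACE, EDGE, VERTEX };

// Closest point on triangle abc to point p (the sphere's centre),
// reporting which Voronoi feature region p falls in. The sphere
// intersects the triangle when the returned point lies within the
// sphere's radius of p; the collision normal is then the normalised
// direction from the returned point to p.
Vec3 ClosestPtPointTriangle(Vec3 p, Vec3 a, Vec3 b, Vec3 c, Feature& feat)
{
    Vec3 ab = b - a, ac = c - a, ap = p - a;
    float d1 = ab.Dot(ap), d2 = ac.Dot(ap);
    if (d1 <= 0.0f && d2 <= 0.0f) { feat = VERTEX; return a; } // vertex A region

    Vec3 bp = p - b;
    float d3 = ab.Dot(bp), d4 = ac.Dot(bp);
    if (d3 >= 0.0f && d4 <= d3) { feat = VERTEX; return b; }   // vertex B region

    float vc = d1*d4 - d3*d2;
    if (vc <= 0.0f && d1 >= 0.0f && d3 <= 0.0f) {              // edge AB region
        feat = EDGE; return a + ab * (d1 / (d1 - d3));
    }

    Vec3 cp = p - c;
    float d5 = ab.Dot(cp), d6 = ac.Dot(cp);
    if (d6 >= 0.0f && d5 <= d6) { feat = VERTEX; return c; }   // vertex C region

    float vb = d5*d2 - d1*d6;
    if (vb <= 0.0f && d2 >= 0.0f && d6 <= 0.0f) {              // edge AC region
        feat = EDGE; return a + ac * (d2 / (d2 - d6));
    }

    float va = d3*d6 - d5*d4;
    if (va <= 0.0f && (d4 - d3) >= 0.0f && (d5 - d6) >= 0.0f) { // edge BC region
        feat = EDGE;
        return b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6)));
    }

    feat = FACE;                                               // face region
    float denom = 1.0f / (va + vb + vc);
    return a + ab * (vb * denom) + ac * (vc * denom);
}
```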

[attachment=30540:Intersections.png]

Here's that algorithm:

1. Test every triangle to see if it is intersecting the sphere. If it isn't, discard it. Ignore any back-facing situations (sphere's centre is behind triangle's plane).

2. If a triangle is intersecting the sphere, determine whether it is a face intersection, edge intersection, or vertex intersection.

3. Face intersection? Add its collision normal to "the list" of collision normals discovered so far.

4. Edge intersection? If that edge is not involved in any face intersections, add its collision normal to the list. Else, ignore it.

5. Vertex intersection? If that vertex is not involved in any edge intersections, nor face intersections, add its collision normal to the list. Else, ignore it.

6. To slide the sphere with a new velocity, remove from that velocity any vector component that goes into any collision normal in the list. Then slide.

7. Or to push away from the embedded position, average all the normals' directions and use that as an overall collision normal.
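The seven steps above could be sketched as follows. The `Hit` record, mesh-wide edge/vertex ids, and all names here are assumptions for illustration; the per-triangle detection of steps 1-2 is taken as already done:

```cpp
#include <cassert>
#include <cmath>
#include <set>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

enum HitType { FACE_HIT, EDGE_HIT, VERTEX_HIT };

// One record per intersected, front-facing triangle. 'feature' is a
// mesh-wide edge or vertex id, so a feature shared by two triangles
// compares equal across their hits.
struct Hit {
    HitType type;
    int tri;        // index of the triangle this hit came from
    int feature;    // edge id (EDGE_HIT) or vertex id (VERTEX_HIT); -1 for FACE_HIT
    Vec3 normal;    // intersection point -> sphere centre, normalised
};

struct TriFeatures { int edges[3]; int verts[3]; };

// Steps 3-5: face normals always pass; an edge normal passes only if no
// face hit uses that edge; a vertex normal passes only if no face hit
// touches that vertex and no edge hit has it as an endpoint.
std::vector<Vec3> SelectNormals(const std::vector<Hit>& hits,
                                const std::vector<TriFeatures>& tris,
                                const std::vector<std::pair<int,int>>& edgeEnds)
{
    std::set<int> edgesInFaces, vertsInFaces, vertsInEdges;
    for (const Hit& h : hits) {
        if (h.type == FACE_HIT) {
            for (int e : tris[h.tri].edges) edgesInFaces.insert(e);
            for (int v : tris[h.tri].verts) vertsInFaces.insert(v);
        } else if (h.type == EDGE_HIT) {
            vertsInEdges.insert(edgeEnds[h.feature].first);
            vertsInEdges.insert(edgeEnds[h.feature].second);
        }
    }
    std::vector<Vec3> out;
    for (const Hit& h : hits) {
        if (h.type == FACE_HIT ||
            (h.type == EDGE_HIT && !edgesInFaces.count(h.feature)) ||
            (h.type == VERTEX_HIT && !vertsInFaces.count(h.feature)
                                  && !vertsInEdges.count(h.feature)))
            out.push_back(h.normal);
    }
    return out;
}

// Step 6: clip the sliding velocity against every selected normal.
Vec3 Slide(Vec3 v, const std::vector<Vec3>& normals)
{
    for (const Vec3& n : normals) {
        float d = v.x*n.x + v.y*n.y + v.z*n.z;
        if (d < 0.0f) { v.x -= n.x*d; v.y -= n.y*d; v.z -= n.z*d; }
    }
    return v;
}

// Step 7: average the selected normals into one push-away direction.
Vec3 AverageNormal(const std::vector<Vec3>& normals)
{
    Vec3 s = {0, 0, 0};
    for (const Vec3& n : normals) { s.x += n.x; s.y += n.y; s.z += n.z; }
    float len = std::sqrt(s.x*s.x + s.y*s.y + s.z*s.z);
    if (len > 0.0f) { s.x /= len; s.y /= len; s.z /= len; }
    return s;
}
```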

To help the brain think in 3D, recognise that this image is showing 1 face intersection, 2 edge intersections, and 1 vertex intersection. In this example, only the face intersection's normal would be used, not just because it already involves all intersected edges and vertices, but also because the sphere's centre is behind the other 3 triangles' planes.

[attachment=30541:4 ways.png]

I know the algorithm you describe as 'Voronoi region' based collision detection.
The first source I'm aware of is Brian Mirtich's V-Clip implementation; he also wrote a paper.
It can find the closest features between convex non-intersecting polyhedra; your example (point and convex polyhedron) is a case of this.

The problems will begin when the triangle mesh is not convex and/or the sphere center crosses its surface,
so it becomes harder to find a correct displacement direction.

Thanks for those terms, I'll look them up. That's really helpful.

Regarding crossing surfaces, part of the process of ignoring back-facing collisions is ignoring the tri completely once the sphere's centre goes behind its surface. So I believe that situation is ok.

I'm not sure why you say convex though... my examples are concave. Perhaps I should have made it clearer in the images. The black lines are facing up... as if they're the ground.

Regarding finding the closest features between a point and a polyhedron, that is something I am already able to do, convex or concave.

Hmm after a quick read this seems to be a different thing. Probably because my opening post was rushed, so I didn't spell out exactly what I do and don't have already.

Finding the closest point on any triangle is something I already assume is done. Convex, concave, it doesn't matter.

The task that I think I have solved, and am asking for comments on, is to devise an algorithm that knows which normals to accept, and which normals to ignore, from the set of closest points found in any given embedded situation. An embedded situation can involve multiple tris, and thus multiple collision normals, some of which should be ignored.

is to devise an algorithm that knows which normals to accept, and which normals to ignore, from the set of closest points found in any given embedded situation


That's what I mean by 'problems'. Those cases are concave: the sphere is in multiple Voronoi regions and multiple collision normals appear.
Rejecting some normals and averaging the rest doesn't sound like a good idea; instead you should weight the normals in a way that the resulting normal has no abrupt changes when the moving sphere enters an additional region.
I did that years ago but can't find the code anymore and have forgotten the details.
Thinking of it again, I would try to weight based on distance to the boundary of the Voronoi cell (not the distance to the collision mesh).

Going through your 3 examples with this idea (referring to the black lines as 'faces'):
1. Sphere is in the region of the left face, but outside the right face's region, so use only the left face's normal.
2. Sphere is in both face regions. Measure the closest distances from the sphere center to the thin black normal lines to get 'Voronoi boundary distances'.
Say the left distance is 3 and the right is 7; the contact normal is left normal * 3 + right normal * 7. The left gets less weight because the sphere is closer to leaving its region.
3. You would need to draw the normals on the vertices to illustrate the Voronoi boundary; then we see the sphere is not in any face region, but in both vertex regions -> similar to example 2.
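A minimal 2D sketch of that weighting, assuming the two boundary distances are already measured (all names here are illustrative, not from any engine):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Blend two face normals by their Voronoi-boundary distances, so a
// normal fades out smoothly as the sphere centre approaches the edge
// of that face's region (the distances 3 and 7 from example 2 above).
Vec2 WeightedNormal(Vec2 nLeft, float dLeft, Vec2 nRight, float dRight)
{
    Vec2 n = { nLeft.x * dLeft + nRight.x * dRight,
               nLeft.y * dLeft + nRight.y * dRight };
    float len = std::sqrt(n.x*n.x + n.y*n.y);
    return { n.x / len, n.y / len };
}
```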

Finally, the collision can be resolved by projecting the sphere along the collision normal so it does not intersect any involved mesh elements.
In case this is not possible (e.g. the sphere is in a room that is too small), average the intersections, maybe with a similar weighting.
Downside: the sphere may not contact BOTH mesh elements like in your right-side drawing - just the closest - but it will come closer to this after the next simulation step.
That's ok as long as the sphere does not jitter if we move against a sharp corner in a room, like we can see in many games :)

That weighting is just an idea - I'm not sure it works (I'm no collision expert).
Keep in mind that the usual way is simply to generate multiple contacts and let a physics solver take care of the problem.

Rejecting some normals and averaging the rest doesn't sound like a good idea; instead you should weight the normals in a way that the resulting normal has no abrupt changes when the moving sphere enters an additional region.

I don't think that makes it a bad idea, because the abrupt changes are no more abrupt than changes in the mesh itself. It seems odd to fear sudden changes in incline while embedded, when they're already fine while not embedded (such as if example 1's realistic situation were to travel to the right), if that is simply what the mesh dictates.

I'm not eager to smooth over all non-embedded changes in direction. I prefer the collision response to be as sharp as the mesh. That said, the way you suggest weighting things is a good idea I'd surely use if I feel differently about smoothing later.

To clarify something also, I wouldn't be averaging anything for moving through the mesh. I would be restricting to all valid sliding planes found (ie., using all selected normals). Averaging was what I suggested for things like jumping off of it.

And just making it explicit again, the topic is about asking "how to select normals", not "what do I do with normals once selected". I'm not sure if you've noticed, but your suggestion also rejected normals, and we happen to have agreed on which ones. It could be that you're only thinking in terms of face normals, so you have rejected an embedded *edge collision's* normal intuitively. In Example 1 we've both ignored the edge intersection (the bottom corner, since the image is in 2D) that the right-hand triangle would register.

That's the kind of thing I'm actually looking for agreement/disagreement with in this thread.

In case that is confusing ("Why would the right-hand triangle register an edge intersection?"), it's because the collision detection algorithm checks triangles one by one.

But those 3 examples are only simple ones. Here is an example that has no face intersections:

[attachment=30559:Pyramid.png]

The orange lines are simply guidelines displaying extrusions of each tri's face along its normal, to show more clearly that the sphere's centre does not project onto any face.

The question continues to be: Which normals would you choose, and is there any problem in the process I wrote for selecting them?

Finally the collision can be resolved by projecting the sphere along collision normal so it does not intersect any involved mesh elements.

Actually the whole goal of this is to have no such collision resolution trying to put the sphere "on" a surface. If the sphere is embedded, it stays embedded and moves just as comfortably as a non-embedded sphere. Besides, as you mentioned, cheeky displacements like that are just asking for trouble.

But those 3 examples are only simple ones. Here is an example that has no face intersections:

The orange lines are simply guidelines displaying extrusions of each tri's face along its normal, to show more clearly that the sphere's centre does not project onto any face.

The question continues to be: Which normals would you choose, and is there any problem in the process I wrote for selecting them?


That case is very well defined: the sphere center is not in the region of any face, and it's also not in the region of any edge.
It is clearly in the region of the pyramid's top vertex, thus the contact normal is the difference from the sphere center to the vertex.

Also, the pyramid is convex, so there is always a single simple answer:
Face: use the face normal
Edge: use the difference from the closest point on the edge to the sphere center
Vertex: use the difference from the sphere center to the vertex

Also the contact normal (equal to closest feature on mesh - sphere center) changes smoothly as the sphere moves.
(You may be right and abrupt changes are no problem; I don't remember if I had problems with abrupt changes or if it's just paranoia.)

However, I wonder why you have chosen this simple pyramid example; maybe you have missed something in the Voronoi region stuff.
Let me know if I should draw an image to illustrate it.

And just making it explicit again, the topic is about asking "how to select normals", not "what do I do with normals once selected"

Because resolving collisions is the goal, we should keep it in mind even if we actually talk only about detecting collisions.

I have a much better idea for the weighting: simply use penetration depth.
This also answers the question of which normals to use, namely those from all mesh features that intersect the sphere.
For your 1st drawing that would mean mixing both face normals and the vertex (or, in 3D, edge) normal.
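A small sketch of that penetration-depth weighting (illustrative names; assumes the per-feature closest-point distances are already computed):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Weight each feature normal by its penetration depth (sphere radius
// minus the distance from the sphere centre to that feature's closest
// point) and normalise the sum. Deeply embedded features dominate;
// barely grazed ones contribute almost nothing.
Vec3 PenetrationWeightedNormal(const std::vector<Vec3>& normals,
                               const std::vector<float>& closestDists,
                               float radius)
{
    Vec3 sum = {0, 0, 0};
    for (size_t i = 0; i < normals.size(); ++i) {
        float depth = radius - closestDists[i];
        if (depth <= 0.0f) continue;   // feature not actually intersecting
        sum.x += normals[i].x * depth;
        sum.y += normals[i].y * depth;
        sum.z += normals[i].z * depth;
    }
    float len = std::sqrt(sum.x*sum.x + sum.y*sum.y + sum.z*sum.z);
    if (len > 0.0f) { sum.x /= len; sum.y /= len; sum.z /= len; }
    return sum;
}
```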

Edit:
This may be a good approximation of what I think would be the correct way, which is:
For the pyramid example, which is a solid polyhedron with volume:
Use the sphere to cut the intersection volume from the pyramid.
Assuming uniform density, calculate the volume center (== mass center) of that clipped volume and use the difference from the volume center to the sphere center as the separating vector.

For a non-solid mesh, which has no volume, we can take area instead.

I don't want to just poop on that idea, but I think that calculating the centres of possibly quite complex shapes, multiple times per frame, is kind of bordering on overkill. It's not that it's complicated (it is though), it's that it's complicated without something in reality to justify the approach. If it were based on a real-life physics situation then I would embrace any complications required to do things the "correct" way, but I don't believe there is one correct way, no more than there is a correct way to solve the moving portal problem.

http://9gag.com/gag/adj90GB/moving-portal-problem

If the developers had done it one way, players would have said "Yeah, that feels right."
If the developers had done it the other way, players would have said "Yeah, that feels right."
And people argue both ways using classical physics until the cows come home.

Since tennis balls don't slide around inside bricks in real life, if someone were to put volume centre calculation in a guide, readers would question its justification.

So in my opinion the goal is just something that feels consistent and acceptable to the user, but from there I say the developer really has free rein. Averaging, weighting, penetration depths, volume centres... they're all going to give a normal of some kind, and all can feel fine.

Personally, averaging is fine to me for what I have in mind. I don't find value in embedded situations giving different results to their non-embedded equivalents, nor in asking the user to factor in penetration when trying to intuit the direction they're going to jump in. But maybe you do. Perhaps you're imagining a scenario where penetration definitely *will* factor into the user's intuition. And that's cool too. (But on that note, I don't think your volume-centering idea gives anything to the user that the weighting/penetrating idea didn't already.)

By all means, if there's some real-life justification I'm missing that could convince a user one way is more "correct", I would be happy to hear it.

The discussion on what to do with normals is welcome; it's making me think. As long as the thread's topic of how to select them also gets closer to closure. And possibly it has now, because you actually seem quite confident about the normal selection process. We describe it with different language (you say Voronoi, I say "hey guys I've got this 7 step diet routine") but it looks like we are thinking the same way.

The image I posted last (the pyramid) actually produces an edge normal, but I think you called vertex not because we disagree, but simply because the image isn't clear. I tried to find good angles, but looks like I failed hehe. Here it is again, new angle.

[attachment=30564:Pyramid2.png]

So edge, yeah? Between the purple and blue.

A final one, if you want, about selecting normals:

[attachment=30566:Contrived.png]

By all means, if there's some real-life justification I'm missing that could convince a user one way is more "correct", I would be happy to hear it.


That justification is easy to make:
In graphics you can do a lot of things (non-realistic renderings, screen-space tricks) and it will be ok.
In physics, either you do it correctly and it will work, or it will fail (jitter, tunneling) - there is not much space in between.
This is what I've learned the hard way (but I'm talking about harder multibody problems, so ignore my paranoia until it hits you too).

I agree that calculating the intersecting volume / area is complex, but I would not hesitate to implement it as a reference to compare a faster approximate method against.

Also agreed on the pyramid edge case and the different terminology.

In the final image I see the center in two vertex regions and in one face region.
But the more detailed the mesh becomes, the less satisfying our classification seems.
And the more attractive the accurate intersection idea appears: no need for any normals, no need for any classification.


You might wanna look at this code.
I use it to calculate how much area of each texel in a hemisphere environment mip map is inside the sphere, so it clips squares against the unit circle in 2D.
A polygon-sphere clipper should have similar complexity.
Calculating the area-weighted center of a clipped polygon is then pretty fast (cross products along the edge boundary; I already do it to calculate the area).


void InitRemainingArea ()
		{
			float halfSphereSurface = 2*PI;
			float proofSurface = 0;

			area[GetFirstIndex(LEVELS-1) + 0] = PI * 0.25; // for 2x2 mip map level we already know each texel intersects quarter circle area
			area[GetFirstIndex(LEVELS-1) + 1] = PI * 0.25;
			area[GetFirstIndex(LEVELS-1) + 2] = PI * 0.25;
			area[GetFirstIndex(LEVELS-1) + 3] = PI * 0.25;

			for (int level = 0; level < LEVELS-1; level++) // for each mip map level
			{
				int dim = MAX_RES>>level;
				for (int y=0; y<dim; y++) for (int x=0; x<dim; x++)
				{
					float &texel = area[GetFirstIndex(level) + y*dim+x];

					sVec3 p[4] = { // make a quad for texel
						PlaneVertex(x,	y,	level),
						PlaneVertex(x+1,y,	level),
						PlaneVertex(x+1,y+1,level),
						PlaneVertex(x,	y+1,level),
					};

					int pointOutside = 0;
					for (int i=0; i<4; i++)
					{
						if (p[i].Length() > 1.0) pointOutside |= (1<<i);
					}

					if (pointOutside == 0) // quad entirely in circle
					{
						texel = 2.0f / float(dim);
						texel *= texel;
						continue;
					}

					if (pointOutside == 1+2+4+8)
					{
						texel = 0;
						continue;
					}

					// find texel / circle intersections...

					int clipIndex = 0;
					sVec3 clippedPoly[8];

					sVec3 eis[2];
					int intersectingEdgeIndices[2];
					int edgeIntersects = 0;
					int countIntersections = 0;
					for (int i=0; i<4; i++)
					{
						if (!(pointOutside & (1<<i))) clippedPoly[clipIndex++] = p[i];

						int j=(i+1)&3;

						sVec3 d = p[j] - p[i];
						sVec3 f = p[i];

						float a = d.Dot(d);
						float b = 2.0f * f.Dot(d);
						float c = f.Dot(f) - 1.0f;

						float discriminant = b*b - 4.0f * a * c;
						if (discriminant > 0.0f)
						{
							discriminant = sqrt(discriminant);
							float t1 = (-b - discriminant) / (2.0f * a);
							float t2 = (-b + discriminant) / (2.0f * a);

							if (t1 >= 0.0f && t1 <= 1.0f)
							{
								intersectingEdgeIndices[countIntersections] = i;
								eis[countIntersections] = f + t1 * d;
								clippedPoly[clipIndex++] = eis[countIntersections];
								edgeIntersects |= 1<<i;
//RenderPoint (3, eis[i], 1,0,0, 1);
								countIntersections++;
							}
							else if (t2 >= 0.0f && t2 <= 1.0f)
							{
								intersectingEdgeIndices[countIntersections] = i;
								eis[countIntersections] = f + t2 * d;
								clippedPoly[clipIndex++] = eis[countIntersections];
								edgeIntersects |= 1<<i;
//RenderPoint (3, eis[i], 0,1,0, 1);
								countIntersections++;
							}
						}
					}
//base_debug::logF->Print ("countIntersections = ", (float)countIntersections);

					if (countIntersections == 2)
					{
						// polygon
						for (int i=1; i<clipIndex-1; i++)
						{
//RenderPoint (3, clippedPoly[i], 0,0.5,1, 1);
//float col[] = {0,1,0,0.2};
//simpleVis.RenderTriangle (&clippedPoly[0][0], &clippedPoly[i][0], &clippedPoly[i+1][0], col);
							texel += (sVec3(clippedPoly[i]-clippedPoly[0]).Cross(clippedPoly[i+1]-clippedPoly[0])).Length();
						}
						texel *= 0.5f;

						// circle segment area
						float angle = acos(eis[0].Dot(eis[1]));
						texel += (angle - sin(angle)) * 0.5f;

					}
					else texel = 0; // never happens
				}
			}
		}

