
Finding usable normals from complex embedded sphere-tri situations. Try to poke holes in my idea.



I plan to write a new guide to sphere-mesh collision handling, since I haven't been able to find anything that doesn't come with a few problems. (The next best thing is Fauerby's guide at http://www.peroxide.dk/papers/collision/collision.pdf ).

 

Part of the approach I'm taking is to support one-way collisions. That is, to ignore back-facing collisions not just for a performance gain, but for practical application as well. The desired result is for the sphere to be able to find itself embedded inside a triangle and still produce "expected" motion and collisions. But not just one triangle; many triangles, at the same time, in perhaps complicated ways.

 

By "expected" motion, I mean that it slides only in ways that make sense, it doesn't fall through, and if required to push away from the collision, it pushes away along 1 overall normal that also makes sense. (For example, the user might simply press jump.)

 

Some intersections might be in front of a tri's face, while some are off to the side, while some triangles register the sphere as closest to one edge, while others find it closest to a vertex shared by earlier triangles, etc etc etc., all at once.

 

Much words. Such confuse. 

 

I spent entire minutes putting together this example of 3 relatively simple embedded situations. On the left is an embedded situation; on the right is a similar realistic situation that my brain thinks matches the embedded one. The goal is for the sphere in the embedded situation to behave as if it were in the realistic situation. These 3 examples are okay demonstrations of this idea, I believe.

 

[attachment=30539:Embedded situations 2.png]

 

Bear in mind these are just 2D; 3D situations must be handled as well.

 

 

I believe I have found a process to get a correct set of normals from any situation at all. But mathematical proofs are tough, and I wouldn't know how to begin unit testing this. So instead, I'll describe my solution/algorithm and challenge any readers who might be really keen to dream up an embedded situation that breaks it. 

 

Hopefully someone will find holes in someone else's ideas. You can use as many triangles as you want. If it's not obvious, specify which side is the front of a tri.

 

 

My solution:

 

Things we know:

* Sphere's centre

* Sphere's radius

* Position of every vertex of every triangle.

* Face intersection == the sphere's centre projects onto the face of a tri that it intersects.

* Edge intersection == the sphere's centre projects onto an edge of a tri that it intersects, but doesn't project onto the face of that tri.

* Vertex intersection == the sphere overlaps a vertex of a tri that it intersects, but its centre doesn't project onto the face of that tri, nor onto either of that vertex's two edges.

* Intersection point == closest point on any given intersected triangle to the sphere's centre

* Collision normal (per intersection type) == Direction from intersection point to the sphere's centre

 

[attachment=30540:Intersections.png]
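To pin those definitions down, here is a rough sketch of the helper I have in mind. It's only illustrative: the tiny Vec3 type is made up for the sketch, and the closest-point-on-triangle test is the standard one from Ericson's Real-Time Collision Detection, extended to report which feature (face, edge or vertex) the closest point lies on.

#include <cmath>

// Minimal Vec3 just for the sketch.
struct Vec3 { float x, y, z; };
static Vec3  operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float Dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x }; }
static Vec3  Normalize(Vec3 a)     { return a * (1.0f / std::sqrt(Dot(a, a))); }

enum Feature { FACE, EDGE, VERTEX };

struct Contact
{
    Feature feature;  // which region of the triangle the sphere's centre falls into
    int     v[2];     // local indices (0..2) of the feature's vertices: one for VERTEX, two for EDGE, unused for FACE
    Vec3    point;    // "intersection point": closest point on the triangle to the sphere's centre
    Vec3    normal;   // "collision normal": from that point towards the sphere's centre
};

static Contact MakeContact(Feature f, int va, int vb, Vec3 q, Vec3 centre)
{
    return { f, { va, vb }, q, Normalize(centre - q) };
}

// Closest point on triangle abc to point p, reporting the feature it lies on.
Contact ClosestFeature(Vec3 p, Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 ab = b - a, ac = c - a, ap = p - a;
    float d1 = Dot(ab, ap), d2 = Dot(ac, ap);
    if (d1 <= 0.0f && d2 <= 0.0f) return MakeContact(VERTEX, 0, -1, a, p);      // vertex A region

    Vec3 bp = p - b;
    float d3 = Dot(ab, bp), d4 = Dot(ac, bp);
    if (d3 >= 0.0f && d4 <= d3) return MakeContact(VERTEX, 1, -1, b, p);        // vertex B region

    float vc = d1 * d4 - d3 * d2;
    if (vc <= 0.0f && d1 >= 0.0f && d3 <= 0.0f)                                 // edge AB region
        return MakeContact(EDGE, 0, 1, a + ab * (d1 / (d1 - d3)), p);

    Vec3 cp = p - c;
    float d5 = Dot(ab, cp), d6 = Dot(ac, cp);
    if (d6 >= 0.0f && d5 <= d6) return MakeContact(VERTEX, 2, -1, c, p);        // vertex C region

    float vb = d5 * d2 - d1 * d6;
    if (vb <= 0.0f && d2 >= 0.0f && d6 <= 0.0f)                                 // edge AC region
        return MakeContact(EDGE, 0, 2, a + ac * (d2 / (d2 - d6)), p);

    float va = d3 * d6 - d5 * d4;
    if (va <= 0.0f && (d4 - d3) >= 0.0f && (d5 - d6) >= 0.0f)                   // edge BC region
        return MakeContact(EDGE, 1, 2, b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6))), p);

    float denom = 1.0f / (va + vb + vc);                                        // face interior
    return MakeContact(FACE, -1, -1, a + ab * (vb * denom) + ac * (vc * denom), p);
}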

 

 

Here's that algorithm:

 

1. Test every triangle to see if it is intersecting the sphere. If it isn't, discard it. Ignore any back-facing situations (sphere's centre is behind triangle's plane).

2. If a triangle is intersecting the sphere, determine whether it is a face intersection, edge intersection, or vertex intersection.

3. Face intersection? Add its collision normal to "the list" of collision normals discovered so far.

4. Edge intersection? If that edge is not involved in any face intersections, add its collision normal to the list. Else, ignore it.

5. Vertex intersection? If that vertex is not involved in any edge intersections, nor face intersections, add its collision normal to the list. Else, ignore it.

6. To slide the sphere with a new velocity, remove from that velocity any vector component that goes into any collision normal in the list. Then slide.

7. Or to push away from the embedded position, average all the normals' directions and use that as an overall collision normal.
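Here is a rough sketch of how I picture those 7 steps in code, reusing the Vec3 / Contact / ClosestFeature helpers from the earlier sketch. The indexed Triangle struct and the function names are just made up for illustration, and it assumes counter-clockwise winding defines a tri's front face.

#include <set>
#include <utility>
#include <vector>

// Hypothetical mesh representation: triangles index into a shared vertex array,
// so shared edges and vertices can be recognised across triangles.
struct Triangle { int i[3]; };

// Order-independent key for an edge shared by two triangles.
static std::pair<int, int> EdgeKey(int a, int b) { return a < b ? std::make_pair(a, b) : std::make_pair(b, a); }

// Steps 1-5: gather the list of collision normals for a (possibly embedded) sphere.
std::vector<Vec3> GatherNormals(Vec3 centre, float radius,
                                const std::vector<Vec3>& verts, const std::vector<Triangle>& tris)
{
    struct Hit { Triangle t; Contact c; };
    std::vector<Hit> hits;

    for (const Triangle& t : tris)
    {
        Vec3 a = verts[t.i[0]], b = verts[t.i[1]], c = verts[t.i[2]];
        Vec3 faceN = Normalize(Cross(b - a, c - a));             // front face = counter-clockwise winding
        if (Dot(centre - a, faceN) < 0.0f) continue;             // step 1: back-facing (centre behind the plane)
        Contact ct = ClosestFeature(centre, a, b, c);            // step 2: classify as FACE / EDGE / VERTEX
        Vec3 d = centre - ct.point;
        if (Dot(d, d) >= radius * radius) continue;              // step 1: not intersecting the sphere
        hits.push_back({ t, ct });
    }

    // Edges and vertices that are already "involved in" a face or edge intersection.
    std::set<std::pair<int, int>> faceEdges;
    std::set<int> faceVerts, edgeVerts;
    for (const Hit& h : hits)
    {
        if (h.c.feature == FACE)
        {
            for (int k = 0; k < 3; k++)
            {
                faceEdges.insert(EdgeKey(h.t.i[k], h.t.i[(k + 1) % 3]));
                faceVerts.insert(h.t.i[k]);
            }
        }
        else if (h.c.feature == EDGE)
        {
            edgeVerts.insert(h.t.i[h.c.v[0]]);
            edgeVerts.insert(h.t.i[h.c.v[1]]);
        }
    }

    std::vector<Vec3> normals;
    std::set<std::pair<int, int>> usedEdges;
    std::set<int> usedVerts;
    for (const Hit& h : hits)
    {
        if (h.c.feature == FACE)                                                   // step 3
        {
            normals.push_back(h.c.normal);
        }
        else if (h.c.feature == EDGE)                                              // step 4
        {
            std::pair<int, int> key = EdgeKey(h.t.i[h.c.v[0]], h.t.i[h.c.v[1]]);
            if (!faceEdges.count(key) && usedEdges.insert(key).second)             // skip edges a face already involves;
                normals.push_back(h.c.normal);                                     // each shared edge contributes once
        }
        else                                                                       // step 5
        {
            int vi = h.t.i[h.c.v[0]];
            if (!faceVerts.count(vi) && !edgeVerts.count(vi) && usedVerts.insert(vi).second)
                normals.push_back(h.c.normal);
        }
    }
    return normals;
}

// Step 6: slide - strip from the velocity any component that goes into a collision normal.
Vec3 Slide(Vec3 velocity, const std::vector<Vec3>& normals)
{
    for (const Vec3& n : normals)
    {
        float into = Dot(velocity, n);
        if (into < 0.0f) velocity = velocity - n * into;   // single pass; sharp corners may need a re-check
    }
    return velocity;
}

// Step 7: one overall push-away direction - the plain average of the accepted normals.
Vec3 OverallNormal(const std::vector<Vec3>& normals)
{
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (const Vec3& n : normals) sum = sum + n;
    return Normalize(sum);
}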

 

 

To help the brain think in 3D, recognise that this image is showing 1 face intersection, 2 edge intersections, and 1 vertex intersection. In this example, only the face intersection's normal would be used, not just because it already involves all intersected edges and vertices, but also because the sphere's centre is behind the other 3 triangles' planes.

 

[attachment=30541:4 ways.png]

I know the algorithm you describe as 'Voronoi region' based collision detection.
The first source I'm aware of is Brian Mirtich's V-Clip implementation; he also wrote a paper.
It can find the closest features between convex, non-intersecting polyhedra; your example (point and convex polyhedron) is a case of this.

The problems will begin when the triangle mesh is not convex, and/or the sphere center crosses its surface,
so it becomes harder to find a correct displacement direction.


Thanks for those terms, I'll look them up. That's really helpful. 

 

Regarding crossing surfaces, part of the process of ignoring back-facing collisions is ignoring the tri completely once the sphere's centre goes behind its surface. So I believe that situation is ok. 

 

I'm not sure why you say convex though... my examples are concave. Perhaps I should have made it clearer in the images. The black lines are facing up... as if they're the ground.

 

Regarding finding closest features between a point and a polyhedron, that is something I am already able to do, convex or concave.


Hmm after a quick read this seems to be a different thing. Probably because my opening post was rushed, so I didn't spell out exactly what I do and don't have already. 

 

Finding the closest point on any triangle is something I already assume is done. Convex, concave, it doesn't matter.

 

The task that I think I have solved, and am asking for comments on, is to devise an algorithm that knows which normals to accept, and which normals to ignore, from the set of closest points found in any given embedded situation. An embedded situation can involve multiple tris, and thus multiple collision normals, some of which should be ignored.


is to devise an algorithm that knows which normals to accept, and which normals to ignore, from the set of closest points found in any given embedded situation


That's what I mean by 'problems'. Those cases are concave, the sphere is in multiple Voronoi regions and multiple collision normals appear.
Rejecting some normals and averaging the rest doesn't sound like a good idea; instead you should weight the normals in a way that the resulting normal has no abrupt changes when the moving sphere enters an additional region.
I've done that years ago but can't find the code anymore and forgot the details.
Thinking of it again, I would try to weight based on distance to the boundary of the Voronoi cell (not the distance to the collision mesh).

Going through your 3 examples with this idea (referring to the black lines as 'faces'):
1. Sphere is in the region of the left face, but outside the right face's region, so use only the left edge's normal.
2. Sphere is in both face regions. Measure the closest distances from the sphere center to the thin black normal lines to get 'Voronoi boundary distances'.
Say the left distance is 3 and the right is 7; the contact normal is left normal * 3 + right normal * 7. Left gets less weight because the sphere is closer to leaving its region.
3. You would need to draw the normals on the vertices to illustrate the Voronoi boundary; then we see the sphere is not in any face region, but in both vertex regions -> similar to example 2.
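
A minimal sketch of that weighting, reusing the Vec3 helpers from the snippet above (just my rough notation; it assumes something else has already worked out, for each region the centre is in, that region's normal and the centre's distance to the region's boundary):

#include <vector>

// Hypothetical input: one entry per Voronoi region the sphere centre currently sits in.
struct RegionContact
{
    Vec3  normal;            // that region's collision normal
    float boundaryDistance;  // distance from the sphere centre to that region's boundary
};

// Weight each normal by its boundary distance, so a region fades out smoothly
// as the sphere gets close to leaving it.
Vec3 BlendedNormal(const std::vector<RegionContact>& contacts)
{
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (const RegionContact& rc : contacts)
        sum = sum + rc.normal * rc.boundaryDistance;
    return Normalize(sum);
}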

Finally, the collision can be resolved by projecting the sphere along the collision normal so it does not intersect any involved mesh elements.
In case this is not possible (e.g. the sphere is in a room that is too small), average the intersections, maybe by a similar weighting.
Downside: the sphere may not contact BOTH mesh elements like in your right-side drawing - just the closest - but it will come closer to this after the next simulation step.
That's ok as long as the sphere does not jitter if we move against a sharp corner in a room, like we can see in many games. :)

That weighting is just an idea - I'm not sure it works (I'm no collision expert).
Keep in mind that the usual way is simply to generate multiple contacts and let a physics solver take care of the problem.


Rejecting some normals and averaging the rest doesn't sound like a good idea; instead you should weight the normals in a way that the resulting normal has no abrupt changes when the moving sphere enters an additional region.

 

I don't think that makes it a bad idea, because the abrupt changes are no more abrupt than changes in the mesh itself. It seems odd to fear sudden changes in incline while embedded when they're already fine while not embedded (such as if the sphere in example 1's realistic situation were to travel to the right), if that is simply what the mesh dictates.

 

I'm not eager to smooth over all non-embedded changes in direction. I prefer the collision response to be as sharp as the mesh. That said, the way you suggest weighting things is a good idea I'd surely use if I feel differently about smoothing later. 

 

To clarify something also, I wouldn't be averaging anything for moving through the mesh. I would be restricting to all valid sliding planes found (i.e., using all selected normals). Averaging was what I suggested for things like jumping off of it.

 

 

And just making it explicit again, the topic is about asking "how to select normals", not "what do I do with normals once selected". I'm not sure if you've noticed, but your suggestion also rejected normals, and we happen to have agreed on which ones. It could be that you're only thinking in terms of face normals, so you have rejected an embedded *edge collision's* normal intuitively. In Example 1 we've both ignored the edge intersection (the bottom corner, since the image is in 2D) that the right-hand triangle would register.

 

That's the kind of thing I'm actually looking for agreement/disagreement with in this thread.

 

In case that is confusing ("Why would the right-hand triangle register an edge intersection?"), it's because the collision detection algorithm checks triangles one by one.

 

But those 3 examples are only simple ones. Here is an example that has no face intersections: 

 

[attachment=30559:Pyramid.png]

 

The orange lines are simply guidelines displaying extrusions of each tri's face along its normal, to show more clearly that the sphere's centre does not project onto any face.

 

The question continues to be: Which normals would you choose, and is there any problem in the process I wrote for selecting them?

 

 

Finally, the collision can be resolved by projecting the sphere along the collision normal so it does not intersect any involved mesh elements.

 

Actually the whole goal of this is to have no such collision resolution trying to put the sphere "on" a surface. If the sphere is embedded, it stays embedded and moves just as comfortably as a non-embedded sphere. Besides, as you mentioned, cheeky displacements like that are just asking for trouble. 


But those 3 examples are only simple ones. Here is an example that has no face intersections:

The orange lines are simply guidelines displaying extrusions of each tri's face along its normal, to show more clearly that the sphere's centre does not project onto any face.

The question continues to be: Which normals would you choose, and is there any problem in the process I wrote for selecting them?


That case is very well defined: the sphere center is not in the region of any face, and it's not in the region of any edge either.
It is clearly in the region of the pyramid's top vertex, thus the contact normal is the difference from the sphere center to the vertex.

Also, the pyramid is convex, so there is always a single simple answer:
Face: use the face normal
Edge: use the difference from the closest point on the edge to the sphere center
Vertex: use the difference from the sphere center to the vertex

Also the contact normal (equals closest feature on mesh - sphere center) changes smoothly as the sphere moves.
(You may be right and abrupt changes are no problem; I don't remember if I had problems with abrupt changes or if it's just paranoia.)

However, I wonder why you have chosen this simple pyramid example; maybe you have missed something in the Voronoi region stuff.
Let me know if I should draw an image to illustrate it.


And just making it explicit again, the topic is about asking "how to select normals", not "what do I do with normals once selected"

Because resolving collisions is the goal, we should keep it in mind even if we actually talk only about detecting collisions.

I have a much better idea for the weighting: simply use penetration depth.
This also answers the question of which normals to use, namely those from all mesh features that intersect the sphere.
For your 1st drawing that would mean mixing both face normals and the vertex (or edge in 3D) normal.
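
A rough sketch of that penetration-depth weighting, reusing the Contact struct from the snippet further up (untested, just to show the idea):

#include <cmath>
#include <vector>

// Weight every intersecting feature's normal by how deep it sits inside the sphere.
Vec3 PenetrationWeightedNormal(Vec3 centre, float radius, const std::vector<Contact>& contacts)
{
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (const Contact& c : contacts)
    {
        Vec3 d = centre - c.point;
        float depth = radius - std::sqrt(Dot(d, d));   // penetration depth of this feature
        if (depth > 0.0f) sum = sum + c.normal * depth;
    }
    return Normalize(sum);
}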

Edit:
This may be a good approximation of what I think would be the correct way, which is:
For the pyramid example, which is a solid polyhedron with volume:
Use the sphere to cut the intersection volume out of the pyramid.
Assuming uniform density, calculate the volume center (== mass center) of that clipped volume and use the difference from the volume center to the sphere center as the separating vector.

For a non-solid mesh which has no volume we can take area instead.


I don't want to just poop on that idea, but I think that calculating the centres of possibly quite complex shapes, multiple times per frame, is kind of bordering on overkill. It's not that it's complicated (it is though), it's that it's complicated without something in reality to justify the approach. If it were based on a real-life physics situation then I would embrace any complications required to do things the "correct" way, but I don't believe there is one correct way, no more than there is a correct way to solve the moving portal problem. 

 

http://9gag.com/gag/adj90GB/moving-portal-problem

 
If the developers had done it one way, players would have said "Yeah, that feels right."
If the developers had done it the other way, players would have said "Yeah, that feels right."
And people argue both ways using classical physics until the cows come home.

 

Since tennis balls don't slide around inside bricks in real life, if someone were to put volume centre calculation in a guide, readers would question its justification. 

 

So in my opinion the goal is just something that feels consistent and acceptable to the user, but from there I say the developer really has free rein. Averaging, weighting, penetration depths, volume centres... they're all going to give a normal of some kind, and all can feel fine.

 

Personally, averaging is fine to me for what I have in mind. I don't find value in embedded situations giving different results to their non-embedded equivalents, nor in asking the user to factor in penetration when trying to intuit the direction they're going to jump in. But maybe you do. Perhaps you're imagining a scenario where penetration definitely *will* factor into the user's intuition. And that's cool too. (But on that note, I don't think your volume-centering idea gives anything to the user that the weighting/penetration idea didn't already.)

 

By all means, if there's some real-life justification I'm missing that could convince a user one way is more "correct", I would be happy to hear it. 

 

 

The discussion on what to do with normals is welcome; it's making me think. As long as the thread's topic of how to select them also gets closer to closure. And possibly it has now, because you actually seem quite confident about the normal selection process. We describe it with different language (you say Voronoi, I say "hey guys I've got this 7 step diet routine") but it looks like we are thinking the same way.

 

The image I posted last (the pyramid) actually produces an edge normal, but I think you called it a vertex not because we disagree, but simply because the image isn't clear. I tried to find good angles, but it looks like I failed hehe. Here it is again, new angle.

 

[attachment=30564:Pyramid2.png]

 

So edge, yeah? Between the purple and blue.

 

 

A final one, if you want, about selecting normals:

 

[attachment=30566:Contrived.png]


By all means, if there's some real-life justification I'm missing that could convince a user one way is more "correct", I would be happy to hear it.


That justification is easy to make:
In graphics you can do a lot of things (non-realistic renderings, screen-space tricks) and it will be ok.
In physics, either you do it correctly and it will work, or it will fail (jitter, tunneling) - there is not much space in between.
This is what I've learned the hard way (but I'm talking about harder multibody problems, so ignore my paranoia until it hits you too).

I agree calculating the intersecting volume / area is complex, but I would not hesitate to implement it as a reference to compare a faster approximate method against.

Also agreed on the pyramid edge case and the different terminology.

In the final image I see the center in two vertex regions and in one face region.
But the more detailed the mesh becomes, the less satisfying our classification seems.
And the more attractive the accurate intersection idea appears: no need for any normals, no need for any classification.


You might want to look at this code.
I use it to calculate how much area of each texel in a hemisphere environment mip map is inside the sphere, so it clips squares against the unit circle in 2D.
A polygon - sphere clipper should have similar complexity.
Calculating the area-weighted center of a clipped polygon is then pretty fast (cross products along the edge boundary; I already do that to calculate the area).

void InitRemainingArea ()
		{
			float halfSphereSurface = 2*PI;
			float proofSurface = 0;

			area[GetFirstIndex(LEVELS-1) + 0] = PI * 0.25; // for 2x2 mip map level we already know each texel intersects quarter circle area
			area[GetFirstIndex(LEVELS-1) + 1] = PI * 0.25;
			area[GetFirstIndex(LEVELS-1) + 2] = PI * 0.25;
			area[GetFirstIndex(LEVELS-1) + 3] = PI * 0.25;

			for (int level = 0; level < LEVELS-1; level++) // for each mip map level
			{
				int dim = MAX_RES>>level;
				for (int y=0; y<dim; y++) for (int x=0; x<dim; x++)
				{
					float &texel = area[GetFirstIndex(level) + y*dim+x];

					sVec3 p[4] = { // make a quad for texel
						PlaneVertex(x,	y,	level),
						PlaneVertex(x+1,y,	level),
						PlaneVertex(x+1,y+1,level),
						PlaneVertex(x,	y+1,level),
					};

					int pointOutside = 0;
					for (int i=0; i<4; i++)
					{
						if (p[i].Length() > 1.0) pointOutside |= (1<<i);
					}

					if (pointOutside == 0) // quad entirely in circle
					{
						texel = 2.0f / float(dim);
						texel *= texel;
						continue;
					}

					if (pointOutside == 1+2+4+8)
					{
						texel = 0;
						continue;
					}

					// find texel / circle intersections...

					int clipIndex = 0;
					sVec3 clippedPoly[8];

					sVec3 eis[2];
					int intersectingEdgeIndices[2];
					int edgeIntersects = 0;
					int countIntersections = 0;
					for (int i=0; i<4; i++)
					{
						if (!(pointOutside & (1<<i))) clippedPoly[clipIndex++] = p[i];

						int j=(i+1)&3;

						sVec3 d = p[j] - p[i];
						sVec3 f = p[i];

						float a = d.Dot(d);
						float b = 2.0f * f.Dot(d);
						float c = f.Dot(f) - 1.0f;

						float discriminant = b*b - 4.0f * a * c;
						if (discriminant > 0.0f)
						{
							discriminant = sqrt(discriminant);
							float t1 = (-b - discriminant) / (2.0f * a);
							float t2 = (-b + discriminant) / (2.0f * a);

							if (t1 >= 0.0f && t1 <= 1.0f)
							{
								intersectingEdgeIndices[countIntersections] = i;
								eis[countIntersections] = f + t1 * d;
								clippedPoly[clipIndex++] = eis[countIntersections];
								edgeIntersects |= 1<<i;
//RenderPoint (3, eis[i], 1,0,0, 1);
								countIntersections++;
							}
							else if (t2 >= 0.0f && t2 <= 1.0f)
							{
								intersectingEdgeIndices[countIntersections] = i;
								eis[countIntersections] = f + t2 * d;
								clippedPoly[clipIndex++] = eis[countIntersections];
								edgeIntersects |= 1<<i;
//RenderPoint (3, eis[i], 0,1,0, 1);
								countIntersections++;
							}
						}
					}
//base_debug::logF->Print ("countIntersections = ", (float)countIntersections);

					if (countIntersections == 2)
					{
						// polygon
						for (int i=1; i<clipIndex-1; i++)
						{
//RenderPoint (3, clippedPoly[i], 0,0.5,1, 1);
//float col[] = {0,1,0,0.2};
//simpleVis.RenderTriangle (&clippedPoly[0][0], &clippedPoly[i][0], &clippedPoly[i+1][0], col);
							texel += (sVec3(clippedPoly[i]-clippedPoly[0]).Cross(clippedPoly[i+1]-clippedPoly[0])).Length();
						}
						texel *= 0.5f;

						// circle segment area
						float angle = acos(eis[0].Dot(eis[1]));
						texel += (angle - sin(angle)) * 0.5f;

					}
					else texel = 0; // never happens
				}
			}
		}

 

 

By all means, if there's some real-life justification I'm missing that could convince a user one way is more "correct", I would be happy to hear it.

That justification is easy to make:

In physics, either you do it correctly and it will work, or it will fail (jitter, tunneling) - there is not much space in between.

 

That's why I am pointing out that this isn't using physics any more than the moving portal uses physics. The reason there's next to no confusion between working and failing when simulating actual physics is that we simply have reality to give us validation. We don't have that here. Reality won't validate any behaviour you might give to a bowling ball and a diamond slab that occupy the same space at the same time, any more than it will validate ways to push a box through a moving portal.

 

So what validation are you using? You are claiming that one particular behaviour (defining a collision normal the way you do) is validated; is simply the "accurate" behaviour. What causes you to believe this?

 

Meanwhile I think it's easy to argue that the goals of behaviour of unreal situations are at the developer's whim. That's why I'm skeptical. I need to see something that shows why this situation shouldn't fall into the same category as moving portals, jelly blocks in Minecraft, or surfing in Team Fortress 2; the category of behaviour goals for unreal situations that the developer invents because that's what the developer likes.

 

---

 

Regarding the image, thanks for giving it a look. 

I see it hitting 1 "edge" (the top corner) and 1 face (the bottom line). You have called this less satisfying, but I am not sure you're saying that because of that example, or just your general opinion. In that example I am perfectly satisfied with that result. That's what the OP was all about: find any example that doesn't give a satisfying set of normals. 

 

Importantly, bear in mind that we need all those individual normals for restricting sliding motion; that is unavoidable. Regardless of whether we desire a collision normal or not, we aren't saved from having to calculate those sliding planes; we absolutely need them. I believe then that the only thing your approach saves us from is 3 lines of code that find a simple average. Compare those 3 lines to your code; altered for 3D, then altered further to handle completely arbitrary shapes instead of cubes.

 

For all that extra complication, if one is to claim that this is in fact the way things should be done, then something needs to validate that claim that isn't just personal preference. 

 

These are my personal preference factors for the averaging approach (unless my approach breaks somewhere):

* The collision normal comes from the surfaces involved, not the depth.

* I don't want penetration slightly altering collision normals.

* The player will know that the planes along which they feel movement being blocked are going to define their jump, and they can predict this jump by feel, without having to look at the ground.

* Collision normals match those of the equivalent non-embedded situation (as in the 3 examples shown earlier).

* A character embedded deep in the ground, but only a tiny bit in the wall, will kick off the wall as effectively as a character only touching the ground and only touching the wall.

 

Both our approaches are consistent, predictable, and have something that the player can pick up intuitively, but I wouldn't call yours or mine "the" way it is supposed to be done.... unless I hear of something that supports that claim. Currently I would be comfortable writing in a guide "If you want depth to affect your collision normal as well, here's an approach for that." 


So what validation are you using? You are claiming that one particular behaviour (defining a collision normal the way you do) is validated; is simply the "accurate" behaviour. What causes you to believe this?


The validation comes from how collisions work in reality: compression of matter results in contact forces that separate the bodies.
Thus, because we use rigid bodies as a fast approximation and don't support compression, the intersection volume of both bodies is the most accurate way to calculate contact forces.
(A use case in practice is buoyancy forces for simulating objects in water.)
(Usecase in practice is buoyancy forces for simulationg objects in water)

I need to see something that shows why this situation shouldn't fall into the same category as moving portals, jelly blocks in Minecraft, or surfing in Team Fortress 2; the category of behaviour goals for unreal situations that the developer invents because that's what the developer likes.


Collision response is something we experience every moment of our lives. We know how it should behave in a game, unlike science fiction like portals.
But that's not what I'm worried about - I'm worried only about technical issues like jitter, getting stuck, pushing something through a wall, etc.
You will see what I mean when you use multiple bodies acting on each other and the environment.
Whether your algorithm is good enough depends on what requirements you have in practice.
Note that I do not intend to push you away from your algorithm - I've used something very similar and for me it was good enough.

I see it hitting 1 "edge" (the top corner) and 1 face (the bottom line). You have called this less satisfying, but I am not sure you're saying that because of that example, or just your general opinion. In that example I am perfectly satisfied with that result. That's what the OP was all about: find any example that doesn't give a satisfying set of normals.



Importantly, bear in mind that we need all those individual normals for restricting sliding motion; that is unavoidable. Regardless of whether we desire a collision normal or not, we aren't saved from having to calculate those sliding planes; we absolutely need them. I believe then that the only thing your approach saves us from is 3 lines of code that find a simple average. Compare those 3 lines to your code; altered for 3D, then altered further to handle completely arbitrary shapes instead of cubes.


I see one more edge to select: the bottom corner.
I also see the small angled edge would be rejected, and I don't like this.
Now imagine you draw the sphere over the flower you painted in the first image. Would the algorithm give a good result? Maybe.
Would it give a similar result if you move the sphere a bit? Definitely not. It would select different normals; the result could differ by 90 degrees.
And this would cause jitter and all the pain I'm worried about.

The intersection-area based algorithm I suggested would be perfectly stable even in this worst-case example.
Calculating this would be too slow for mesh - mesh, but for sphere - mesh it isn't much additional work, because detecting the collision already delivers most of the necessary data.
I have no doubts it's more accurate and robust, but yours is faster and may do the job.
Try what you like - the learning experience alone is worth the time.


Sorry to reply so late, it's been a busy week moving house. And we had no cooling... so I've been avoiding using this furnace (laptop) for long.

 

And sorry but your approach just isn't the right one. It doesn't address several of the things I've said are required -- such as a complete set of sliding plane normals, and normals that change as abruptly as the mesh does -- but I gave it attention anyway because you have been claiming it's definitely "the" accurate way to achieve what the thread asks; accurate, and validated by reality. You've stated that with complete certainty, and even described it as a model upon which approximations should be based.

 

But it simply isn't validated the way you claim it is. It isn't "the" accurate way to do the thing I'm asking; there isn't a real version of it that we can validate by. Your replies to my challenges on this appear to have had little thought put into them. You can't just pluck out the word "compression" by association and use it to carry your claim. Intersection doesn't appear in collision algorithms because of real-life compression; it's honestly a little disrespectful that you replied with such a shallow response and expected me to take it as an explanation. Intersection in a collision algorithm was never a model for approximation; it is the approximation. We tolerate intersection because it simplifies.

 

Similarly, you can't just say that since collision exists in real life, any non-real parts of a collision handling algorithm are automatically validated by real life. You have to actually check simulated scenarios against real ones first, and if you had you would have realised that even for a basic concave corner, reality simply doesn't generate the infinite different collision normals that your approach does. In fact, that non-discrete output you value so much (and understandably) is the biggest giveaway. Reality doesn't give us any interpolation for a ball with a fixed velocity smashing into a concave corner. Collision directions are based on the surfaces' gradients, and the transitions between them are abrupt by definition. Where you dismiss my approach (not saying it's a stable idea, just making a point) with the flower example, it's actually abruptness you're identifying, not jitter. I'm well aware of jitter concerns; your comments make it appear that you've assumed I'm not, but what you identified as a problem is not jitter, it's the intended abruptness.

 

At this point you might re-explain or redefine what you're suggesting, but ultimately if you don't put a constraint on how that intersection position comes to be, you've got infinite normals possible from a fixed incoming velocity. That's the continuous yet unreal range of outputs that you value, that I don't. And if you do put constraints on those intersecting positions, you've missed the topic of the thread: ignored back-face collisions leading to any intersection imaginable. 
 
What a confusing mess, right? It is definitely annoying having to deconstruct what someone means when it's not clearly described, and I'm not going into further detail with the above paragraphs because I shouldn't be the one having to pull apart quick suggestions before I can tell if they even apply to the thread's question. That should be checked before those suggestions are typed, and the connection to the question made clear. I still really do not know exactly what application you have in mind that your suggestion applies to. But I am very, very sure that it is something. Your suggestion applies to something, and I imagine that it is indeed accurate at calculating some piece of that thing. But that application is not this application, and an accurate calculation of some piece in that application does not equate to reality validating intersection response. In this thread, I really think you've tried to make the question fit an answer you already have, instead of the other way around.
 
In case it sheds any light on things for you, I don't use a posteriori collision detection in the application the thread describes. Maybe that helps, maybe not. I'm guessing that's what you're thinking about but I could be wrong. You're welcome to really clarify what your suggestion applies to, but I know now it's not what I want. If you do, demonstrations, or linked sources, or clear mathematical trails would be better than just more claims without clear reasoning, because it's that absence of specifics that let us go so long thinking X was talking about Y.
 
 
And my approach? I found an example in it I didn't like, so I'm rethinking it. 

