
Character Solver



So right now, for my character solver, I compute the time of impact between a character AABB and convex polygons using an algorithm based on "Robust Continuous Collision Detection Between Arbitrary Polyhedra Using Trajectory Parameterization of Polyhedral Features" by J.M.P. van Waveren (link).

Doom 3 uses this for all of its collisions, so it never has to deal with penetration. This seems undesirable for a couple of reasons, so I've decided I'm going to switch to a position solver instead. After some googling I came across Erin Catto's 2014 GDC presentation "Understanding Constraints" (link), where he briefly describes how his character solver works. I've written a simple C program (attached to this post) to play around with these concepts, and I have some questions that hopefully I will be able to get answered here.

Keys for the test program:

  • ESC - quit
  • r - reset
  • s - step
  • 1-8 - switch to setup 1-8

Here's how I interpret the algorithm:

  • Let's call the current position of the character the 'initial position'. This position never changes during the algorithm.
  • Let's call the position we are trying to move the character to the 'target position'. This position also never changes during the algorithm.

 

  1. Scan the environment for overlap with the capsule at the initial position and construct a list of contact planes.
  2. Perform a plane solve on the target position to get the first solve position.
  3. Perform a shape cast from the initial position to the solve position. Move to the first hit point and collect new contact planes there. Add these new planes to the list.
  4. Perform a plane solve on the target position to get the next solve position.
  5. If the distance between the new solve position and the old one is within some epsilon, we are done. Otherwise go back to step 3.

Plane solve code:

/* Start from the target position; a and b are the capsule segment endpoints. */
vec3_t p = target_position;
vec3_t a = p + local_capsule_a;
vec3_t b = p + local_capsule_b;

for(uint32_t i = 0; i < max_iter; i++)
{
  vec3_t last_p = p;

  for(uint32_t j = 0; j < plane_count; j++)
  {
    const plane3_t &plane = planes[j];

    /* Signed distance of each capsule endpoint to the plane. */
    float d1 = point_distance(plane, a);
    float d2 = point_distance(plane, b);

    /* The more penetrating endpoint drives the correction. */
    float dist;
    if(d1 < d2)
      dist = d1;
    else
      dist = d2;

    dist -= capsule_radius;

    if(dist < 0.0f)
    {
      /* Push the capsule out along the plane normal by the penetration depth. */
      p -= dist * plane.n;
      a = p + local_capsule_a;
      b = p + local_capsule_b;
    }
  }

  /* Stop once an iteration barely moves the position. */
  vec3_t delta = p - last_p;
  if(dot(delta, delta) < epsilon)
    break;
}

solve_position = p;

Couple of issues:

1. Is this interpretation correct?
2. In the test program I am only adding one new plane per shape cast; is there a reason to collect more (all touching planes)?
3. In the test program I am using the segment planes directly. In 3D I would directly use the polygon planes. If I use the direction from the closest point on the segment to the circle centre I get into situations like this, where the character ends up away from the collision geometry:
3Mv6OSk.png
4. If I naively add all overlapping planes during step 1 I get into situations like this:
l8YUhaN.png
The green circle is at the source position. Step 1 will find both planes and the character will end up in the corner, not touching either segment. In the test program I solve this problem by clearing the plane list after step 2, but this is insufficient as I also need to select the proper planes after a shape cast. I can think of some approaches to solving this problem, but instead of experimenting right away, I'd like to know how other people have solved this. What is a good way to determine which planes to actually use?

 

 

test.c

3 hours ago, D956 said:
  • Let's call the current position of the character the 'initial position'. This position never changes during the algorithm.

together with

3 hours ago, D956 said:
  • Scan the environment for overlap with the capsule at the initial position and construct a list of contact planes.

and together with

3 hours ago, D956 said:

1. Is this interpretation correct?

Your interpretation differs in at least one point from the algorithm outline on slide 60 and from your own implementation, because the algorithm does not inspect the initial position. Instead it inspects a series of target positions, starting with the desired target position.

 

3 hours ago, D956 said:

2. In the test program I am only adding one new plane per shape cast, is there a reason to collect more (all touching planes)?

The algorithm / implementation does not really collect planes at all, but iteratively adjusts the position by considering one wall after the other. Other than this wording stuff ... I don't understand your question.

 

3 hours ago, D956 said:

3. In the test program I am using the segment planes directly. In 3D I would directly use the polygon planes. If I use the direction from the closest point on the segment to the circle centre I get into situations like this, where the character ends up away from the collision geometry:

This is because the algorithm / implementation treats all walls as infinite planes. But walls are just plane segments, i.e. they have a beginning and an end. Hence computing just the distance from the plane is not sufficient (at least when dealing with outside corners); you have to consider the distance from the segment instead.

To illustrate this: Let's say we use a 2D top view on the scene. Then the wall can be described by a line segment between its beginning at position b and its ending at position e as

          w(k) := b + k * ( e - b )   with  0 <= k <= 1

After computing the collision point c with the plane like before, you can solve the above equation for k, so that

         b + k * ( e - b ) == c

The collision point is on the segment if and only if the resolved k is within its definition range 0 <= k <= 1. Otherwise the collision point lies "in the air" beyond an end of the wall. However, the capsule may still collide with a wall corner because it has a given size, while the collision point computed so far is just - well - a point. For collision detection with a corner you just need to check whether one of the corner points (i.e. b and e) is within the collider volume.
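In code this could look like the following small 2D sketch (the vec2_t type and the helper names are just for illustration, not taken from your test program): clamp k to [0, 1] to get the closest point on the wall segment, then compare the distance from the circle centre against the radius.

/* Closest point on the wall segment [b, e] to a point p; k is clamped to [0, 1]
   so that the ends of the segment (the corners) are handled as well. */
typedef struct { float x, y; } vec2_t;

static vec2_t vec2_sub(vec2_t a, vec2_t b) { vec2_t r = { a.x - b.x, a.y - b.y }; return r; }
static float  vec2_dot(vec2_t a, vec2_t b) { return a.x * b.x + a.y * b.y; }

static vec2_t segment_closest_point(vec2_t b, vec2_t e, vec2_t p)
{
  vec2_t d = vec2_sub(e, b);                      /* w(k) = b + k * (e - b)      */
  float dd = vec2_dot(d, d);
  float k = (dd > 0.0f) ? vec2_dot(vec2_sub(p, b), d) / dd : 0.0f;
  if(k < 0.0f) k = 0.0f;                          /* before the start: corner b  */
  if(k > 1.0f) k = 1.0f;                          /* past the end: corner e      */
  vec2_t q = { b.x + k * d.x, b.y + k * d.y };
  return q;
}

/* The circle at centre c overlaps the wall iff the closest point is within the radius. */
static int circle_overlaps_segment(vec2_t b, vec2_t e, vec2_t c, float radius)
{
  vec2_t q = segment_closest_point(b, e, c);
  vec2_t d = vec2_sub(c, q);
  return vec2_dot(d, d) < radius * radius;
}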

 

BTW: Using an AABB is not necessarily the easiest way to go.

 

Edited by haegarr


The way I am reading the slides is that the algorithm has two parts to it. One part collects contact planes using a shape cast and the other part is a plane solver (NGS) that tries to find the closest point (solve point) to the target position whilst satisfying the plane constraints.

Part one is described on page 59:

Quote

Here's how the algorithm works. I first scan the environment for any overlaps. These are used to create a list of collision planes. I then move the character to the target point and then apply NGS until the character is not moving much. This is solve point1. Then I perform a sweep from the start point to solve point1. Any new collision planes are added to the list. Again I move the character to the target point and then apply NGS to the new set of collision planes to get solve point2. I keep repeating this process until consecutive solve points are close together.

Part two, what you are referring to, is described on page 60:

Quote

  So how does the plane solver work? The input data to the character solver is the target position and the constraint planes. We do not need the current position of the character! The current position of the character is only used to gather contact planes. I can now state the job of the character solver: find the position closest to the target position that satisfies the contact plane constraints. We can use a simple NGS solver to solve this. It is incredibly simple because the mass doesn't matter when there is only one movable object that doesn't rotate. Also, the constraints are all linear. With just a couple planes we can compute an exact solution. But as we get many contact planes this becomes more difficult, so I use an iterative solve to get an approximate solution. Also, we don't need an exact solution, we just need a valid solution.

Quote

This is because the algorithm / implementation treats all walls as infinite planes. But walls are just plane segments, i.e. they have a beginning and an end. Hence computing just the distance from the plane is not sufficient (at least when dealing with outside corners); you have to consider the distance from the segment instead.

To illustrate this: Let's say we use a 2D top view on the scene. Then the wall can be described by a line segment between its beginning at position b and its ending at position e as

          w(k) := b + k * ( e - b )   with  0 <= k <= 1

After computing the collision point c with the plane like before, you can solve the above equation for k, so that

         b + k * ( e - b ) == c

The collision point is on the segment if and only if the resolved k is within its definition range 0 <= k <= 1. Otherwise the collision point lies "in the air" beyond an end of the wall. However, the capsule may still collide with a wall corner because it has a given size, while the collision point computed so far is just - well - a point. For collision detection with a corner you just need to check whether one of the corner points (i.e. b and e) is within the collider volume.

I know. "segment_closest_point" in my test program actually accounts for this by returning a vertex of the line segment if the proper barycentric coordinate is negative.

Quote

BTW: Using an AABB is not necessarily the easiest way to go.

I guess I wasn't clear enough. I am currently using an AABB and it is working fine with static geometry, except that now I want to add dynamic rigid bodies. Realising I will have to deal with penetration anyway, I decided to ditch this method and change to a position-based solver using a capsule shape, like the one described by Erin Catto in the presentation above.

Edited by D956


One solution to issue 4 is to collect polygons instead of only their planes. Then, during the position correction step I find the polygon with the smallest penetration that is still in front of the capsule and move the capsule in front of its plane.

Of course this leads to other problems...

Catto doesn't mention any of this though, so I wonder if there's a trick to the algorithm so I only have to use planes.

 

Edited by D956


The algorithm works by applying a correction on-the-fly for each penetration it finds. That means that when a penetration with the current plane is found, the current position - and hence the penetration depth and hence the correction value - already depends on all corrections made so far. I don't understand why you are speaking of "collecting planes / polygons", because that does not happen anywhere within the algorithm.

Coming back to the problem of overcompensation when colliding with (outside) corners: The problem exists because two walls both contribute to the correction where only one should. This is, as already written above, because the algorithm deals with infinite planes only. Now, let's say that your implementation is able to distinguish collisions with planar pieces of a wall from collisions with wall corners (see another post above). If a "planar collision" is detected then handle it as usual. But if a "corner collision" is detected then construct an imaginary plane that passes through the corner and has its normal computed as the average of the normals of both adjacent planes / walls.

With respect to the structure of the original algorithm, the solution described above will apply an adapted correction when the collision with the first of the two planes is detected, and it will apply no further correction when the second plane is processed because there is no longer a collision then.

If you think further you'll see that the solution above is a kind of generalization, i.e. it would also work when applied to a planar piece of the wall, because averaging the same normal twice results in the normal itself, of course.
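A small 2D sketch of that imaginary corner plane (the names and the plane representation are just for illustration): the plane passes through the corner point, and its normal is the normalized average of the two adjacent wall normals.

#include <math.h>

typedef struct { float x, y; } vec2_t;
typedef struct { vec2_t n; float d; } plane2_t;   /* plane: dot(n, p) - d = 0 */

static plane2_t corner_plane(vec2_t corner, vec2_t n0, vec2_t n1)
{
  vec2_t n = { n0.x + n1.x, n0.y + n1.y };
  float len = sqrtf(n.x * n.x + n.y * n.y);
  if(len > 1e-6f) { n.x /= len; n.y /= len; }     /* average of the two wall normals  */
  else            { n = n0; }                     /* degenerate case: opposing normals */

  plane2_t p;
  p.n = n;
  p.d = n.x * corner.x + n.y * corner.y;          /* plane passes through the corner  */
  return p;
}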


Thank you very much for replying!

I've been googling some more and came across the documentation for the DigitalRune project (link). They also use a capsule shape and do a plane solve similar to the one Erin Catto describes. This is how they describe their position correction:

Quote

Slide: The basic routine in our character controller is called a slide. It takes a desired position and computes a valid, non-penetrating position near the desired position like this:

The capsule is set to the desired positions and we detect all intersections. The collision detection returns all contacts at the desired position, including contact positions, contact normal vectors and penetration depths. Each contact indicates that we have moved into a solid object - we have reached a limit of the allowed movement space.

For each contact found we store a bounding plane that represents that movement limit. If the capsule touches a wall, the bounding plane is parallel to the wall surface. If the capsule touches an object like a sphere, the bounding plane is parallel to the tangential plane at the contact point. All bounding planes together limit the allowed space in which the capsule can move.

Now, that we have a description of the allowed space in which the character can move, we compute the allowed position closest to the desired position. We do this by a simple iterative process: If we have no bounding planes or the desired position is in front of all bounding planes, we are done. If the desired position is behind a bounding plane, we project the position onto the surface of the bounding plane. The new position is checked against the next plane. If it is behind the plane it is projected onto the surface. The new position is checked against the other planes, and so forth. This is done in a loop until we have found an allowed position in front of all bounding planes.

Once, we have found the new position, we set the capsule to the new position and detect all intersections. It is possible that we find new bounding planes that we add to the set of existing bounding planes. We repeat the process above until we have a new position that does not violate any bounding plane.

This is very similar to Erin Catto's approach, except that Catto adds a "shape cast".

So the game code decides it wants to move the player from the "initial position" to the "target position", then:

DigitalRune algorithm:

  1. Clear the plane list.
  2. Detect collisions at the target position and collect bounding planes. If we don't find any collisions we are done.
  3. Plane solve to get a new target position.
  4. Go back to step 1.

Erin Catto's algorithm as I now interpret it:

  1. Clear the plane list.
  2. Detect collisions at the target position and add bounding planes to the plane list.
  3. Plane solve to get the first solve position (target position -> solve position).
  4. Clear the plane list.
  5. Perform a shape cast from the initial position to the solve position.
  6. Set the target position to the shape cast position.
  7. Detect collisions at the target position and add bounding planes to the plane list.
  8. Plane solve to get a new solve position (target position -> solve position).
  9. If the distance between the new solve position and the old one is within some epsilon we are done. Otherwise go back to step 4.
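Written out as code, my reading of steps 1-9 looks roughly like the sketch below. clear_planes, detect_contacts, shape_cast and plane_list_t are just placeholders for the queries in my test program, not a real API; plane_solve is the solver from my first post, and max_outer_iter / epsilon are whatever limits you pick.

vec3_t character_solve(vec3_t initial_position, vec3_t target_position)
{
  plane_list_t planes;
  clear_planes(&planes);                                            /* step 1 */

  detect_contacts(target_position, &planes);                        /* step 2 */
  vec3_t solve_position = plane_solve(target_position, &planes);    /* step 3 */

  for(uint32_t i = 0; i < max_outer_iter; i++)
  {
    clear_planes(&planes);                                          /* step 4 */

    /* steps 5 & 6: sweep from the start and stop at the first hit */
    vec3_t cast_position = shape_cast(initial_position, solve_position);

    detect_contacts(cast_position, &planes);                        /* step 7 */
    vec3_t new_solve_position = plane_solve(cast_position, &planes); /* step 8 */

    /* step 9: done when consecutive solve positions are close together */
    vec3_t delta = new_solve_position - solve_position;
    solve_position = new_solve_position;
    if(dot(delta, delta) < epsilon)
      break;
  }

  return solve_position;
}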

Erin Catto has as a first step "I first scan the environment for any overlaps". I assume he means the target position.

It seems the plane solver is pretty much the same in both algorithms.

I have attached a new test program. ESC to quit, arrow keys to move the circle. Seems to work fine, except I am currently not doing the shape cast. I guess the shape cast is there in case the plane solver moves past an object/polygon that wasn't detected during steps 2 & 7.

As for the overcompensation: I will look into normal averaging. Right now I simply use the normalised direction from the closest point on the segment to the circle centre as the normal, which seems to work fine.

Now I just have to implement it in my 3D game/engine and see how well it works there.

test2.c

Edited by D956


It's possible to use a much simpler algorithm for characters. Catto's algorithm is more complicated and overkill, intended to be able to solve the entire physics timestep continuously.

A simpler Seidel solver can work for characters, along with something akin to Conservative Advancement.


Oh my mistake! I thought you were referring to "bilateral advancement", and didn't realize you were linking to a completely different algorithm. Yes, Catto is talking about the exact same thing I was on tigsource forums :)

To clarify, I think you are on the right track by using a position level plane solver. They are really simple and efficient. Also, using a rounded shape like sphere/capsule is another good choice for characters.

It would be wise to let your algorithm handle the case of multiple simultaneous planes. Since the plane solver presses the character along a given plane normal, which does not necessarily coincide with the character's previous path of motion, it is completely possible to "back up" into a configuration hitting multiple simultaneous planes.

Gathering up planes is a matter of implementation. One idea would be to use Conservative Advancement (which will be very efficient if your capsule cannot rotate at run-time, and only translates) to find a TOI. At the TOI you should have a plane normal (e.g. from a previous GJK call, or perhaps from some collision detection algorithm; it's up to you). Press away from the plane to create a numeric buffer for subsequent Conservative Advancement calls, and zero out the velocity along the plane normal. Look for a colliding configuration, and then carry on with the remaining velocity if all is OK.
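Here's a rough sketch of what I mean by Conservative Advancement for a shape that only translates (illustrative only, not production code; the distance callback stands in for whatever closest-distance query you have, e.g. GJK, and all names here are made up):

#include <math.h>
#include <stdint.h>

typedef struct { float x, y, z; } vec3_t;   /* assumed plain float triple */

/* Closest distance from the shape placed at 'position' to the obstacle(s). */
typedef float (*distance_fn_t)(vec3_t position, void *user);

/* Returns the time of impact in [0, 1] along 'delta', or 1.0f if nothing is hit. */
static float conservative_advancement(vec3_t start, vec3_t delta,
                                      distance_fn_t distance, void *user,
                                      float skin, uint32_t max_iter)
{
  float move_len = sqrtf(delta.x * delta.x + delta.y * delta.y + delta.z * delta.z);
  if(move_len < 1e-8f)
    return 1.0f;

  float t = 0.0f;
  for(uint32_t i = 0; i < max_iter; i++)
  {
    vec3_t p = { start.x + t * delta.x, start.y + t * delta.y, start.z + t * delta.z };
    float d = distance(p, user);

    if(d < skin)          /* within the numeric buffer: report the hit               */
      return t;

    t += d / move_len;    /* safe step: a translating shape cannot cover more than d */
    if(t >= 1.0f)
      return 1.0f;        /* reached the target without getting closer than 'skin'   */
  }
  return t;
}

After the returned time of impact you press away from the contact plane by the skin distance and zero the velocity along its normal, as described above, before calling it again with the remaining motion.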

Edited by Randy Gaul


Thanks for your advice. I will experiment and see what works best. Right now I'm only doing discrete collision detection using GJK and then plane solve. The next step is to add conservative advancement, which should be very straightforward. I'll also have to add a separating axis test for when the capsule's segment penetrates a polygon/object.

 


I implemented Erin's code and it worked really well. IIRC I only projected a single point and the solver for the capsule is really just point to plane since the capsule is invariant under rotation around the up axis. The trick is to adjust the planes you get from the sweeps accordingly.

E.g. you can choose any point on the capsule as the reference point and simply shift the plane there. I chose the red point here, but the choice is really up to you.

 

image.png.75a0c1053e3d5515ce87bc46b702ffc7.png
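Roughly, the shift looks like this (just an illustrative sketch with made-up names, not our actual code): given a contact plane touching the capsule at some contact point, re-express the plane at the chosen reference point so the point-vs-plane solver can work with it directly.

typedef struct { float x, y, z; } vec3_t;               /* assumed plain float triple */
typedef struct { vec3_t n; float d; } plane3_t;         /* plane: dot(n, p) - d = 0   */

static float vec3_dot(vec3_t a, vec3_t b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Shift a contact plane from the contact point on the capsule surface to a chosen
   reference point on the capsule. Because the capsule only translates (and is
   invariant under rotation about the up axis), the offset between the two points is
   constant, so keeping dot(n, reference_point) - d >= 0 against the shifted plane is
   equivalent to the original contact constraint. */
static plane3_t shift_plane_to_reference(plane3_t plane, vec3_t contact_point, vec3_t reference_point)
{
  vec3_t offset = { reference_point.x - contact_point.x,
                    reference_point.y - contact_point.y,
                    reference_point.z - contact_point.z };
  plane.d += vec3_dot(plane.n, offset);
  return plane;
}

With every contact plane expressed at the same reference point, the plane solver earlier in the thread degenerates to pure point vs plane.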

Edited by Dirk Gregorius


Thanks for the tip, Dirk. Are there any physics talks planned for the next GDC?

For anyone else implementing a character controller as described above, the following paper on avoiding the "fear of the wireframe" problem is pretty good: http://www.codercorner.com/MeshContacts.pdf

Now I just have to look into modifying my GJK implementation to return feature IDs/types.

 


I have a couple of ideas, but didn't get anything done for GDC next year. Maybe 2019 or during the Valve Steam Devdays 2018. 

Out of curiosity, what topics would you be most interested in?


I remember this post by you on the Bullet forums:

Quote

I gave three talks over the last three years at GDC which describe essentially the Rubikon collision pipeline. I am thinking to give a talk on mesh and continuous contacts next year and then close with a talk about the solver following this. The solver is essentially SI with post-projection, but there are some nice implementation details I would like to share (e.g. central friction, quaternion constraints, etc). Sergiy gave a talk about physics debugging at GDC in 2014 which also shares a bunch of information about Rubikon.

Personally I think we miss actually more talks about physics authoring. The technical information is all out there. E.g. we have a complete Maya integration which we used to bake a bunch of the destruction shots in the VR demo. These large scale shots were a great stress test for the engine.

I thought it sounded pretty good and was sad to see no physics talks at GDC 2016.

Some topics I'm currently most interested in:

  1. Once I get this character controller stuff working properly I plan on looking into hull/mesh contacts with contact clustering and dealing with internal edges etc. Besides the character capsule, I'm currently only using convex polyhedra.
  2. Continuous collision detection for small and/or fast moving objects. I've yet to seriously look into Catto's Bilateral Advancement algorithm.
  3. Block solvers. Haven't really looked into these much either. I see Erwin Coumans did a talk on MLCP solvers at GDC 2014.
Edited by D956
