# Character Solver

## Recommended Posts

So right now my character solver computes the time of impact between a character AABB and convex polygons using an algorithm based on "Robust Continuous Collision Detection Between Arbitrary Polyhedra Using Trajectory Parameterization of Polyhedral Features" by J.M.P. van Waveren (link).

Doom 3 uses this for all of its collisions, so it never has to deal with penetration. This seems undesirable for a couple of reasons, so I've decided I'm going to switch to a position solver instead. After some googling I came across Erin Catto's 2014 GDC presentation "Understanding Constraints" (link), where he briefly describes how his character solver works. I've written a simple C program (attached to this post) to play around with these concepts, and I have some questions that hopefully I will be able to get answered here.

Keys for the test program:

• ESC - quit
• r - reset
• s - step
• 1-8 - switch to setup 1-8

Here's how I interpret the algorithm:

• Let's call the current position of the character the 'initial position'. This position never changes during the algorithm.
• Let's call the position we are trying to move the character to the 'target position'. This position also never changes during the algorithm.

1. Scan the environment for overlap with the capsule at the initial position and construct a list of contact planes.
2. Perform a plane solve on the target position to get the first solve position.
3. Perform a shape cast from the initial position to the solve position. Move to the first hit point and collect new contact planes there. Add these new planes to the list.
4. Perform a plane solve on the target position to get the next solve point.
5. If the distance between the new solve point and the old one is within some epsilon, we are done. Otherwise go back to step 3.
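A rough, self-contained 2D sketch of this interpretation (all types, the plane convention, and the stubbed shape cast are made up for illustration; step 1's overlap scan is assumed to have filled the plane list already):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } vec2_t;
typedef struct { vec2_t n; float d; } plane2_t;  /* allowed: dot(n, p) >= d */

#define MAX_PLANES 16

static float dot2(vec2_t a, vec2_t b) { return a.x * b.x + a.y * b.y; }

/* Steps 2 and 4: NGS plane solve, pushing p in front of every violated plane. */
static vec2_t plane_solve(vec2_t p, const plane2_t *planes, int count)
{
    for (int i = 0; i < 16; i++) {
        for (int j = 0; j < count; j++) {
            float dist = dot2(planes[j].n, p) - planes[j].d;
            if (dist < 0.0f) {
                p.x -= dist * planes[j].n.x;
                p.y -= dist * planes[j].n.y;
            }
        }
    }
    return p;
}

/* Step 3: hypothetical shape cast; a real implementation would sweep the
   capsule through the environment. This stub always reports no hit. */
static int shape_cast(vec2_t from, vec2_t to, vec2_t *hit, plane2_t *hit_plane)
{
    (void)from; (void)to; (void)hit; (void)hit_plane;
    return 0;
}

/* Steps 2-5 above; contact planes accumulate across shape casts. */
static vec2_t character_solve(vec2_t initial, vec2_t target,
                              plane2_t *planes, int *plane_count)
{
    vec2_t solve = plane_solve(target, planes, *plane_count);    /* step 2 */
    for (int i = 0; i < 8; i++) {
        vec2_t hit; plane2_t hp;
        if (shape_cast(initial, solve, &hit, &hp) && *plane_count < MAX_PLANES)
            planes[(*plane_count)++] = hp;                       /* step 3 */
        vec2_t next = plane_solve(target, planes, *plane_count); /* step 4 */
        float dx = next.x - solve.x, dy = next.y - solve.y;
        solve = next;
        if (dx * dx + dy * dy < 1e-6f)                           /* step 5 */
            break;
    }
    return solve;
}
```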

Plane solve code:

```cpp
vec3_t p = target_position;
vec3_t a = p + local_capsule_a;
vec3_t b = p + local_capsule_b;

for (uint32_t i = 0; i < max_iter; i++)
{
    vec3_t last_p = p;

    for (uint32_t j = 0; j < plane_count; j++)
    {
        const plane3_t &plane = planes[j];

        // signed distance of each capsule endpoint to the plane
        float d1 = point_distance(plane, a);
        float d2 = point_distance(plane, b);
        float dist = (d1 < d2) ? d1 : d2;

        if (dist < 0.0f)
        {
            // deepest endpoint is behind the plane: push the capsule out
            // along the plane normal and update the endpoints
            p -= dist * plane.n;
            a = p + local_capsule_a;
            b = p + local_capsule_b;
        }
    }

    // stop once the position barely changes between iterations
    vec3_t delta = p - last_p;
    if (dot(delta, delta) < epsilon)
        break;
}

solve_position = p;
```

Couple of issues:

1. Is this interpretation correct?
2. In the test program I am only adding one new plane per shape cast, is there a reason to collect more (all touching planes)?
3. In the test program I am using the segment planes directly. In 3D I would directly use the polygon planes. If I use the direction from the closest point on the segment to the circle centre I get into situations like this, where the character ends up away from the collision geometry:

4. If I naively add all overlapping planes during step 1 I get into situations like this:

The green circle is at the source position. Step 1 will find both planes and the character will end up in the corner, not touching either segment. In the test program I solve this problem by clearing the plane list after step 2, but this is insufficient as I also need to select the proper planes after a shape cast. I can think of some approaches to solving this problem, but instead of experimenting right away, I'd like to know how other people have solved this. What is a good way to determine which planes to actually use?

test.c

##### Share on other sites
3 hours ago, D956 said:
• Let's call the current position of the character the 'initial position'. This position never changes during the algorithm.

together with

3 hours ago, D956 said:
• Scan the environment for overlap with the capsule at the initial position and construct a list of contact planes.

and together with

3 hours ago, D956 said:

1. Is this interpretation correct?

Your interpretation differs in at least one point from the algorithm outline on slide 60 (and from your own implementation): the algorithm does not inspect the initial position. Instead it inspects a series of target positions, the first of which is the desired target position.

3 hours ago, D956 said:

2. In the test program I am only adding one new plane per shape cast, is there a reason to collect more (all touching planes)?

The algorithm / implementation does not really collect planes at all; it iteratively affects the position by considering one wall after the other. Other than this wording issue ... I don't understand your question.

3 hours ago, D956 said:

3. In the test program I am using the segment planes directly. In 3D I would directly use the polygon planes. If I use the direction from the closest point on the segment to the circle centre I get into situations like this, where the character ends up away from the collision geometry:

This is because the algorithm / implementation treats all walls as infinite planes. But walls are just plane segments, i.e. they have a beginning and an end. Hence computing just the distance from the plane is not sufficient (at least when dealing with outside corners); you have to consider the distance from the segment instead.

To illustrate this: Let's say we use a 2D top view on the scene. Then the wall can be described by a line segment between its beginning at position b and its ending at position e as

w(k) := b + k * ( e - b )   with  0 <= k <= 1

After computing the collision point c with the plane like before, you can solve the above equation for k, so that

b + k * ( e - b ) == c

The collision point is on the segment if and only if the resolved k is within its definition range 0 <= k <= 1. Otherwise the collision happens "in the air", beyond a wall corner. However, the capsule may still collide because it has a given size, while the collision point so far is just that: a point. For collision detection with a corner you just need to check whether one of the corner points (i.e. b and e) is within the collider volume.
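For reference, the clamped closest-point query this describes might look like the following in 2D (hypothetical vec2_t type); the circle then collides with the segment if the distance from its centre to the returned point is less than its radius:

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } vec2_t;

/* Closest point on segment [b, e] to point c: solve w(k) = b + k*(e - b)
   for the k minimizing |w(k) - c|, then clamp k to [0, 1] so collisions
   beyond the ends fall back to the corner points b and e. */
static vec2_t segment_closest_point(vec2_t b, vec2_t e, vec2_t c)
{
    float ex = e.x - b.x, ey = e.y - b.y;
    float len2 = ex * ex + ey * ey;
    float k = 0.0f;
    if (len2 > 0.0f)
        k = ((c.x - b.x) * ex + (c.y - b.y) * ey) / len2;
    if (k < 0.0f) k = 0.0f;   /* closest feature is corner b */
    if (k > 1.0f) k = 1.0f;   /* closest feature is corner e */
    vec2_t p = { b.x + k * ex, b.y + k * ey };
    return p;
}
```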

BTW: Using an AABB is not necessarily the easiest way to go.

Edited by haegarr

##### Share on other sites

The way I am reading the slides is that the algorithm has two parts to it. One part collects contact planes using a shape cast and the other part is a plane solver (NGS) that tries to find the closest point (solve point) to the target position whilst satisfying the plane constraints.

Part one is described on page 59:

Quote

Here's how the algorithm works. I first scan the environment for any overlaps. These are used to create a list of collision planes. I then move the character to the target point and then apply NGS until the character is not moving much. This is solve point1. Then I perform a sweep from the start point to solve point1. Any new collision planes are added to the list. Again I move the character to the target point and then apply NGS to the new set of collision planes to get solve point2. I keep repeating this process until consecutive solve points are close together.

Part two, what you are referring to, is described on page 60:

Quote

So how does the plane solver work? The input data to the character solver is the target position and the constraint planes. We do not need the current position of the character! The current position of the character is only used to gather contact planes. I can now state the job of the character solver: find the position closest to the target position that satisfies the contact plane constraints. We can use a simple NGS solver to solve this. It is incredibly simple because the mass doesn't matter when there is only one movable object that doesn't rotate. Also, the constraints are all linear. With just a couple planes we can compute an exact solution. But as we get many contact planes this becomes more difficult, so I use an iterative solve to get an approximate solution. Also, we don't need an exact solution, we just need a valid solution.

Quote

This is because the algorithm / implementation treats all walls as infinite planes. But walls are just plane segments, i.e. they have a beginning and an end. Hence computing just the distance from the plane is not sufficient (at least when dealing with outside corners); you have to consider the distance from the segment instead.

To illustrate this: Let's say we use a 2D top view on the scene. Then the wall can be described by a line segment between its beginning at position b and its ending at position e as

w(k) := b + k * ( e - b )   with  0 <= k <= 1

After computing the collision point c with the plane like before, you can solve the above equation for k, so that

b + k * ( e - b ) == c

The collision point is on the segment if and only if the resolved k is within its definition range 0 <= k <= 1. Otherwise the collision happens "in the air", beyond a wall corner. However, the capsule may still collide because it has a given size, while the collision point so far is just that: a point. For collision detection with a corner you just need to check whether one of the corner points (i.e. b and e) is within the collider volume.

I know. "segment_closest_point" in my test program actually accounts for this by returning a vertex of the line segment if the proper barycentric coordinate is negative.

Quote

BTW: Using an AABB is not necessarily the easiest way to go.

I guess I wasn't clear enough. I am currently using an AABB and it is working fine with static geometry, except that now I want to add dynamic rigid bodies. Realising I will have to deal with penetration anyway, I decided to ditch this method and switch to a position-based solver using a capsule shape, like the one described by Erin Catto in the presentation above.

Edited by D956

##### Share on other sites

One solution to issue 4 is to collect polygons instead of only their planes. Then, during the position correction step, I find the polygon with the smallest penetration that is still in front of the capsule and move the capsule in front of its plane.

Of course this leads to other problems...

Catto doesn't mention any of this though, so I wonder if there's a trick to the algorithm so I only have to use planes.

Edited by D956

##### Share on other sites

The algorithm works by applying a correction on-the-fly to each penetration as it is found. That means that when a penetration with the current plane is found, the current position (and hence the penetration depth, and hence the correction value) will depend on all corrections already made. I don't understand why you are speaking of "collecting planes / polygons", because that does not happen anywhere within the algorithm.

Coming back to the problem of overcompensation when colliding with (outside) corners: the problem exists because two walls both contribute to a correction that should really come from only one. This is, as already written above, because the algorithm deals with infinite planes only. Now, let's say your implementation can distinguish collisions with planar pieces of a wall from collisions with wall corners (see another post above). If a "planar collision" is detected, handle it as usual. But if a "corner collision" is detected, construct an imaginary plane that passes through the corner, with its normal computed as the average of the normals of both adjacent planes / walls.

With respect to the structure of the original algorithm, the solution described above will apply an adapted correction when the collision with the first of the two planes is detected, and it will apply no further correction when the second plane is processed, because there is no longer a collision then.

If you think further, you'll see that the solution above is a kind of generalization, i.e. it would also work when applied to a planar piece of wall, because averaging the same normal twice results in the normal itself, of course.

##### Share on other sites

Thank you very much for replying!

I've been googling some more and came across the documentation for the DigitalRune project (link). They also use a capsule shape and do a plane solve similar to the one Erin Catto describes. This is how they describe their position correction:

Quote

Slide: The basic routine in our character controller is called a slide. It takes a desired position and computes a valid, non-penetrating position near the desired position like this:

The capsule is set to the desired position and we detect all intersections. The collision detection returns all contacts at the desired position, including contact positions, contact normal vectors and penetration depths. Each contact indicates that we have moved into a solid object - we have reached a limit of the allowed movement space.

For each contact found we store a bounding plane that represents that movement limit. If the capsule touches a wall, the bounding plane is parallel to the wall surface. If the capsule touches an object like a sphere, the bounding plane is parallel to the tangential plane at the contact point. All bounding planes together limit the allowed space in which the capsule can move.

Now, that we have a description of the allowed space in which the character can move, we compute the allowed position closest to the desired position. We do this by a simple iterative process: If we have no bounding planes or the desired position is in front of all bounding planes, we are done. If the desired position is behind a bounding plane, we project the position onto the surface of the bounding plane. The new position is checked against the next plane. If it is behind the plane it is projected onto the surface. The new position is checked against the other planes, and so forth. This is done in a loop until we have found an allowed position in front of all bounding planes.

Once we have found the new position, we set the capsule to the new position and detect all intersections. It is possible that we find new bounding planes that we add to the set of existing bounding planes. We repeat the process above until we have a new position that does not violate any bounding plane.

This is very similar to Erin Catto's approach, except that Catto adds a "shape cast".

So the game code decides it wants to move the player from the "initial position" to the "target position", then:

DigitalRune algorithm:

1. Clear the plane list.
2. Detect collisions at the target position and collect bounding planes. If we don't find any collisions we are done.
3. Plane solve to get a new target position.
4. Go back to step 1.

Erin Catto's algorithm as I now interpret it:

1. Clear the plane list.
2. Detect collisions at the target position and add bounding planes to the plane list.
3. Plane solve to get the first solve position (target position -> solve position).
4. Clear the plane list.
5. Perform a shape cast from the initial position to the solve position.
6. Set the target position to the shape cast position.
7. Detect collisions at the target position and add bounding planes to the plane list.
8. Plane solve to get a new solve position (target position -> solve position).
9. If the distance between the new solve position and the old one is within some epsilon we are done. Otherwise go back to step 4.
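Under this interpretation, a minimal self-contained 2D sketch might look like this (hypothetical types and plane convention; the environment queries are stubbed against a single wall x >= 0 purely so the sketch runs on its own):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } vec2_t;
typedef struct { vec2_t n; float d; } plane2_t;  /* allowed: dot(n, p) >= d */

#define MAX_PLANES 16

static float dot2(vec2_t a, vec2_t b) { return a.x * b.x + a.y * b.y; }

/* NGS plane solve: push p in front of every violated plane. */
static vec2_t plane_solve(vec2_t p, const plane2_t *planes, int count)
{
    for (int i = 0; i < 16; i++)
        for (int j = 0; j < count; j++) {
            float dist = dot2(planes[j].n, p) - planes[j].d;
            if (dist < 0.0f) {
                p.x -= dist * planes[j].n.x;
                p.y -= dist * planes[j].n.y;
            }
        }
    return p;
}

/* Stubbed environment: one wall, x >= 0. */
static const plane2_t wall = { { 1.0f, 0.0f }, 0.0f };

static int detect_collisions(vec2_t at, plane2_t *planes)
{
    if (dot2(wall.n, at) - wall.d < 0.0f) { planes[0] = wall; return 1; }
    return 0;
}

static vec2_t shape_cast(vec2_t from, vec2_t to)
{
    float d0 = dot2(wall.n, from) - wall.d;
    float d1 = dot2(wall.n, to) - wall.d;
    if (d0 >= 0.0f && d1 < 0.0f) {           /* crossed the wall: stop at it */
        float t = d0 / (d0 - d1);
        vec2_t hit = { from.x + t * (to.x - from.x),
                       from.y + t * (to.y - from.y) };
        return hit;
    }
    return to;                               /* no hit: reach the end point */
}

static vec2_t character_solve(vec2_t initial, vec2_t target)
{
    plane2_t planes[MAX_PLANES];
    int count = detect_collisions(target, planes);     /* steps 1-2 */
    vec2_t solve = plane_solve(target, planes, count); /* step 3 */
    for (int i = 0; i < 8; i++) {
        vec2_t cast = shape_cast(initial, solve);      /* steps 4-6 */
        count = detect_collisions(cast, planes);       /* step 7: fresh list */
        vec2_t next = plane_solve(cast, planes, count);/* step 8 */
        float dx = next.x - solve.x, dy = next.y - solve.y;
        solve = next;
        if (dx * dx + dy * dy < 1e-6f)                 /* step 9 */
            break;
    }
    return solve;
}
```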

Erin Catto's first step is "I first scan the environment for any overlaps". I assume he means at the target position.

It seems the plane solver is pretty much the same in both algorithms.

I have attached a new test program. ESC to quit, arrow keys to move the circle. It seems to work fine, except I am currently not doing the shape cast. I guess the shape cast is there in case the plane solver moves the character past an object/polygon that wasn't detected during steps 2 and 7.

As for the overcompensation: I will look into using normal averaging. Right now I simply use the normalised direction from the closest point on the segment to the circle centre as the normal, which seems to work fine.

Now I just have to implement it in my 3D game/engine and see how well it works there.

Edited by D956

##### Share on other sites

It's possible to use a much simpler algorithm for characters. Catto's algorithm is more complicated and overkill here: it is intended to be able to solve the entire physics timestep continuously.

A simpler Seidel solver can work for characters, along with something akin to Conservative Advancement.

##### Share on other sites

Oh, my mistake! I thought you were referring to "bilateral advancement" and didn't realize you were linking to a completely different algorithm. Yes, Catto is talking about the exact same thing I was on the TIGSource forums.

To clarify, I think you are on the right track by using a position level plane solver. They are really simple and efficient. Also, using a rounded shape like sphere/capsule is another good choice for characters.

It would be wise to let your algorithm handle the case of multiple simultaneous planes. Since the plane solver presses the character along a given plane normal, which does not necessarily coincide with the character's previous path of motion, it is completely possible to "back up" into a configuration hitting multiple simultaneous planes.

Gathering up planes is a matter of implementation. One idea would be to use Conservative Advancement (which will be very efficient if your capsule cannot rotate at run-time, and only translates) to find a TOI. At the TOI you should have a plane normal (e.g. from a previous GJK call, or perhaps from some collision detection algorithm; it's up to you). Press away from the plane to create a numeric buffer for subsequent Conservative Advancement calls, and zero out the velocity along the plane normal. Look for a colliding configuration, and then carry on with the remaining velocity if all is OK.
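A minimal sketch of the conservative advancement step for a translating circle (hypothetical types; the single-plane case only, where the gap is linear in t and the loop converges in one step; a real version would get the gap and normal from GJK against the whole environment):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } vec2_t;
typedef struct { vec2_t n; float d; } plane2_t;  /* allowed: dot(n, p) >= d */

static float dot2(vec2_t a, vec2_t b) { return a.x * b.x + a.y * b.y; }

/* Conservative advancement: repeatedly advance t by gap / closing-speed
   until the circle touches the plane. Returns the TOI in [0, 1], or 1.0
   if there is no impact during the step. */
static float conservative_advancement(vec2_t center, float radius,
                                      vec2_t velocity, plane2_t plane)
{
    float t = 0.0f;
    for (int i = 0; i < 32; i++) {
        vec2_t c = { center.x + t * velocity.x, center.y + t * velocity.y };
        float gap = dot2(plane.n, c) - plane.d - radius;
        if (gap < 1e-4f)
            break;                      /* touching: t is the TOI */
        float closing = -dot2(plane.n, velocity);
        if (closing <= 0.0f)
            return 1.0f;                /* separating: no impact */
        t += gap / closing;             /* safe advance: cannot tunnel */
        if (t >= 1.0f)
            return 1.0f;                /* no impact this timestep */
    }
    return t;
}
```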

Edited by Randy Gaul

##### Share on other sites

Thanks for your advice. I will experiment and see what works best. Right now I'm only doing discrete collision detection using GJK and then plane solve. The next step is to add conservative advancement, which should be very straightforward. I'll also have to add a separating axis test for when the capsule's segment penetrates a polygon/object.

## Create an account

Register a new account

• 10
• 17
• 9
• 14
• 41
• ### Similar Content

• Hi I read some tutorials on STL as many people on this forum say it's faster than the most selfwritten container classes and somehow i can't even declare a list in VC .net 2003...the compiler says: Compiling... VertexManager.cpp c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(33) : error C2143: syntax error : missing ';' before '<' c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(33) : error C2501: 'CVertexManager::list' : missing storage-class or type specifiers c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(33) : error C2238: unexpected token(s) preceding ';' c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(34) : error C2143: syntax error : missing ';' before '<' c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(34) : error C2501: 'CVertexManager::list' : missing storage-class or type specifiers c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(34) : error C2238: unexpected token(s) preceding ';' c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(35) : error C2143: syntax error : missing ';' before '<' c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(35) : error C2501: 'CVertexManager::list' : missing storage-class or type specifiers my code: class CVertexManager { private: list<VertListEntry> VertList; list<VertGroup> VertGroup; list<int> TextureChange; CVertexManager(); public: ~CVertexManager(); void addEntry(VertListEntry Entry); static CVertexManager& Instance() { static CVertexManager TheOneAndOnly; return CVertexManager; } }; btw what does the list.insert function want as the first parameter? it says something with iterator...what is that? can i just pass an int as the number where i want to have the new entry? regards, m4gnus

• I'm making render just for fun (c++, opengl)
Want to add decals support. Here what I found
A couple of slides from doom
http://martindevans.me/game-development/2015/02/27/Drawing-Stuff-… space-Decals/
No implementation details here
https://turanszkij.wordpress.com/2017/10/12/forward-decal-rendering/
As I see there should be a list of decals for each tile same as for light sources. But what to do next?
Let assume that all decals are packed into a spritesheet. Decal will substitute diffuse and normal.
- What data should be stored for each decal on the GPU?
- Articles above describe decals as OBB. Why OBB if decals seem to be flat?
- How to actually render a decal during object render pass (since it's forward)? Is it projected somehow? Don't understand this part completely.
Are there any papers for this topic?

• Here is the original blog post.
Edit: Sorry, I can't get embedded LaTeX to display properly.
The pinned tutorial post says I have to do it in plain HTML without embedded images?
I actually tried embedding pre-rendered equations and they seemed fine when editing,
but once I submit the post it just turned into a huge mess.
So...until I can find a proper way to fix this, please refer to the original blog post for formatted formulas.
I've replaced the original LaTex mess in this post with something at least more readable.
Any advice on fixing this is appreciated.
This post is part of my Game Math Series.
Source files are on GitHub.
Shortcut to sterp implementation.
Shortcut to code used to generate animations in this post.
An Alternative to Slerp
Slerp, spherical linear interpolation, is an operation that interpolates from one orientation to another, using a rotational axis paired with the smallest angle possible.
Quick note: Jonathan Blow explains here how you should avoid using slerp, if normalized quaternion linear interpolation (nlerp) suffices. Long store short, nlerp is faster but does not maintain constant angular velocity, while slerp is slower but maintains constant angular velocity; use nlerp if you’re interpolating across small angles or you don’t care about constant angular velocity; use slerp if you’re interpolating across large angles and you care about constant angular velocity. But for the sake of using a more commonly known and used building block, the remaining post will only mention slerp. Replacing all following occurrences of slerp with nlerp would not change the validity of this post.
In general, slerp is considered superior over interpolating individual components of Euler angles, as the latter method usually yields orientational sways.
But, sometimes slerp might not be ideal. Look at the image below showing two different orientations of a rod. On the left is one orientation, and on the right is the resulting orientation of rotating around the axis shown as a cyan arrow, where the pivot is at one end of the rod.

If we slerp between the two orientations, this is what we get:

Mathematically, slerp takes the “shortest rotational path”. The quaternion representing the rod’s orientation travels along the shortest arc on a 4D hyper sphere. But, given the rod’s elongated appearance, the rod’s moving end seems to be deviating from the shortest arc on a 3D sphere.
My intended effect here is for the rod’s moving end to travel along the shortest arc in 3D, like this:

The difference is more obvious if we compare them side-by-side:

This is where swing-twist decomposition comes in.

Swing-Twist Decomposition
Swing-Twist decomposition is an operation that splits a rotation into two concatenated rotations, swing and twist. Given a twist axis, we would like to separate out the portion of a rotation that contributes to the twist around this axis, and what’s left behind is the remaining swing portion.
There are multiple ways to derive the formulas, but this particular one by Michaele Norel seems to be the most elegant and efficient, and it’s the only one I’ve come across that does not involve any use of trigonometry functions. I will first show the formulas now and then paraphrase his proof later:
Given a rotation represented by a quaternion R = [W_R, vec{V_R}] and a twist axis vec{V_T}, combine the scalar part from R the projection of vec{V_R} onto vec{V_T} to form a new quaternion: T = [W_R, proj_{vec{V_T}}(vec{V_R})]. We want to decompose R into a swing component and a twist component. Let the S denote the swing component, so we can write R = ST. The swing component is then calculated by multiplying R with the inverse (conjugate) of T: S= R T^{-1} Beware that S and T are not yet normalized at this point. It's a good idea to normalize them before use, as unit quaternions are just cuter. Below is my code implementation of swing-twist decomposition. Note that it also takes care of the singularity that occurs when the rotation to be decomposed represents a 180-degree rotation. public static void DecomposeSwingTwist ( Quaternion q, Vector3 twistAxis, out Quaternion swing, out Quaternion twist ) { Vector3 r = new Vector3(q.x, q.y, q.z); // singularity: rotation by 180 degree if (r.sqrMagnitude < MathUtil.Epsilon) { Vector3 rotatedTwistAxis = q * twistAxis; Vector3 swingAxis = Vector3.Cross(twistAxis, rotatedTwistAxis); if (swingAxis.sqrMagnitude > MathUtil.Epsilon) { float swingAngle = Vector3.Angle(twistAxis, rotatedTwistAxis); swing = Quaternion.AngleAxis(swingAngle, swingAxis); } else { // more singularity: // rotation axis parallel to twist axis swing = Quaternion.identity; // no swing } // always twist 180 degree on singularity twist = Quaternion.AngleAxis(180.0f, twistAxis); return; } // meat of swing-twist decomposition Vector3 p = Vector3.Project(r, twistAxis); twist = new Quaternion(p.x, p.y, p.z, q.w); twist = Normalize(twist); swing = q * Quaternion.Inverse(twist); } Now that we have the means to decompose a rotation into swing and twist components, we need a way to use them to interpolate the rod’s orientation, replacing slerp.
Swing-Twist Interpolation
Replacing slerp with the swing and twist components is actually pretty straightforward. Let the Q_0 and Q_1 denote the quaternions representing the rod's two orientations we are interpolating between. Given the interpolation parameter t, we use it to find "fractions" of swing and twist components and combine them together. Such fractiona can be obtained by performing slerp from the identity quaternion, Q_I, to the individual components. So we replace: Slerp(Q_0, Q_1, t) with: Slerp(Q_I, S, t) Slerp(Q_I, T, t) From the rod example, we choose the twist axis to align with the rod's longest side. Let's look at the effect of the individual components Slerp(Q_I, S, t) and Slerp(Q_I, T, t) as t varies over time below, swing on left and twist on right:
And as we concatenate these two components together, we get a swing-twist interpolation that rotates the rod such that its moving end travels in the shortest arc in 3D. Again, here is a side-by-side comparison of slerp (left) and swing-twist interpolation (right):

I decided to name my swing-twist interpolation function sterp. I think it’s cool because it sounds like it belongs to the function family of lerp and slerp. Here’s to hoping that this name catches on.
And here’s my code implementation:
public static Quaternion Sterp ( Quaternion a, Quaternion b, Vector3 twistAxis, float t ) { Quaternion deltaRotation = b * Quaternion.Inverse(a); Quaternion swingFull; Quaternion twistFull; QuaternionUtil.DecomposeSwingTwist ( deltaRotation, twistAxis, out swingFull, out twistFull ); Quaternion swing = Quaternion.Slerp(Quaternion.identity, swingFull, t); Quaternion twist = Quaternion.Slerp(Quaternion.identity, twistFull, t); return twist * swing; } Proof
Lastly, let’s look at the proof for the swing-twist decomposition formulas. All that needs to be proven is that the swing component S does not contribute to any rotation around the twist axis, i.e. the rotational axis of S is orthogonal to the twist axis. Let vec{V_{R_para}} denote the parallel component of vec{V_R} to vec{V_T}, which can be obtained by projecting vec{V_R} onto vec{V_T}: vec{V_{R_para}} = proj_{vec{V_T}}(vec{V_R}) Let vec{V_{R_perp}} denote the orthogonal component of vec{V_R} to vec{V_T}: vec{V_{R_perp}} = vec{V_R} - vec{V_{R_para}} So the scalar-vector form of T becomes: T = [W_R, proj_{vec{V_T}}(vec{V_R})] = [W_R, vec{V_{R_para}}] Using the quaternion multiplication formula, here is the scalar-vector form of the swing quaternion: S = R T^{-1} = [W_R, vec{V_R}] [W_R, -vec{V_{R_para}}] = [W_R^2 - vec{V_R} ‧ (-vec{V_{R_para}}), vec{V_R} X (-vec{V_{R_para}}) + W_R vec{V_R} + W_R (-vec{V_{R_para}})] = [W_R^2 - vec{V_R} ‧ (-vec{V_{R_para}}), vec{V_R} X (-vec{V_{R_para}}) + W_R (vec{V_R} -vec{V_{R_para}})] = [W_R^2 - vec{V_R} ‧ (-vec{V_{R_para}}), vec{V_R} X (-vec{V_{R_para}}) + W_R vec{V_{R_perp}}] Take notice of the vector part of the result: vec{V_R} X (-vec{V_{R_para}}) + W_R vec{V_{R_perp}} This is a vector parallel to the rotational axis of S. Both vec{V_R} X(-vec{V_{R_para}}) and vec{V_{R_perp}} are orthogonal to the twist axis vec{V_T}, so we have shown that the rotational axis of S is orthogonal to the twist axis. Hence, we have proven that the formulas for S and T are valid for swing-twist decomposition. Conclusion
That’s all.
Given a twist axis, I have shown how to decompose a rotation into a swing component and a twist component.
Such decomposition can be used for swing-twist interpolation, an alternative to slerp that interpolates between two orientations, which can be useful if you’d like some point on a rotating object to travel along the shortest arc.
I like to call such interpolation sterp.
Sterp is merely an alternative to slerp, not a replacement. Also, slerp is definitely more efficient than sterp. Most of the time slerp should work just fine, but if you find unwanted orientational sway on an object’s moving end, you might want to give sterp a try.

• Do you know any papers that cover custom data structures like lists or binary trees implemented in hlsl without CUDA that work perfectly fine no matter how many threads try to use them at any given time?

• Hi there.
I'm looking for some quick opinions, advice or other comments on my custom engine architecture.
For better or for worse, I have ended up with an ECS engine. I didn't intend to go this way, but countless searched through Google and the forum seem to confirm that this is the case. I have entities (mere Ids), components (pure data) and systems (holding raw resources and functionality) to operate on them. To be honest, I'm fairly happy with it.
However, I have yet to implement any actual logic into my 'game', and have been looking around for details on the various ways of handling interactivity, specifically, interactively between entities and components.
A topic that comes up a lot is events and event queues. I have not liked these. I don't want to add functionality to entities or components, and I don't like the idea of callbacks or event calling firing all over the place. So, I have been puzzling over this for the last two or so days. Eventually, I gave up on the musing and came to accept that some kind of event system is going to be needed. So, I had another look at the bitSquid blog (recommended on this forum), and something occurred to me. Isn't an event really just another form of entity? If it isn't, why isn't it?
I also realised that I already have something pretty similar running in my engine now. Specifically, my (admitted quite naive) implementation works more or less like this. The scene hands a list of physicalComponents and their corresponding placementComponents, and the collisionDetection sub-system iterates through them, looking for collisions. If it finds one, it creates a collision, adds it to the list, and moves on to the next one. Once it is finished, the collisionResolution sub-system goes through the list, and handles the collisions - again, currently very naively, by bouncing the objects off of one another.
So, I am wondering if I can just use this same approach to handle logical interactions. Entities with logical requirements have a collection of components related to interactivity (the range, the effect, and so on), and the various sub-systems iterate through potential candidates. If a sub-system notices an interaction, it creates an interactionEntity (with the necessary data), and the interactions are processed by the next sub-system.
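For what it's worth, that detect-then-resolve flow could be sketched roughly like this (the function names and the range/effect fields are illustrative, not taken from any particular engine):

```javascript
// Sketch of "interactions as entities": one sub-system detects
// interactions and records them as plain data, a second sub-system
// consumes that list. No callbacks fire during detection.
function detectInteractions(entities) {
  const interactions = [];
  for (const a of entities)
    for (const b of entities) {
      if (a === b) continue;
      const dx = a.x - b.x, dy = a.y - b.y;
      // An entity with a range component "notices" anything inside it.
      if (a.range && dx * dx + dy * dy <= a.range * a.range)
        interactions.push({ source: a.id, target: b.id, effect: a.effect });
    }
  return interactions;
}

function resolveInteractions(interactions, handlers) {
  // The resolution sub-system is the only place behaviour runs.
  for (const it of interactions) handlers[it.effect]?.(it);
}
```

Structurally this is the same shape as the collisionDetection / collisionResolution pair: the "event" is just a short-lived entity sitting in a list between two sub-system passes.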
I guess I'm looking for some feedback on this idea before I start implementing it. The hope is for more granularity in the components, and the ability to add a logical scripting system which combines various components into potential interactions and removes the need for any kind of event system. Or am I just repeating the general idea of events and event queues in a slightly more complicated way?
Additionally, any comments or commentary on this approach (ECS, and so on), would be very gratefully received. I've pretty much run out of resources at this point.
Regards,
Simon

• Thanks to @Randy Gaul, I successfully implemented cube/cube collision detection and response.
1. Subtract the centers of the two AABBs to get a 3D vector a.
2. If |x| is the largest component of a, the x axis identifies a face on each AABB.
3. If x points in the same (or exactly opposite) direction as a face normal, the two AABBs are colliding on those faces.
But these steps only work if both colliders are cubes, because the half-lengths differ per axis in a right square prism.
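The steps above can be sketched as follows, extended to non-cubic boxes by scaling each axis of the center difference by the combined half-extents before comparing (the function and parameter names here are illustrative):

```javascript
// Pick the contact face axis from the center difference, as in the
// steps above, but scaled per axis so elongated boxes work too.
function contactFaceNormal(centerA, halfA, centerB, halfB) {
  const d = [0, 1, 2].map(i => centerB[i] - centerA[i]);
  // Scale each component by the combined half-extents on that axis,
  // so a long thin box doesn't always pick its long axis.
  const s = [0, 1, 2].map(i => Math.abs(d[i]) / (halfA[i] + halfB[i]));
  let axis = 0;
  if (s[1] > s[axis]) axis = 1;
  if (s[2] > s[axis]) axis = 2;
  const n = [0, 0, 0];
  n[axis] = Math.sign(d[axis]) || 1;
  return n; // outward face normal on box A
}
```

For two unit cubes this reduces to exactly the largest-|component| rule in the steps above.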
Thank you!

• I've been digging around online and can't seem to find any formulas for 3D mesh simplification. I'm not sure where to start, but I'd generally like to write a function that takes an array of vertices, an array of indices, and a float/double for the decimation rate. And could I preserve the general shape of the object too?
Thanks for the help!
P.S. I was hoping to do something with Quadric Error / Quadric Edge Collapse if that's possible.
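Quadric Edge Collapse (the Garland-Heckbert quadric error metric) is indeed the usual answer here. As a starting point, here is a minimal sketch of the metric itself; the function names are illustrative, and a full simplifier would still need edge selection, collapse bookkeeping, and a priority queue on top of this:

```javascript
// Quadric error metric sketch: for a plane p = (a, b, c, d) with
// a^2 + b^2 + c^2 = 1, the fundamental quadric is K_p = p * p^T.
// A vertex's quadric Q is the sum of K_p over its adjacent faces,
// and the error of placing it at v = (x, y, z, 1) is v^T Q v,
// i.e. the sum of squared distances to those planes.
function planeQuadric(a, b, c, d) {
  const p = [a, b, c, d];
  return p.map(pi => p.map(pj => pi * pj)); // 4x4 outer product
}

function addQuadrics(A, B) {
  return A.map((row, i) => row.map((x, j) => x + B[i][j]));
}

function vertexError(Q, x, y, z) {
  const v = [x, y, z, 1];
  let e = 0;
  for (let i = 0; i < 4; i++)
    for (let j = 0; j < 4; j++) e += v[i] * Q[i][j] * v[j];
  return e;
}
```

To simplify, you would repeatedly collapse the edge whose merged-vertex error is smallest until the index count drops below your decimation-rate target; because the error measures squared distance to the original faces, the general shape is preserved for as long as possible.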
• By Dromo
I am about to start a PhD that will investigate ways of replicating creativity in the AI systems of simulated people in virtual environments. I will research which psychology theories and models to use in order to achieve this, with a focus on creative problem solving.

The aim of this project is to create virtual characters and NPCs that can devise new solutions to challenges, even ones they have never encountered before. This would mean that not every possible action or outcome would need to be coded for, so fewer development resources are required. Players would encounter virtual people that are not bound by rigid patterns of pre-scripted behaviour, increasing the replay value and lifespan of games, and the accuracy of simulations.

I am looking for companies or organisations that would be interested in working with me on my PhD, and I think computer games companies might be the most likely. I am trying to think of ways in which this new AI system might benefit games companies, or improvements and new types of games that might be possible. I am on this forum to ask for your thoughts and suggestions please, so I can approach games companies with some examples.

Thank you for your time and interest.

• INTRODUCTION - For a university course I'm building an application to assist with face-to-face tabletop RPG sessions. It has rooms in which the game master and the players can manage tokens. I want to add one more thing: a map where all the players in the room can see and move their "miniatures", and where the master can place his monsters. No animation is needed; it would simply replace the paper map normally used in face-to-face sessions.
PROBLEM - To build the map described in the introduction, I looked for tools but didn't find any good ones, so I'm asking for suggestions of APIs for this. Anyone who has played tabletop RPGs knows more or less what I want; I'll leave an image of Roll20, which is roughly similar, and some more images of how I want it to look on a phone for those who have never played. I just need to know a good tool for this.
I want the master himself to set the image used when starting the map (a photo, or maybe a tiled map).
Web solutions are fine too; I can embed the website in the application. Literally anything that does what I want will help.
If you can help me, I'll be grateful. Sorry for the awful English.

• Hey everybody,
I'm currently working on a simple HTML5 game and my javascript collision detection function isn't working. The game features a little man that runs from side to side at the bottom of the screen, while a meteor falls from the sky. The function is supposed to detect a collision between meteor and man.
In the routine, the top left corner of the man is at (player.x, player.y) and the top left corner of the meteor is at (meteor.x, meteor.y). The man is 25 pixels wide by 35 pixels tall. The meteor is 50 pixels wide by 50 pixels tall.
Any idea where I've screwed up in this function?
// =============================================================================
// Check for a collision between the 50 x 50 meteor and the 25 wide x 35 tall
// main character
//
// Main character is drawn at 540 and is 35 tall, so the top of the character
// is at y = 540 and the bottom is at y = 575.
//
// Function returns 1 if there has been a collision between the main
// character and the meteor, otherwise it returns 0.
// =============================================================================
function check_for_meteor_player_collision () {
    // edge positions for player and meteor
    var player_top = player.y;
    var player_bottom = player.y + 34;
    var player_left = player.x;
    var player_right = player.x + 24;
    var meteor_top = meteor.y;
    var meteor_bottom = meteor.y + 49;
    var meteor_left = meteor.x;
    var meteor_right = meteor.x + 49;

    var vertical_overlap = 0;
    var horizontal_overlap = 0;

    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    // Check for vertical overlap
    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    // Check if meteor bottom overlaps player
    if ((meteor_bottom >= player_top) && (meteor_bottom <= player_bottom)) {
        vertical_overlap = 1;
    }

    // Check if meteor top overlaps player
    if ((meteor_top >= player_top) && (meteor_top <= player_bottom)) {
        vertical_overlap = 1;
    }

    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    // Check for horizontal overlap
    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    // Check if meteor left side overlaps player
    if ((meteor_left >= player_left) && (meteor_left <= player_right)) {
        horizontal_overlap = 1;
    }

    // Check if meteor right side overlaps player
    if ((meteor_right >= player_left) && (meteor_right <= player_right)) {
        horizontal_overlap = 1;
    }

    // console.log("vertical_overlap = " + vertical_overlap);
    // console.log("horizontal_overlap = " + horizontal_overlap)

    // If we've got both a vertical overlap and a horizontal overlap,
    // we've got a collision
    if ((vertical_overlap == 1) && (horizontal_overlap == 1)) {
        return 1;
    }

    // if we've fallen through, we haven't detected a collision
    return 0;
}
// =============================================================================
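One likely culprit: the checks above only ask whether a meteor edge lies inside the player's span, which misses the containment case. The 50 px meteor can straddle the 25 px-wide player so that neither meteor edge is inside the player's range, yet they clearly overlap. A per-axis interval-overlap test covers every case, including containment; sketched here as a hypothetical `rects_collide` taking both objects as parameters rather than using globals:

```javascript
// Corrected AABB check (sketch): two intervals overlap iff each one
// starts before the other ends. Sizes match the post: player 25 x 35,
// meteor 50 x 50, with inclusive pixel edges (+24, +34, +49).
function rects_collide(player, meteor) {
  var horizontal = (meteor.x <= player.x + 24) && (meteor.x + 49 >= player.x);
  var vertical   = (meteor.y <= player.y + 34) && (meteor.y + 49 >= player.y);
  return (horizontal && vertical) ? 1 : 0;
}
```

With meteor.x = 90 and player.x = 100, for example, the meteor spans 90..139 and fully contains the player's 100..124, so the original edge checks report no horizontal overlap while this test correctly reports a hit.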