# Gumgo

1. ## Techniques to avoid skinny triangles with constrained delaunay triangulation

I am creating procedurally generated levels by stitching together "tilesets" created in a 3D modeling program such as 3DS Max. Each tile consists of a 2x2x2 "section" of geometry, such as a wall, floor, or corner. These tiles are placed next to each other to form rooms and hallways.

After all the tiles are placed, I run a mesh simplification algorithm over the resulting geometry to reduce polygon counts for rendering and physics (and eventually NavMesh generation). The algorithm goes something like this:

1. Form groups of adjacent coplanar triangles that all have the same UV barycentric parameterizations (i.e. removing vertices wouldn't cause "warping").
2. Combine each group into a single polygon, possibly with holes.
3. Remove excess collinear vertices from the boundaries.
4. Triangulate the polygons using constrained Delaunay triangulation.

The issue is that step 4 is prone to producing long, skinny triangles, which cause problems everywhere (e.g. breaking the thresholds I use to detect collinearity). Can anyone offer advice on how to approach this problem, or point me to resources or algorithms that deal with avoiding skinny triangles?
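For reference, the usual remedy here is Delaunay refinement (e.g. Ruppert's or Chew's algorithm), which inserts Steiner points until every triangle's minimum angle clears a threshold; CGAL and the Triangle library both implement it. A rough sketch of the quality test such a refiner is built around (the function name and the ~20 degree threshold are illustrative, not taken from any particular library):

```python
import math

def min_angle_deg(a, b, c):
    """Smallest interior angle of triangle abc in degrees; slivers score near 0."""
    def angle_at(p, q, r):
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return min(angle_at(a, b, c), angle_at(b, c, a), angle_at(c, a, b))

# A refinement loop would insert a Steiner point (e.g. at the circumcenter)
# for any triangle whose minimum angle falls below roughly 20 degrees,
# re-triangulate, and repeat until every triangle passes.
```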
2. ## Robust algorithm to determine prediction time

Ah thanks, that sounds like a way better idea! It makes more sense for the server to simply tell each client how far off they are, rather than the clients trying to guess. I'll go ahead and implement that!
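To make sure I understand the suggestion, here's the kind of server-side correction I have in mind (names and the `target_margin` parameter are my own, not from any engine):

```python
def clock_nudge(arrival_tick, intended_tick, target_margin=1):
    """How much the server tells a client to adjust its prediction clock."""
    # slack = how many ticks early the client's input arrived.
    slack = intended_tick - arrival_tick
    # Positive: the client is running further ahead than needed and should
    # slow its clock down; negative: it should speed up.
    return slack - target_margin
```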

4. ## Memory allocation in practice

Thanks for the replies. My target platforms are PC/Mac and iPad. I'm certainly glad to hear that these allocation issues are less of an issue on platforms with virtual memory. From what I've read, it seems like the iPad virtual memory setup is fairly similar to that of a PC, with the exception of swapping memory to disk (which shouldn't happen in a game anyway). Can anyone comment on this?
5. ## Memory allocation in practice

So from everything I've read, game engines are one type of software where fine control over memory allocation is often very important. I often read that during gameplay you ideally don't want to allocate any memory at all: everything should be preallocated when the level loads. This seems like a good idea, but in practice there are many cases where it's very difficult to judge what the allocation limits should be. I've got some specific examples in mind, listed below. Could someone give examples of how these decisions might be made in practice?

(One thing to keep in mind is that in the particular project I'm working on, the levels are procedurally generated.)

- Enemy limits. Normally the player might be fighting maybe 5-6 enemies at once. But what if the player runs around the level and gathers a huge crowd? Would you allocate for the worst possible case?
- Similarly with items. Five players fire bows as rapidly as possible, and there are a ton of arrow items stuck in walls. Should I calculate a "max fire rate" and determine the maximum possible number of arrows that could be fired and stuck in walls before they despawn? It seems like it might be really hard to determine these limits for certain gameplay elements, and networked objects complicate things further, since their ordering isn't guaranteed.
- Network buffers. Messages that are guaranteed are queued up until an ACK has been received, but if there's a network blip, the queue might briefly have to get big.
- Objects with variable sizes. For example, suppose one enemy has a skeletal mesh with 3 bones and another has 20 bones, and each requires a "state" buffer. If I allocate space for at most, say, 100 enemies, I'd have to assume the worst case (they all have 20 bones) unless I did some sort of on-demand allocation.
- Strings. Perhaps they should be avoided altogether. Right now, for example, I have a file which describes the "sword" item, and an enemy might spawn "sword" when killed. Should these all be preprocessed into indices?
- Components in an entity component system. If it's possible to dynamically add components (e.g. an enemy is OnFire, Frozen, and Poisoned), should you assume the worst and allocate for every enemy obtaining the largest possible combination of dynamic components?

It seems like a lot of these "worst cases" could be pretty hard to spot, especially if there's any sort of user-provided scripting system. What is usually done to deal with these kinds of issues?
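To illustrate the preallocation pattern in question, here's a minimal fixed-capacity object pool, sketched in Python for brevity (a real engine version would be C++ with an intrusive free list; all names here are my own):

```python
class ObjectPool:
    """Fixed-capacity pool: every slot is allocated up front (e.g. at level
    load); acquire/release never touch the general-purpose allocator."""

    def __init__(self, factory, capacity):
        self._slots = [factory() for _ in range(capacity)]
        self._free = list(range(capacity))               # indices of unused slots
        self._index = {id(s): i for i, s in enumerate(self._slots)}

    def acquire(self):
        if not self._free:
            return None    # budget exhausted: the caller must degrade gracefully
        return self._slots[self._free.pop()]

    def release(self, obj):
        self._free.append(self._index[id(obj)])
```

The `acquire` returning `None` is exactly where the "what is the limit?" question bites: the game needs a graceful policy (refuse to spawn, recycle the oldest arrow, etc.) for when the budget runs out.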
6. ## Deciding when packets are dropped

Thanks, this information has been very helpful! I ended up going with hplus0603's method: dropping packets that arrive out of order and resending the moment I detect a missing ACK, which greatly simplifies things.

On the topic of reliability/ordering of messages, I had a thought today. It would probably have far too much overhead to be practical, but have any games implemented a dependency graph for message ordering, rather than a simple ordered stream? An approach might work as follows:

- Each message has an ID (probably 2 bytes, maybe with some upper bits used as flags).
- Each message is sent with a list of IDs it depends on (this would be the overhead-y part!).
- When messages are received, their dependency graph is evaluated, and they are only executed after all of their dependencies have been executed.
- When the sender determines that all messages a given message depends on have been processed, it removes those dependencies to keep the dependency lists short. To do this, it examines incoming ACKs and figures out which messages must have been executed.

An example of why the last point is important: suppose you spawn an enemy and much later it gets killed. The "enemy killed" message should depend on the "enemy spawned" message, but for that to work the receiver needs to keep track of the "enemy spawned" message for a long time! If you can confirm via ACKs that the "enemy spawned" message was already executed, you can simply forget about it. And if you hang onto dependencies for too long, you'll have issues when the 2-byte message IDs wrap around.

This is just the general extension of having multiple message channels, each with independent ordering. Of course, the cost is probably extremely high. Has anyone done something like this before?
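To make the receiver side of the idea concrete, here's a small sketch (all names are mine) that buffers incoming messages until their dependencies have executed:

```python
class DependencyReceiver:
    """Buffers incoming messages and executes each one only after every
    message it depends on has executed."""

    def __init__(self, run):
        self.run = run           # callback that actually applies a message
        self.executed = set()    # IDs of messages already applied
        self.pending = {}        # msg_id -> (set of dependency IDs, payload)

    def receive(self, msg_id, deps, payload):
        self.pending[msg_id] = (set(deps), payload)
        self._drain()

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for msg_id, (deps, payload) in list(self.pending.items()):
                if deps <= self.executed:        # all dependencies done
                    self.run(payload)
                    self.executed.add(msg_id)
                    del self.pending[msg_id]
                    progress = True
```

In practice `executed` would have to be pruned using the ACK trick described above, and wrapped 2-byte IDs would need explicit handling; this sketch ignores both.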
7. ## Deciding when packets are dropped

Thanks for the replies. From your posts, it sounds like UDP "isn't as bad" as I had expected; I was under the impression that packets arrive out of order or delayed fairly regularly. But hplus0603, if you're simply dropping packets whenever they arrive late and it hasn't been an issue, then that probably doesn't happen too often. I know the answer probably depends on a lot of factors, but roughly how often do issues such as out-of-order or dropped packets occur with UDP under a regular network load?

9. ## Dividing force between multiple contact points

Not looking for anything sophisticated, just keeping the player from falling through the world. Though after this phase of the update, I will have to do some discrete solving to push overlapping entities apart (everything is capsules or spheres though).
10. ## Dividing force between multiple contact points

Thanks for the reply. I'm currently doing something similar to what you describe in #1: continuous collision detection, where I start with t = 0, solve for the first contact, step to that point, add to t, adjust velocity, and repeat until t = 1. The issue is in the "adjust velocity" step.

Usually, when coming into contact with a new surface, the velocity should be adjusted by subtracting n*(n·v) from the current velocity; that is, you remove the component of the velocity parallel to the normal of the surface you just contacted. However, if you're actually in contact with several surfaces, it's more complicated. Consider the following case, as illustrated in the attached image: your character is walking along surface A and then comes into contact with a sloped ceiling, surface B. The desired reaction is that the initial velocity vector, v1, is projected onto the line where the two surfaces meet (i.e. the cross product of their normal vectors), producing the post-intersection velocity v2. However, if you only take a single contact into account when processing contacts, the following happens:

1. The object is moving along surface A.
2. The object hits surface B, and the component of the velocity parallel to surface B's normal is removed. This causes the object's new velocity to point slightly downward.
3. Immediately after this ("0 time later"), an intersection with surface A is detected. The component of the velocity parallel to surface A's normal is removed, and the object's new velocity again points into surface B.
4. Immediately after this ("0 time later"), an intersection with surface B is detected...

...and the process repeats until you hit an iteration cap. Essentially, there are two contacts, and each alters the object's velocity so that it points into the other surface, so no progress is ever made.

In my initial post, what I was trying to get at was a general solution to this. You first build a list of all surfaces the object is in contact with. Then you compute the amount of force that would be applied to each surface when attempting to move the object by its current velocity. Finally, you have each surface push back with a force equal to the one exerted on it, yielding the true final velocity of the object. However, I now realize there are really three cases to deal with:

1. The object is in contact with only one surface, e.g. walking up a hill. In this case, you simply remove the velocity projected onto the normal.
2. The object is in contact with two surfaces, e.g. the example image. In this case, if the velocity points toward the "crease" formed by the two surfaces, you project the velocity onto the cross product of the two normals and take that as your new velocity; otherwise, you revert to case 1, using the plane the velocity vector points most toward.
3. The object is "pinned" by three or more contact points (e.g. attempting to walk into a sharp corner). In this case, the object can't move at all, so just set the velocity to 0 and end the timestep.

If you have just one contact point, it's always case 1. With two contact points, it could be case 1 or case 2 depending on where the velocity vector points, but it's not too hard to tell. Case 3, however, seems trickier: with N contact points, any of these cases could occur. Consider standing in a corner, contacting two walls and the floor. If you walk directly OUT of the corner, it's case 1: you're only "pushing" against one contact, the floor. If you walk OUT of the corner along one of the walls, it's case 2: you're moving away from one wall but pushing against the floor and the other wall. If you walk INTO the corner, it's case 3: you're "pinned" by all three surfaces. Any ideas on how to robustly distinguish between these cases?
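For what it's worth, here's how I'd sketch the case logic as an accumulating active set, in the spirit of Quake-style clip-velocity loops (the function name and epsilon are arbitrary). Because a plane stays in the active set once it has clipped the velocity, the A/B ping-pong can't occur:

```python
import numpy as np

def clip_velocity(v, normals, eps=1e-6):
    """Clip velocity v against unit contact normals; returns the slide velocity."""
    v = np.asarray(v, dtype=float)
    active = []
    for _ in range(len(normals) + 1):
        # Find a surface the current velocity still pushes into.
        hit = next((n for n in normals if np.dot(v, n) < -eps), None)
        if hit is None:
            return v                           # no longer pushing into anything
        active.append(np.asarray(hit, dtype=float))
        if len(active) == 1:                   # case 1: slide along the plane
            n = active[0]
            v = v - np.dot(v, n) * n
        elif len(active) == 2:                 # case 2: slide along the crease
            d = np.cross(active[0], active[1])
            length = np.linalg.norm(d)
            if length < eps:                   # near-parallel planes: give up
                return np.zeros(3)
            d /= length
            v = np.dot(v, d) * d
        else:                                  # case 3: pinned by 3+ surfaces
            return np.zeros(3)
    return v
```

The loop terminates because each pass either returns or adds a plane the velocity can no longer violate (after a case-1 projection, v·n = 0 for that plane; after case 2, v is perpendicular to both active normals).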
11. ## Dividing force between multiple contact points

Here is my situation: I have a rigid body with finite mass in contact with N surfaces of infinite mass. I want to apply a force f to the rigid body. This force is divided among the N surfaces: each surface s_i receives some proportion w_i * f of it, where the w_i sum to at most 1. The resulting force pushing back from surface i is -w_i * (f · n_i) * n_i, i.e. the normal component of the force applied to that surface. How do I compute the weights w_i for each surface? I've attached an example image.
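In case it helps frame answers: this is a frictionless-contact linear complementarity problem, and one standard way to solve it is projected Gauss-Seidel (sequential impulses), which computes a nonnegative pushback magnitude per surface rather than the weights directly. A rough sketch under those assumptions (unit normals; names are mine):

```python
import numpy as np

def resolve_force(f, normals, iterations=20):
    """Net force after N infinite-mass surfaces push back, via projected
    Gauss-Seidel: surface i applies lam[i] * n_i with lam[i] >= 0, chosen so
    the result no longer pushes into any surface. Normals must be unit length."""
    f = np.asarray(f, dtype=float)
    normals = [np.asarray(n, dtype=float) for n in normals]
    lam = np.zeros(len(normals))
    for _ in range(iterations):
        for i, n in enumerate(normals):
            net = f + sum(l * m for l, m in zip(lam, normals))
            # Grow lam[i] while net still pushes into surface i; clamp at 0.
            lam[i] = max(0.0, lam[i] - np.dot(net, n))
    return f + sum(l * n for l, n in zip(lam, normals))
```

A nice side effect is that surfaces ending with lam[i] == 0 are exactly the ones the body is moving away from, which separates the "sliding" and "pinned" situations automatically.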
12. ## Normal mapping without tangent space

That is it! Thanks!