

Gumgo

Member Since 10 Jan 2007
Offline Last Active Dec 01 2014 01:40 AM

Topics I've Started

Techniques to avoid skinny triangles with constrained Delaunay triangulation

30 July 2013 - 10:35 PM

I am creating procedurally generated levels by stitching together "tilesets" created in a 3D modeling program such as 3DS max. Each tile consists of a 2x2x2 "section" of geometry, such as a wall, floor, corner, etc. These tiles are placed next to each other to form rooms and hallways.

 

After all the tiles are placed, I run a mesh simplification algorithm over the resulting geometry to reduce polygon counts for rendering and physics (and eventually NavMesh generation). The algorithm goes something like this:

1) Form groups of adjacent coplanar triangles that all have the same barycentric UV parameterization (i.e. removing vertices wouldn't cause "warping").

2) Combine each group into a single polygon, possibly with holes.

3) Remove excess collinear vertices from the boundaries.

4) Triangulate the polygons using constrained Delaunay triangulation.

 

The issue is that step 4) is prone to producing long, skinny triangles, which causes problems everywhere (e.g. breaking the thresholds I use to detect collinearity). Can anyone provide some advice on how to approach this problem, or point me to some resources or algorithms that deal with avoiding it?
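
For concreteness, here is roughly what I mean by "skinny", as a minimal sketch (the 15 degree tolerance is just a placeholder I made up, not a recommended value): a triangle is flagged if its smallest interior angle falls below a threshold.

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

static Vec2  Sub(const Vec2 &a, const Vec2 &b) { return { a.x - b.x, a.y - b.y }; }
static float Dot(const Vec2 &a, const Vec2 &b) { return a.x * b.x + a.y * b.y; }
static float Length(const Vec2 &v)             { return std::sqrt(v.x * v.x + v.y * v.y); }

// Smallest interior angle of triangle (a, b, c), in radians.
// Assumes the triangle is not degenerate (no zero-length edges).
float MinAngle(const Vec2 &a, const Vec2 &b, const Vec2 &c) {
    const Vec2 ab = Sub(b, a), ac = Sub(c, a);
    const Vec2 ba = Sub(a, b), bc = Sub(c, b);
    const Vec2 ca = Sub(a, c), cb = Sub(b, c);
    const float angleA = std::acos(Dot(ab, ac) / (Length(ab) * Length(ac)));
    const float angleB = std::acos(Dot(ba, bc) / (Length(ba) * Length(bc)));
    const float angleC = std::acos(Dot(ca, cb) / (Length(ca) * Length(cb)));
    return std::min(angleA, std::min(angleB, angleC));
}

bool IsSkinny(const Vec2 &a, const Vec2 &b, const Vec2 &c) {
    const float kMinAngleRadians = 15.0f * 3.14159265f / 180.0f;  // arbitrary tolerance
    return MinAngle(a, b, c) < kMinAngleRadians;
}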


Robust algorithm to determine prediction time

22 June 2013 - 01:53 AM

I'm performing client side prediction, so the clients need to run ahead of the server. The clients need to run ahead far enough so that their input packets are received by the server slightly before they actually need to be executed. If they arrive too late, the server would either need to back up the world (which I'm not doing) or make some estimations (e.g. the player was moving forward so they probably still are) and try to fix things up as much as possible once the input packet does arrive (e.g. the player might jump 1 frame late).

 

I'm having some trouble determining the amount to predict by in a robust way that deals well with jitter and changing network conditions. Sometimes I "get it right" and the game runs well, but other times the prediction is a bit too short and there are a lot of input packets arriving too late, resulting in client/server discrepancies (where the server is actually "wrong" in this case) and lots of client correction.

 

Call the prediction time P. A summary of the ideal behavior is:

- P should be the smallest value that is large enough so that, under "stable" conditions (even stable bad conditions), all input packets arrive at the server in time.

- If there is a brief network glitch, it should be ignored and P should not be readjusted.

- If network conditions change (e.g. RTT or jitter goes up) P should be readjusted.

 

Currently, my algorithm to determine the prediction time P is as follows (a stripped-down sketch in code appears after this list):

 

- Packets are sent at a fixed rate R. Let S = 1/R be the spacing between two packets (e.g. R = 30 packets/sec means S = 33ms).

- The ping time and jitter of each outgoing packet are measured. The ping time is measured by taking the difference between the time the packet was sent and the time the ACK for the packet was received. Note that since packets (which contain ACKs) are sent with a spacing S, the ping times will be somewhat "quantized" by S. To get an average ping time (RTT), I use an exponential moving average with a ratio of 0.1. To compute the average jitter J, I take the difference between each packet's ping time and the current average RTT and compute the exponential moving variance (section 9 of this article).

- So I have RTT and J; even though they are averages, they still fluctuate a bit, especially J.

- In each packet, there is a server time update message. Call this T_p, for packet time. I could just let P = RTT + T_p, but then P would be jittery, because RTT is changing and T_p has jitter to it. Instead, to compute P I do the following:

- I compute P_f, the prediction value associated with this single frame, which will be jittery. My algorithm is P_f = T_p + RTT + 2*J + S. My reasoning for each term: RTT because T_p is already about RTT/2 stale by the time we receive it and our input needs roughly another RTT/2 to reach the server; 2*J because we want some padding for jitter; and S because we want some more padding for the quantization error described above.

- Each time I compute a new P_f for the frame, I update P_a, which is an exponential moving average of P_f with ratio = 0.1.

- On the very first frame, I set P to P_f, since I only have one data point. On subsequent frames, I increment P by one frame's duration, so that it is constantly and smoothly increasing, and I check whether P needs to be "re-synced":

- I take the value D = abs(P - P_a), the difference between the prediction time P and the average per-frame prediction time P_a, and test whether D > 2*J + S. If that expression is true, I "re-sync" P by assigning P = P_a. I picked the value 2*J + S with the reasoning that if our predicted time differs from the average per-frame predicted time by more than twice the jitter (with padding for quantization), then it's likely wrong.
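
Here is the stripped-down sketch mentioned above, in one place. It is only an illustration of the steps described (the 0.1 ratio and the terms of the formula are the ones from this post; the names, units and everything else are made up for the sketch):

#include <cmath>

// All times are in seconds. kRatio is the exponential moving average ratio.
struct PredictionClock {
    static constexpr float kRatio = 0.1f;

    float rtt = 0.0f;             // exponential moving average of ping samples (RTT)
    float jitterVariance = 0.0f;  // exponential moving variance of ping samples
    float P = 0.0f;               // current prediction time
    float P_a = 0.0f;             // exponential moving average of per-frame P_f values
    bool  firstFrame = true;

    // Called whenever an ACK yields a new ping sample.
    void OnPingSample(float ping) {
        const float deviation = ping - rtt;
        rtt += kRatio * deviation;
        // Exponential moving variance, as I understand the formula in the article.
        jitterVariance = (1.0f - kRatio) * (jitterVariance + kRatio * deviation * deviation);
    }

    float Jitter() const { return std::sqrt(jitterVariance); }

    // Called once per frame with the latest packet time T_p, the packet
    // spacing S, and the frame duration dt.
    void Update(float T_p, float S, float dt) {
        const float J = Jitter();
        const float P_f = T_p + rtt + 2.0f * J + S;  // per-frame prediction value

        if (firstFrame) {
            P = P_f;
            P_a = P_f;
            firstFrame = false;
            return;
        }

        P_a += kRatio * (P_f - P_a);  // smooth the per-frame values
        P += dt;                      // advance smoothly every frame

        if (std::fabs(P - P_a) > 2.0f * J + S)
            P = P_a;                  // re-sync
    }
};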

 

Can anyone suggest improvements to this algorithm, or suggest a better one? My main problem right now seems to be determining when to re-sync P. Performing too many re-syncs results in glitches and corrections, but if P is too low and I don't re-sync (because it is "close enough" to P_a), that also results in glitches and corrections.


Memory allocation in practice

27 April 2013 - 04:00 PM

So from everything I've read, game engines are one type of software where having fine control over how memory is allocated is often very important. I often read that during gameplay you ideally don't want to allocate any memory at all - it should all be preallocated when the level loads. This seems like a good idea, but in practice there are many cases where it seems very difficult to judge what the allocation limits should be. I've got some specific examples in mind, which I'll list below. Could someone give examples of how these decisions might be made in practice?
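
For context, when I say "preallocated" I'm picturing something like the following minimal fixed-capacity pool - just an illustration of the idea (made-up names, not code from any particular engine), where all storage is reserved up front and Acquire()/Release() never touch the heap:

#include <cassert>
#include <cstddef>
#include <vector>

// Note: T must be default-constructible for this simple sketch.
template <typename T>
class FixedPool {
public:
    // All storage is reserved here, e.g. at level load time.
    explicit FixedPool(std::size_t capacity)
        : storage_(capacity), freeList_(capacity) {
        for (std::size_t i = 0; i < capacity; ++i)
            freeList_[i] = capacity - 1 - i;  // every slot starts out free
    }

    // Returns nullptr when the pool is exhausted - which is exactly the
    // "what should the limit be?" question below.
    T *Acquire() {
        if (freeList_.empty())
            return nullptr;
        const std::size_t index = freeList_.back();
        freeList_.pop_back();
        return &storage_[index];
    }

    void Release(T *object) {
        assert(object >= storage_.data() && object < storage_.data() + storage_.size());
        freeList_.push_back(static_cast<std::size_t>(object - storage_.data()));
    }

private:
    std::vector<T> storage_;             // preallocated objects
    std::vector<std::size_t> freeList_;  // indices of unused slots
};

// Usage: FixedPool<Enemy> enemies(kMaxEnemies); - and picking kMaxEnemies is
// the kind of decision the questions below are about.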

 

(One thing to keep in mind is that in the particular project I'm working on, the levels are procedurally generated.)

 

- Enemy limits. Normally the player might be fighting maybe 5-6 enemies at once. But what if the player runs around the level and gathers a huge crowd? Would you allocate for the worst possible case?

 

- Similarly with items. Five players fire bows and arrows as rapidly as possible, and there are a ton of arrow items stuck in walls. Should I calculate a "max fire rate" and determine the maximum possible number of arrows that could be fired and stuck in walls before they despawn? It seems like it might be really hard to determine these limits for certain gameplay elements. And networked objects just complicate things further, since their ordering isn't guaranteed.

 

- Network buffers. Messages that are guaranteed are queued up until the ACK has been received. But if there's a network blip, the queue might briefly have to get big.

 

- Objects that have variable sizes. For example, suppose one enemy has a skeletal mesh with 3 bones, and another has 20 bones. Each requires a "state" buffer. But if I allocate space for at most, say, 100 "enemies", I'd have to assume worst case (they all have 20 bones) if I didn't do some sort of on-demand allocation.

 

- Strings. Perhaps they should be avoided altogether. Right now, for example, I have a file which describes the "sword" item, and an enemy might spawn "sword" when killed. Should these all be preprocessed into indices? (See the sketch at the end of this post.)

 

- Components in an entity component system. If it's possible to dynamically add components (e.g. an enemy is OnFire, Frozen, and Poisoned), should you assume the worst and allocate in case all enemies obtain the largest possible combination of dynamic components?

 

It seems like a lot of these "worst cases" could potentially be pretty hard to spot, especially if there's any sort of user-provided scripting system. What is usually done to deal with these types of issues?
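
Regarding the strings bullet above, the preprocessing I have in mind is roughly this (a sketch with made-up names): "sword" is resolved to an index once at load time, and gameplay code only ever stores and passes around the index.

#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

using ItemId = std::uint32_t;

// Built once while loading item definition files; after that, gameplay code
// only stores ItemIds and never touches the strings again.
class ItemDatabase {
public:
    ItemId Register(const std::string &name) {
        const auto it = idsByName_.find(name);
        if (it != idsByName_.end())
            return it->second;
        const ItemId id = static_cast<ItemId>(names_.size());
        names_.push_back(name);
        idsByName_.emplace(name, id);
        return id;
    }

    // Load-time only, e.g. resolving "sword" in an enemy's drop table.
    ItemId Lookup(const std::string &name) const { return idsByName_.at(name); }

    const std::string &Name(ItemId id) const { return names_[id]; }

private:
    std::vector<std::string> names_;
    std::unordered_map<std::string, ItemId> idsByName_;
};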


Deciding when packets are dropped

12 December 2012 - 04:30 PM

I've implemented a system on top of UDP similar to the one described here. I have the system working well, and now I'm trying to add some "higher level functionality" on top of it to make it easier to do things like send guaranteed messages. I have a few questions about how dropped packets are handled. (Sorry they're a bit lengthy, but network code has lots of subtle issues that can be hard to explain!)

1) First of all, what is the best approach to detect likely dropped packets? The article's suggestion is basically the following: if a packet can't be ACKed anymore (i.e. the ACK bitfield window has moved past that packet), it's probably been dropped. So for example, suppose packets are being sent at a rate of 30 per second and the ACK bitfield contains 30 ACKs. If you send packet A at time t = 1, it will take about 1 second to receive ACKs for the last 30 packets. If packet A is never ACKed in this 1 second, it can NEVER be ACKed (the ACK bitfield window no longer contains packet A), so A should be considered dropped.

But this seems weird to me. If you were sending packets at a rate of 10 per second and the ACK bitfield was still 30 long, it would take 3 seconds for packet A to fall off the ACK bitfield. But 3 seconds seems too long to wait for detecting dropped packets. Or if you were sending packets at a rate of 60 per second and the ACK bitfield was 30 long, packet A would only be on the bitfield for 0.5 seconds.

The article doesn't mention using RTT to determine dropped packets, but that seems like a better indicator to me. Obviously if a packet is no longer in the ACK bitfield window it should be considered dropped (since it CANNOT be ACKed), but maybe an additional criterion of, say, a maximum ACK time of RTT*2 should be included as well? How is this usually done?
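
In code, the combined test I'm imagining is roughly this (just a sketch; the window size of 30 and the RTT*2 multiplier come from the examples above, everything else is made up):

#include <cstdint>

struct SentPacketInfo {
    std::uint16_t sequence;  // sequence number of the sent packet
    double        sendTime;  // local time the packet was sent, in seconds
    bool          acked;
};

constexpr int kAckWindowSize = 30;  // number of ACK bits carried per packet

bool IsProbablyDropped(const SentPacketInfo &packet,
                       std::uint16_t latestRemoteAck,  // newest sequence ACKed by the remote side
                       double now,
                       double rttEstimate) {
    if (packet.acked)
        return false;

    // 1) The ACK bitfield window has moved past this packet, so it can never
    //    be ACKed (the article's criterion). The signed 16-bit difference
    //    handles sequence number wraparound.
    const std::int16_t age = static_cast<std::int16_t>(
        static_cast<std::uint16_t>(latestRemoteAck - packet.sequence));
    if (age >= kAckWindowSize)
        return true;

    // 2) Additional time-based criterion: unACKed for longer than RTT * 2.
    if (now - packet.sendTime > rttEstimate * 2.0)
        return true;

    return false;
}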

2) When you detect a packet that's likely dropped, how is that usually treated? I can see two options here:
a) Decide that the packet IS dropped and ignore it, even if an ACK for it comes in later
b) If an ACK for the packet comes in later, still process it
Obviously, option a) could be more wasteful, since once you've decided that a packet is dropped, you can't ever "change your mind" upon receiving an ACK. However, the reason I'm considering it is that it simplifies things a lot.

For example, suppose you send guaranteed message M in packet A, and after RTT*2 (or whatever your criteria is), you haven't received an ACK and decide it's probably dropped. You then resend M in packet B. If you go with option a), you can forget all about packet A and you only need to wait for an ACK from packet B. The downside is that if you end up receiving an ACK for packet A after you've determined it's been dropped, you can't actually use this ACK. A potential issue I see with this is if the RTT suddenly jumps up a lot, you might go through a brief period where you decide a bunch of packets are dropped when in reality they're just a bit more delayed. This would occur until the dropped packet cutoff time is adjusted to take the new RTT into account. I'm also not sure how this would work with network jitter - if there was high jitter then I guess more packets would be considered dropped.

But if you go with option b), it's still possible that you could receive an ACK for packet A! So the issue described above wouldn't occur, and you'd have the ACK information available sooner than if you waited for packet B's ACK. But in order to respond in this case, you'd need to keep some information for packet A around; i.e. you'll need to store "message M is pending in both packet A and packet B"; for each guaranteed message M, you need to store a list of packets it's been sent in, rather than just a single packet (the most recent one). You don't want this list to grow forever though, so you need to eventually "forget" about old packets, and deciding when to do this isn't exactly clear.

Right now what I'm doing is keeping track of each packet containing guaranteed messages until all guaranteed messages in that packet have been ACKed. So for example, message M has a reference to packets A and B, and packets A and B have references to message M. If B is successfully ACKed, I notice that B contains M, so I remove M from both A and B, and now I notice that A is empty, so I free A. But this just seems... "messy" and somewhat unbounded. If there are unexpected network issues and I end up resending message M several times, I'd suddenly be keeping track of a lot of old packets which probably didn't get through. And the worst case is if a (virtual) connection is dropped - if I have a 10 second timeout and a bunch of packets containing M never get through, I'd have to keep track of 10 seconds' worth of resent packets until the connection timeout is detected!
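
For clarity, the bookkeeping I'm describing looks roughly like this (a sketch with made-up names; the real code has more state in it):

#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

using PacketId  = std::uint16_t;
using MessageId = std::uint32_t;

struct ReliabilityState {
    // message -> packets it is currently riding in
    std::unordered_map<MessageId, std::unordered_set<PacketId>> packetsByMessage;
    // packet -> guaranteed messages it carried
    std::unordered_map<PacketId, std::vector<MessageId>> messagesByPacket;

    void OnMessageSent(MessageId message, PacketId packet) {
        packetsByMessage[message].insert(packet);
        messagesByPacket[packet].push_back(message);
    }

    // When packet B is ACKed, every message it carried is delivered, so each
    // such message is removed from *all* packets containing it; packets that
    // end up empty (like A in the example above) are freed.
    void OnPacketAcked(PacketId ackedPacket) {
        const auto packetIt = messagesByPacket.find(ackedPacket);
        if (packetIt == messagesByPacket.end())
            return;
        for (MessageId message : packetIt->second) {
            const auto msgIt = packetsByMessage.find(message);
            if (msgIt == packetsByMessage.end())
                continue;
            for (PacketId other : msgIt->second) {
                if (other == ackedPacket)
                    continue;
                const auto otherIt = messagesByPacket.find(other);
                if (otherIt == messagesByPacket.end())
                    continue;  // already freed
                auto &messages = otherIt->second;
                messages.erase(std::remove(messages.begin(), messages.end(), message),
                               messages.end());
                if (messages.empty())
                    messagesByPacket.erase(otherIt);  // packet is now empty, free it
            }
            packetsByMessage.erase(msgIt);  // message delivered
        }
        messagesByPacket.erase(ackedPacket);
    }
};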

Does anyone have a suggestion on this issue? To summarize: when a packet is likely dropped, should I a) decide that it IS dropped, or b) still allow it to change its status to "delivered" if it's ACKed at a later time?

Dividing force between multiple contact points

22 November 2012 - 09:15 PM

Here is my situation: I have a rigid body with finite mass in contact with N surfaces of infinite mass. I want to apply a force f to the rigid body. This force is divided among the N surfaces: for each surface s_i, a certain proportion of the force, w_i*f, is applied to it, where the sum of all the w_i is at most 1. The resulting force pushing back from each surface is -w_i*(f·n_i)*n_i, i.e. the (negated) normal component of the force applied to that surface.

How do I compute the weights w_i for each surface? I attached an example image.
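
Just to pin down the setup in code (this is only the problem statement, not a solution, and it assumes "normal component" means the projection of w_i*f onto the unit normal n_i): given candidate weights, the per-surface reactions would be computed like this. The unknown is how to choose the weights in the first place.

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Scale(const Vec3 &v, float s)     { return { v.x * s, v.y * s, v.z * s }; }

// Given the applied force f, unit surface normals n[i], and candidate weights
// w[i] (summing to at most 1), compute the reaction force from each surface:
// minus the normal component of the portion w[i]*f applied to that surface.
std::vector<Vec3> ComputeReactions(const Vec3 &f,
                                   const std::vector<Vec3> &normals,
                                   const std::vector<float> &weights) {
    std::vector<Vec3> reactions;
    reactions.reserve(normals.size());
    for (std::size_t i = 0; i < normals.size(); ++i) {
        const float normalComponent = weights[i] * Dot(f, normals[i]);
        reactions.push_back(Scale(normals[i], -normalComponent));
    }
    return reactions;
}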
