
Gumgo

Member Since 10 Jan 2007
Offline Last Active Dec 01 2014 01:40 AM

#5072116 Robust algorithm to determine prediction time

Posted on 22 June 2013 - 07:36 PM

Ah thanks, that sounds like a way better idea! It makes more sense for the server to simply tell each client how far off they are, rather than the clients trying to guess. I'll go ahead and implement that!

#5071951 Robust algorithm to determine prediction time

Posted on 22 June 2013 - 01:53 AM

I'm performing client side prediction, so the clients need to run ahead of the server. The clients need to run ahead far enough so that their input packets are received by the server slightly before they actually need to be executed. If they arrive too late, the server would either need to back up the world (which I'm not doing) or make some estimations (e.g. the player was moving forward so they probably still are) and try to fix things up as much as possible once the input packet does arrive (e.g. the player might jump 1 frame late).

I'm having some trouble determining the amount to predict by in a robust way that deals well with jitter and changing network conditions. Sometimes I "get it right" and the game runs well, but other times the prediction is a bit too short and a lot of input packets arrive too late, resulting in client/server discrepancies (where the server is actually "wrong" in this case) and lots of client correction.

Call the prediction time P. A summary of the ideal behavior is:

- P should be the smallest value that is still large enough that, under "stable" conditions (even stable bad conditions), all input packets arrive at the server in time.

- If there is a brief network glitch, it should be ignored and P should not be readjusted.

- If network conditions change (e.g. RTT or jitter goes up), P should be readjusted.

Currently, my algorithm to determine prediction time P is as follows:

- Packets are sent at a fixed rate R. Let S = 1/R be the spacing between two packets (e.g. R = 30 packets/sec means S = 33ms).

- The ping time and jitter of each outgoing packet is measured. The ping time is measured by taking the difference between the time the packet was sent and the time the ACK for the packet was received. Note that since packets (which contain ACKs) are sent with a spacing S, this means that the ping times will be somewhat "quantized" by S. To get an average ping time (RTT), I use an exponential moving average with a ratio of 0.1. To compute average jitter J, I take the differences between each packet's ping time and the current average RTT and compute the exponential moving variance (section 9 of this article).

- I have RTT and J - even though they are averages they still fluctuate a bit, especially J.

- In each packet, there is a server time update message. Call this T_p, for packet time. I could just let P = T_p + RTT, but then P would be jittery because RTT is changing and T_p itself has jitter. Instead, to compute P I do the following:

- I compute P_f - the prediction value associated with this single frame, which will be jittery. My formula is P_f = T_p + RTT + 2*J + S. My reasoning for each term: RTT because we want to predict ahead of the server's current time by RTT/2 (T_p is already RTT/2 old when it arrives, and our input needs another RTT/2 to reach the server, for a total of RTT), 2*J because we want some padding for jitter, and S because we want some more padding for the quantization error described above.

- Each time I compute a new P_f for the frame, I update P_a, which is an exponential moving average of P_f with ratio = 0.1.

- On the very first frame, I set P to P_f, since I only have one data point. On subsequent frames, I increment P by 1, so that it is constantly and smoothly increasing, and I check whether P needs to be "re-synced":

- I take the value D = abs( P - P_a ), the difference between the prediction time P and the average of the per-frame prediction times P_a, and test whether D > 2*J + S. If that expression is true, I "re-sync" P by assigning P = P_a. I picked the threshold 2*J + S with the reasoning that if our predicted time differs from the average per-frame predicted time by more than twice the jitter (with padding for quantization), then it's likely wrong.
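To make the above concrete, here's a minimal sketch of the algorithm in code (the PredictionClock name, the member names, and the millisecond units are just for illustration; the 0.1 EMA ratio, the 2*J + S threshold, and S = 33ms are from the description above):

```cpp
#include <cassert>
#include <cmath>

// Minimal sketch of the algorithm above. All times are in milliseconds;
// the struct and member names are illustrative, not from a real codebase.
struct PredictionClock {
    static constexpr double kEmaRatio = 0.1;  // EMA ratio used for RTT, J, and P_a
    static constexpr double kSpacing = 33.0;  // S = 1/R for R = 30 packets/sec

    double rtt = 0.0;     // EMA of measured ping times (RTT)
    double jitter = 0.0;  // EMA of |ping - RTT| (J)
    double p = 0.0;       // current prediction time P
    double pAvg = 0.0;    // P_a, EMA of the per-frame values P_f
    bool firstFrame = true;

    // Feed one ping sample each time an ACK comes back.
    void onPingSample(double pingMs) {
        rtt += kEmaRatio * (pingMs - rtt);
        jitter += kEmaRatio * (std::fabs(pingMs - rtt) - jitter);
    }

    // Called once per frame with the packet's server time update T_p.
    void onServerTime(double packetTimeMs, double frameMs) {
        double pf = packetTimeMs + rtt + 2.0 * jitter + kSpacing;  // P_f
        if (firstFrame) {
            p = pAvg = pf;  // only one data point so far
            firstFrame = false;
            return;
        }
        pAvg += kEmaRatio * (pf - pAvg);
        p += frameMs;  // advance smoothly every frame
        // Re-sync only when P drifts from P_a by more than 2*J + S.
        if (std::fabs(p - pAvg) > 2.0 * jitter + kSpacing)
            p = pAvg;
    }
};
```

Usage would be: call onPingSample whenever an ACK arrives, then onServerTime once per frame; P is then the time to simulate ahead to.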

Can anyone suggest improvements to this algorithm, or a better one? My main problem right now seems to be determining when to re-sync P. Performing too many re-syncs results in glitches and correction, but if P is too low and I don't re-sync (because it is "close enough" to P_a), that also results in glitches and correction.

#5057340 Memory allocation in practice

Posted on 27 April 2013 - 04:00 PM

So from everything I've read, game engines are one type of software where fine control over how memory is allocated is often very important. I often read that during gameplay you ideally don't want to allocate any memory at all - it should all be preallocated when the level loads. This seems like a good idea, but in practice there are many cases where it seems very difficult to judge what the allocation limits should be. I've got some specific examples in mind which I'll list below. Could someone give examples of how these decisions might be made in practice?

(One thing to keep in mind is that in the particular project I'm working on, the levels are procedurally generated.)

- Enemy limits. Normally the player might be fighting maybe 5-6 enemies at once. But what if the player runs around the level and gathers a huge crowd? Would you allocate for the worst possible case?

- Similarly with items. Five players fire bows as rapidly as possible, and there are a ton of arrow items stuck in walls. Should I calculate the "max fire rate" and determine the maximum possible number of arrows that could be fired and stuck in the wall before they despawn? It seems like it might be really hard to determine these limits for certain gameplay elements. And networked objects just complicate things further, since their ordering isn't guaranteed.

- Network buffers. Messages that are guaranteed are queued up until the ACK has been received. But if there's a network blip, the queue might briefly have to get big.

- Objects that have variable sizes. For example, suppose one enemy has a skeletal mesh with 3 bones, and another has 20 bones. Each requires a "state" buffer. But if I allocate space for at most, say, 100 "enemies", I'd have to assume worst case (they all have 20 bones) if I didn't do some sort of on-demand allocation.

- Strings. Perhaps they should be avoided altogether. Right now for example, I have a file which describes the "sword" item, and an enemy might spawn "sword" when killed. Should these all be preprocessed into indices?

- Components in an entity component system. If it's possible to dynamically add components (e.g. an enemy is OnFire, Frozen, and Poisoned), should you assume the worst and allocate in case all enemies obtain the largest possible combination of dynamic components?

It seems like a lot of these "worst cases" could potentially be pretty hard to spot. Especially if there's any sort of user-provided scripting system. What is usually done to deal with these types of issues?
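For reference, the kind of preallocation I have in mind looks roughly like this (a minimal sketch; the FixedPool name, API, and the nullptr-on-exhaustion policy are all illustrative choices, not anyone's real engine code):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Minimal fixed-capacity pool: every slot is reserved up front, so nothing
// is allocated during gameplay.
template <typename T, std::size_t Capacity>
class FixedPool {
public:
    FixedPool() : freeCount_(Capacity) {
        // Build the initial free list over the preallocated slots
        // (reversed so that slot 0 is handed out first).
        for (std::size_t i = 0; i < Capacity; ++i)
            freeList_[i] = Capacity - 1 - i;
    }

    // Returns nullptr when the budget is exhausted; the caller decides
    // whether that means "refuse to spawn" or "recycle the oldest object".
    T* acquire() {
        if (freeCount_ == 0)
            return nullptr;
        return &slots_[freeList_[--freeCount_]];
    }

    void release(T* obj) {
        freeList_[freeCount_++] = static_cast<std::size_t>(obj - slots_.data());
    }

    std::size_t inUse() const { return Capacity - freeCount_; }

private:
    std::array<T, Capacity> slots_{};
    std::array<std::size_t, Capacity> freeList_{};
    std::size_t freeCount_;
};
```

The point of the explicit failure mode is that "what's the worst case?" becomes "what happens when acquire() fails?" - pick a budget you can afford and handle exhaustion gracefully, rather than sizing for a theoretical maximum.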

#4973171 gl_VertexID and gl_InstanceID seem to be broken

Posted on 24 August 2012 - 08:17 PM

Updated drivers (hoping to avoid crashes and BSODs) and the issue is gone. So I guess it was just a driver problem.

#4937200 Voxel "chunk" storage

Posted on 03 May 2012 - 02:10 PM

I'm attempting to write a density voxel system using marching cubes for terrain. For large scenes, each N by N by N region (N = 64, for example) of the scene should correspond to a "chunk" of voxels, and there should be a way to query each "chunk" individually (e.g. each chunk has a file on disk, or you generate a single chunk at a time).

My issue is the following: in games like Minecraft where the voxels are cubes, N by N by N voxels contain (and completely define) exactly N by N by N units of space. However, in systems where a mesh is extracted, it takes 8 voxels (centered on the corners of a cube) to define a "patch" of the mesh, so to completely generate a region of N by N by N units, (N+1) by (N+1) by (N+1) voxels are actually required. As a result, if each chunk is stored/generated in isolation, a duplicated row (or rather, "slice") of voxels is stored. Furthermore, it is nice to use a power of 2 for N since that works well with LOD data structures (e.g. chunked LOD) or octrees, but then the actual stored voxel volume won't be a power of 2 and won't have nice cache alignment properties. It also means that updating voxels isn't as simple - you might need to update the data in multiple places - potentially in up to 8 chunks if the voxel sits on a corner!

Actually though, the problem doesn't stop there - to generate smooth normals for lighting, you need to examine the surrounding voxel density values, so you need an extra slice of voxels on each side - this means you actually need (N + 3)^3 voxels!
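To put numbers on that overhead for N = 64 (these helper functions are just for illustration):

```cpp
// Per the reasoning above: a chunk covers n^3 units of space, needs
// (n+1)^3 density samples for marching cubes, and (n+3)^3 once an extra
// slice on every side is added for normal generation.
constexpr long long spaceCovered(long long n)  { return n * n * n; }
constexpr long long meshSamples(long long n)   { return (n + 1) * (n + 1) * (n + 1); }
constexpr long long normalSamples(long long n) { return (n + 3) * (n + 3) * (n + 3); }

// For n = 64: spaceCovered = 262144, meshSamples = 274625,
// normalSamples = 300763 - about 14.7% more storage than 64^3.
```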

A few possible approaches I've thought about:
1) Just deal with the extra storage. I really don't like this solution though, it seems messy and probably slow.
2) Require that 3x3x3 chunks be present in memory before generating the center chunk. I don't really like this though because it seems very wasteful and adds weird dependencies between chunks that shouldn't be there.
3) When generating the mesh for a chunk, leave off a row - that is, only generate a mesh filling (N-1)^3 units of space. When there are two adjacent chunks present, generate the strip connecting them separately.

I think solution 3 is the best, and it also handles LOD stitching between chunks very nicely: if one chunk's LOD level changes, only the strips connecting chunks need to be regenerated, and algorithms like http://www.terathon.com/voxels/ could be easily implemented. However, it still does not solve the problem of needing an extra row of voxels for normal generation! I can think of 2 possible solutions for this: either regenerate the normals on the edges of meshes when the adjacent chunk is loaded, or make the connecting "strips" 3 units wide instead of 1 unit wide and leave off an extra strip from the "main" chunk meshes. Neither of these seems that good though: the first would require re-uploading the entire mesh (or portions of it) to the GPU and keeping track of which vertices need their normals recomputed; with the second, an "isolated" chunk would be smaller, and the strips connecting chunks would be larger and take longer to generate. Still, the second seems like the better of the two.

So, how is this usually handled? Is solution 3 the way to go? And how are normals dealt with?

#4803469 Dx10 Render Target Problem

Posted on 27 April 2011 - 03:37 AM

I believe you need to set up a 2048x2048 viewport, at least that's what you do in dx11 so it's probably the same in 10. Here's some info on viewports, it should be pretty straightforward: http://msdn.microsoft.com/en-us/library/bb205126(v=vs.85).aspx

#4785541 3D format that supports vertex tangents.

Posted on 14 March 2011 - 04:30 AM

If you're looking for a good format, I'd recommend FBX, which supports tangents, binormals, multiple texture layers, and a bunch of other stuff like hierarchies and skin weights (might seem like overkill but you don't have to use all those features). You can use the FBX SDK to parse the format. The only downside is that the SDK is a bit tricky to use and seems to be lacking in documentation. If you decide to do this I can save you some time by sending you some code that extracts positions, texcoords, normals, tangents, and binormals (and even bone indices/weights if you want).

#4674818 FBX SDK normals

Posted on 09 July 2010 - 03:09 PM

I actually am using index buffers. What I'm doing is building a big list with 3 vertices per triangle first, then reducing it down into an indexed form by finding equivalent vertices once I've read all the vertices.
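To illustrate what I mean by "reducing it down", the dedup step looks roughly like this (the Vertex struct, function name, and byte-wise comparison are all illustrative; only bit-identical vertices merge):

```cpp
#include <cassert>
#include <cstring>
#include <map>
#include <vector>

// Illustrative vertex; in practice it would also carry UVs, tangents, etc.
struct Vertex {
    float position[3];
    float normal[3];
};

// Strict weak ordering on the raw bytes, so exact duplicates compare equal.
struct VertexLess {
    bool operator()(const Vertex& a, const Vertex& b) const {
        return std::memcmp(&a, &b, sizeof(Vertex)) < 0;
    }
};

// Collapses the flat "3 vertices per triangle" list into a unique vertex
// buffer plus an index buffer.
void buildIndexBuffer(const std::vector<Vertex>& flat,
                      std::vector<Vertex>& unique,
                      std::vector<unsigned>& indices) {
    std::map<Vertex, unsigned, VertexLess> seen;
    for (const Vertex& v : flat) {
        auto it = seen.find(v);
        if (it == seen.end()) {
            it = seen.emplace(v, static_cast<unsigned>(unique.size())).first;
            unique.push_back(v);
        }
        indices.push_back(it->second);
    }
}
```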

Unfortunately, I don't think your method will work for me because the normals aren't mapped using eBY_CONTROL_POINT mapping (if they were, my program would display an error, but it does not).

EDIT: I believe I may have discovered the issue, though I'm not sure how to fix it. It appears that calling GetPolygonVertex() returns the index of a control point. However, there may be multiple normals specified at each control point, so using GetPolygonVertex() and using the index returned to find the normal clearly won't work. It looks like what I need is something like GetPolygonVertexNormal(), which returns the vertex normal of the ith vertex of the polygon. However, there aren't matching GetPolygonVertexUVCoordinate(), GetPolygonVertexTangent(), and GetPolygonVertexBinormal(), so this won't work.

EDIT2: Ah hah, got it! GetPolygonVertexIndex( polygon ) + vertex_in_polygon is the way to find the polygon vertex's index. A fair number of people seem to have trouble with this, so I'll post the whole code (it might not be entirely right, but it seems to work):
```
for (int l = 0; l < mesh->GetLayerCount(); ++l) {
    KFbxLayerElementUV * texCoords = texCoordsFound ? NULL : mesh->GetLayer( l )->GetUVs();
    KFbxLayerElementNormal * normals = normalsFound ? NULL : mesh->GetLayer( l )->GetNormals();
    KFbxLayerElementTangent * tangents = tangentsFound ? NULL : mesh->GetLayer( l )->GetTangents();
    KFbxLayerElementBinormal * binormals = binormalsFound ? NULL : mesh->GetLayer( l )->GetBinormals();

    // loop through triangles
    for (int i = 0; i < triCount; ++i) {
        // get a reference to it
        Triangle & tri = triangles[i];

        // for each vertex in the triangle
        for (int v = 0; v < 3; ++v) {
            // get a reference to it
            Vertex & vert = tri.vertices[v];

            // index into the polygon vertex arrays (NOT the control point index)
            //int positionIndex = mesh->GetPolygonVertex( i, v );
            int positionIndex = mesh->GetPolygonVertexIndex( i ) + v;

            // set its texture coordinate
            if (texCoords != NULL) {
                texCoordsFound = true;
                switch (texCoords->GetMappingMode()) {
                case KFbxLayerElement::eBY_CONTROL_POINT:
                    switch (texCoords->GetReferenceMode()) {
                    case KFbxLayerElement::eDIRECT:
                        for (int p = 0; p < 2; ++p)
                            vert.texCoord[p] = (float)texCoords->GetDirectArray().GetAt( positionIndex )[p];
                        break;
                    case KFbxLayerElement::eINDEX_TO_DIRECT:
                        {
                            int index = texCoords->GetIndexArray().GetAt( positionIndex );
                            for (int p = 0; p < 2; ++p)
                                vert.texCoord[p] = (float)texCoords->GetDirectArray().GetAt( index )[p];
                        }
                        break;
                    default:
                        cout << "        Error: invalid texture coordinate reference mode\n";
                        error = true;
                    }
                    break;
                case KFbxLayerElement::eBY_POLYGON_VERTEX:
                    {
                        int index = mesh->GetTextureUVIndex( i, v );
                        switch (texCoords->GetReferenceMode()) {
                        case KFbxLayerElement::eDIRECT:
                        case KFbxLayerElement::eINDEX_TO_DIRECT:
                            for (int p = 0; p < 2; ++p)
                                vert.texCoord[p] = (float)texCoords->GetDirectArray().GetAt( index )[p];
                            break;
                        default:
                            cout << "        Error: invalid texture coordinate reference mode\n";
                            error = true;
                        }
                    }
                    break;
                case KFbxLayerElement::eBY_POLYGON:
                case KFbxLayerElement::eALL_SAME:
                case KFbxLayerElement::eNONE:
                    cout << "        Error: invalid texture coordinate mapping mode\n";
                    error = true;
                }
            }

            // set its normal
            if (normals != NULL) {
                normalsFound = true;
                if (normals->GetMappingMode() == KFbxLayerElement::eBY_POLYGON_VERTEX) {
                    switch (normals->GetReferenceMode()) {
                    case KFbxLayerElement::eDIRECT:
                        for (int p = 0; p < 3; ++p)
                            vert.normal[p] = (float)normals->GetDirectArray().GetAt( positionIndex )[p];
                        break;
                    case KFbxLayerElement::eINDEX_TO_DIRECT:
                        {
                            int index = normals->GetIndexArray().GetAt( positionIndex );
                            for (int p = 0; p < 3; ++p)
                                vert.normal[p] = (float)normals->GetDirectArray().GetAt( index )[p];
                        }
                        break;
                    default:
                        cout << "        Error: invalid normal reference mode\n";
                        error = true;
                    }
                } else {
                    cout << "        Error: invalid normal mapping mode\n";
                    error = true;
                }
            }

            // set its tangent
            if (tangents != NULL) {
                tangentsFound = true;
                if (tangents->GetMappingMode() == KFbxLayerElement::eBY_POLYGON_VERTEX) {
                    switch (tangents->GetReferenceMode()) {
                    case KFbxLayerElement::eDIRECT:
                        for (int p = 0; p < 3; ++p)
                            vert.tangent[p] = (float)tangents->GetDirectArray().GetAt( positionIndex )[p];
                        break;
                    case KFbxLayerElement::eINDEX_TO_DIRECT:
                        {
                            int index = tangents->GetIndexArray().GetAt( positionIndex );
                            for (int p = 0; p < 3; ++p)
                                vert.tangent[p] = (float)tangents->GetDirectArray().GetAt( index )[p];
                        }
                        break;
                    default:
                        cout << "        Error: invalid tangent reference mode\n";
                        error = true;
                    }
                } else {
                    cout << "        Error: invalid tangent mapping mode\n";
                    error = true;
                }
            }

            // set its binormal
            if (binormals != NULL) {
                binormalsFound = true;
                if (binormals->GetMappingMode() == KFbxLayerElement::eBY_POLYGON_VERTEX) {
                    switch (binormals->GetReferenceMode()) {
                    case KFbxLayerElement::eDIRECT:
                        for (int p = 0; p < 3; ++p)
                            vert.binormal[p] = (float)binormals->GetDirectArray().GetAt( positionIndex )[p];
                        break;
                    case KFbxLayerElement::eINDEX_TO_DIRECT:
                        {
                            int index = binormals->GetIndexArray().GetAt( positionIndex );
                            for (int p = 0; p < 3; ++p)
                                vert.binormal[p] = (float)binormals->GetDirectArray().GetAt( index )[p];
                        }
                        break;
                    default:
                        cout << "        Error: invalid binormal reference mode\n";
                        error = true;
                    }
                } else {
                    cout << "        Error: invalid binormal mapping mode\n";
                    error = true;
                }
            }

            if (error)
                break;
        }
        if (error)
            break;
    }
    if (error)
        break;
}
if (error)
    continue;

if (!texCoordsFound) {
    cout << "        Error: " << mesh->GetName() << " has no texture coordinates\n";
    continue;
}
if (!normalsFound) {
    cout << "        Error: " << mesh->GetName() << " has no normals\n";
    continue;
}
if (!tangentsFound) {
    cout << "        Error: " << mesh->GetName() << " has no tangents\n";
    continue;
}
if (!binormalsFound) {
    cout << "        Error: " << mesh->GetName() << " has no binormals\n";
    continue;
}
```

[Edited by - Gumgo on July 10, 2010 12:09:43 AM]
