
Ollhax

Member Since 22 Jan 2012
Offline Last Active Feb 05 2016 06:03 AM

Topics I've Started

Using whitelisting to run untrusted C# code safely

04 October 2015 - 08:36 AM

Hi there! I've been working on a proof-of-concept for a game-maker idea I've had for a while. It boils down to running user-written, untrusted C# code in a safe way. I've gone down the path of AppDomains and sandboxes, using Roslyn to build code on the fly and running the code in a separate process. I have a working implementation up and running, but I've hit some snags.

My biggest issue is that it seems like Microsoft has given up on sandboxing code. See https://msdn.microsoft.com/en-us/library/bb763046%28v=vs.110%29.aspx. They added the "Caution" box a few months back, including this gem: "We advise against loading and executing code of unknown origins without putting alternative security measures in place". To me, it feels like they've deprecated the whole thing.

There is also the issue that AppDomain sandboxing isn't very well supported across platforms. There's no support in Mono. I had hopes for a fix from the CoreCLR, but then I found this: https://github.com/dotnet/coreclr/issues/642 - so no luck there.

So! I've started exploring whitelisting as a security measure instead. I haven't figured out how big a part of the .NET library I need to include yet, but it feels like I mainly need collections and some reflection stuff (probably limited to messing with public fields). I think I can do all this by examining the code with Roslyn and rejecting namespaces/classes that aren't explicitly listed; see the sketch below.
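To make that concrete, here is a minimal sketch of such a check, assuming user code is parsed with Roslyn and every identifier is resolved against an allowed-namespace list. The whitelist contents and all names here are placeholders for illustration, not a vetted list:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

static class WhitelistChecker
{
    // Placeholder whitelist; a real one needs careful vetting.
    static readonly HashSet<string> AllowedNamespaces = new HashSet<string>
    {
        "System",                      // basic types: Int32, String, Math...
        "System.Collections.Generic",  // List<T>, Dictionary<K, V>...
    };

    public static IEnumerable<string> FindViolations(string userCode)
    {
        var tree = CSharpSyntaxTree.ParseText(userCode);
        var compilation = CSharpCompilation.Create(
            "UserCode",
            new[] { tree },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
        var model = compilation.GetSemanticModel(tree);

        foreach (var node in tree.GetRoot().DescendantNodes().OfType<IdentifierNameSyntax>())
        {
            var symbol = model.GetSymbolInfo(node).Symbol;
            var ns = symbol?.ContainingNamespace;
            if (ns == null || ns.IsGlobalNamespace)
                continue; // user-defined symbols are fine
            if (!AllowedNamespaces.Contains(ns.ToDisplayString()))
                yield return node.Identifier.Text + " uses disallowed namespace " + ns.ToDisplayString();
        }
    }
}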
 

I'm comparing my approach with Unity, which does more or less the same thing, i.e. exposing only a safe subset of the framework. In their case it's an actual stripped-down version of Mono (if I've understood it right), but it seems to me the results would be pretty much the same if I get it right.

 

TLDR:

If you have experience with these kinds of problems, would you say this is a safe approach? Am I missing something big and obvious here?


Exclusive maximum in random functions

15 February 2015 - 02:28 AM

Hi there! I'm working on some library functions and I've hit a problem. Why do people insist on having random functions (like Random(min, max)) where the max value is excluded?

 

Cons:

  • It's counter-intuitive. Most rookies trip on this at least once.
  • It's less efficient in the approach I was going for.
  • It has weird cases, like Random(0, 1) being the same as Random(0, 0) (unless one should forbid the latter, which seems only worse). You also can't randomize over the full positive int range [0, Int32.MaxValue] (which isn't really a problem, but it's still inelegant).

Pros:

  • It's a bit simpler when you're randomizing an element out of an array (array[Random(array.Size)]). Basically, it fits better in a world where 0 is the starting index.
  • Everyone does it this way, including System.Random (I'm working in C#).

I really don't like a randomization lib being modelled only for indexing arrays, but the fact that everyone does it this way is a much more convincing argument. Before I make a decision and move on, is there something I'm not thinking about?
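For what it's worth, here is a rough sketch of what an inclusive-max wrapper over System.Random could look like, mainly to show where the Int32.MaxValue wrinkle bites. The class name and the NextDouble workaround are illustrative choices, not a recommendation:

using System;

class InclusiveRandom
{
    private readonly Random _rng = new Random();

    // Both bounds inclusive, unlike System.Random.Next(min, max).
    public int Next(int min, int max)
    {
        if (max == int.MaxValue)
        {
            // Next(min, max + 1) would overflow, and Next can never return
            // int.MaxValue, so sample the widened range via NextDouble instead.
            long range = (long)max - min + 1;
            return (int)(min + (long)(_rng.NextDouble() * range));
        }
        return _rng.Next(min, max + 1);
    }
}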


Figuring out the cause of jitter

12 May 2013 - 12:19 PM

Hi there! I'm having some trouble dealing with jitter in my networked game. I've created a simplified example that should explain the problem:

 

Client A updates its position, P, each frame. For simplicity, say that P is incremented by one each frame. The game updates 60 frames per second, and every 3 frames (20 updates / second) the client sends this position to Client B.

 

B receives the position and uses this value. However, since we expect fewer than 60 updates per second from A, B will try to predict the increment to P by adding one to it each frame. This works just fine - most of the time.

 

What I'm seeing is some jittery movement on B's end, caused by packets arriving at an uneven pace. For example, here's a printout of P at the end of each frame:

 

P: 4

P: 5

P: 6 <- Got update from A, value = 6. All is well since we expect one update every three frames.

P: 7

P: 8

P: 9 <- Got update from A, value = 9.

P: 10

P: 11

P: 12 <- Missed an update!

P: 12 <- Got update from A, value = 12. Since it was one frame late, P stands still and we get jitter.

P: 13

...

 

I guess I can't expect packets to arrive exactly on time, even though I'm running the test on a single computer with all clients at 60 FPS. But what should I do about it? The obvious answer is smoothing, but I'd need a lot of it to cover up the jitter - the difference is often more than a frame or two. Are these results to be expected, or would you suspect that something is wrong here?
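One standard form of that smoothing (not something from the post itself) is an interpolation delay, a.k.a. a jitter buffer: keep a short history of received positions and render the remote entity a fixed amount of time in the past, so a packet that arrives a frame or two late is still "in the future" from the renderer's point of view. A sketch, with all names and the delay value purely illustrative:

using System;
using System.Collections.Generic;

class RemoteEntity
{
    private readonly List<(double Time, float P)> _snapshots = new List<(double, float)>();
    private const double RenderDelay = 0.1; // seconds; tune to the worst-case jitter

    public void OnSnapshot(double time, float p) => _snapshots.Add((time, p));

    // Returns the value of P to display at the current time.
    public float Sample(double now)
    {
        double renderTime = now - RenderDelay;

        // Find the pair of snapshots that straddle renderTime and interpolate.
        for (int i = _snapshots.Count - 1; i >= 0; i--)
        {
            if (_snapshots[i].Time <= renderTime)
            {
                if (i == _snapshots.Count - 1)
                    return _snapshots[i].P; // nothing newer yet: hold the last value
                var (t0, p0) = _snapshots[i];
                var (t1, p1) = _snapshots[i + 1];
                double t = (renderTime - t0) / (t1 - t0);
                return (float)(p0 + t * (p1 - p0));
            }
        }
        return _snapshots.Count > 0 ? _snapshots[0].P : 0f;
    }
}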


Handling inputs server-side

06 May 2012 - 11:37 AM

Hi there,

I'm working on a 2D platform game using the Quake 3 networking model. The client sends inputs 20 times a second, i.e. three inputs per packet. To avoid problems with "bursty" connections (see here), I process the received inputs directly on the server during a single frame. Since the inputs as well as the physics (gravity etc.) of the game affect the player entities, I essentially run the entire physics update for the specific entity three times in one frame when the server gets the packet.
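In code, the per-packet processing described above looks roughly like this; all types and names are illustrative, and the movement model is a stand-in:

using System;

struct Input { public float Move; public bool Jump; }

class PlayerEntity
{
    public float X, Y, VelY;
    const float Gravity = -9.8f;

    // One full physics step, driven by one client input.
    public void Step(Input input, float dt)
    {
        X += input.Move * dt;
        if (input.Jump && Y <= 0) VelY = 5f;
        VelY += Gravity * dt;
        Y = Math.Max(0f, Y + VelY * dt);
    }
}

class Server
{
    const float Dt = 1f / 60f; // one client frame of simulation

    // Runs all three buffered steps within the current server frame.
    public void OnInputPacket(PlayerEntity player, Input[] inputs)
    {
        foreach (var input in inputs)
            player.Step(input, Dt);
    }
}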

Now, to my problem. The above model worked very well before I added gravity to the mix, but since then I've realized that I need to update the player entity on the server even when there aren't any player inputs queued up. Otherwise, lagging players would just hang in the air, as they do in my current implementation. Running physics without inputs has proven troublesome because, depending on the timing of when inputs are received, the server may be a few physics frames ahead of or behind the client. It may start off from a different starting point, causing a misprediction when returning the updated position.

I've read a lot of articles and many dance around this subject, for example:
http://gafferongames...worked-physics/
- The physics is only updated when input is received, ignoring the problem of what to do when inputs are missing for a longer period.

http://www.gabrielgambetta.com/?p=22
- No physics is applied in this example; all state changes depend on input, so the problem does not exist.

I see some alternatives:
1. Keep track of when the server starts "extrapolating" the entity, i.e. runs updates without fresh client input. When new inputs arrive, reset to the original position and re-simulate the entity with the new inputs for the duration in which it was previously extrapolated.
2. The server stops updating the entity if it runs out of client inputs. Instead, the other clients go on to extrapolate the movement of the client that is missing inputs.
3. Something entirely different.

Number 1 is attractive since it seems like the "correct" way to go about this, but I'm having trouble getting it exactly right because of the jitter; i.e. I can't eliminate the mispredictions entirely. Also, I feel it's somewhat over-complicated.
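For reference, alternative 1 would look something like the following sketch: snapshot the entity when extrapolation starts, then rewind and replay once real inputs show up. Everything here (names, the movement model) is made up for illustration:

using System;
using System.Collections.Generic;

struct Input { public float Move; }

class ServerPlayer
{
    public float X, VelX;
    float _savedX, _savedVelX;
    int _extrapolatedFrames; // frames simulated without real input

    const float Dt = 1f / 60f;

    public void Update(Queue<Input> pending)
    {
        if (pending.Count == 0)
        {
            // Out of inputs: remember where extrapolation began, then
            // keep the physics (gravity etc.) running with an empty input.
            if (_extrapolatedFrames == 0) { _savedX = X; _savedVelX = VelX; }
            _extrapolatedFrames++;
            Step(new Input(), Dt);
            return;
        }

        if (_extrapolatedFrames > 0)
        {
            // Real inputs arrived: rewind to the pre-extrapolation state
            // and re-simulate with what the client actually did.
            X = _savedX; VelX = _savedVelX;
            _extrapolatedFrames = 0;
        }
        while (pending.Count > 0)
            Step(pending.Dequeue(), Dt);
    }

    void Step(Input input, float dt)
    {
        VelX = input.Move; // stand-in movement model
        X += VelX * dt;
    }
}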

Number 2 is nice since it's basically the model I have, but with the additional extrapolation added on. A problem with this model is that the clients would see a very old position of the lagging player, and since all hit detection etc. is done server-side, the laggers would become ghosts.

Anyone got a number 3? How do you usually solve this?

EDIT: Actually, I realized while writing this that #2 is a pretty acceptable solution. I'll keep the post up in case someone has a better idea or wants to read it for reference.

Player prediction across clients, minimizing latency

13 April 2012 - 02:25 PM

Hi there! I'm working on my network code and I've run into a minor snag that someone perhaps can help me with. I went with the Quake 3 network model and I've used Gabriel Gambetta's excellent blog posts as a reference when implementing the finer points such as server reconciliation.

However, there's a situation that occurs when there's more than one client in the picture. Lengthy example ahead:

Say that Client A runs three frames, then decides to send an update to the server. I.e., three input structures are sent to the server.

The server receives the update after a few moments, say SendTime(Client A) or ST(A) time units later. The three inputs are queued for Client A for three server update ticks in the future, meaning that the server will be fully up to date with the client after ST(A) + 3 ticks. This is all fine and dandy, since Client A's prediction and server reconciliation will hide all this latency for Client A.

What bothers me is when Client B enters the picture. One could argue that he should be able to know Client A's newest position after ST(A) + ST(B) time units, but if the system is implemented exactly as described above, the input may not show until ST(A) + ST(B) + 3 ticks. This is because the server would have to update the state in order for the input's effect to show. Exactly how much delay Client B would experience also depends on how often the server sends updates.

My question is: do I have a fault in this design, or is this how it usually is? One improvement I can see now would be for the server to send A's remaining inputs over to B when doing an update, letting B predict some of A's "future" inputs as well (sketched below). Another thing to try would be to let Client B just extrapolate A's previous input until the server sends fresher updates. Any more takes on this?
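Here is a rough sketch of that first improvement, under assumed names and a placeholder movement rule: the server attaches A's queued-but-not-yet-simulated inputs to the snapshot it sends B, and B applies them predictively instead of waiting the extra ~3 server ticks:

using System.Collections.Generic;

struct Input { public float Move; }

struct EntitySnapshot
{
    public int EntityId;
    public float X;                   // last position the server actually simulated
    public List<Input> PendingInputs; // inputs the server has queued but not yet run
}

class ClientB
{
    const float Dt = 1f / 60f;

    // The position B should display for A right now: the authoritative
    // position plus a prediction of the inputs the server hasn't run yet.
    public float PredictRemote(EntitySnapshot snapshot)
    {
        float x = snapshot.X;
        foreach (var input in snapshot.PendingInputs)
            x += input.Move * Dt; // same movement rule the server will apply
        return x;
    }
}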
