Community Reputation

428 Neutral

About Ollhax

  1.   Yes, that's right. I'm just using it to figure out what can potentially do that encryption (or whatever) - or rather, to enable only the things I'm pretty sure cannot.

I'm trying to protect the users' systems. They'll download mods in the form of code, then compile and run them locally. There won't be a central server that hosts the mods, at least not at first, so any security measures have to be applied on the users' local machines.

As you say, CAS (or whatever the new security model is called) is still useful. I'll probably leave it in place as an added precaution for PC builds. However, I don't want to be limited to PC releases only, so I need an alternative as well.

You're completely right about runtime checking via assembly resolves; I have that check in place already. As far as I know, those are the only assemblies you'll be able to touch, in addition to the ones given to the compiler.

Reflection is tricky, agreed. Private member access may be hard to stop, so I'll have to think about that closely. I can probably make tools that let you do "safe" reflection, or just disallow it entirely.

Peer review is definitely a safeguard too. If a mod messes with your computer, you will probably report it, or at least not recommend it to others. But that is obviously only a last resort.
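To make the "safe reflection" idea above concrete, here is a minimal sketch of a helper that exposes only public instance fields. The class and method names are hypothetical, not part of any existing API:

```csharp
using System;
using System.Reflection;

// Hypothetical helper: mod code goes through this instead of raw
// reflection, so it can never reach private state.
public static class SafeReflection
{
    public static object GetPublicField(object target, string fieldName)
    {
        // BindingFlags deliberately exclude NonPublic, so private
        // fields are simply not found.
        FieldInfo field = target.GetType().GetField(
            fieldName, BindingFlags.Public | BindingFlags.Instance);
        if (field == null)
            throw new ArgumentException($"No public field '{fieldName}'.");
        return field.GetValue(target);
    }
}

// Illustrative mod-facing type.
public class Example
{
    public int Score = 42;
    private int secret = 7;   // unreachable through SafeReflection
}
```

The same pattern extends to setters and public properties; the point is that the sandbox owns the BindingFlags, not the mod author.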
  2. Thanks for the replies so far! I should have explained my situation a bit more. It's about the same as BitMaster's example of WC3 maps: I want to use C# for scripting-type work. Even when limited, I expect it to be very useful. Some points for context:

* Users will download mods as code and compile and run them locally. There's no downloading or running of arbitrary .exes or other files.
* I can examine the code thoroughly before running it. I'll examine the actual semantic model of the code through Roslyn, not match raw code strings.
* Disallowing the unsafe keyword should avoid problems with buffer overruns, etc. (Well, unless I've missed something, which is why I'm posting this!)
* Crashing isn't an issue. I can't help it if a mod crashes the sandbox process, but at least it won't bring down the entire application. I imagine mods that crash the game won't be that popular.
* Allowing reflection isn't a requirement.

I'm interested to hear specific ideas/examples of how you'd be able to attack this setup, given the constraints mentioned above. I know it's tricky to guarantee that something is secure, but at the same time I can't come up with a single concrete example where this would be an actual problem in my setup. If you'd like, consider it a challenge!

Side note: I use C# instead of Lua because I prefer the language, and I'm hoping to ride the XNA/Unity wave a bit. I can use Roslyn for real-time compiling, error messages, intellisense-like tooling, etc. It also lets me use a common framework for my engine code and mod code. Basically, it saves me a *ton* of work, which makes this a feasible (more or less...) project for me.
  3. Hi there! I've been working on a proof-of-concept for a game-maker idea I've had for a while. It boils down to running user-written, untrusted C# code in a safe way. I've gone down the path of AppDomains and sandboxes, using Roslyn to build code on the fly and running it in a separate process. I have a working implementation up and running, but I've hit some snags.

My biggest issue is that it seems like Microsoft has given up on sandboxing code. See https://msdn.microsoft.com/en-us/library/bb763046%28v=vs.110%29.aspx. They added the "Caution" box a few months back, including this gem: "We advise against loading and executing code of unknown origins without putting alternative security measures in place". To me, it feels like they've deprecated the whole thing. There's also the issue that AppDomain sandboxing isn't well supported across platforms; there's no support in Mono. I had hopes for a fix from CoreCLR, but then I found this: https://github.com/dotnet/coreclr/issues/642 - so no luck there.

So! I've started exploring whitelisting as a security measure instead. I haven't figured out how big a part of the .NET library I need to include yet, but it feels like I mainly need collections and some reflection stuff (probably limited to messing with public fields). I think I can do all this by examining the code with Roslyn and disallowing namespaces/classes that aren't explicitly listed.

I'm comparing my approach with Unity, which does more or less the same thing, i.e. exposes only a safe subset of the framework. In their case it's an actual stripped-down version of Mono (if I've understood it right), but it seems to me the results would be pretty much the same if I get it right.

TLDR: If you have experience with this kind of problem, would you say that this is a safe approach? Am I missing something big and obvious here?
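As a rough sketch of what the whitelist check could boil down to once Roslyn's semantic model has resolved the referenced types to fully-qualified names (all namespace names below are illustrative; the real check would walk the semantic model rather than take strings):

```csharp
using System;
using System.Linq;

// Hypothetical whitelist filter: given the fully-qualified name of a
// type a mod references, reject anything outside approved namespaces.
public static class ModWhitelist
{
    static readonly string[] AllowedNamespaces =
    {
        "System",                      // exact match: System.Int32 passes, System.IO.* does not
        "System.Collections.Generic",
        "MyEngine.Modding",            // hypothetical engine API surface
    };

    public static bool IsAllowed(string fullyQualifiedTypeName)
    {
        int lastDot = fullyQualifiedTypeName.LastIndexOf('.');
        if (lastDot < 0) return false; // global-namespace types are rejected
        string ns = fullyQualifiedTypeName.Substring(0, lastDot);
        // Exact namespace match, not a prefix match: "System.IO" must
        // not slip through just because "System" is allowed.
        return AllowedNamespaces.Contains(ns);
    }
}
```

Note the exact-match comparison is the important design choice: a naive `StartsWith("System")` check would let `System.IO` and `System.Net` through.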
  4.   Mm, totally with you on that point. That's what I was looking for: whether this is just a convention or whether there's something I haven't thought about. Good arguments on either side. But, in line with what you said, I've made my problem easier by just calling it a gameplay-oriented randomizer instead (throwing in some helpers for random directions, colors, etc.). I can definitely stand behind inclusive maximums for that use case.
  5.   This pretty much nails it. I agree that neither way (exclusive or inclusive max) is inherently better than the other. So unless I'm missing something, it boils down to my arguments above (counter-intuitive and weird states) vs. usefulness for indexing arrays and convention.

Thanks for taking the time to answer, I'll go ponder this a bit more.
  6. But you can turn that argument around and ask: why design the random function's range only to index arrays smoothly, when it's just as likely to be used for simulating die rolls? Stated differently, I *am* designing for general use, and I get the feeling that an exclusive maximum is a quirk designed for indexing arrays, which is a very specific problem.
  7. Thanks for the link! It's pretty much what I expected. I agree with that SO reply, but I'm not really sure it applies to the interface of a random number generator. Expressing "I want a dice roll between 1 and 6" as Random(1, 6) just seems more natural to me.

Having Random(0, 0) mean Random(0, Int32.MaxValue-1) (or maybe Random(0, Int32.MaxValue)?) doesn't feel natural either. If I don't go with a convention of my own, I'll just stick to the one in System.Random, where Random(0, 1) equals Random(0, 0).
  8. Hi there! I'm working on some library functions and I've hit a problem: why do people insist on having random functions (like Random(min, max)) where the max value is excluded?

Cons:
* It's counter-intuitive. Most rookies trip on this at least once.
* It's less efficient in the approach I was going for.
* It has weird states, like Random(0, 1) being the same as Random(0, 0) (unless one forbids the latter, which seems only worse).
* You also can't randomize over the full positive int range [0, Int32.MaxValue] (which isn't really a problem, but is still inelegant).

Pros:
* It's a bit simpler when you're picking a random element out of an array (array[Random(array.Size)]). Basically, it fits better in a world where 0 is the starting index.
* Everyone does it this way, including the System.Random functions (I'm working in C#).

I really don't like a randomization lib being modelled only for indexing arrays, but the fact that everyone does it this way is a much more convincing argument. Before I make a decision and move on, is there something I'm not thinking about?
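For illustration, here's how the two conventions look side by side in C#. `Random.Next(min, max)` is the BCL's exclusive-max API; `RollInclusive` is a hypothetical inclusive-max wrapper, not an existing function:

```csharp
using System;

// System.Random.Next(min, max) excludes max, so a six-sided die is
// Next(1, 7). An inclusive wrapper lets the same roll read as
// RollInclusive(1, 6).
public static class Dice
{
    static readonly Random Rng = new Random();

    // Hypothetical inclusive-maximum wrapper.
    public static int RollInclusive(int min, int max)
    {
        // Caveat: max = Int32.MaxValue would overflow here, which is
        // exactly the edge case the exclusive convention sidesteps.
        return Rng.Next(min, max + 1);
    }
}
```

The overflow caveat in the comment is one concrete argument for the exclusive convention beyond array indexing.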
  9. I think I'm making some progress by queueing up the updates in case they are early, and applying them at their designated time. It definitely removed the worst of the jitter. However, there are some inherent problems with this:
* Queuing up = introducing latency. Not much to do about this.
* How large a queue should one allow? I've set an arbitrary number right now (3!) but I think it needs some more thought...
* What happens to packets that are too late? I think I'll try extrapolating the position in case this happens.
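A minimal sketch of the queue described above, assuming each update is stamped with the tick it should be applied at (all names are illustrative, not from any particular engine):

```csharp
using System.Collections.Generic;

// De-jitter buffer: hold early updates back until their designated
// tick, cap the queue size, and let the caller detect "nothing ready"
// (e.g. to extrapolate) when an update is late.
public class JitterBuffer
{
    public const int MaxQueued = 3;   // arbitrary cap, as in the post

    readonly Queue<(int Tick, int Position)> queue =
        new Queue<(int Tick, int Position)>();

    public bool Enqueue(int tick, int position)
    {
        if (queue.Count >= MaxQueued) return false;  // buffer full: drop
        queue.Enqueue((tick, position));
        return true;
    }

    // Returns the update due at currentTick, or null if none is ready
    // yet (either the queue is empty or the head arrived early).
    public int? Dequeue(int currentTick)
    {
        if (queue.Count == 0) return null;
        var head = queue.Peek();
        if (head.Tick > currentTick) return null;    // early: keep waiting
        queue.Dequeue();
        return head.Position;
    }
}
```

A null return at the consumer is the hook for the third bullet above: that's where extrapolation from the last known position would kick in.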
  10. Hi there! I'm having some trouble dealing with jitter in my networked game. I've created a simplified example that should explain the problem:

Client A updates its position, P, each frame. For simplicity, say that P is incremented by one each frame. The game updates 60 frames per second, and every 3 frames (20 updates/second) the client sends this position to Client B.

B receives the position and uses this value. However, since we expect fewer than 60 updates per second from A, B will try to predict the increment to P by adding one to it each frame. This works just fine - most of the time.

What I'm seeing is some jittery movement on B's end, caused by packets arriving at an uneven pace. For example, here's a printout of P at the end of each frame:

P: 4
P: 5
P: 6 <- Got update from A, value = 6. All is well since we expect one update every three frames.
P: 7
P: 8
P: 9 <- Got update from A, value = 9.
P: 10
P: 11
P: 12 <- Missed an update!
P: 12 <- Got update from A, value = 12. Since it was one frame late, P stands still and we get jitter.
P: 13
...

I guess I can't expect packets to arrive exactly on time, even though I'm running the test on a single computer with all clients at 60 FPS. However, what to do about it? The obvious answer is smoothing, but I'd need a lot of it to cover up the jitter - the difference is often more than a frame or two. Are these results to be expected, or would you suspect that something is wrong here?
  11. Hi there, I'm working on a 2D platform game using the Quake 3 networking model. The client sends inputs 20 times a second, i.e. three inputs per packet. To avoid problems with "bursty" connections (see [url="http://www.gamedev.net/topic/609888-dealing-with-bursts-of-delayed-packets-from-the-client-to-the-server/page__hl__%2Bqueue"]here[/url]), I process the received inputs directly on the server during a single frame. Since the inputs as well as the physics (gravity etc.) of the game affect the player entities, I essentially run the entire physics update for the specific entity three times in one frame when the server gets the packet.

Now, to my problem. The above model worked very well before I added gravity to the mix, but since then I've realized that I need to update the player entity on the server even when there aren't any player inputs queued up. Otherwise, lagging players would just hang in the air, as they do in my current implementation. Running physics without inputs has proven troublesome because, depending on the timing of when inputs are received, the server may be a few physics frames ahead of or behind the client. It may start off from another starting point, causing a misprediction when returning the updated position.

I've read a lot of articles, and many dance around this subject, for example:
[url="http://gafferongames.com/game-physics/networked-physics/"]http://gafferongames...worked-physics/[/url] - The physics is only updated when input is received, ignoring the problem of what to do when inputs are missing for a longer period.
[url="http://www.gabrielgambetta.com/?p=22"]http://www.gabrielgambetta.com/?p=22[/url] - No physics is applied in this example; all state change depends on input, so the problem does not exist.

I see some alternatives:
1. Keep track of when the server starts "extrapolating" the entity, i.e. runs updates without fresh client input. When new inputs arrive, reset to the original position and re-simulate the entity with the new inputs for the duration in which it was previously extrapolated.
2. The server stops updating the entity if it runs out of client inputs. Instead, the other clients go on to extrapolate the movement of the client that is missing inputs.
3. Something entirely different.

Number 1 is attractive since it seems the "correct" way to go about this, but I'm having trouble getting it exactly right because of the jitter; i.e. I can't eliminate the mispredictions entirely. It also feels somewhat over-complicated. Number 2 is nice since it's basically the model I have, with the additional extrapolation added on. A problem with this model is that the other clients would see a very old position of the lagging player, and since all hit detection etc. is done server-side, the laggers would become ghosts.

Anyone got a number 3? How do you usually solve this?

EDIT: Actually, I realized while writing this that #2 is a pretty acceptable solution. I'll keep the post up in case someone has a better idea or wants to read this for reference.
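For reference, alternative 1 (rewind and re-simulate when late inputs arrive) can be sketched like this, with a one-dimensional stand-in for the physics and purely illustrative names:

```csharp
using System.Collections.Generic;

// Sketch of "extrapolate, then rewind and replay": remember the state
// at the moment extrapolation began; when the client's real inputs
// arrive, reset to that state and re-simulate with them.
public class ServerEntity
{
    public float Position;
    public float Velocity;
    const float Gravity = -1f;           // toy constant

    float savedPosition, savedVelocity;  // state when extrapolation began
    int extrapolatedFrames;              // frames simulated without input

    void Step(float input)               // one physics frame
    {
        Velocity += Gravity + input;
        Position += Velocity;
    }

    // Called when a physics frame is due but no client input is queued.
    public void UpdateWithoutInput()
    {
        if (extrapolatedFrames == 0)
        {
            savedPosition = Position;    // checkpoint before guessing
            savedVelocity = Velocity;
        }
        Step(0f);                        // gravity only, no input
        extrapolatedFrames++;
    }

    // Called when the late inputs finally arrive.
    public void ApplyLateInputs(IReadOnlyList<float> inputs)
    {
        Position = savedPosition;        // rewind
        Velocity = savedVelocity;
        int frames = System.Math.Max(extrapolatedFrames, inputs.Count);
        for (int i = 0; i < frames; i++)
            Step(i < inputs.Count ? inputs[i] : 0f);  // replay
        extrapolatedFrames = 0;
    }
}
```

The awkward part the post mentions shows up in `ApplyLateInputs`: if the number of extrapolated frames and the number of arriving inputs disagree (jitter), the replay length is a judgment call, and that's where residual mispredictions come from.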
  12. [quote name='papalazaru' timestamp='1334835016' post='4932766'] Article along those lines. [url="http://gafferongames.com/game-physics/networked-physics/"]http://gafferongames...worked-physics/[/url] Maybe contains some extra stuff that would be useful to you. [/quote] Yup, read them all. It's a good link though!
  13. [quote name='hplus0603' timestamp='1334704303' post='4932309'] [quote]Do you actually send three game states per packet bunched together?[/quote] No, you probably send three sets of simulation inputs, and perhaps one game state. You don't generally want to just dump full network states across the wire at full frame rate, because of the bandwidth usage. However, by sending the inputs, you can get a very good replica of what the entity is doing, without using much bandwidth. Typically, you can get away with sending a baseline/checkpoint for an entity only once every few seconds, and do inputs for all the simulation steps in between. To avoid burstiness, you could send the inputs for all entities for the three steps, and a state dump of one entity, per packet. Keep a queue of entities, and rotate through which one gets state-dumped per packet. [/quote] From the server, I send only the latest game state and all queued (and not yet used) inputs. I decided to fix this and yes, it did improve the responsiveness of the game.
  14. [quote name='hplus0603' timestamp='1334352984' post='4931062'] You have to be careful about your "frames." There may be render frames, physics frames, and network tick frames. The latency in question will be a combination of network tick frames. For example, if you have three physics frames per network frame, then the batching of commands will add three physics frames' worth of latency. Also, if the data for a player arrives "early" on the server, then the server has the choice to immediately forward it to the other clients, or to only forward it after it's been simulated and verified. How much latency there is from player A to player B depends on this choice as well. [/quote] Yes, one of the problems is that network ticks run slower than physics frames, which is why I have to queue up the inputs. Good idea about forwarding inputs immediately, but I'm a bit worried about the overhead in a situation where lots of player inputs arrive spread out over different frames; this would trigger a lot of data sending. I guess I could also let clients send this kind of info directly to all other clients in the game, but that would complicate things a bit, I think. Anyhow, am I being paranoid about this, or is it something you generally need to deal with?
  15. Hi there! I'm working on my network code and I've run into a minor snag that someone can perhaps help me with. I went with the [url="http://nclabs.org/articles/5"]Quake 3[/url] network model and I've used [url="http://www.gabrielgambetta.com/?p=22"]Gabriel Gambetta's[/url] excellent blog posts as a reference when implementing the finer points such as server reconciliation. However, there's a situation that occurs when there's more than one client in the picture. Lengthy example ahead:

Say that [i]Client A[/i] runs three frames, then decides to send an update to the server, i.e. three input structures are sent. The server receives the update after a few moments, say [i]SendTime(Client A)[/i] or ST(A) time units later. The three inputs are queued for Client A for three server update ticks in the future, meaning that the server will be fully up to date with the client after ST(A) + 3 ticks. This is all fine and dandy, since Client A's prediction and server reconciliation will hide all this latency from Client A.

What bothers me is when [i]Client B[/i] enters the picture. One could argue that B should be able to know Client A's newest position after ST(A) + ST(B) time units, but if the system is implemented exactly as described above, the input may not show until ST(A) + ST(B) + 3 ticks. This is because the server has to update the state in order for the input's effect to show. Exactly how much delay Client B experiences also depends on how often the server sends updates.

My question is: do I have a fault in this design, or is this how it usually is? One improvement I can see now would be for the server to send A's remaining inputs to B when doing an update, letting B predict some of A's "future" inputs as well. Another thing to try would be to let Client B just extrapolate A's previous input until the server sends fresher updates. Any more takes on this?