About fjholmstrom

  1. I've got to say that the new MonoDevelop version is pretty damn awesome, loving the 3.x branch! #mono #monodevelop #dotnet
  2. First, note that I mainly use C# together with XNA and Unity3D, so maybe the rules for games and engines using C++ are different. Anyway, I strongly prefer the first option. While there are quite a few classes that need to be created, I do not see this as a major drawback. It creates a very clear separation between "data that is put on the wire" and the other code which deals with this data. Everything gets encapsulated into one neat little package. And I suspect that even if you use something which is more akin to your second example (the SWG emulator one), you would still end up with some sort of encapsulation similar to a "Message" or equivalent to be able to do reliable sends, sequencing, etc.

Just as an example, this is a "Message" (my networking lib calls them "Events") which is used to send the click position and target object in a physics-based puzzle game:

    public class ShootBallEvent : NetEvent
    {
        public static readonly ushort Id = 0;

        public NetObject Target;
        public Vector3 Position;

        public ShootBallEvent()
            : base(Id, NetEventDelivery.Unreliable, NetEventDirection.ClientToServer)
        {
        }

        public override void Pack(NetConnection conn, NetPacket packet)
        {
            packet.WriteVector3(Position);
            packet.WriteNetObject(Target);
        }

        public override void Unpack(NetConnection conn, NetPacket packet)
        {
            Position = packet.ReadVector3();
            Target = packet.ReadNetObject();
        }

        public override void Process(NetConnection conn)
        {
            if (Target == null) return;
            if (!ReferenceEquals(Target.Owner, conn)) return;

            // ... do stuff to Target and Position
        }
    }

I find this encapsulation to be very clear and easy to work with.
  3. Yes, I suppose the Poke/What thing is sort of like a heartbeat. But my library is built like the Tribes Networking paper describes and uses a fixed (configurable per client) rate, so a heartbeat is completely unnecessary, as you say. And if one of the connections drops WINDOW_SIZE (defaults to 32 for me) packets *in a row*, their experience will most likely be horrible anyway.

While the way you do it is kind of "aggressive" (for lack of a better word), it seems like a really clean approach. BTW, when you say "slack", what exactly do you mean? Like I've said, my window size is 32 packets; would "slack" be more packets on top of that, or?
  4. I've implemented a networking library which uses the techniques described in the "Tribes Networking" paper. Now, in my current design I have a packet window size of 32 (i.e. I can have 32 packets "in transit" at any point in time; if I have not heard about the 32nd oldest packet, I can't send any new packets out).

I also have a time-out that says "if I have not heard anything from this peer in X seconds we should send something", so I send a special "Poke" packet from the peer that hasn't heard anything for a while, and if the "Poke" does reach the other end it will respond with a "What" packet (it's called a "Poke... What?" sequence in code).

But my instinct tells me this would not be needed; if there comes a case where I actually end up with the entire window full, the peer is most likely gone already? I'm not sure the way I deal with it using Poke/What is ideal either... I suppose in theory (at least assuming non-mobile stuff) you could just disconnect them once the entire packet window fills up with un-acked packets.

So basically my question is: how do you usually deal with detecting timed-out peers or peers that are gone?
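For what it's worth, the "disconnect once the window fills up with un-acked packets" idea from the post above can be sketched roughly like this. All names here (PeerConnection, UnackedCount, etc.) are illustrative, not from any particular library; combining the full-window check with a silence timer avoids dropping a peer whose acks are merely delayed for a moment:

```csharp
using System;

// Hypothetical per-peer bookkeeping for a sliding-window sender.
public class PeerConnection
{
    public const int WindowSize = 32;   // packets allowed "in transit"
    public int UnackedCount;            // packets sent but not yet acked
    public DateTime LastReceiveTime;    // last time we heard from this peer

    // Treat a completely full window *combined with* a long silence as
    // "this peer is gone". Either condition alone can be transient.
    public bool IsTimedOut(DateTime now, TimeSpan silenceLimit)
    {
        bool windowFull = UnackedCount >= WindowSize;
        bool silentTooLong = (now - LastReceiveTime) > silenceLimit;
        return windowFull && silentTooLong;
    }
}
```

The server would call IsTimedOut once per tick for each peer and drop the connection when it returns true; the silence limit is a tuning knob (a few seconds is typical for non-mobile links).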
  5. papalazaru: If the simulation runs on a fixed timestep (60Hz), I would take that as a guarantee of 60Hz; it might jitter a *tiny* bit back and forth, but at least the whole simulation will "jitter" instead of individual connections' send intervals. And yes, the slight aliasing which happens with the first method is what hit me when I implemented it - but like you say, the aliasing could be countered by simply doing some slight rounding or something equivalent.

One thing which trips me up about using the second method of "frame hopping" (never heard that expression before) is this: what happens when I get jitter on the server, what if I get HUGE jitter and I need to run 4 frames to "catch up" to real time? Then I will send two packets in quick succession... hm, but then again, if that happens the client will just perceive it as network jitter anyway. The difference here, I suppose, is that timing it manually using (currentTime - lastSend) you would only send *one* packet for those four updates.
  6. Assume my game is running at a rate of 60Hz (~16.66 ms) on both the server and client, and updates from the server to the client are sent at 20Hz (50ms). Now, since the game is running at 60Hz, there are two ways of calculating the interval for the client updates. I can either just do this:

    if ((currentTime - lastSendTime) >= 0.05)
    {
        SendUpdate();
        lastSendTime = currentTime;
    }

And just check this every frame for each client. But to me it seems like a cleaner solution would be to specify the client send rate as "every nth frame" instead, so if I want to send at 20Hz when running at a 60Hz simulation rate I would just do this:

    if (frameNumber % 3 == 0)
    {
        SendUpdate();
    }

It just feels a lot cleaner: you don't need to keep a separate timer or deal with small inaccuracies in the local time of the server, etc. It also becomes nicely synchronized with updates to the game state on the server. The reason I'm asking is because in the few examples of "professional" networking code you can find on the web, I see *everyone* using the first method of a separate timer for the send time for each client, so I'm thinking I must be missing something here?
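To make the aliasing concern concrete, here is a small self-contained comparison (the 60Hz/20Hz numbers come from the post; the loop itself is just an illustration). Because currentTime is accumulated in floating point, the elapsed time after three frames can land just under 0.05, making the timer method fire a frame late now and then, while the modulo method sends exactly every third frame by construction:

```csharp
using System;

public static class SendRateComparison
{
    // Simulate one second of a 60 Hz loop and count sends for both methods.
    public static void Compare(out int timerSends, out int moduloSends)
    {
        const double dt = 1.0 / 60.0;
        double currentTime = 0.0, lastSendTime = 0.0;
        timerSends = 0;
        moduloSends = 0;

        for (int frame = 1; frame <= 60; frame++)
        {
            currentTime += dt;  // accumulated floating-point time

            // Method 1: separate timer, snapped to "now" on each send.
            if ((currentTime - lastSendTime) >= 0.05)
            {
                timerSends++;
                lastSendTime = currentTime;
            }

            // Method 2: send on every 3rd simulation frame.
            if (frame % 3 == 0)
            {
                moduloSends++;
            }
        }
    }
}
```

Over one simulated second the modulo method produces exactly 20 sends; the timer method's count depends on rounding at each step (each cycle takes 3 or 4 frames), which is exactly the drift being discussed.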
  7. Where to put the jitter buffer

      Ah yes, maybe I was not clear - assuming a Client/Server model, is it commonplace to de-jitter things sent from the server to the client and also de-jitter things sent from the clients to the server? Or is it usually just applied in one direction? I would assume both.

      As always, a very good point, and I'm leaning towards putting the entire packet in a jitter buffer, as it just makes the entire code easier and cleaner (like you say). Also, I worry about actually untangling what *can* be delivered instantly and what has to be de-jittered; it just seems like it would lead to a slew of possible de-sync issues and weird behavior.
  8. So, I have been working on my networking code more and more, and to be honest it's starting to look pretty nice :). But, enough talk... my question is pretty straightforward. When implementing a jitter buffer, where in the packet delivery chain is it commonplace to put it? I see a few different options:

  • You apply the jitter buffer to the entire packet "receive" mechanism, basically buffering each packet as it comes in and then handing it out at the proper interval to the rest of the code.

  • Since some data might be time critical, you put the jitter buffer "after" the packet receive mechanism and move it to the specific parts of the application that need the buffering - for example, rendering of positions.

Also, is it commonplace to use a jitter buffer on both ends of the connection (server and client), or do you usually just run it on one (the client, I would guess)?
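For the first option (buffering the whole packet stream), a minimal sketch could look like this. The class and member names are my own invention, not from any specific library, and a real implementation would also need a policy for the buffer running dry or growing too large:

```csharp
using System;
using System.Collections.Generic;

// Minimal de-jitter buffer sketch. Packets are queued as they arrive and
// only handed out once per fixed network tick, after an initial hold-back
// that absorbs variation in arrival times.
public class JitterBuffer<T>
{
    private readonly Queue<T> _queue = new Queue<T>();
    private readonly int _minBuffered;  // packets to hold back before draining
    private bool _started;

    public JitterBuffer(int minBuffered)
    {
        _minBuffered = minBuffered;
    }

    // Called from the packet "receive" mechanism whenever a packet arrives.
    public void OnPacketReceived(T packet)
    {
        _queue.Enqueue(packet);
    }

    // Called once per fixed tick by the rest of the code; returns false
    // until the initial hold-back is filled, then drains one packet per tick.
    public bool TryDequeue(out T packet)
    {
        if (!_started && _queue.Count >= _minBuffered)
            _started = true;

        if (_started && _queue.Count > 0)
        {
            packet = _queue.Dequeue();
            return true;
        }

        packet = default(T);
        return false;
    }
}
```

With minBuffered set to, say, 2, downstream code sees nothing until two packets have arrived, after which it gets at most one packet per tick regardless of how bursty the arrivals were.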
  9. I've implemented large parts (~90% of the functionality) of the networking techniques described in the now famous paper "The TRIBES Engine Networking Model" (http://www.pingz.com/wordpress/wp-content/uploads/2009/11/tribes_networking_model.pdf) in my C# networking library. All in all, I have to say that I'm incredibly pleased with the flexibility and performance that this networking scheme offers, but I've run into a bit of a problem. One of the features of the tribes model is that not all objects are updated every time; rather, a prioritization scheme picks which object updates should be written to each packet. What this means in practice is that position/rotation/velocity updates for your networked objects will not be received at a constant pace. This creates a bit of a problem with displaying objects that move nice and smooth on screen. Now, even though I am no expert on this subject, the main ideas behind rendering remote objects on screen are these:

  • Interpolation/"Valve"-style, where you hold back rendering by update_rate * 2, basically introducing some artificial lag but always (except in the case of a lot of packet drop) rendering nice smooth movement on screen.

  • Interpolation and then extrapolation, where you interpolate from the current to the next known position as it comes in, and then extrapolate based on velocity from the last known position until a new packet comes in; there are a few ways to go about this (linear, cubic splines, etc.).

And, in all honesty, I'm just not sure which model fits best for the type of "Most Recent State Data" model that the tribes networking paper describes. Using interpolation/"Valve"-style could work, but it could lead to pretty large "delays". And using interpolation/extrapolation, while usable, would probably lead to a lot of visible corrections, etc. Any help on reasoning about this would be most appreciated :)
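The first option above ("Valve"-style snapshot interpolation) can be sketched like this: render the object at (now - interpolationDelay) by lerping between the two buffered snapshots that straddle that time. The Snapshot/InterpolatedEntity names are illustrative, and for brevity this sketch tracks a single coordinate and assumes at least one snapshot has been received:

```csharp
using System;
using System.Collections.Generic;

// One received state update, timestamped with its send/receive time.
public struct Snapshot
{
    public double Time;
    public double X;  // a full implementation would hold position/rotation/velocity
}

public class InterpolatedEntity
{
    private readonly List<Snapshot> _snapshots = new List<Snapshot>();

    public void AddSnapshot(Snapshot s)
    {
        _snapshots.Add(s);  // assumed to arrive in time order
    }

    // interpDelay is typically 2 * the update interval (e.g. 0.1s at 20 Hz).
    public double PositionAt(double now, double interpDelay)
    {
        double renderTime = now - interpDelay;

        // Find the pair of snapshots straddling renderTime and lerp.
        for (int i = _snapshots.Count - 1; i > 0; i--)
        {
            Snapshot a = _snapshots[i - 1], b = _snapshots[i];
            if (a.Time <= renderTime && renderTime <= b.Time)
            {
                double t = (renderTime - a.Time) / (b.Time - a.Time);
                return a.X + (b.X - a.X) * t;  // linear interpolation
            }
        }

        // No pair straddles renderTime (too few or too old snapshots):
        // clamp to the latest known state. This is where extrapolation
        // would kick in for the second option.
        return _snapshots[_snapshots.Count - 1].X;
    }
}
```

Because renderTime sits behind real time by the interpolation delay, irregular snapshot arrival (as produced by the tribes prioritization scheme) only causes visible problems when the gap between two consecutive snapshots for an object exceeds that delay.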
  10. I really tried, I did... but #windows8 is just horrible.
  11. [quote name='GeniusPooh' timestamp='1356011499' post='5012792'] OMG too long video can you capture that specific effect? [/quote]I linked with a time/second mark, but apparently the embedded youtube player didn't pick up on that; no need to be a douche about it.
  12. I've been messing around with creating my own "shield" shader. The basic idea is something like this, which can be seen in PlanetSide 2: [url="http://www.youtube.com/watch?v=gLHxPZu4blI&t=20m36s"]http://www.youtube.com/watch?v=gLHxPZu4blI&t=20m36s[/url] Now, rendering the basic shield on a mesh is not a problem; the problematic parts come when I want to create the "borders" you can see in the video where the shield touches the building itself. The same borders also appear when, say, a vehicle or player passes through the shield. Another effect I'm looking into creating is a "wave/circle" type of effect when a bullet hits the shield, for example. Anyway, I'm not looking for cut-and-paste shader code, but more for general pointers in the right direction to achieve these effects.
  13. So, I've re-read the CDLOD paper two times now, and I think I've got the hang of the general algorithm. It does indeed seem way simpler to implement than geoclipmaps, but in the end there's a section called "granularity issues" that reads like this: [quote] One limitation of the algorithm is that a single quadtree node can only support transition between two LOD layers. This limits the minimum possible viewing range, or the minimum quadtree depth, because only one smooth transition can be performed between two LOD layers over the area of the same node. Increasing the viewing range will move LOD transition areas further away from each other and solve the problem at the expense of having more render data to process. The other options are to reduce the number of LOD levels, which reduces the benefits of the LOD system, or to increase the quadtree depth to increase the granularity, which increases quadtree memory and CPU use. The size of the morph area can also be decreased to mitigate this problem, but that can make the transition between levels noticeable. Since the LOD works in three dimensions, this problem will be enhanced when using extremely rough terrain with large height differences: thus, different settings might be required for each dataset. In the provided data examples, LOD settings are tuned so that the ranges are always acceptable. In the case where different datasets and settings are used, these LOD transition problems can appear in the form of seams between LOD levels. Debug builds of the demo code will always detect a possibility of such a situation and display a warning so that settings can be corrected (detection code is overzealous, so a warning does not guarantee that the seam will be observable - just that it is theoretically possible). [/quote] And I just wanted to ask those people that have implemented CDLOD whether this affected any real-world implementations? And if so, how much? And what considerations did you have to take for your type of environment/game? The section is very vague on the details of exactly when this issue shows up, etc.
  14. [quote name='gjaegy' timestamp='1348644857' post='4983913'] I would second CDLOD as well, this is what I implemented. It's much easier to understand, and performance are great - TBH I never really understood why geo clipmap would achieve better results than a chunk based method (but I might be dumb :) ) Also, CDLOD will give you geomorphing without any further effort, which is a very nice-to-have feature. [/quote]I've been looking into CDLOD; I read the paper published, but I felt it was missing a few very important details, like how the height map is updated, how to deal with *really large* terrains, etc. Edit: Also, I'm not a super fan of relying on the fact that the GPU has bilinear filtering of vertex textures, even though it's common now.
  15. #unity3d rage, example of what I mean: http://t.co/ok5OhPhF this is so silly.