

fholm

Member Since 13 Aug 2011
Offline Last Active Jun 17 2014 11:35 PM

#5157171 I just released my own networking solution, Bolt, for the Unity3D game engine.

Posted by fholm on 31 May 2014 - 11:03 AM

Figured I would post a quick notice in here as well, considering all of the asking and talking I've done here over the past two years or so :). I just released my own networking solution, called Bolt, for Unity3D. You can find it over at http://www.boltengine.com


#5123007 How do I determine send/recv buffer size

Posted by fholm on 12 January 2014 - 01:40 AM

Quote:
I don't know if you're referring to Nagle's algorithm, but I've never had a situation where a remote client would recv() partial bytes, like 2 bytes out of 100 for example. Either the client receives the server's send() message in full, or a winsock error (e.g. error code 10035, WSAEWOULDBLOCK) is returned by recv(). Or maybe you're referring to a client whose network connection is cutting in and out.


Maybe it's rare, but it does happen and it is a case that needs to be handled.
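
For what it's worth, with TCP the standard way to handle this is to loop until the full message has arrived. A minimal C# sketch (the class and method names are mine, not from the thread):

using System.Net.Sockets;

static class PartialRecv
{
    // Reads exactly 'count' bytes into 'buffer', looping over partial
    // receives. Returns false if the peer closed the connection.
    public static bool ReceiveExact(Socket socket, byte[] buffer, int count)
    {
        int received = 0;
        while (received < count)
        {
            int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
            if (n == 0)
                return false; // remote side closed the socket
            received += n;
        }
        return true;
    }
}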




#5104297 GeoClipMaps - splitting geometry into smaller meshes or using fewer large meshes

Posted by fholm on 25 October 2013 - 04:00 AM

Quote:
One vertex buffer per level is not necessary.

I did something similar. I have one central mesh, one ring mesh and one transition mesh (also a ring, but thinner), the latter two being rendered multiple times at different scales. The memory impact is negligible, but since all meshes are always visible, you lose the ability to frustum cull the terrain behind the camera. On the upside, you have fewer draw calls.

If I find the time, I'll try to rearrange the indices in the index buffer so that I can render only specific index ranges in each ring, where each index range corresponds to one section of the ring. That way, I should gain the ability to frustum cull the terrain without needing significantly more draw calls (at least when the index ranges can be merged).

 

Ah yes, of course - only one vertex buffer per ring is needed, plus one for the center. But yes, you lose the ability to do frustum culling. Interesting idea about the index buffers though, and doing frustum culling through that.
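
To illustrate how that could look in Unity (a hypothetical sketch with my own names, assuming the ring mesh is built with one submesh per ring section and a precomputed world-space bounds array):

using UnityEngine;

public class RingCuller : MonoBehaviour
{
    public Mesh ringMesh;          // ring geometry, one submesh per section
    public Material material;
    public Bounds[] sectionBounds; // world-space bounds of each ring section

    void Update()
    {
        var planes = GeometryUtility.CalculateFrustumPlanes(Camera.main);

        for (int i = 0; i < sectionBounds.Length; ++i)
        {
            // Draw only the ring sections that intersect the view frustum.
            if (GeometryUtility.TestPlanesAABB(planes, sectionBounds[i]))
            {
                Graphics.DrawMesh(ringMesh, transform.localToWorldMatrix,
                                  material, gameObject.layer, null, i);
            }
        }
    }
}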

 

Quote:
I can't see how different this approach is from, say, a quad-tree based LOD. Unless I am missing something...

 

Well, compared to something like quad-tree based geomipmapping, which is usually done on the CPU, this has a few benefits:

1) All of the heavy computation is done on the GPU
2) Far fewer draw calls




#4891762 Structuring a simulation over several threads

Posted by fholm on 08 December 2011 - 06:07 AM

So, I've been working on the server for my hobby game. So far I've done the simple thing of just running it all in a single thread - "the simplest thing that will work" - but sadly it does not work any more, as one thread is not enough to handle all the sending and receiving of data plus the simulation at once. My initial thought was to split the sending and receiving into their own separate threads, run the simulation in a third thread, and communicate between the three threads using message queues. This works decently well, but there is still going to be a point where one thread for the simulation is not enough.

Truly multi-threading any application is not something undertaken lightly. I have written several big systems running on N-core machines, where N is pretty big (32+), but none of those problem domains were as complex as a real-time simulation; they were highly isolated and easy to partition over several threads.

My first thought, and the initial implementation I built, was to split the server into N threads, where each thread gets a certain subset of the players and actors on the server and is responsible for receiving data, sending data and running the simulation for the players and actors it has been assigned. This actually works pretty well, and I've been able to run a huge number of actors on the server. The problem comes when one actor in thread T0 needs to access an actor in thread T1.

I once again tried to go with the "simplest thing that works" and used a simple locking scheme: while you do something to actor X that is not owned by your current thread, you have to lock and release it with a normal Monitor.Enter / Monitor.Exit pair. This obviously does not scale, and the lock contention becomes insane. With a lot of interactions the server is barely faster than a single-threaded one, which is to be expected: coarse locking at a large scale essentially makes your application sequential, much like a single thread does (not quite true, but the gains are nowhere near those of a truly concurrent implementation). On top of that, add all the context switching that excessive locking causes, and you will not gain much performance.

So, I decided to revamp the locking and implemented a thread-safe reference counter for the objects shared between threads, so that one thread could say "I need this object right now" or even "I need this object for some time". It does not grant exclusive access to the object; it just makes sure the object does not vanish out from under the thread by being deleted from the world (for example, an NPC that could get despawned by its owning thread while another thread is trying to buy an item from it).
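
In case it helps, the core of such a counter can be as small as this (a minimal sketch with my own naming, not the actual server code):

using System.Threading;

public class SharedActor
{
    int refCount = 1; // the owning thread holds the initial reference

    public void AddRef()
    {
        Interlocked.Increment(ref refCount);
    }

    public void Release()
    {
        // Only the thread that drops the count to zero destroys the actor,
        // so it can never vanish while another thread still holds it.
        if (Interlocked.Decrement(ref refCount) == 0)
            Destroy();
    }

    void Destroy() { /* remove from the world, free resources */ }
}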

This still leaves a problem, though: two threads could modify the object at the same time (currently two different threads could both say "I need this object right now" and both would be allowed to access it), which obviously does not work. This is where I'm at currently. I have a couple of ideas on how to solve it, and I'm looking for feedback on them, or general comments, or explanations of how problems like this have been solved before in commercial games or any literature I could read.

IDEA 1: Creating an even more fine-grained locking scheme which allows for both read and write locks to be held on shared objects.

Basically, implement a fine-grained locking scheme that allows threads to lock an actor for either reading or writing, allowing only one writer but several readers. There are several ways this could be accomplished: the wait handles built into Windows, or more lightweight methods using Monitor.Enter/Exit and a pair of sync objects.
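
If .NET's built-in ReaderWriterLockSlim is acceptable (the alternative being to roll it by hand with Monitor as described above), per-actor locking could look roughly like this sketch (my own naming):

using System.Threading;

public class LockedActor
{
    readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    float health; // example piece of shared state

    public float ReadHealth()
    {
        rwLock.EnterReadLock(); // any number of concurrent readers
        try { return health; }
        finally { rwLock.ExitReadLock(); }
    }

    public void ModifyHealth(float delta)
    {
        rwLock.EnterWriteLock(); // exclusive: blocks readers and other writers
        try { health += delta; }
        finally { rwLock.ExitWriteLock(); }
    }
}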

IDEA 2: Make each thread fully self-contained, exposing no shared objects to other threads; instead expose read-only or some type of auto-updating proxy objects that the other threads can use.

The basic idea is to not share any writable data between the threads, and instead expose read-only objects to the other threads which either act as proxies for the main objects or contain the data needed by the other thread in a read-only format. This has the added benefit of removing the distinction between "a different thread", "a different process" and "a different machine", as it would in theory be possible to create an abstraction layer that hides the different communication mechanisms for each. How updates to objects would be handled I'm not 100% sure of; probably some incoming message queue that other threads can push wanted changes onto, which is then dequeued in sequential order on the owning thread (a sketch of this follows below).
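
That update path could be sketched like this, assuming .NET 4's ConcurrentQueue is available (or a hand-rolled locked queue on older runtimes); the names are mine:

using System;
using System.Collections.Concurrent;

public class OwnedActor
{
    readonly ConcurrentQueue<Action<OwnedActor>> inbox =
        new ConcurrentQueue<Action<OwnedActor>>();

    public float Gold; // mutated only by the owning thread

    // Called from any thread: request a change without touching state.
    public void Post(Action<OwnedActor> change)
    {
        inbox.Enqueue(change);
    }

    // Called only by the owning thread, once per simulation tick,
    // applying queued changes in sequential order.
    public void DrainInbox()
    {
        Action<OwnedActor> change;
        while (inbox.TryDequeue(out change))
            change(this);
    }
}

// Usage from another thread, e.g. a player buying an item from an NPC:
// npc.Post(actor => actor.Gold += itemPrice);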

As I said, I'm interested in any feedback on this type of problem. I posted it in Multiplayer / Networking because it's specifically for a multiplayer server; I hope I picked the right forum!


#4854777 Synchronizing server and client time

Posted by fholm on 28 August 2011 - 11:57 AM

Quote:
So, if at this point, you have a system that works, then I suggest you keep it that way until there's evidence that you need to change it :-)


Thanks for all your help, and this was probably the best advice I got in this thread. I have something that works; I need to stop over-thinking it :)


#4854383 Synchronizing server and client time

Posted by fholm on 27 August 2011 - 05:12 AM

This is my final code, which seems to be working very well:


using UnityEngine;

public static class NetworkTime
{
    public static float offset = 0.0f;    // estimated difference between server and local clocks
    public static float gameTime = 0.0f;  // shared network clock (local time + offset)
    public static float localTime = 0.0f; // local clock, mirrors Time.time

    public static void UpdateTime()
    {
        localTime = Time.time;
        gameTime = localTime + offset;
    }

    public static void UpdateOffset(float remoteTime, float rtt)
    {
        // Estimate the server's current time as the received timestamp
        // plus half the round trip, and derive the offset from it.
        var newOffset = (remoteTime - Time.time) + (rtt * 0.5f);

        if (offset == 0.0f)
        {
            // First sample: take it as-is.
            offset = newOffset;
        }
        else
        {
            // Later samples: exponential moving average to smooth out jitter.
            offset = (offset * 0.95f) + (newOffset * 0.05f);
        }

        UpdateTime();
    }
}


  • Time.time is the local time at which the current simulation step started (supplied by Unity as a built-in property)
  • NetworkTime.UpdateTime is called on both the server and the client to set the local time (Time.time isn't used directly anywhere in my code, I always go through NetworkTime.gameTime). The offset is always 0 on the server, so localTime == gameTime there. UpdateTime is called at the start of every physics step and every frame render to keep the time up to date
  • NetworkTime.UpdateOffset is called on the client only. Every packet received from the server carries the server's latest gametime as its first four bytes, which is passed into UpdateOffset through the remoteTime parameter. The average roundtrip time I get from the lidgren library automatically, and it is passed in through the rtt parameter. I then use hplus0603's formula (I think, maybe I'll be corrected) to calculate the offset. UpdateOffset also calls UpdateTime as the last thing it does, to update the time with the latest offset.
This seems to be working very reliably for me, albeit only over simulated latency so far, but the client seems to stay in sync to within 5-15ms of the server, which feels good enough for me at least :)


Slight update: I also calculate the current timestep using a simple calculation like this: timeStep = (int)(gameTime * stepsPerSecond);


#4853612 Synchronizing server and client time

Posted by fholm on 25 August 2011 - 07:01 AM

So, the "best solution" in my eyes seems to be this:

  • On every world state (every 45ms, 22.22Hz) sent from the server, the server's current game time is attached
  • The client sets its local gametime clock like this: gametime = server game time + (average roundtrip / 2)
  • Every tick the client doesn't get an update from the server (every 15ms, 66.66Hz), it increments its local gametime by 15ms (the tick time)
  • The server also keeps track of the last gametime that was sent to each client
  • When the client issues a command such as "FORWARD", it attaches its local, estimated gametime: lastReceivedServerTime + (averageRoundtrip / 2) + (currentTime - timeSinceLastServerUpdate)
  • When the server receives a command from a client, it verifies that the attached gametime falls within a valid range, say +/- (100 + averageRoundtrip / 2) ms of the current time. If it falls outside, the command is discarded; if not, it gets snapped to the nearest tick size and put in the queue of commands to be processed (see the sketch below).
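
A minimal sketch of that server-side check (all names are mine; tick length 15ms as above, times in seconds):

public class CommandValidator
{
    const float TickLength = 0.015f; // 66.66Hz simulation tick

    // Validates the gametime attached to an incoming command. Returns
    // false if it falls outside the allowed window; otherwise snaps it
    // to the nearest tick.
    public static bool TryAcceptCommand(float attachedGameTime,
                                        float serverGameTime,
                                        float averageRoundtrip,
                                        out int tick)
    {
        // Allowed window: +/- (100ms + half the round trip).
        float window = 0.1f + (averageRoundtrip * 0.5f);

        if (attachedGameTime < serverGameTime - window ||
            attachedGameTime > serverGameTime + window)
        {
            tick = 0;
            return false; // implausible timestamp: discard the command
        }

        // Snap the timestamp to the nearest simulation tick.
        tick = (int)((attachedGameTime / TickLength) + 0.5f);
        return true;
    }
}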
Is this scheme plausible? Good? Bad?

