
Issues with a seamless world


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

11 replies to this topic

#1 origamii   Members   -  Reputation: 116


Posted 20 February 2014 - 08:12 PM

Hi all,

I've been planning to start running some tests (for hardware/bandwidth usage) to determine the feasibility of my game design since it is multiplayer (100-200+ players depending on various factors and the server's hardware), and I want to use authoritative servers. The engine I'm using is Unity and I am thinking of using uLink for the networking aspect.

Anyway, I want the game world to be decently large and seamless. I've been doing a lot of research on this (including the floating-point precision issues) and have been working out some solutions before I get to coding, but I've hit a snag.

The solution I have at the moment is a general method that breaks the world up into chunks that exist in a database only and are pulled up on demand as a player gets near them. This could be small chunks near the player or larger chunks.

With uLink, each server instance should be able to handle an area of 10 km² or smaller (depending on the physics tests I run for floating-point issues). This will act as a container for a number of chunks. Multiple containers can potentially exist in the same server instance if they are on different physics layers.
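To make the layout concrete, here is a minimal sketch of mapping a world position to a chunk and to its owning container. The sizes and function names are my own illustrative assumptions, not anything from uLink or the post:

```python
# Hypothetical sketch: world position -> chunk -> container (server area).
# Chunk and container sizes below are illustrative, not from the post.
CHUNK_SIZE = 250.0        # metres per chunk edge (assumption)
CHUNKS_PER_CONTAINER = 4  # container edge in chunks -> 1 km x 1 km here

def chunk_coords(x: float, z: float) -> tuple:
    """Integer chunk grid cell containing a world position."""
    return (int(x // CHUNK_SIZE), int(z // CHUNK_SIZE))

def container_coords(cx: int, cz: int) -> tuple:
    """Container (server-instance area) owning a given chunk."""
    return (cx // CHUNKS_PER_CONTAINER, cz // CHUNKS_PER_CONTAINER)

# A player at (900, 1100) lands in chunk (3, 4), which belongs to
# container (0, 1) -- so adjacency of chunks does not imply the same
# container, which is exactly the cross-boundary problem described.
```

The database-backed chunks would then be keyed by these integer coordinates and streamed in as players approach.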

The issue is, for example, if one player is in chunk 57 in instance (or layer) A and at the edge of instance A's container, and another player is in chunk 58 in instance B and at the edge of instance B's container and they shoot each other, how is it handled?

The idea of having your projectiles/lasers/whatever hit the edge of your chunk, and then having them be recreated in the other chunk with their state intact seems far from optimal, if even feasible for fluid gameplay.

I did think about dynamically grouping players that come near each other into their own container to avoid this altogether, but even then the issue can arise depending on the layout of the players. One nasty example: a chain of players spread out, each 1-2 km apart; the rules for grouping would basically crumble, and it goes back to the cross-boundary shooting issue again. In fact, any case where a number of players are spread over an area larger than 10 km² while in decent range of each other becomes a problem.

If anyone has any thoughts on this, I would greatly appreciate it.

TL;DR - In a seamless universe divided into isolated containers, each holding various chunks, how do you handle shooting into an adjacent chunk that happens to live in another container (which could even be on a separate server instance, despite being adjacent in the world)?




#2 wodinoneeye   Members   -  Reputation: 876


Posted 21 February 2014 - 06:43 AM

It gets nasty. Each chunk needs to maintain a border area (to the depth of any significant visible actions/events) for each of its adjacent chunks. In that border area you maintain a shadow copy of any significant objects from the neighbouring chunk (via some kind of subscription that channels updates to THIS server machine). Every time something happens to those potentially visible objects in the other chunks, the current chunk updates its shadow objects with whatever state/event changes your game mechanics deem significant. You can then do basic visibility checks and projectile pathing, and send client updates (object positions/events), without constantly transmitting requests for that information to the other chunk's server. Needless to say, the faster updates propagate between servers, the more accurate the interactions will be.
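A minimal sketch of this border/shadow-copy idea, one-dimensional for brevity; the class, the border depth, and the update shape are illustrative assumptions, and a real implementation would cover both axes plus the network transport:

```python
# Sketch: each chunk keeps read-only "shadow" copies of neighbour objects
# that sit within a border strip, refreshed by pushed updates.
BORDER_DEPTH = 50.0  # how far into a neighbour we can "see" (assumption)

class Chunk:
    def __init__(self, x_min: float, x_max: float):
        self.x_min, self.x_max = x_min, x_max
        self.objects = {}   # id -> state, authoritative here
        self.shadows = {}   # id -> state, copies owned by a neighbour

    def wants_shadow(self, x: float) -> bool:
        """True if a neighbour object at x falls inside our border strip."""
        return (self.x_min - BORDER_DEPTH <= x < self.x_min or
                self.x_max <= x < self.x_max + BORDER_DEPTH)

    def on_neighbour_update(self, obj_id: int, state: dict):
        """Push-style update arriving from a subscribed neighbour chunk."""
        if self.wants_shadow(state["x"]):
            self.shadows[obj_id] = state
        else:
            self.shadows.pop(obj_id, None)  # drifted out of the strip

chunk = Chunk(1000.0, 1250.0)
chunk.on_neighbour_update(7, {"x": 1260.0})  # 10 m past our east edge
assert 7 in chunk.shadows
```

Visibility and projectile checks can then run against `objects` and `shadows` together, which is the point of the scheme: no round trip to the neighbouring server at query time.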

 

When a player's client initiates an action against a shadow object, that action event is routed to the other server to take effect (that server maintains full detail to run all the game-mechanics calculations, including objects beyond the border area that may have effects). Results of the interaction then get forwarded to the subscribing chunks.

 

Chunks completely out of a player's viewing range could be LOD'd down in running priority (with a catch-up pass and a switch back to full detail when a player moves within range again).

 

 

Now it gets fun when you have chunks with players in them running at 'high-priority realism', adjacent ones at 'low-priority abstraction', and beyond those, chunks being rolled out of (or standing ready to be rolled into) main memory as a player gets close enough. The 'subscriptions' then have to be broken and established in a fluid manner.

 

Note: at a grid 'corner' your object can be in three other chunks' "shadow" lists (or worse, if you have irregular chunk layouts).
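For that corner case, here is a sketch of working out which neighbouring grid chunks need a shadow of an object, given its position and a border depth. Square, uniform chunks are assumed and the constants are made up:

```python
# Sketch: near a corner, an object must be shadowed into up to three
# neighbouring chunks (more with irregular layouts, as noted above).
CHUNK = 250.0   # chunk edge length (assumption)
BORDER = 50.0   # shadow border depth (assumption)

def shadow_neighbours(x: float, z: float) -> set:
    """Neighbouring grid cells whose border strip contains (x, z)."""
    home = (int(x // CHUNK), int(z // CHUNK))
    out = set()
    for dx in (-1, 0, 1):
        for dz in (-1, 0, 1):
            cell = (home[0] + dx, home[1] + dz)
            if cell == home:
                continue
            # Clamp to find the nearest point of that cell to the object.
            nx = min(max(x, cell[0] * CHUNK), (cell[0] + 1) * CHUNK)
            nz = min(max(z, cell[1] * CHUNK), (cell[1] + 1) * CHUNK)
            if abs(nx - x) <= BORDER and abs(nz - z) <= BORDER:
                out.add(cell)
    return out

# An object 10 m from the (250, 250) corner needs shadows in the east,
# north, and north-east chunks -- the three-list corner case.
```

An object in the middle of a chunk needs no shadows at all, so the bookkeeping cost is concentrated along the borders.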

 

 

 

A little more fun: if you have separate CPU cores running the chunks, the update pipes either go to a whole other machine or simply shunt to a different core/process space on the same machine. At one time I did some research on lock-free queues to try to minimize overhead for this kind of thing. But then I also had (many) separate server AI nodes running NPC units, and, like a player's client, each 'smart' object could traverse between chunks and potentially maintain its own world view of the many chunks it had visible.


Edited by wodinoneeye, 21 February 2014 - 06:49 AM.

--------------------------------------------Ratings are Opinion, not Fact

#3 hplus0603   Moderators   -  Reputation: 5693


Posted 21 February 2014 - 10:16 AM

Having built and maintained a system like this, I'd like to suggest that trying to do it on top of an existing physics engine will be a challenge.

It can be done, but if I were to build it "for real" again, I'd either re-build my own physics engine, or use some open source and significantly re-architect it to suit the problem.

I would possibly also not use the border/cell method, but instead use a system where multiple servers can share entities in the same instance, and simply make a hard rule that entity/entity interactions incur at least one tick's worth of latency, so that effects from entities can be transmitted in time to all other servers serving entities in the same area.
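The one-tick rule might look roughly like this: effects produced during tick N are queued and only applied at tick N+1, leaving a full tick to replicate them to every peer server hosting entities in the area. This is a sketch under my own assumptions, not hplus0603's actual system:

```python
# Sketch: interactions recorded this tick take effect next tick, giving
# the simulation one tick to forward them to peer servers. Names are
# illustrative; replication itself is elided.
class TickWorld:
    def __init__(self):
        self.health = {}     # entity id -> hit points
        self.pending = []    # effects queued for the *next* tick
        self.tick = 0

    def shoot(self, target: int, damage: int):
        # Recorded now; applied next tick (time to reach peer servers).
        self.pending.append((target, damage))

    def step(self):
        effects, self.pending = self.pending, []
        for target, damage in effects:
            self.health[target] -= damage
        self.tick += 1

w = TickWorld()
w.health[1] = 100
w.shoot(1, 30)
assert w.health[1] == 100  # not applied within the same tick
w.step()
assert w.health[1] == 70   # applied on the following tick
```

Because every server applies the same queued effects on the same tick, entities mastered on different machines stay consistent without cross-server locking.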

 

If you are a small business/studio or even a hobbyist, I would suggest building your world as separate areas (city districts, islands, whatever) with some kind of "neutral, no-effect zone" between them, so you don't have to worry about A shooting at B. Yes, this changes the game design a bit, but it does so in the interest of actually finishing a game. If it turns out to be very successful, perhaps you can take on the hard technical challenge in a later update/version.


enum Bool { True, False, FileNotFound };

#4 wodinoneeye   Members   -  Reputation: 876


Posted 22 February 2014 - 11:22 AM

"but instead use a system where multiple servers can share entities in the same instance,"

 

Not sure what you mean by this.

 

You could get rid of the borders in the system I describe (if it's too complicated) and just take 'shadows' from the entirety of each adjacent 'chunk' (the chunks being spread across multiple servers, which do the bookkeeping for the world).

 

Behavior/AI decisions for NPC objects, execution of client avatar commands/actions, and all the object bookkeeping (arbitration of effects and of command/action results) each need an authoritative processing location (however the entities involved are distributed).

Being distributed for scaling, those locations will have to keep the others informed with COPIES of each authoritative decision/outcome (and of current states). It has to be a prearranged 'push' flow, because you don't know what has happened elsewhere in order to know to 'pull' anything.

 

 

Unless you have fully deterministic game mechanics, you can't arbitrate every decision/outcome equivalently in several locations.


Edited by wodinoneeye, 22 February 2014 - 11:25 AM.


#5 hplus0603   Moderators   -  Reputation: 5693


Posted 22 February 2014 - 11:52 PM

"but instead use a system where multiple servers can share entities in the same instance,"

Not sure what you mean by this.


What I mean by that is that I'd probably allow multiple servers to serve the same area, and be "masters" for different entities within that same area. A cheap ghost of each remote entity needs to be created on each server, but this can be very, VERY cheap -- it doesn't even need physics, other than whatever the collision shell is.

Then, when there is player/player interaction, you simply make the rule that interactions take one tick -- this gives the interactor time to send the message to the interactee about the interaction for the next simulation tick.

This way, I believe I could get to denser areas than an area-derived server split, because many servers can overlay various areas, and if you need more entities, you add more servers, and don't have to worry about the area.

The problem with area-based servers that use physics-based ghosts is that there exists an upper limit to the density (in players-per-square-meter) you can achieve, because you'll end up with too much load just serving border areas if you make each area-based cell too small.

#6 wodinoneeye   Members   -  Reputation: 876


Posted 23 February 2014 - 12:50 AM

OK, so I see the likely solution: 'chunked' bookkeeping server(s) (at least to maintain the data, if not to decide/arbitrate changes to it), other servers to control NPCs/props, and then the front ends for clients.

 

The physics 'events' (living wherever, now on their own server subset...) would be their own 'temporary entities', invoked to pull data from all the other entities involved (terrain data, the action's originator, targets directly or indirectly involved whose states might be in flux from simultaneous effects in the simulation...). I would expect you need a one-'tick' delay anyway, to let all the different factors involved be consolidated so that a proper decision on the result can be made and posted (possibly precipitating follow-up events from side effects, a cascade of secondary events).

 

That interaction 'tick' could itself be divorced from a lot of the other timing happening on the servers (and of course the clients), but it must be fast enough for low feedback delay on the client end (results of the player's actions visible without an obviously intrusive delay).

 

 

Depending on game mechanics, for example long-range projectiles crossing many 'chunks', a lot of data may suddenly have to be pulled between servers, OR (when appropriate) you send a 'continuation event', like the next segment of a ballistic path: small data, which is then executed locally on that chunk (e.g. a collision check against the chunk's terrain mesh and its set of dynamic objects).

 

Either a lot of remote data has to be pulled or kept cached (with an update flow), or the interaction has to be farmed out to the locality (or localities) it immediately interacts with.
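The continuation-event option could be sketched like this: the owning chunk simulates only the path segment it covers, then hands a small record to the next chunk instead of pulling that chunk's data. One-dimensional and purely illustrative (it assumes the projectile moves in the +x direction; all names are made up):

```python
# Sketch: a projectile is advanced within its current chunk; if it
# crosses the boundary, a small "continuation" record (next chunk id,
# entry position, velocity, remaining time) is handed to that chunk.
CHUNK = 250.0  # chunk edge length (assumption)

def advance(chunk_id: int, x: float, vx: float, dt: float):
    """Simulate within one chunk; return either a local resolution or a
    continuation record for the next chunk along the path (vx > 0)."""
    end = (chunk_id + 1) * CHUNK
    x_new = x + vx * dt
    if x_new < end:
        return ("resolved", x_new)        # stayed local: collide here
    t_cross = (end - x) / vx              # time at which we hit the edge
    return ("continue", chunk_id + 1, end, vx, dt - t_cross)

# A projectile at x=240 moving at +100 m/s over one second crosses from
# chunk 0 into chunk 1 with 0.9 s of flight time left to simulate there.
result = advance(0, 240.0, 100.0, 1.0)
assert result[0] == "continue" and result[1] == 1
```

The record is a few numbers rather than a terrain mesh or object list, which is exactly the bandwidth trade-off described above.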

 

So it's a balancing act: how shallow/simple can you make the 'ghost/shadow' data? Very frequently used data like position/appearance/current animation might be 'echoed' in the shadow/ghost on a server, just enough to trigger interactions (much like a client front end that serves hundreds of clients all looking at a similar scene, so that each doesn't have to independently pull the same cross-chunk data; instead it is cached locally once and shared among them all). Still a regularized subscription mechanism in 'chunks'...

 

More detailed data that is used only for infrequent interactions (like physics) would then be pulled only when needed.

 


 

I've been examining mechanisms like this because of my interest in heavyweight AI for NPCs, which would require their own server nodes to run on and thus pull a lot of data (much like player clients do), being highly reactive to situation changes near players and having to maintain a continuous local world view for the AI to work on. Server-to-server network traffic from that is greatly increased (smart objects of the type I'm thinking of need to continuously see a LOT of their changing surroundings), hence my interest in optimizing server-to-server data flows. (An obvious optimization is running a player's 'sidekick' AI NPCs on the player's client, where they have direct access to the same data the player is seeing, besides farming the AI processing out to far more capacity than centralized servers offer.)



#7 origamii   Members   -  Reputation: 116


Posted 08 March 2014 - 02:21 PM

Sorry for the very delayed response, but I wanted to thank you guys so much for your responses as they were very helpful.

 

After some thought, I've decided to prototype another game idea I have instead. The reason is that for the one this thread is about, there is a lot of technical groundwork I would have to lay down just to get to the gameplay development, and at the moment I would much rather work on something where I can start prototyping the gameplay sooner rather than later.



#8 Norman Barrows   Crossbones+   -  Reputation: 2308


Posted 09 March 2014 - 12:31 PM

the problem you describe comes from using a level based design paradigm. you may want to research how large simulators are made. games like red baron 3d - where the game world is the entire western front and they model every aircraft in flight along the entire front from switzerland to the channel. or silent hunter where the game world is the entire world, and they model every ship in the pacific theater of operations (maybe more, never left the pacific). or my current project CAVEMAN, where the game world is 2500x2500 miles in size, with NO load screens whatsoever. and yes, you can shoot across terrain chunks and 5x5 mile map squares with no problem.

 

i really should do an article about the evils of level based design. doom is not the only way to build a game, and is a very sub-optimal way to build a simulation (as you're discovering).


Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

 

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

 

 


#9 hplus0603   Moderators   -  Reputation: 5693


Posted 09 March 2014 - 09:45 PM

Two comments:

1) There are very legit game design reasons to use levels for certain designs. Having players be able to escape a too-hard monster by simply changing levels to run away is a fine design choice, for example. if the world is fully roaming, there's really no clear way to "run away" when you get in over your head.

2) Regarding your signature: "Changing textures" isn't actually all that bad. "Changing formats" is worse. And "changing vertex formats" is even worse than changing texture formats. Changing vertex formats may very well require a re-compile of your shaders, which is more expensive than pretty much anything else. Changing texture formats likely requires a stall of some sort, so fairly expensive. Changing from one vertex buffer to another with the same vertex format and stride, or changing from one texture to another with the same size, pixel format, and MIP layout, is likely to be very cheap these days.
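The usual practical consequence of these relative costs is to sort draw calls so that the most expensive switch (shader / vertex format) changes least often, then textures. A toy illustration that only counts binds; the draw data is made up:

```python
# Sketch: count the state changes a given draw order would incur, then
# show that sorting by (shader, texture) reduces the expensive switches.
def count_switches(draws):
    """Return (shader_binds, texture_binds) for a sequence of draws,
    where each draw is a (shader, texture) pair."""
    shader_binds = texture_binds = 0
    last = (None, None)
    for shader, texture in draws:
        if shader != last[0]:
            shader_binds += 1
        if texture != last[1]:
            texture_binds += 1
        last = (shader, texture)
    return shader_binds, texture_binds

draws = [("skin", "a"), ("terrain", "b"), ("skin", "a"), ("terrain", "c")]
sorted_draws = sorted(draws)           # group by shader, then by texture
assert count_switches(draws) == (4, 4)
assert count_switches(sorted_draws) == (2, 3)
```

With the cost ordering described above, the sort key puts the priciest state first, so the cheap same-format swaps are the ones that happen most often.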

#10 Norman Barrows   Crossbones+   -  Reputation: 2308


Posted 10 March 2014 - 12:48 PM


1) There are very legit game design reasons to use levels for certain designs. Having players be able to escape a too-hard monster by simply changing levels to run away is a fine design choice, for example. if the world is fully roaming, there's really no clear way to "run away" when you get in over your head.

 

therein lies the difference between a game and a simulation. in a shooter, game play comes before realism. so the fact that your opponent is no longer modeled, because the game only simulates the active level and you just changed the active level map, leaving your opponent on the previous level, is ok. not very realistic, but ok (a matter of opinion there actually; i personally would consider it unacceptably unrealistic - but then again, i'm an old school hard core gamer). in a simulation, if you get in over your head, you're _supposed_ to die.

 

re the signature:

the quote refers to runtime overhead costs only, and is drawn from experience with dx9 fixed function. however, it seems that the bind times in later versions of directx have the same relative order of overhead cost, i.e. binding a new texture is slowest, etc.




#11 hplus0603   Moderators   -  Reputation: 5693


Posted 10 March 2014 - 03:36 PM

Right. Having integrated with a variety of DIS and HLA systems, I'm familiar with simulations where blue death is a serious thing.
This is gamedev.net, though :-)

binding a new texture is slowest


In my experience, binding a new shader with a different input stream signature is more expensive than binding a texture with the same size and format as the previous texture.
But, probably a different forum for that discussion :-)

#12 Norman Barrows   Crossbones+   -  Reputation: 2308


Posted 11 March 2014 - 07:15 PM


In my experience, binding a new shader with a different input stream signature is more expensive than binding a texture with the same size and format as the previous texture.
But, probably a different forum for that discussion :-)

 

i have yet to get seriously into shaders - i know, i'm lazy! but i haven't had a desperate need yet. so far, a little alpha blending for clouds and flames, some 2-stage texture blending for snow cover, and optimized state changes have gotten me by. as for later versions of directx, i defer to your greater experience. but as you say, a good topic for another forum.



 

 




