# Voxel-based world: best practices for sending information


## Recommended Posts

So, to give you some background.

We have a game in development. The core elements are already in place, but we had been working from a single-player perspective only. Unfortunately, we found that to get multiplayer working properly we would need to redesign most or all of our data handling. We have completed the process of deciding how the server sends information to the client; the question now is: what data do we send to the client, and how often?

We have a very large world created with a Perlin noise algorithm. The map is divided into 16x16x255 regions known as "chunks". We want to figure out the best possible method for sending this information from the server to the client application. We would like to avoid the pitfalls Minecraft had with loading chunks outside of the player's immediate zone, or having chunks missing altogether for extended periods. What, if any, methods can we use?

Things of interest:

• We don't want to spam the client with redundant regions.
• We want to prioritize regions closer to the client so they load first.
• We want to limit how much information is sent, optimizing around the premise that the player can't and doesn't need to see everything.
• Possible use of super regions (regions grouped together that have not been modified since last seen).
• The best possible option for storing the data on the client once it has been sent.

We have tried a few methods but wanted to see what people thought was the best route. Also, I am not sure if this post belongs here; it technically falls under game development as well as Networking/Multiplayer. If this needs to be moved, please do so; we have no problem with that.
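To illustrate the second bullet above (nearest-first delivery), here is a minimal sketch of a distance-prioritized send queue. `ChunkCoord`, `NearestFirst` and `buildSendOrder` are made-up names, and a real server would also skip chunks the client already holds:

```cpp
#include <queue>
#include <vector>

// Hypothetical chunk coordinate (one 16x16 column, full height).
struct ChunkCoord {
    int x, z;
};

// Order chunks so the ones nearest the player are popped first.
struct NearestFirst {
    int px, pz; // player's chunk coordinates
    bool operator()(const ChunkCoord& a, const ChunkCoord& b) const {
        auto d2 = [&](const ChunkCoord& c) {
            int dx = c.x - px, dz = c.z - pz;
            return dx * dx + dz * dz;
        };
        // std::priority_queue pops the *largest* element, so invert.
        return d2(a) > d2(b);
    }
};

// Collect every chunk within `radius` (in chunks) of the player and
// return them nearest-first.
std::vector<ChunkCoord> buildSendOrder(int px, int pz, int radius) {
    std::priority_queue<ChunkCoord, std::vector<ChunkCoord>, NearestFirst> q{
        NearestFirst{px, pz}};
    for (int x = px - radius; x <= px + radius; ++x)
        for (int z = pz - radius; z <= pz + radius; ++z)
            q.push(ChunkCoord{x, z});
    std::vector<ChunkCoord> order;
    while (!q.empty()) { order.push_back(q.top()); q.pop(); }
    return order;
}
```

The same queue can be rebuilt (or lazily re-prioritized) whenever the player crosses a chunk boundary, so in-flight chunks stay ordered by current distance.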

##### Share on other sites
If I understand correctly, the world is randomly generated by the server, the client player will appear at some starting position, and you need to send the world information to it?
Is it a 2D game? If it is, I would define a character vision radius and send only the information within that radius (probably a little more), then as the character moves I would send the information for the region in the direction it is moving.
On updates:
In my own hobby project I had a zones approach; the zones were as big as the character vision radius.
In short, I would check the position of the character, then use its vision radius to determine which regions it needed information about.
In those regions I would check every monster and character and see if they were visible and inside the vision radius, then send the information update.
I didn't have any filter for monsters/players that hadn't changed, but if I had to implement one, I would use a timestamp (seconds/useconds) of the last change.
My update rate was 20 messages/second, which was pretty smooth for my needs (in my tests, less than 18 started getting bad). But this will depend a lot on player interaction and on whether you use any prediction algorithm in your game.
As for data storage, I used the same objects on the server and client and created/changed them via messages. Edited by KnolanCross
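The vision-radius plus last-change-timestamp filter described above might look roughly like this; the `Entity` layout and function names are illustrative, not from KnolanCross's project:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical entity: position plus a timestamp of its last change.
struct Entity {
    float x, y;
    double lastChanged; // seconds since some shared epoch
};

// Return the indices of entities that are inside the vision radius
// AND have changed since the last update the client was sent.
std::vector<std::size_t> entitiesToSend(const std::vector<Entity>& all,
                                        float px, float py,
                                        float visionRadius,
                                        double lastUpdateSent) {
    std::vector<std::size_t> out;
    float r2 = visionRadius * visionRadius;
    for (std::size_t i = 0; i < all.size(); ++i) {
        float dx = all[i].x - px, dy = all[i].y - py;
        if (dx * dx + dy * dy <= r2 && all[i].lastChanged > lastUpdateSent)
            out.push_back(i);
    }
    return out;
}
```

Running this at the 20 Hz rate mentioned above means each tick only serializes entities that are both visible and dirty.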

##### Share on other sites
First: Is the algorithm deterministic? If so, could you generate the chunks on the client as well, without sending data, and only send deltas for things that are changed?

Second, you wrote:

> We would like to avoid the pitfalls that minecraft had with loading chunks outside of the players immediate zone, or having chunks missing all together for an extended period of time.

Why do you think that is? And, given what you know about it, what, if anything could be done differently?

##### Share on other sites

> If I got it right, the world is randomly generated by the server, the client player will appear at some starting position and you need to send the world information to it?
> Is it a 2d game? If it is, I would define a character vision radius and send the information of character vision radius only (probably a little more), then as it moves I would send the information of the region of the direction it is moving.
> On update:
> In my own hobby project I had a zones approach, the zones were as big as the character vision radius.
> In short, I would check the position of the character, then use its vision radius to check what regions it needed information.
> At those regions I would check every monster and character in it and see if they were visible and inside the vision radius, then send the information update.
> I didn't have any filter on monsters/players that hadn't changed, but if I had to implement it, I would use a timestamp (seconds/useconds) of the last change.
> My update rate was 20 messages/second, which was pretty smooth for my needs (in my tests, less than 18 started getting bad). But this will depend a lot on how is the player interaction and whether you use any prediction algorithm in your game.
> As for data storing, I used the same objects in the server/client and created/changed them by messages.

This is a 3D game.

We have tested several methods so far, but we need to figure out the best option. None of us are network coders, so getting this right and keeping it optimized is our main concern. Should we create everything on the server, or should we have the clients do some of the legwork, with the server only passing updates? Security is also an issue, as we don't want to make it easy to hack things.

##### Share on other sites

> First: Is the algorithm deterministic? If so, could you generate the chunks on the client as well, without sending data, and only send deltas for things that are changed?
>
> Second, quote:
> We would like to avoid the pitfalls that minecraft had with loading chunks outside of the players immediate zone, or having chunks missing all together for an extended period of time.
>
> Why do you think that is? And, given what you know about it, what, if anything could be done differently?

Yes, it is. And yes, we have thought of generating a seed on the main server, sending that seed to the client on first login, having the client generate the world (or at least a decent-sized area around its location), and then sending updates as the player walks around.

As for the pitfalls of Minecraft: it would choose chunks to load seemingly at random, didn't give them any priority (they do now), and the overall chunk loading seemed slow. In our single-player version we could update a chunk in less than 5 ms, so we want to stay as close to that as possible while going over a network (I know internet speed works against this). So the goal is to come up with the best possible split between what the client should handle and what the server should, and how best to pass data between the two. Are there any methods that are normally used for this? A library we could study to understand it? All we are trying to do is find some direction. So many possible avenues and choices! Edited by riuthamus
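If the generator is shared via a seed, keeping it integer-only sidesteps the floating-point concerns raised later in this thread: the server sends one number at login, and both sides derive identical terrain from it. A toy sketch (the mixing constants are from common murmur-style hash finalizers; `columnHeight` is purely illustrative, not a real terrain function):

```cpp
#include <cstdint>

// Integer avalanche mix (murmur3-finalizer style). Integer-only math
// is bit-identical on every compiler and CPU.
uint32_t mix(uint32_t h) {
    h ^= h >> 16; h *= 0x85ebca6bu;
    h ^= h >> 13; h *= 0xc2b2ae35u;
    h ^= h >> 16;
    return h;
}

// Toy "terrain": a height in [0, 255] for a column, derived purely
// from the shared seed and the column coordinates. Server and client
// calling this with the same seed agree exactly, so no chunk data
// needs to cross the wire for unmodified terrain.
uint8_t columnHeight(uint32_t seed, int32_t x, int32_t z) {
    uint32_t h = mix(seed ^ mix((uint32_t)x) ^ mix((uint32_t)z * 0x9e3779b9u));
    return (uint8_t)(h & 0xff);
}
```

A real generator would layer several octaves of noise on top of such a hash, but the principle is the same: only player-made edits ever need to be transmitted.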

##### Share on other sites
If the world generation is deterministic, the first optimization is to have the server share the generation seed with the client and have client and server generate the base map identically.

From that point, think about how often a "chunk" or "object" changes.

If you have a world where objects/chunks rarely change once generated, then you can leverage a hybrid update-range system. The player has an update range in which changed objects are sent, and those objects are then "remembered" on the server so they never need to be resent, BUT they can still be updated for the player if that object or chunk happens to change (so when the player comes wandering back, they see the updated chunk even if they were miles away when it changed). This means chunks have to know who knows about them, in order to update all players accordingly. It optimizes the initial loading of dynamic chunks that differ from the base generation, since players are only sent info about chunks as they first come across them. Assuming these chunks don't change often, you never have to send the info for a chunk again once it's been sent the first time.
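The "remembered" bookkeeping could be sketched like this; the names (`ChunkDirectory`, `markSent`, `onChunkChanged`) are hypothetical:

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Each chunk tracks which players have ever received it, so a later
// edit can be pushed even to players who are far away when it happens.
struct ChunkRecord {
    int version = 0;
    std::unordered_set<std::string> knownBy; // player ids
};

class ChunkDirectory {
public:
    // Called when a chunk is first sent to a player.
    void markSent(long chunkId, const std::string& player) {
        chunks_[chunkId].knownBy.insert(player);
    }

    // Called when a chunk is edited; returns the players that must be
    // told about the change (everyone holding a now-stale copy).
    std::vector<std::string> onChunkChanged(long chunkId) {
        auto& rec = chunks_[chunkId];
        ++rec.version;
        return {rec.knownBy.begin(), rec.knownBy.end()};
    }

private:
    std::unordered_map<long, ChunkRecord> chunks_;
};
```

The memory cost is one player-id set per modified chunk, which is why this strategy fits worlds where edits are rare relative to chunk count.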

If you have objects or chunks that change frequently, then you want to go with a standard update radius approach where dynamic objects are sent to the client when the player comes in range and then "forgotten" about when the player leaves range and resent when they come back in range. This strategy works well for objects that change a lot (like other players running around for instance) on large maps.

##### Share on other sites
Well, some chunks may never change, and others will change often. The places where players live and grow will be the most frequently changed chunks in the whole world, whereas chunks that you merely mine in or travel through for quests will be less likely to change.

##### Share on other sites
I generally break my chunks/objects down into 3 basic categories:

1. Chunks that the client and server can generate identically with little to no shared information. (static chunks).
2. Chunks that are dynamic but don't tend to change much or at all once they are created at runtime. For instance, a terrain block that is created and never moves or changes state.
3. Chunks that are dynamic and change a lot (state info updates, position updates, etc). Player objects as an example. They change state a lot and move around a lot.

Then I build a system that can tackle these three major areas. Edited by Runesabre
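As a sketch, the three categories above might map onto replication policies like so (the enum and function names are invented for illustration):

```cpp
// The three chunk/object categories described above.
enum class ChunkClass { Static, DynamicStable, DynamicVolatile };

// One replication strategy per category.
enum class Policy { GenerateLocally, SendOnceRemember, StreamInRange };

Policy replicationPolicy(ChunkClass c) {
    switch (c) {
        case ChunkClass::Static:
            return Policy::GenerateLocally;   // shared seed, zero traffic
        case ChunkClass::DynamicStable:
            return Policy::SendOnceRemember;  // send once, push rare edits
        case ChunkClass::DynamicVolatile:
        default:
            return Policy::StreamInRange;     // per-tick updates while in range
    }
}
```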

##### Share on other sites
The problem with the shared-seed approach is that if the client can generate and cache full terrain data, players will be able to easily scan the map for resources (if your game is going to be based on finding them). Usually you generate a lot more than can be seen, because without generating it is hard to determine which chunks are completely surrounded by others. There is no easy algorithm to categorize chunks as seen or not seen, because it depends on the camera, terrain shape, etc., so usually you generate the full square area around the player and only build smooth meshes/cubes where there is a surface border.

Otherwise, having a seed and sharing only deltas is very tempting. I'm actually interested in any ideas that could prevent players from easily accessing information about the map. Think of Minecraft and how players can load the map in a world viewer and look for resources. If only the server knows this information and the player receives only the visible portion (the one on the surface), there is no such possibility.

That depends on the game, of course. If you don't care about players gaining an advantage because they were able to access world information that's not visible to a normal player, then it's probably the best route.

##### Share on other sites
You could use a separate seed for placing resources, so you send the normal terrain seed, and then send the visible resources individually (not by sending the seed)
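One way to derive the two seeds from a single master seed, so that the resource seed never leaves the server; the splitmix64-style mixer is just an illustrative choice:

```cpp
#include <cstdint>

// Bijective 64-bit mixer (splitmix64 finalizer). Different inputs
// always produce different outputs.
uint64_t splitmix64(uint64_t x) {
    x += 0x9e3779b97f4a7c15ull;
    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ull;
    x = (x ^ (x >> 27)) * 0x94d049bb133111ebull;
    return x ^ (x >> 31);
}

// The terrain seed is shared with clients at login; the resource seed
// stays server-side, and visible resources are sent individually.
uint64_t terrainSeed(uint64_t master)  { return splitmix64(master ^ 1); }
uint64_t resourceSeed(uint64_t master) { return splitmix64(master ^ 2); }
```

Since the two streams are independent, a client that reverse-engineers the terrain seed learns nothing about ore placement.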

##### Share on other sites

You could use a separate seed for placing resources, so you send the normal terrain seed, and then send the visible resources individually (not by sending the seed)

That is a very interesting concept... separate them out and never share that seed with the client. Send resources to the client as they get near the chunk... very interesting indeed.

##### Share on other sites
> We have a very massive world that is created with a perlin noise algorithm.

> First: Is the algorithm deterministic?

No. Floating point arithmetic is not really deterministic. It is highly dependent on compiler settings, compiler mood (storing an intermediate value in memory causes a rounding, while no rounding is done if the compiler chose/managed to use/reuse a register instead) and current FPU state. If you have any code of the form "if(noise_float < 0.5) spawn_block()", then sooner or later it will give different results unless the exact same binary is running.

Playing with fire.

##### Share on other sites
> No. Floating point arithmetic is not really deterministic.

I'd like to correct this statement on two parts:

First, not all algorithms need floating point.

Second, floating point is fully deterministic within the restrictions of a particular instruction set architecture.
If all CPUs run the same code, they will all come to the EXACT same result.

If some CPUs run an SSE2 path, and others run a x87 path, then no, they will not be deterministic. So don't do that.
If some CPUs run an x86 path, and others run a x86_64 version, then no, they will not be deterministic. So don't do that, either.
If some CPUs run with 80 bits internal precision, and others run with 64 bits internal precision, then no, they will not be deterministic. You have to avoid that, too. Set the floating point control word (rounding, precision) to a known value before you start computation to control for this.

Once all that's done, you can use floating point just as deterministically as any other user-level instruction set architecture feature.
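For the portable part of pinning floating-point state, standard C++ exposes the rounding mode via `<cfenv>`; the x87 precision-control part is platform-specific (`_controlfp_s` on MSVC) and not shown here:

```cpp
#include <cfenv>

// Pin the rounding mode to a known value before any deterministic
// computation begins. On MSVC/x86 you would additionally set the
// precision control field via _controlfp_s; that part is omitted
// because it is platform-specific.
bool pinRounding() {
    // std::fesetround returns 0 on success.
    return std::fesetround(FE_TONEAREST) == 0;
}
```

Call this once at startup on every peer (and again after anything that may clobber the FPU state, such as some driver or DLL calls), then verify with `std::fegetround()`.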

##### Share on other sites

> No. Floating point arithmetic is not really deterministic.

> I'd like to correct this statement on two parts:
>
> First, not all algorithms need floating point.
>
> Second, floating point is fully deterministic within the restrictions of a particular instruction set architecture.
> If all CPUs run the same code, they will all come to the EXACT same result.
>
> If some CPUs run an SSE2 path, and others run a x87 path, then no, they will not be deterministic. So don't do that.
> If some CPUs run an x86 path, and others run a x86_64 version, then no, they will not be deterministic. So don't do that, either.
> If some CPUs run with 80 bits internal precision, and others run with 64 bits internal precision, then no, they will not be deterministic. You have to avoid that, too. Set the floating point control word (rounding, precision) to a known value before you start computation to control for this.
>
> Once all that's done, you can use floating point just as deterministically as any other user-level instruction set architecture feature.

Um, would floats be deterministic in a C# program compiled for "Any CPU"?

##### Share on other sites
> would floats be deterministic in a C# program compiled for "any cpu"?

I don't know; you have to check what guarantees the CLR makes regarding that. My guess would be "no."

My point is that the CPU itself and the instruction set architecture are deterministic. The only non-deterministic parts come from simplifying assumptions sometimes made by software developers, but you don't *have* to make those assumptions, and thus CPUs (with floating point) *can* be deterministic.

##### Share on other sites
> No. Floating point arithmetic is not really deterministic.

> I'd like to correct this statement on two parts:
>
> First, not all algorithms need floating point.
Erm, if it is not using floating point then it is not floating point arithmetic. What on earth are you correcting? Some form of straw-man?

PS. The OP specifically mentioned Perlin noise, whose implementations almost exclusively use floating point, as fixed point is just too slow there (coincidentally, some time ago I explored options to drop floating point from simplex/Perlin noise, but gave it up as it was just way too slow). Of course, whether or not the OP uses floating point does not invalidate what I said either way.

> Second, floating point is fully deterministic within the restrictions of a particular instruction set architecture.
> If all CPUs run the same code, they will all come to the EXACT same result.

> ... then sooner or later it will give different results if not the exact same binary is running.

Which is what I said :/. So, no complaints here either.

> Once all that's done, you can use floating point just as deterministically as any other user-level instruction set architecture feature.

Your list omitted the primary source of inconsistencies (which is odd, as I specifically mentioned it in my post): stack vs. register usage. The compiler decides when and where the stack is used when it runs out of full-precision registers. Even with the VC "precise" compiler flag, platform/OS limitations remain:
* Different OSes have different FPU precision defaults (Win x86 - varying, Win x64 - 64-bit, Unix/Linux usually - 80-bit). Also, VC "precise" allows the compiler to ignore _controlfp.
* Even with VC "precise" (which requires extra rounding code to enforce specific precision, in addition to other more restrictive/slower semantics), rounding behavior is CPU-architecture specific and differs between x86, ia64 and amd64 (where x86 and amd64 are specifically flagged as "This particular semantic is subject to change." I.e. you are essentially in "unspecified" land).

Playing with fire. Might help roast beef - or roast you. I am not stopping anyone - just warning "if exact repeatability is required - stay away from floating point". Edited by tanzanite7

##### Share on other sites
To avoid unnecessary updates, one could also cache chunk data on the client side and essentially version the chunks. Before resending a whole new chunk (after the player walked away and back), you can first check whether the "version" on the server has changed. Most of the other things, like only sending deltas, have already been said.
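A version handshake along these lines might look like the following sketch; the types and the `requestChunk` signature are hypothetical:

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

using ChunkData = std::vector<uint8_t>; // serialized chunk payload

struct ServerChunk {
    uint32_t version;
    ChunkData data;
};

// The client asks for a chunk along with the version it has cached.
// Returns the payload only if the client's copy is stale; nullopt
// means "your cache is current" (or the chunk doesn't exist).
std::optional<ChunkData> requestChunk(
        const std::unordered_map<long, ServerChunk>& world,
        long chunkId, uint32_t clientVersion) {
    auto it = world.find(chunkId);
    if (it == world.end()) return std::nullopt;               // no such chunk
    if (it->second.version == clientVersion) return std::nullopt; // up to date
    return it->second.data;                                   // resend
}
```

In a real protocol the "up to date" reply would still be a tiny explicit message, so the client knows the request was handled.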

##### Share on other sites

> Erm, if it is not using floating point then it is not floating point arithmetic. What on earth are you correcting? Some form of straw-man?

tanzanite7 wrote:
> No. Floating point arithmetic is not really deterministic.

So, it seemed to me that you were jumping to conclusions, and/or attacking a straw man, hence I called that out.
Also, the statement you made, as quoted right here, is clearly not factual.

> PS. OP specifically mentioned Perlin noise, whose implementations almost exclusively use floating point as fixed point is just too slow there

That's funny, given that the original Perlin noise was specified in terms of byte-size integer values, and it has been implemented on GPUs with 8 bits per component fixed-point colors. It can also be implemented with MMX or plain integer registers in fixed point with good performance. Thus, I have to also say that I don't agree that fixed-point implementations of Perlin noise are slower than floating point.
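For reference, the interpolation building blocks of Perlin noise translate to fixed point directly; here is a Q16.16 sketch (illustrative, not the original reference implementation):

```cpp
#include <cstdint>

// Linear interpolation where t is a Q16.16 fraction in [0, 65536].
// Integer math like this is bit-identical on every compiler and CPU.
int32_t fixLerp(int32_t a, int32_t b, int32_t t) {
    return a + (int32_t)(((int64_t)(b - a) * t) >> 16);
}

// Perlin's fade curve 6t^5 - 15t^4 + 10t^3, evaluated in Q16.16.
int32_t fixFade(int32_t t) {
    int64_t t2 = ((int64_t)t * t) >> 16;
    int64_t t3 = (t2 * t) >> 16;
    int64_t t4 = (t3 * t) >> 16;
    int64_t t5 = (t4 * t) >> 16;
    return (int32_t)(6 * t5 - 15 * t4 + 10 * t3);
}
```

The gradient-dot-product step works the same way with small integer gradient tables, so a full integer-only noise function is feasible when determinism matters more than raw speed.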

No, I explicitly included it:

hplus0603 wrote:
> If some CPUs run with 80 bits internal precision, and others run with 64 bits internal precision, then no, they will not be deterministic. You have to avoid that, too. Set the floating point control word (rounding, precision) to a known value before you start computation to control for this.

tanzanite7 wrote:
> I am not stopping anyone - just warning "if exact repeatability is required - stay away from floating point".

You were saying that floating point arithmetic is not deterministic, and I disagree, and I think I have literature and facts to back me up. If you do it right, it is totally deterministic. Many shipping games rely on this fact, including some I've worked on.

I think we can both agree that it's not for beginners. It seems like riuthamus is not a beginner, though, and thus may benefit from the actual facts to make an informed decision.

##### Share on other sites
Thank you all, what you have provided is some very valuable insight. It has, at the very least, helped to affirm some of the methods we have chosen to take. I will update you guys with any further questions and progress we obtain. This should help us to get the multiplayer alpha test started.

##### Share on other sites
In general it's best to stay away from floating point if you need determinism, although it can be made deterministic. Some algorithms work fine with fixed-point math, which is deterministic but limited in precision. You can certainly use pseudo-random generators to build your world that way.

I've used two deterministic systems based on floating-point vector math: one for a replay system, one for a networked FPS. Both involved rather complex collision systems, and one had rigid-body physics, so it is possible. Neither was cross-platform; one was on PC supporting both AMD and Intel architectures.

Once you start relying on determinism, you need to be very careful and provide debugging tools to detect determinism breaking up as soon as possible. Once it goes wrong, it stays wrong.
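Such a debugging tool often boils down to checksumming the simulation state every few ticks and comparing the checksums across peers; a minimal sketch with an FNV-1a hash and an invented `SimState`:

```cpp
#include <cstddef>
#include <cstdint>

// 64-bit FNV-1a over raw bytes.
uint64_t fnv1a(const void* data, std::size_t len) {
    const uint8_t* p = (const uint8_t*)data;
    uint64_t h = 1469598103934665603ull;
    for (std::size_t i = 0; i < len; ++i) {
        h ^= p[i];
        h *= 1099511628211ull;
    }
    return h;
}

// Illustrative slice of simulation state. Hashing the raw bytes is
// fine here because the struct is all int32_t with no padding; with
// floats or padding you would hash field by field instead.
struct SimState { int32_t tick; int32_t posX, posY; };

uint64_t stateChecksum(const SimState& s) {
    return fnv1a(&s, sizeof(s));
}
```

Each peer exchanges its checksum every N ticks; the first tick where they disagree is where determinism broke, which is exactly the "detect it as soon as possible" tooling described above.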

##### Share on other sites
@hplus0603: seems to me we just had some communication issues and there actually is no noteworthy fundamental disagreement. Only a bit of semantic bickering ... right?

-----------------------------------------
Anyway, I'll try to clarify:

> No. Floating point arithmetic is not really deterministic.

I.e. while floating point arithmetic, as far as actual CPU instructions go, is deterministic (given the same FPU state, etc.), no one really writes CPU instructions directly. A higher-level language is used, like C++, which means the compiler is the one writing the instructions, and arithmetic written in such a language is not guaranteed to be deterministic. For example:
* FPU state defaults differ for x86, x64, ia64 (two of which everyone should care about).
* Different compilers (incl. different versions) are free to do things differently.
* VC specific: "precise", which is absolutely required to get any determinism to begin with, disallows meddling with the FPU state and is still not a guarantee (plenty of "subject to change" notices around it).

> Also, the statement you made as quoted right here is clearly not factual.

That depends entirely on how you choose to understand it. Did my clarification help? I.e. what I meant IS factual. Without listing a bunch of other preconditions (specific target architecture / strict compiler restrictions / etc.), which I did not, "Floating point arithmetic is not really deterministic." is factual.

Sure sounds like you agree with it - leaving the semantic confusion aside.

> That's funny, given that the original Perlin noise was specified in terms of byte-size integer values, and it has been implemented on GPUs with 8 bits per component fixed-point colors. It can also be implemented with MMX or plain integer registers in fixed point with good performance. Thus, I have to also say that I don't agree that fixed-point implementations of Perlin noise are slower than floating point.

Like I said, whether or not he uses floating point in his Perlin noise implementation is irrelevant. I could not know; I used it only as a basis to give my warning about floating point usage. Your choice of examples is unfair, irrelevant and not agreeable to me, but I am not willing to delve into it, as all of this is tangential.

> Your list omitted the primary source of inconsistencies (which is odd as i specifically mentioned it in my post): stack vs register usage.

> No, I explicitly included it:

What I am referring to has nothing to do with what you quoted here.

I'll try to explain via an (illustrative) example:

```cpp
float foo(float bar) {
    float snafu = bar + 17.2f;
    return snafu / 10.4f;
}
```

If the intermediate value "snafu" is stored on the stack then it must be rounded down (64->32), but no rounding is done if a register was used. To work around this problem:
* One must be aware that there is a problem to begin with - which is why I posted my warning.
* (VC specific) Use the "precise" compiler flag to enforce rounding after every calculation (so it won't matter whether a register or memory is used). Sucks a bit performance-wise, but is absolutely necessary.
* Be aware that meddling with the floating point control word is not allowed; you are stuck with whatever is used (the compiler is allowed to silently ignore the _controlfp call). IIRC, starting with Win64 platforms, changing the internal precision from 64 bits is privileged and not allowed regardless ... but my memory is a bit foggy on the specifics there.
* Be aware that you might now be bound to a specific compiler and its version for the lifetime of the program.

PS. Obviously, all memory variables are affected, not just the stack.

----------------
Edit: yeah, I remembered correctly; changing internal precision is not allowed on x64 platforms at all, and it triggers a crash:

> On the x64 architecture, changing the floating-point precision is not supported. If the precision control mask is used on that platform, the invalid parameter handler is invoked, as described in Parameter Validation.

Edited by tanzanite7

##### Share on other sites
> What i am referring to has nothing to do with what you quoted here.

Yes, it absolutely does. If you use doubles for your variables, and doubles for your x87 registers, then whether you spill to stack or not matters on Intel x86 CPUs running in 80-bit precision mode, but does not matter on AMD CPUs, because they silently truncate 80-bit precision mode to 64-bit precision mode. This is the main reason intermediate stores/spills matter for determinism: you can't control which CPU vendor your user chooses. (You can set the internal precision mode to 64 bits to match AMD and Intel, though.)

Spilling a float variable to stack so it truncates is not a problem for determinism, because it will do the same thing every time on all machines that run the same code. Your arguments seem to revolve around the fact that trying to run different versions of a binary together will not generate deterministic results. Similarly, trying to run "the same" program on different architectures, or with different compilers, also will not generate deterministic behavior. Those are true, and my argument is that, even when you control for those circumstances, you will run into trouble because of subtler behavior (the Intel/AMD difference, the DLL-load-sets-fpctlword problem on Windows, the signal-handlers-reset-control-words problem on Linux, etc.)