Game Engine max map sizes - why?

9 comments, last by superman3275 11 years, 5 months ago
Just curious about the upper limit on map sizes in many game engines I've seen. Crytek, Unreal and others have an upper limit on how large a map can be. Some games using these engines seem to get past this by loading sections as the player progresses (I think this is a cell-loading approach? Is that the correct term?).

I've been googling a bit for the reasoning behind the limits but haven't found any discussions or explanations of why, though I could be using the wrong search keywords.

Anyone here able to explain why these limits exist? I've heard people in some topics discussing how the accuracy of calculations gets worse at larger sizes. Anyone able to explain this to me? I'm not too concerned with how technical it gets, so go nuts; I just want something to start researching from. If you have any topics or weblinks discussing this theory, I'd be happy to view them from a learning perspective.

I mean, it seems that unless there was a good reason, surely all these AAA title engines would have unlimited-size worlds and you wouldn't be constrained to a certain land-mass size in your map editor. I'm just after the why.
The short answer is that the games those engines were built for didn't require larger maps. Engines are only built to the requirements of their attached games, after all.

The technical answer about the precision of large numbers is in What Every Computer Scientist Should Know About Floating-Point Arithmetic.

For some insight into how people have made engines that support large worlds, the "Continuous World of.." presentation is a good read:
http://scottbilas.com/files/2003/gdc_san_jose/continuous_world_paper.pdf
http://scottbilas.com/files/2003/gdc_san_jose/continuous_world_slides.pdf
The accuracy problem will probably have to do with floating point accuracy. Floating point values can store a large range of values, but their precision deteriorates the further they stray from zero. This is because floating point values are normally determined by a mantissa and an exponent (note: I'm simplifying here for clarity), so they can be a good solution for calculations with small numbers (as smaller numbers tend to be 'more precise' in most cases), but they can show some heavy precision errors for larger numbers.

So the larger a map gets, the larger your coordinate values can become, and the more precision issues you'll encounter. If a distance unit in your game corresponds to a small real-world unit (e.g. 1 unit = 1 cm), your precision will become even worse at large distances.


Then there are also the issues that a larger map probably means more memory usage, more bookkeeping, longer load times, in-game stalling (when streaming) and maybe longer frame update times (although well-designed scene systems shouldn't have too much trouble with this).


It's not impossible to do really large maps, but it will require some trade-offs and some different design decisions, and maybe the engines you mentioned just don't have a need for really large maps.

EDIT: Ninja'd again...

I gets all your texture budgets!

The boiled-down version is that, yes, floating point precision limits the ability of the renderer and physics engine to properly perform their tasks without creating problems, various 'artifacts' and erratic behavior.

32-bit floating point values (as in, the XYZ coordinates and velocities of game objects) are basically split into two parts: the value, and the 'scale' as I like to think of it. The value can only hold so many digits (a typical 32-bit float has ~7 significant decimal digits), and the decimal point on those digits can then be shifted roughly 38 places to the left or right. So when your numbers start getting big (pushing that 7-digit boundary), the ability to perform precise, less-than-one calculations and value manipulations is lost. It just isn't there anymore once objects/players approach or pass these boundaries, which can cause a variety of glitchy effects and results.
I just read an article on float representation. Lemme see if I can dig it up...

Can't find it. But the Wikipedia article goes into significant depth about the standard formats.

Edit:

Oh, duh. It's the one Hodge posted.
void hurrrrrrrr() {__asm sub [ebp+4],5;}

There are ten kinds of people in this world: those who understand binary and those who don't.
This series of altdevblog posts also gives some good insight into float behaviour, and is a bit more easily digested than the 'What Every Computer Scientist Should Know' article:
http://www.altdevblogaday.com/2012/05/20/thats-not-normalthe-performance-of-odd-floats/
The floating point limit is one reason, but it can be solved by dividing the world into sectors. Sectors are also needed for streaming parts of the world, since memory is another bottleneck when dealing with large worlds.

I think the main reason is that the games (for which the engines were made) don't need a bigger world, so the engines don't support it.
It's also easier to write implementations when you can rely on assumptions. If you know an array will hold exactly 12 elements, you can cater specifically to that, as opposed to an array that could have space for only 1 element, or hundreds. You have fewer cases to worry about. This can also limit your ability to do certain things, but the limit was chosen in the first place because it allowed the designers to do what they needed to do.

Another example: trying to create the ultimate engine. Most game engines are very specialized. By assuming that a given engine will only be used for FPS games, you can design the program based on those assumptions (e.g. you can't roll the camera upside down by turning it). Or a multiplayer-focused engine, where all logic consists of client/server communication with cheat detection etc., under the assumption that you will not be creating singleplayer experiences.
Visit my website! donny.webfreehosting.net
Just to add to what others are saying, there's a great solution to this problem called a floating origin. The idea behind a floating origin is to move the world rather than the camera. By doing this, you maximize the precision of floating point numbers, since the geometry near the camera always stays close to the origin. If you have a need for large maps, then you should start orienting your mind to think this way.

Additionally, you should do all your CPU calculations, such as matrix multiplications, trigonometric functions, etc., in double precision; this will give the final matrices you pass to the GPU slightly better accuracy. Many math libraries support double precision, my favourite being GLM (http://glm.g-truc.net/).
I mean, it seems that unless there was a good reason, surely all these AAA title engines would have unlimited-size worlds and you wouldn't be constrained to a certain land-mass size in your map editor. I'm just after the why.
No game has an unlimited-size world. The game that loads one area at a time and the game that loads chunks on the fly still have the same outer limits. Those games even need a new coordinate system to deal with an area that large: (Area, X, Y, Z) instead of just (X, Y, Z), to get around the previously mentioned floating point precision issues.

When designing a game that seems unlimited, you have to design everything to load quickly on the fly. That forces lots of trade-offs you don't have to worry about when you have 20 seconds to stop and load everything in!

This topic is closed to new replies.
