Good far plane distance for space game

Started by
16 comments, last by Hawkblood 10 years, 8 months ago

What is a good far plane distance for a camera in a space game in your experience?

If 1 coordinate unit = 1 meter, then things like planets can be hundreds of thousands of units away and still visible, but things like small stations etc. would not be.

For more on my wargaming title check out my dev blog at http://baelsoubliette.wordpress.com/

You might want to have a look at using a logarithmic depth buffer... here's a blog about it

http://outerra.blogspot.co.uk/2012/11/maximizing-depth-buffer-range-and.html
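If I remember the article right, the trick is to overwrite the z the projection matrix produced with a logarithmic one in the vertex shader. Here's a rough CPU-side sketch of the remap in C++ just to show the math (the real thing lives in your shader; wClip, farPlane and the epsilon are placeholder names/values):

```cpp
#include <algorithm>
#include <cmath>

// Rough sketch of the logarithmic depth remap. 'wClip' is the vertex's
// clip-space w, 'farPlane' is your far plane distance.
float logarithmicDepth(float wClip, float farPlane)
{
    const float fCoef = 2.0f / std::log2(farPlane + 1.0f);
    // Distribute depth precision logarithmically with distance instead of
    // piling almost all of it up next to the near plane, then multiply by w
    // so the hardware's perspective divide cancels back out.
    return (std::log2(std::max(1e-6f, wClip + 1.0f)) * fCoef - 1.0f) * wClip;
}
```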

"Most people think, great God will come from the sky, take away everything, and make everybody feel high" - Bob Marley

You can also use multiple depth ranges, and render the scene in "layers"; starting from the background (furthermost objects) and finally drawing the foreground (nearest objects), changing the near and far plane (and clearing depth) for each layer.
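A rough sketch of what that per-layer loop might look like; Object, setProjection, clearDepthBuffer and draw are placeholders for whatever your engine already provides:

```cpp
#include <vector>

// Placeholder types/functions standing in for your engine's own.
struct Object;
void setProjection(float nearPlane, float farPlane);
void clearDepthBuffer();
void draw(Object* obj);

struct DepthLayer
{
    float nearPlane, farPlane;
    std::vector<Object*> objects;
};

// Draw back-to-front: the furthest layer (planets, the sun) first, the
// nearest layer (your ship, nearby stations) last. Each layer gets its own
// projection range and a fresh depth buffer, so precision is never stretched
// across the whole scene at once.
void renderLayered(std::vector<DepthLayer>& layers)
{
    for (DepthLayer& layer : layers)
    {
        setProjection(layer.nearPlane, layer.farPlane);
        clearDepthBuffer();
        for (Object* obj : layer.objects)
            draw(obj);
    }
}
```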

Niko Suni

I second Nik02's suggestion.
You should also look into LOD and skyboxes.

Also, you can completely fake the distances if multiple depth layers don't fit your needs:

You can completely disable depth testing if you just draw in the right order.

You can use impostors for far away stuff.

You can render everything distant to a spherical map (or a cubemap) and draw that instead. If the objects are far away enough,
it's alright that they don't move when the camera does. (If you have need for both, you can use impostors too.)
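For the draw-in-the-right-order option, a minimal sketch (setDepthTest and drawImpostorOrMesh are made-up names for your own render-state and draw calls):

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct FarObject { Vec3 position; /* mesh, impostor texture, ... */ };

// Placeholders for your engine's render-state and draw calls.
void setDepthTest(bool enabled);
void drawImpostorOrMesh(const FarObject& obj);

static float distanceSq(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Painter's-algorithm pass for the distant stuff: depth testing off, draw
// furthest first so nearer objects simply overwrite further ones.
void drawDistantObjects(std::vector<FarObject>& objects, const Vec3& camera)
{
    std::sort(objects.begin(), objects.end(),
              [&](const FarObject& a, const FarObject& b)
              { return distanceSq(a.position, camera) > distanceSq(b.position, camera); });

    setDepthTest(false);
    for (const FarObject& obj : objects)
        drawImpostorOrMesh(obj);
    setDepthTest(true); // the normal depth-tested pass for the near scene follows
}
```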

There's plenty of options. Good luck! :)

Thanks for the replies everyone.

I am going to look into conditional objects (that's what I call them; I'm sure they have another name), which is "faking it" as mentioned above.

Meaning the sun is huge, and even if I'm 2 AU away from it and the renderer correctly draws it as a small glowing sphere, it will be using a ton of memory because it's technically huge. So I need to implement code to say that the sun is really a small sphere until you get to point X, then make it bigger, etc., until it actually starts to matter.

Therein lies the challenge =D

I don't want to skybox the sun in space. I want to be able to actually travel to it from any point in the system (as well as the planets) without having to rely on loading screens and "rooms" or "scenes".

For more on my wargaming title check out my dev blog at http://baelsoubliette.wordpress.com/

I think the idea was to render it as a part of the skybox only if it is far enough for that to not produce visible artifacts or such.

You would take it away from the skybox rendering and render it as an actual mesh when it gets closer.

o3o

As Waterlimon says, the idea is to fake the appearance of the objects with impostors and LOD.


I don't want to skybox the sun in space. I want to be able to actually travel to it from any point in the system (as well as the planets) without having to rely on loading screens and "rooms" or "scenes".

The user won't be able to see the difference if you do it right. - And you won't need loading screens or rooms, either.

Just render a cube map once in a while, or shoot a few rays every frame (you may even be able to use high-detail models), and plaster that on the inside of a cube. Let the cube be drawn around the camera, always with the camera in the absolute center, so everything appears infinitely far away. Afterwards, enable depth testing and draw the rest of the scene.
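A minimal sketch of that frame, assuming a handful of made-up helpers (distantSceneCubemapIsStale, renderDistantSceneToCubemap, drawSkyCube, drawNearScene) that you'd replace with your own:

```cpp
struct Vec3 { double x, y, z; };
struct Camera { Vec3 position; /* orientation, fov, ... */ };

// Placeholders for your own renderer.
bool distantSceneCubemapIsStale(const Camera& cam);
void renderDistantSceneToCubemap(const Vec3& position);
void setDepthWrite(bool enabled);
void drawSkyCube(const Camera& cam);
void drawNearScene(const Camera& cam);

void renderFrame(const Camera& cam)
{
    // Occasionally re-render everything beyond the far plane into a cube map.
    // This is the expensive part, so only do it when the view of the distant
    // scene has actually changed enough to matter.
    if (distantSceneCubemapIsStale(cam))
        renderDistantSceneToCubemap(cam.position);

    // Draw the cube map on a cube centred on the camera with depth writes off,
    // so everything painted on it appears infinitely far away.
    setDepthWrite(false);
    drawSkyCube(cam);
    setDepthWrite(true);

    // Then the normal depth-tested pass for everything inside the far plane.
    drawNearScene(cam);
}
```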


Meaning the sun is huge, and even if I'm 2 AU away from it and the renderer correctly draws it as a small glowing sphere, it will be using a ton of memory because it's technically huge. So I need to implement code to say that the sun is really a small sphere until you get to point X, then make it bigger, etc., until it actually starts to matter.

I think LOD applies here (a low-resolution sphere at one instant, then a higher-resolution one the next).

As for the memory usage, I'm not really sure I can follow. Where is this memory taken up?

As for the memory usage, I'm not really sure I can follow. Where is this memory taken up?

As I am still a novice 3D developer, I'm not exactly sure.

I know that in my testing last night, putting the sun at its actual size and then pushing it out 1 AU away rendered its size how I'd expect it to look from earth-space, but my performance tanked. When I reimplemented the sun as a small glowing sphere off in the backdrop, performance returned to normal. However, the small glowing sphere was relatively close, and if you turned and aimed for it you would reach it in no time, which is obviously not the effect I want. I can keep pushing the small glowing sphere back so that it stays at a constant distance from the ship (origin), which is something I may play around with.

My initial plan was to render planets at real size and distance in a system and render everything else as needed (using rays and triggers around the objects that detect crossover to decide whether to render them or not). That plan looks like it took a hit, and I need to figure out how to keep these objects as part of the backdrop until I get to a certain area, then render them as needed at their actual size.

For more on my wargaming title check out my dev blog at http://baelsoubliette.wordpress.com/

I'm not much of a 3D guy, but let me take a shot at explaining it anyway. :)

My initial plan was to render planets at real size and distance in a system and render everything else as needed (using rays and triggers around the objects that detect crossover to decide whether to render them or not). That plan looks like it took a hit, and I need to figure out how to keep these objects as part of the backdrop until I get to a certain area, then render them as needed at their actual size.


Try mentally separating the space and measurements and objects that you are drawing from the actual space and measurements and objects of your game world. At the same time, recognize that when people are talking about skyboxes, they aren't talking about a skybox you render outside the game and then use forever inside the game; they are talking about skyboxes generated on-the-fly, used for a minute or two, then discarded and re-generated as the player's viewing angle changes.

Everything farther away than a certain distance from the player (millions of objects) could be scaled down to an appropriate size and rendered to the skybox in high detail, and then the skybox is only re-generated when the player shifts his location far enough to require the skybox to be updated. Then, the skybox (a mere twelve textured triangles) is drawn every frame instead of millions of objects with millions of triangles each.

Picture a tree in a game like Skyrim. Up close, it's millions of polygons and taller than the player. It fills up almost the entire screen. The farther you get from it, it fills up less of the screen. And less. And less. Until it only takes up a single pixel. And then no pixels.

When it's at 'no pixels', rather than render a million-polygon 3d model that gets rendered down to 0 pixels, just don't draw the tree.
When it's at one or two pixels, rather than render a million-polygon 3d model, just render a tiny green blob.
Whenever it's less than 100 pixels high, just draw a tiny little picture of a tree that is generated once from the giant million-polygon model every so often when the player changes his viewing angle enough.

This is what is meant by 'imposters', similar to billboarding (imposters are billboards that change their appearance when the player views them from different angles, AFAIK).
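In code, the tree example boils down to computing the projected pixel size and switching on it. A sketch using plain perspective math (the thresholds are just the ones from the description above, not magic numbers):

```cpp
#include <cmath>

// Rough projected height in pixels of an object 'worldHeight' metres tall at
// 'distance' metres, given a vertical FOV 'fovY' (radians) and a viewport
// 'screenHeight' pixels tall. Standard perspective math, nothing engine-specific.
float projectedPixelHeight(float worldHeight, float distance, float fovY, float screenHeight)
{
    return (worldHeight / (2.0f * distance * std::tan(fovY * 0.5f))) * screenHeight;
}

enum class DetailLevel { Skip, Blob, Billboard, FullMesh };

// Pick a representation based on how big the object actually is on screen.
DetailLevel chooseDetail(float pixels)
{
    if (pixels < 0.5f)   return DetailLevel::Skip;      // effectively zero pixels: don't draw
    if (pixels < 3.0f)   return DetailLevel::Blob;      // a couple of pixels: tinted dot
    if (pixels < 100.0f) return DetailLevel::Billboard; // impostor texture
    return DetailLevel::FullMesh;                       // close enough to matter
}
```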

You have your "world", but the data you feed to the videocard is a visual representation of the world (and not a copy of the world) from where the player is currently standing. Every polygon of your world shouldn't be sent to the videocard for drawing. Every polygon of your world shouldn't even be loaded in memory on the software side either - it either needs to be generated only when needed, or else loaded only when needed, and then destroyed when no longer needed.

When rendering the sun onto the skybox, you don't even need to load the billion-polygon sun - you can load a lower quality 100-polygon sun. It's still only going to appear as a few pixels on the player's screen until he gets closer to it.

Your game world might be a bajillion lightyears large, but your draw-distance shouldn't even be half of one lightyear across. Everything outside your draw distance is painted onto a skybox and the skybox is drawn instead. There is zero difference graphically or gameplay wise (it's pixel-perfectly identical), but there is a huge difference performance wise (billions of polygons saved from being rendered). Your skybox is just updated once the player's viewing angle (relative to a distant point bajillions of miles away) changes enough to make a difference. Because it's bajillions of miles away, the amount the player needs to physically move to change his angle relative to that object is a very large amount (more than just a few feet - in a space game, it'd be like travelling for a minute or more, depending on your travel speed and the distance from the object).
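One way to phrase "changes enough to make a difference" in code, as a sketch: compare the direction to the nearest object baked into the skybox against the direction it had when the skybox was last built, and rebuild once the angle between them exceeds some small threshold (the names here are illustrative):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3d { double x, y, z; };

static double dot(const Vec3d& a, const Vec3d& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double length(const Vec3d& v)              { return std::sqrt(dot(v, v)); }

// Has the camera moved far enough, relative to a distant reference point
// (the nearest object painted onto the skybox), that its apparent direction
// has shifted by more than 'maxAngleRadians'? If so, re-render the skybox.
bool skyboxNeedsRebuild(const Vec3d& camNow, const Vec3d& camAtLastBuild,
                        const Vec3d& referencePoint, double maxAngleRadians)
{
    Vec3d a{ referencePoint.x - camAtLastBuild.x,
             referencePoint.y - camAtLastBuild.y,
             referencePoint.z - camAtLastBuild.z };
    Vec3d b{ referencePoint.x - camNow.x,
             referencePoint.y - camNow.y,
             referencePoint.z - camNow.z };
    double cosAngle = dot(a, b) / (length(a) * length(b));
    return std::acos(std::min(1.0, std::max(-1.0, cosAngle))) > maxAngleRadians;
}
```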

my first big hit game was a starship flight simulator.

you've taken the first step.

you've defined the scale of the simulation. 1 d3d unit = 1 meter.

the next step is to define the maximum near plane distance you can get away with when drawing up close, perhaps something like 10 meters.

then you have to determine the size of the biggest nearby thing you'll draw. i believe a red giant star will be the biggest.

then you have to determine how small it can get before you don't draw it, say 10 pixels wide. smaller than that, it's too far, and you cull it.

the size of the biggest object, combined with how small you draw it, will determine max visual range. given a 45 degree vertical FOV and a 16:9 aspect ratio, the apparent size of an object will be approximately width / distance (roughly its size, in meters, on a screen held about a meter from your eye). IE a 500 meter long starship at a range of 500 meters will have an approximate apparent size of 1 meter across (ie about twice as wide as the monitor, say 4000 pixels). it's not until you increase the distance to the point that that 1 meter (~4000 pixels wide) shrinks down to 10 pixels or less that you can think about not drawing the object because it's too small. the distance at which your biggest object drops below 10 pixels (or whatever size you decide on) will be your max visual range.
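the same apparent-size idea, solved for distance instead, as a quick sketch (plain perspective math rather than anything engine-specific):

```cpp
#include <cmath>

// Distance at which an object 'width' metres across shrinks to 'minPixels'
// on screen, given a horizontal FOV 'fovX' (radians) and a 'screenWidth'-pixel
// viewport. Anything further than this can be culled or demoted to the skybox.
float maxVisualRange(float width, float minPixels, float fovX, float screenWidth)
{
    float visibleWidthPerMetre = 2.0f * std::tan(fovX * 0.5f); // world metres spanning the screen at 1 m
    return width * screenWidth / (minPixels * visibleWidthPerMetre);
}
```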

so now you know your near plane (10 meters) and your max visual range - your far plane (a sphere with the diameter of a red giant, drawn far enough from the camera that it's about 10 pixels wide, however far that is).

if those 2 numbers fit in a zbuf, you're done, if not, you draw in passes, far to near, setting the clip planes and clearing the zbuf before each pass.

to calculate the max visual range:

let's say you want a red star to be 10 pixels wide at max visual range. 10 pixels wide is about 3 millimeters wide on the screen, or roughly 0.003 meters.

so red giant width / max visual range = 0.003 meters

or

max visual range (your far clip plane) = red giant width (in meters) / 0.003

as you can see, this will be a very large number. about 333 times the width of a red giant (in meters).

needless to say, that's too big for a regular float based d3d coordinate system.

so you'll need a world coordinate system to model the game with, and then you'll need to convert to camera relative coordinates just before frustum cull and draw.
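a minimal sketch of that conversion, keeping world coordinates in doubles (or int64s, as suggested below) and only dropping to float after subtracting the camera position:

```cpp
struct WorldPos { double x, y, z; };  // could just as well be a 64-bit int in metres
struct Vec3f    { float x, y, z; };

// Convert a world-space position to camera-relative metres right before
// frustum culling and drawing. The subtraction happens in double precision,
// so objects near the camera stay accurate even when both positions are huge;
// only the (small) relative offset ever touches 32-bit floats.
Vec3f toCameraRelative(const WorldPos& p, const WorldPos& cam)
{
    return Vec3f{ static_cast<float>(p.x - cam.x),
                  static_cast<float>(p.y - cam.y),
                  static_cast<float>(p.z - cam.z) };
}
```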

a float gives you about 7 significant decimal digits to work with. so you can represent numbers up to say 999,999.9 or so with one decimal place (10 cm) accuracy, or you could go up to perhaps 10 million meters with an accuracy of about 1 meter.

i'd probably use an int. a 32 bit int gets you roughly -2 to +2 billion meters with 1 meter accuracy, and a 64 bit int gets you something crazy like +/- 9 quintillion meters.

i've played with far clip planes as large as 100,000 or more. i think i tried 1,000,000, but i'm not sure.

you may find that your scale is too small for drawing your most distant objects, without some LOD type strategy. based on what you do, there's a theoretical max size for a float based 3d coordinate system such as the camera uses. the farthest thing you want to draw must fit within that range to avoid some LOD type fix. so if you have to draw things out to 1 billion meters, and d3d's coordinate system can only do stuff out to about 1 million d3d units, you'll need a scale like 1 d3d unit = 1Km, or go with a fixup to draw objects closer and proportionally smaller.
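a sketch of that fixup: anything past your max draw distance gets pulled in to the max distance and scaled down by the same factor, so its apparent size (width / distance) is unchanged. the names here are just illustrative:

```cpp
#include <cmath>

struct Vec3f { float x, y, z; };

// Pull an object that is beyond 'maxDrawDistance' in to that distance and
// shrink it by the same factor, so width / distance (its on-screen size)
// stays the same while its coordinates stay inside the float-friendly range.
void fixupDistantObject(Vec3f& posRelativeToCamera, float& scale, float maxDrawDistance)
{
    float d = std::sqrt(posRelativeToCamera.x * posRelativeToCamera.x +
                        posRelativeToCamera.y * posRelativeToCamera.y +
                        posRelativeToCamera.z * posRelativeToCamera.z);
    if (d <= maxDrawDistance || d == 0.0f)
        return;

    float k = maxDrawDistance / d;   // < 1, shrinks position and size together
    posRelativeToCamera.x *= k;
    posRelativeToCamera.y *= k;
    posRelativeToCamera.z *= k;
    scale *= k;
}
```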

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

