Good far plane distance for space game



#1 IcedCrow   Members   -  Reputation: 264


Posted 28 July 2013 - 08:07 PM

What is a good far plane distance for a camera in a space game in your experience?  

If the unit of measurement is a coordinate = a meter... then things like planets can be hundreds of thousands of units away but still visible... but things like small stations etc would not be.


For more on my wargaming title check out my dev blog at http://baelsoubliette.wordpress.com/

#2 Paradigm Shifter   Crossbones+   -  Reputation: 5379


Posted 28 July 2013 - 08:28 PM

You might want to have a look at using a logarithmic depth buffer... here's a blog about it

 

http://outerra.blogspot.co.uk/2012/11/maximizing-depth-buffer-range-and.html
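
A rough C++ sketch of the depth mapping that article describes (illustrative only, not the article's actual shader code; the function and parameter names are made up). In a vertex shader the same expression is typically applied to clip-space z and multiplied back by w so the perspective divide cancels it out.

    #include <algorithm>
    #include <cmath>

    // Maps a post-projection w (roughly the view-space distance) into [-1, 1]
    // logarithmically, spreading depth precision evenly over huge ranges.
    float LogarithmicDepth(float clipW, float farPlane)
    {
        const float fCoef = 2.0f / std::log2(farPlane + 1.0f);
        return std::log2(std::max(1e-6f, 1.0f + clipW)) * fCoef - 1.0f;
    }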


"Most people think, great God will come from the sky, take away everything, and make everybody feel high" - Bob Marley

#3 Nik02   Crossbones+   -  Reputation: 2838


Posted 28 July 2013 - 09:30 PM

You can also use multiple depth ranges, and render the scene in "layers"; starting from the background (furthermost objects) and finally drawing the foreground (nearest objects), changing the near and far plane (and clearing depth) for each layer.
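
A minimal sketch of that layering idea; the Renderer and Scene types and the specific plane distances are hypothetical stand-ins, not any particular engine's API:

    // Render back-to-front in depth "layers", each with its own near/far
    // planes, clearing the depth buffer between layers.
    struct DepthLayer { float nearPlane, farPlane; };

    void RenderInLayers(Renderer& r, const Scene& scene)
    {
        const DepthLayer layers[] = {
            { 1.0e6f, 1.0e9f },   // background: sun, planets
            { 1.0e3f, 1.0e6f },   // mid range: stations, capital ships
            { 1.0f,   1.0e3f },   // foreground: nearby ships, debris
        };

        for (const DepthLayer& layer : layers)      // furthest layer first
        {
            r.SetProjection(layer.nearPlane, layer.farPlane);
            r.ClearDepth();                         // fresh z range per layer
            r.DrawObjectsInRange(scene, layer.nearPlane, layer.farPlane);
        }
    }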


Edited by Nik02, 28 July 2013 - 09:30 PM.

Niko Suni


#4 SuperVGA   Members   -  Reputation: 1118


Posted 29 July 2013 - 04:31 AM

I second Nik02's suggestion.
You should also look into LOD and skyboxes.

Also, you can completely fake the distances if the multiple-depth-layers approach doesn't suit you:

    You can completely disable depth testing if you just draw in the right order.

   

You can use impostors for far away stuff. 

You can render everything to a spherical map (or a cubemap) and render that. If the objects are far away enough,
it's alright that they don't move when the camera does. (If you have need for both, you can use impostors too.)
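
For the first option above (no depth test, just draw in the right order), a minimal sketch; Object, Vec3, DistanceSquared and the render-state calls are hypothetical placeholders for whatever your engine provides:

    #include <algorithm>
    #include <vector>

    void DrawBackToFront(std::vector<Object>& objects, const Vec3& cameraPos)
    {
        // Painter's algorithm: sort far-to-near, then let draw order do the work.
        std::sort(objects.begin(), objects.end(),
                  [&](const Object& a, const Object& b) {
                      return DistanceSquared(a.position, cameraPos) >
                             DistanceSquared(b.position, cameraPos);
                  });

        SetDepthTest(false);
        for (const Object& obj : objects)
            DrawObject(obj);
        SetDepthTest(true);
    }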

 

There's plenty of options. Good luck! :)



#5 IcedCrow   Members   -  Reputation: 264


Posted 29 July 2013 - 06:23 AM

Thanks for the replies everyone.

 

I am going to look into conditional objects (that's what I call them; I'm sure they have another name), that is, "faking it" as mentioned above.

 

Meaning the sun is huge, and even if I'm 2 AU away from it and the renderer correctly draws it as a small glowing sphere, it will still be using a ton of memory because it's technically huge. So I need to implement code that says the sun is really a small sphere until you get to point X, then make it bigger, etc., until its size actually starts to matter.

 

Therein lies the challenge =D

 

I don't want to skybox the sun in space.  I want to be able to actually travel to it from any point in the system (as well as the planets) without having to rely on loading screens and "rooms" or "scenes".


For more on my wargaming title check out my dev blog at http://baelsoubliette.wordpress.com/

#6 Waterlimon   Crossbones+   -  Reputation: 2573


Posted 29 July 2013 - 06:38 AM

I think the idea was to render it as a part of the skybox only if it is far enough for that to not produce visible artifacts or such.

 

You would take it away from the skybox rendering and render it as an actual mesh when it gets closer.


o3o


#7 SuperVGA   Members   -  Reputation: 1118


Posted 29 July 2013 - 08:43 AM

As Waterlimon says, the idea is to fake the objects' appearance with impostors and LOD.

 

 


I don't want to skybox the sun in space.  I want to be able to actually travel to it from any point in the system (as well as the planets) without having to rely on loading screens and "rooms" or "scenes".

The user won't be able to see the difference if you do it right, and you won't need loading screens or rooms, either.

Just render a cube map once in a while, or shoot a few rays every frame (you may even be able to use high-detail models), and plaster that on the inside of a cube. Let the cube be drawn around the camera, always with the camera in the absolute center, so everything appears to be infinitely far away. Afterwards, enable depth testing and draw the rest of the scene.
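
A rough sketch of that order of operations; the Renderer, Camera, Scene and Matrix types are hypothetical placeholders:

    void DrawFrame(Renderer& r, const Camera& cam, const Scene& scene)
    {
        // The sky cube is re-centred on the camera every frame, so the player
        // can never approach it: its contents read as infinitely far away.
        Matrix skyWorld = TranslationMatrix(cam.position);

        r.SetDepthTest(false);              // the sky must never occlude anything
        r.SetDepthWrite(false);
        r.DrawSkyCube(skyWorld, scene.skyCubemap);

        r.SetDepthTest(true);               // then draw the real scene as usual
        r.SetDepthWrite(true);
        r.DrawObjects(scene, cam);
    }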

 

 


Meaning the sun is huge, and even if I'm 2 AUs away from the sun and the renderer correclty draws the sun as a small glowing sphere, it will be using a ton of memory because its technically huge, so I need to implement code to say that the sun is really a small sphere until you get to point X, then make it bigger, etc... until it actually starts to matter.

I think LOD applies here (a low-resolution sphere at one instant, a higher-resolution one the next).

As for the memory usage, I'm not really sure I can follow. Where is this memory taken up?


Edited by SuperVGA, 29 July 2013 - 09:14 AM.


#8 IcedCrow   Members   -  Reputation: 264


Posted 29 July 2013 - 10:29 AM

As for the memory usage, I'm not really sure I can follow. Where is this memory taken up?

 

As I am still a novice 3D developer, I'm not exactly sure.  

 

I know that in my testing last night, putting the sun at its actual size and then pushing it out 1 AU away rendered its size how I'd expect it to look from earth-space, but my performance took a nosedive.  When I reimplemented the sun as a small glowing sphere off in the backdrop, performance returned to normal.  However, the small glowing sphere was relatively close, and if you turned and aimed for it you would reach it in no time, and that's obviously not the effect I want.  I can keep pushing the small glowing sphere back so that it stays at a constant distance from the ship (origin), which is something I may play around with.

 

My initial plan was to render planets at real size and distance in a system, with everything else rendered as needed (using rays and triggers around the objects to detect crossover and decide whether to render or not).  That plan looks like it took a hit, and I need to figure out how to make these objects part of the backdrop until I get to a certain area, then render them at their actual size.


Edited by IcedCrow, 29 July 2013 - 10:30 AM.

For more on my wargaming title check out my dev blog at http://baelsoubliette.wordpress.com/

#9 Servant of the Lord   Crossbones+   -  Reputation: 19658


Posted 29 July 2013 - 11:21 AM

I'm not much of a 3D guy, but let me take a shot at explaining it anyway.
 

My initial plan was to render planets at real size and distance in a system and everything else is rendered as needed (using rays and triggers around the objects that detected cross over to render / not render).  That plan looks like it took a hit and I need to figure out how to put these objects as part of the backdrop until I get to a certain area, then render them as needed by their actual size.


Try mentally separating the space and measurements and objects that you are drawing from the actual space and measurements and objects of your game world. At the same time, recognize that when people are talking about skyboxes, they aren't talking about a skybox you render outside the game and then use forever inside the game, they are talking about skyboxes generated on-the-fly, used for a minute or two, then discarded and re-generated as the player's viewing angle changes.

Everything farther away than a certain distance from the player (millions of objects) could be scaled down to an appropriate size and rendered to the skybox in high detail, and then the skybox is only re-generated when the player shifts his location far enough to require the skybox to be updated. Then, the skybox (a mere twelve textured triangles) is drawn every frame instead of millions of objects with millions of triangles each.

Picture a tree in a game like Skyrim. Up close, it's millions of polygons and taller than the player. It fills up almost the entire screen. The farther you get from it, it fills up less of the screen. And less. And less. Until it only takes up a single pixel. And then no pixels.

When it's at 'no pixels', rather than render a million-polygon 3d model that gets rendered down to 0 pixels, just don't draw the tree.
When it's at one or two pixels, rather than render a million-polygon 3d model, just render a tiny green blob.
Whenever it's less than 100 pixels high, just draw a tiny little picture of a tree that is generated once from the giant million-polygon model every so often when the player changes his viewing angle enough.

This is what is meant by 'imposters', similar to billboarding (imposters are billboards that change their appearance when the player views them from different angles, AFAIK).

You have your "world", but the data you feed to the videocard is a visual representation of the world (and not a copy of the world) from where the player is currently standing. Every polygon of your world shouldn't be sent to the videocard for drawing. Every polygon of your world shouldn't even be loaded in memory on the software side either - it either needs to be generated only when needed, or else loaded only when needed, and then destroyed when no longer needed.

When rendering the sun onto the skybox, you don't even need to load the billion-polygon sun - you can load a lower quality 100-polygon sun. It's still only going to appear as a few pixels on the player's screen until he gets closer to it.

Your game world might be a bajillion lightyears large, but your draw-distance shouldn't even be half of one lightyear across. Everything outside your draw distance is painted onto a skybox and the skybox is drawn instead. There is zero difference graphically or gameplay wise (it's pixel-perfectly identical), but there is a huge difference performance wise (billions of polygons saved from being rendered). Your skybox is just updated once the player's viewing angle (relative to a distant point bajillions of miles away) changes enough to make a difference. Because it's bajillions of miles away, the amount the player needs to physically move to change his angle relative to that object is a very large amount (more than just a few feet - in a space game, it'd be like travelling for a minute or more, depending on your travel speed and the distance from the object).
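
A small sketch of that "only re-generate the skybox when the angle changes enough" test; the Vec3/Length helpers and the one-pixel threshold are illustrative assumptions, not Servant's code:

    #include <cmath>

    // Compares the parallax angle of a distant object, seen from where the
    // skybox was last baked versus the current camera position, against
    // roughly one pixel's worth of field of view.
    bool SkyboxNeedsUpdate(const Vec3& bakedAtPos, const Vec3& cameraPos,
                           const Vec3& nearestDistantObject,
                           float verticalFovRadians, int screenHeightPixels)
    {
        float moved    = Length(cameraPos - bakedAtPos);
        float distance = Length(nearestDistantObject - cameraPos);

        // Small-angle approximation of how far the object appears to shift.
        float parallaxAngle = moved / distance;
        float onePixelAngle = verticalFovRadians / screenHeightPixels;

        return parallaxAngle > onePixelAngle;   // shifted by more than ~1 pixel
    }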


Edited by Servant of the Lord, 29 July 2013 - 11:26 AM.

It's perfectly fine to abbreviate my username to 'Servant' rather than copy+pasting it all the time.
All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.
Of Stranger Flames - [indie turn-based rpg set in a para-historical French colony] | Indie RPG development journal

[Fly with me on Twitter] [Google+] [My broken website]

[Need web hosting? I personally like A Small Orange]


#10 Norman Barrows   Crossbones+   -  Reputation: 2158


Posted 29 July 2013 - 11:33 AM

my first big hit game was a starship flight simulator.

 

you've taken the first step.

 

you've defined the scale of the simulation. 1 d3d unit = 1 meter.

 

the next step is to define the maximum near plane distance you can get away with when drawing up close, perhaps something like 10 meters.

 

then you have to determine the size of the biggest nearby thing you'll draw. i believe a red giant star will be the biggest.

 

then you have to determine how small it will be before you don't draw it, say 10 pixels wide. smaller than that, it's too far, and you cull.

 

the size of the biggest object, combined with how small you draw it, will determine max visual range. given a 45 degree vertical FOV, and a 16:9 aspect ratio, the apparent size of an object will be approximately width / distance. i.e. a 500 meter long starship at a range of 500 meters will have an approximate apparent size of 1 meter across (i.e. about twice as wide as the monitor, say 4000 pixels). it's not until you increase the distance to the point that that 1 meter (~4000 pixels wide) shrinks down to 10 pixels or less that you can think about not having to draw the object because it's too small. the distance at which your biggest object drops below 10 pixels (or whatever size you decide on) will be your max visual range.

 

so now you know your near plane (10 meters) and your max visual range, your far plane (a sphere with a diameter of a red giant, drawn far enough from the camera so it's about 10 pixels wide - however far that is). 

 

if those 2 numbers fit in a zbuf, you're done; if not, you draw in passes, far to near, setting the clip planes and clearing the zbuf before each pass.

 

to calculate the max visual range:

 

let's say you want a red star to be about 10 pixels wide at max visual range. in the width / distance approximation above, call that an apparent size of about 0.03 meters.

 

so red giant width / max visual range = 0.03 meters

 

or

 

max visual range (your far clip plane) = red giant width (in meters) / 0.03 

 

as you can see, this will be a very large number. about 33 1/3 times the width of a red giant (in meters).
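
The same calculation as a tiny helper, following the post's width / distance approximation (the red giant size in the comment is only illustrative):

    // An object stops being worth drawing once it covers fewer pixels than the
    // chosen threshold, here the post's ~10 pixels, taken as ~0.03 m of screen.
    float MaxVisualRange(float objectWidthMeters,
                         float minApparentWidthMeters = 0.03f)
    {
        // width / distance = minApparentWidth  =>  distance = width / minApparentWidth
        return objectWidthMeters / minApparentWidthMeters;
    }

    // e.g. a red giant on the order of 1.0e11 m across stays visible out to
    // about 3.3e12 m, roughly 33 times its own width: that becomes the far plane.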

 

needless to say, that's too big for a regular float based d3d coordinate system.

 

so you'll need a world coordinate system to model the game with, and then you'll need to convert to camera relative coordinates just before frustum cull and draw.

 

a float gives you about 7 significant decimal digits to work with. so you can represent numbers up to about a million meters with 10 cm accuracy, or up to perhaps 10 million meters with an accuracy of about 1 meter.

 

i'd probably use an int. a 32-bit int gets you about -2 to +2 billion meters with 1 meter accuracy, and a 64-bit int gets you vastly more.

 

i've played with far clip planes as large as 100,000 or more. i think i tried 1,000,000, but i'm not sure.

 

you may find that your scale is too small for drawing your most distant objects without some LOD type strategy. based on what you do, there's a theoretical max size for a float based 3d coordinate system such as the camera uses. the farthest thing you want to draw must fit within that range to avoid some LOD type fix. so if you have to draw things out to 1 billion meters, and d3d's coordinate system can only do stuff out to about 1 million d3d units, you'll need a scale like 1 d3d unit = 1 km, or go with a fixup to draw objects closer and proportionally smaller.
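
A minimal sketch of that world-coordinates-vs-camera-relative split, assuming 64-bit integer world positions (one possible choice, not the only one):

    #include <cstdint>

    struct WorldPos  { std::int64_t x, y, z; };   // metres, game-side simulation
    struct RenderPos { float x, y, z; };          // metres, camera-relative, for D3D

    // The subtraction happens in integers, so no precision is lost; the result
    // is small for anything within visual range, so it survives as a float.
    RenderPos ToCameraRelative(const WorldPos& object, const WorldPos& camera)
    {
        return RenderPos{ float(object.x - camera.x),
                          float(object.y - camera.y),
                          float(object.z - camera.z) };
    }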


Norm Barrows

Rockland Software Productions

"Building PC games since 1988"

 

rocklandsoftware.net

 


#11 SuperVGA   Members   -  Reputation: 1118


Posted 29 July 2013 - 11:35 AM

Yes, Servant - that is exactly what I was talking about. You see, I didn't know that Mr. Crow wasn't familiar with those concepts.

Now that I think of it, perhaps it's better to resort to just using spheres with more or fewer polygons. (Render targets and FBOs can be a mouthful, but as Servant mentioned, they really will deliver in terms of performance if you need many "objects" to show at once.)

 

Most graphics APIs (or their extensions) allow creation of spheres given a number of slices. Those can be difficult to texture map, so perhaps you can benefit from doing a little shader programming. It shouldn't take too much effort, though. As with the skybox, those can also be positioned relative to the camera in a manner that makes them seem much further away than they really are. (Barrows has some insightful comments on measurements.)

 

You could divide its size by a certain amount and scale its distance from the camera down by the same factor; under a perspective projection the two are directly proportional, so the apparent size stays the same. It may be necessary to disable writing to the depth buffer, so nothing drawn afterwards will reveal the trick.
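
A small sketch of that shrink-and-pull-closer trick; Vec3 and Length are hypothetical helpers:

    // Dividing an object's distance and its size by the same factor leaves its
    // on-screen size unchanged, so very distant objects can be drawn well
    // inside the far plane.
    void ShrinkAndPullCloser(const Vec3& cameraPos, const Vec3& realPos,
                             float realRadius, float maxDrawDistance,
                             Vec3& outDrawPos, float& outDrawRadius)
    {
        Vec3 offset = realPos - cameraPos;
        float realDistance = Length(offset);

        if (realDistance <= maxDrawDistance)
        {
            outDrawPos = realPos;               // close enough: draw it as-is
            outDrawRadius = realRadius;
            return;
        }

        float k = maxDrawDistance / realDistance;   // same factor for both
        outDrawPos = cameraPos + offset * k;
        outDrawRadius = realRadius * k;
    }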

 

There's so much smoke and so many mirrors available to use, it can be difficult to pick between them. ... But remember to pick.

 

Once you have set up a wrapper for rendering to textures, the extra work isn't really that bad, and you will benefit from it in the long run.

(Using FBOs in OpenGL)

http://www.gamedev.net/page/resources/_/technical/opengl/opengl-frame-buffer-object-101-r2331

http://www.gamedev.net/page/resources/_/technical/opengl/opengl-frame-buffer-object-201-r2333

 

(Using render targets to render to texture in DX10)

http://www.rastertek.com/dx10tut22.html

http://www.two-kings.de/tutorials/dxgraphics/dxgraphics16.html

http://www.gamedev.net/topic/576213-directx-10-render-to-texture/


Edited by SuperVGA, 29 July 2013 - 11:41 AM.


#12 Norman Barrows   Crossbones+   -  Reputation: 2158


Posted 29 July 2013 - 11:46 AM

you'll never draw more than one solar system at once. everything else is skybox, distant 3d billboards of galaxies etc, and a particle system star / rock / debris field around the ship, plus your targets (other ships, star ports, etc). as the ship approaches a system, you add it to your list of stuff to draw (active targets / scene graph). when it goes beyond max visual range, you remove it from the list. you'll need a world map that tells you where the stars are. as the player flies through the world map (a star cloud), you simply draw the skybox, distant galaxies, star field, etc, and occasionally the odd star system if they happen to pass close enough.
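
A bare-bones sketch of maintaining that "stuff to draw" list from the world map; StarSystem, WorldPos and DistanceMeters are hypothetical, and a real implementation would add/remove incrementally rather than rebuild the list every frame:

    #include <vector>

    // Only systems inside the visual-range sphere ever reach the renderer.
    void UpdateActiveSystems(const std::vector<StarSystem>& allSystems,
                             const WorldPos& shipPos, double maxVisualRange,
                             std::vector<const StarSystem*>& activeSystems)
    {
        activeSystems.clear();
        for (const StarSystem& sys : allSystems)
            if (DistanceMeters(sys.position, shipPos) < maxVisualRange)
                activeSystems.push_back(&sys);   // close enough to draw this frame
    }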


Norm Barrows

Rockland Software Productions

"Building PC games since 1988"

 

rocklandsoftware.net

 


#13 Norman Barrows   Crossbones+   -  Reputation: 2158


Posted 29 July 2013 - 11:51 AM

a low poly sphere will be sufficient for drawing, perhaps 8 to 12 to 16 sides. for orbital closeups you may want to switch to a 20 or 24 sided sphere, maybe more. just enough so it looks round.
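
A toy version of picking the sphere tessellation from apparent size; the thresholds only mirror the rough numbers above and are not tuned:

    // Chooses the number of slices for a planet/star sphere from how big it
    // currently looks, using the width / distance approximation.
    int SphereSlicesFor(float radiusMeters, float distanceMeters)
    {
        float apparentSize = radiusMeters / distanceMeters;  // rough angular size

        if (apparentSize < 0.01f) return 8;    // a few pixels: barely a sphere
        if (apparentSize < 0.1f)  return 16;   // clearly a disc, still small
        return 24;                             // orbital close-up: keep it round
    }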


Norm Barrows

Rockland Software Productions

"Building PC games since 1988"

 

rocklandsoftware.net

 


#14 IcedCrow   Members   -  Reputation: 264


Posted 29 July 2013 - 07:27 PM

Thanks for the links guys... appreciate the feedback!


For more on my wargaming title check out my dev blog at http://baelsoubliette.wordpress.com/

#15 Hawkblood   Members   -  Reputation: 723


Posted 30 July 2013 - 02:15 PM


You can also use multiple depth ranges, and render the scene in "layers"; starting from the background (furthermost objects) and finally drawing the foreground (nearest objects), changing the near and far plane (and clearing depth) for each layer.

That's EXACTLY what I am using in my game.....

 

You're going to need a new coordinate system. I found that if you use the normal floating point system, when you get past the 10th digit, things start to go freaky..... The floating point can't handle very large or very small numbers. If you are interested, I already developed a struct for that.
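
Hawkblood's struct isn't shown here, but one common shape for such a type is a coarse integer "sector" plus a float offset within it, for example:

    #include <cstdint>
    #include <cmath>

    struct HugePos
    {
        std::int64_t sectorX, sectorY, sectorZ;  // which 10 km sector we are in
        float localX, localY, localZ;            // metres within that sector

        static constexpr float SectorSize = 10000.0f;   // 10 km

        void Renormalize()                       // keep the local part small
        {
            std::int64_t dx = std::int64_t(std::floor(localX / SectorSize));
            sectorX += dx;  localX -= dx * SectorSize;
            std::int64_t dy = std::int64_t(std::floor(localY / SectorSize));
            sectorY += dy;  localY -= dy * SectorSize;
            std::int64_t dz = std::int64_t(std::floor(localZ / SectorSize));
            sectorZ += dz;  localZ -= dz * SectorSize;
        }
    };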

 

This is not a small task you are undertaking. Don't be discouraged, though. It will be a good experience for you.

 

(this is not a shameless plug) http://youtu.be/fukDH9jzD1U

This is a short YouTube vid showing a little of what I have done. It doesn't show everything, but maybe you could get some inspiration....



#16 IcedCrow   Members   -  Reputation: 264


Posted 30 July 2013 - 04:36 PM

Awesome thanks for the link.


For more on my wargaming title check out my dev blog at http://baelsoubliette.wordpress.com/

#17 SuperG   Members   -  Reputation: 526


Posted 04 August 2013 - 04:51 AM

I second Nik02's suggestion.
You should also look into LOD and skyboxes.
Also, you can completely fake the distances if the multiple-depth-layers approach doesn't suit you:
You can completely disable depth testing if you just draw in the right order.

You can use impostors for far away stuff.
You can render everything to a spherical map (or a cubemap) and render that. If the objects are far away enough,
it's alright that they don't move when the camera does. (If you have need for both, you can use impostors too.)

There's plenty of options. Good luck! :)

I would still keep Z depth values.

Why? If you want your game to be stereoscopic-3D or Oculus Rift ready, you should choose an approach that supports that.

If your render order is the skybox first, then impostors that are far, far away, ignore their real Z but set it to the maximum, since that is as close to an "infinity" setting as you can get.

An example would be the X-series game X3, where in 3D mode the skybox is rendered just like HUD objects, in the foreground Z range. That makes dogfighting with a starship look like the stars are as close as spaceflies on the cockpit, with a doubled, offset target box, as if they were right next to the canopy. Very weird looking.

But when a battle cruiser flies by, you see a nice 3D effect as it fills out the whole viewport. Those things are one to several km long. Very useful for flybys.

#18 Hawkblood   Members   -  Reputation: 723


Posted 04 August 2013 - 05:43 AM

The trick I use is multiple D3DTS_PROJECTION calls (i.e. multiple projection matrices) for various depths. If you just use one projection matrix with one FCP (far clip plane), you will have a LOT of z-fighting. Another thing you might need is:

    // Offset from the camera to this ship, in the game's large-coordinate type.
    HUGEVECTOR3 V = Ship[i].V_Location - GE->MyCamera.CraftLocation;
    D3DXVECTOR3 v = V.Normalize();     // unit direction from the camera to the ship
    V.Length /= 10.0;                  // all distances are divided by 10 first
    float S = 1.0f;                    // scale factor later applied to the mesh
    if (V.Length > 10000.0) {
        // Beyond 10000 units: compute a shrinking scale factor and pull the
        // drawn position in along the view ray so it stays inside the far clip plane.
        float asd = (20000.0f * 57.289961630759f) / float(V.Length + 5000.0);
        asd /= 180.0f;
        S = asd * D3DX_PI / 2.0f;
        v *= 10000.0f - (5000.0f * S); // drawn distance approaches 10000 as range grows
    }
    else v *= float(V.Length);         // close enough: place it at the real (scaled) distance
    S /= 10.0f;

I use this for my ships and stations. First I divide all the distances by 10. Then I use a distance formula for anything beyond 10000. I use this formula to scale the object and place it in space before the FCP.





