Setting FOV?

Started by Bacardi34
12 comments, last by Bacardi34 21 years, 2 months ago
I am working on getting my first 3D space up and running with a camera and a simple mesh. I am using CD3DCamera for the camera class, and there is a FOV parameter in it. I know it is the field of view, but I am unsure what I need to set it to. Do I just make something up, or is there a formula like the one for the aspect ratio (width/height)? [edit] Also, I noticed that there are near and far planes - do I just make these up too? Thanks! [edited by - Bacardi34 on January 29, 2003 3:45:29 PM]
A rundown:
D3DXMATRIX *D3DXMatrixPerspectiveFovLH(D3DXMATRIX *pOut,
FLOAT fovY,
FLOAT Aspect,
FLOAT zn,
FLOAT zf
);

fovY is how much you can see vertically (the angle of view in the y direction). The aspect ratio determines how much you can see along the horizontal axis. The smaller the FOV, the larger (closer) things appear - this can be used as a zoom effect. The near and far planes are used to compute the z coordinate (or w) for each rendered pixel. Anything closer than the near plane is culled, and vice versa for the far plane.
A good value to set fovY to is D3DX_PI/4 (45 degrees).
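For example, a minimal sketch of putting it together (pDevice and the 640x480 backbuffer size are assumptions here - substitute your own device pointer and dimensions):

#include <d3dx9.h>   // or d3dx8.h, depending on your SDK version

// Sets a 45-degree perspective projection on a device you've already created.
void SetProjection(IDirect3DDevice9* pDevice)
{
    D3DXMATRIX proj;
    D3DXMatrixPerspectiveFovLH(&proj,
                               D3DX_PI / 4,      // fovY: 45 degrees, in radians
                               640.0f / 480.0f,  // aspect = width / height
                               1.0f,             // zn: near clipping plane
                               1000.0f);         // zf: far clipping plane
    pDevice->SetTransform(D3DTS_PROJECTION, &proj);
}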
Cool, thanks guys
FOV
===
This "does what it says on the tin" ;o) this is the angle that the camera lens can see. A diagram (in overhead plan form):

_____________
\           /
 \    P    /
  \       /
   \     /
    \___/
     \a/
      E


Say your eyes are point E and you are looking at something (P); you can see other things surrounding P in a sort of cone shape. Same with a camera: its lens sees stuff in a cone. The angle of that cone is the field of view (marked as 'a' - it's meant to be a semicircle). You may have heard of camera lens names such as "wide angle" etc - the angle referred to is the field of view.

Widening the angle of view has the side effect of acting like a zoom (outwards). Imagine what happens to the horizontal line just above the 'a' in my diagram when you increase the field of view: it isn't going to get any further away from your eye/camera, but the slice of the world it covers is going to get bigger - you see more, as if zoomed out (and narrowing the FOV zooms in). The FOV you choose tends to depend on what your application needs to view (a 3rd-person game has different requirements than a 1st-person one, for example).

Try starting with a value between 60 and 90 degrees (caution: your camera function may take radians, so remember to convert if necessary).
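If you prefer to think in degrees, a quick sketch of the conversion (D3DXToRadian is a D3DX helper macro; the manual multiply works anywhere):

#include <d3dx9math.h>   // D3DXToRadian lives in the D3DX math header

// Convert a FOV given in degrees into the radians that fovY expects.
FLOAT fovDegrees = 70.0f;                     // e.g. a 70-degree FOV
FLOAT fovRadians = D3DXToRadian(fovDegrees);  // = degrees * (D3DX_PI / 180)
// Without D3DX: FLOAT fovRadians = fovDegrees * 3.14159265f / 180.0f;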


zNEAR, zFAR
===========
The "near and far clipping planes" again are what they sound like. Things outside them get clipped. However you don''t specify them as planes, just values which represent the distances from the camera lens (E) in the direction the camera is looking at which should be used for clipping . Another diagram:

\     O     /
 \_________/ zf
  \   P   /
   \     /
    \___/    zn
     \ /
      E


E is once again the eye or camera and P is something the camera is pointing at. zn is the near Z clipping plane, zf is the far Z clipping plane. Any polygons which are further away from the camera than zf won't be rendered; they'll be clipped.

O represents an object made out of polygons - because it's further away from E than zf, it will be clipped.

The same would happen if O were nearer to E than zn. The space between zn and zf is what the viewer will see; anything outside that space will be clipped.


So how do you set values for zn and zf?
----------------------------------------
It mostly depends on YOUR application. The values for zn and zf use the same scale as the coordinates in your world - whatever scale you choose.

Say you have a 3D cube whose sides are all "500" long; what that 500 means is entirely up to you - it could be 500 millimetres, 500 feet, 500 miles, etc... The values for zn and zf are the same: they only make sense once you know the scale of other things in your world.

If you're working with pre-made 3D models (such as the .X files which come with the SDK), a good thing to do is work out what units those models are using - if the human model you load seems to have vertices with values ranging from 0 to 1.5, then it's likely that the model is defined in metres, so set your near and far Z clipping planes in metres. Say to yourself "I can't see stuff closer than BLAH metres in front of my eyes, and beyond BLAH2 metres away I can't make out certain details", and use your BLAH and BLAH2 values as zNEAR and zFAR - or scale if you need to.

Something important: YOU CANNOT specify 0 for zNEAR or zFAR, or horrible stuff happens (when you take a picture with a camera you don't see the lens, do you - same deal - you see a little way outside the lens).

If you're still unsure, set zNEAR to 0.1 and zFAR to 100.0, get something working, and play with the values to see what I mean (experimentation teaches you much more than my text!)
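As a concrete starting point, that advice in code (same D3DX call as earlier in the thread; aspect and pDevice are assumed to come from your own setup):

#include <d3dx9.h>

// Hypothetical starting values - tweak zn/zf and re-run to see what clips.
void SetStartingProjection(IDirect3DDevice9* pDevice, float aspect)
{
    D3DXMATRIX proj;
    D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 4, aspect, 0.1f, 100.0f);
    pDevice->SetTransform(D3DTS_PROJECTION, &proj);
}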


The big caveat
--------------
So cool, you're making a space game and you decide the camera can see from 0.1 metres to, say, 1,000,000,000,000.0 metres, so you'll just plug those values into zNEAR and zFAR... WRONG! Your Z buffer only has 16 or 32 bits of precision, and what's worse, around 80% of that resolution is stacked up in the first 20% of the space between zNEAR and zFAR - so the greater the distance between them, the more you get nasty Z buffer artifacts.

The higher the value of zNEAR the better too
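To see why, here's a small standalone sketch (plain C++, nothing D3D-specific) that evaluates the depth value a standard left-handed perspective projection writes to the Z buffer. The formula z' = (zf / (zf - zn)) * (1 - zn / ze) falls straight out of the D3DXMatrixPerspectiveFovLH matrix:

#include <stdio.h>

// Normalized depth written to the Z buffer for an eye-space depth ze,
// with near/far planes zn and zf (left-handed perspective projection).
float NormalizedDepth(float ze, float zn, float zf)
{
    return (zf / (zf - zn)) * (1.0f - zn / ze);
}

int main()
{
    const float zn = 0.1f, zf = 1000.0f;
    for (float ze = 1.0f; ze <= 1000.0f; ze *= 10.0f)
        printf("eye z = %7.1f  ->  buffer z = %f\n", ze, NormalizedDepth(ze, zn, zf));
    // Output: an object just 1 unit away already maps to ~0.90, so 90%
    // of the buffer's resolution is spent on the first unit of distance.
    return 0;
}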

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com


...continued [had to leave the office just as I was posting the above]

The higher the value of zNEAR, the better, too. The way distance in eye space is represented in screen space (perspective) is what causes the Z buffer to be non-linear - with floating-point Z buffers it can be almost logarithmic!
Moving zNEAR away from the eye makes the relationship between eye-space Z (the world seen through your camera lens, before perspective) and screen/Z-buffer space Z (after perspective is applied - what goes into the Z buffer) much more linear, so the Z buffer behaves as you perceptually expect. How far you can move zNEAR depends on what you need to render close to the camera, and you can lose some range in the Z buffer.
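Using the same formula as the sketch above, a quick comparison of two near-plane choices (numbers come from the formula, not from any particular card):

#include <stdio.h>

// z' = (zf / (zf - zn)) * (1 - zn / ze), as in the previous sketch.
static float Depth(float ze, float zn, float zf)
{
    return (zf / (zf - zn)) * (1.0f - zn / ze);
}

int main()
{
    const float zf = 1000.0f;
    // zn = 0.1: an object only 1 unit away already lands near 0.90...
    printf("zn=0.1: ze=1  -> %f\n", Depth(1.0f, 0.1f, zf));   // ~0.9001
    // ...but with zn = 1.0 the same stretch of buffer covers ze = 2..10.
    printf("zn=1.0: ze=2  -> %f\n", Depth(2.0f, 1.0f, zf));   // ~0.5005
    printf("zn=1.0: ze=10 -> %f\n", Depth(10.0f, 1.0f, zf));  // ~0.9009
    return 0;
}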

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com


So, will a Z buffer of floating-point values be better than today's integer values?......

In my opinion it would... wouldn't it? (staring motionless at monitor like a bear out of a long hibernation...)
Where would the difference be? If both have x bits, you will always have 2^x different values they can take, and I really don't care if the Z buffer stores 4 and 5 or 4.000001 and 4.000002, as long as it can tell which one is bigger.
f@dz
http://festini.device-zero.de
Trienco -

Open your nearest programming book and you'll find:
unsigned int - 4 bytes = 0 to 4294967295
float - 4 bytes = 3.4E±38, which means 3.4x10^±38

This float thing has to do with the way bits are treated in the float format according to some IEEE-blahblah standard (IEEE 754).

Anyway - I think floating-point values for the Z buffer would increase the resolution for values further away from the camera - plus maybe eliminate the need for Z biasing...?
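As a quick illustration of how float bits are spent (a standalone C sketch, not about any particular Z buffer implementation): a 32-bit float has exactly as many bit patterns as a 32-bit int, but they are distributed non-uniformly - dense near zero, sparse far away:

#include <stdio.h>

int main()
{
    // Near 1.0, adjacent floats are roughly 1.2e-7 apart...
    printf("%.10f\n", 1.0f + 0.0000001f);   // prints 1.0000001192, not 1.0
    // ...but above 2^24 consecutive integers collapse into the same float.
    printf("%d\n", (float)16777216 == (float)16777217);   // prints 1 (equal!)
    return 0;
}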
quote: Original post by Anonymous Poster
So, will a Z buffer of floating-point values be better than today's integer values?......

In my opinion it would... wouldn't it? (staring motionless at monitor like a bear out of a long hibernation...)



It seems not! Jim Blinn's "W Pleasure, W Fun" article mentions floating-point Z buffers - and shows how they make the artifacts worse!

Whether whole integers, fixed point or floats are used by the hardware is one of those things each manufacturer tends to keep secret.

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com


This topic is closed to new replies.
