Archived

This topic is now archived and is closed to further replies.

Death buffer


Recommended Posts

Hi, I have a problem with the z-buffer ... I'm drawing a landscape like in NeHe's tutorial 35, with a sort of sea (a plane that cuts through the landscape). But the intersections of the landscape and the sea look very bad! I tried changing the z-far / z-near values, but I haven't gotten a good result ... The landscape's coordinates are: z = [-128,128], x = [-128,128], y = [0,256]; the sea's: x = [-2000,2000], z = [-2000,2000], y = [0,10].

What color depth do you use? (16bpp or 32bpp)

And how many bits does your depth buffer have?
(Call glGetIntegerv(GL_DEPTH_BITS, &depth_bits) and print depth_bits.)

depth_bits => 16 bits
bpp => 16 bits
3D card: Radeon VE (boooo)
gluPerspective() => I didn't use it ... Should I?

glDepthRange(): I tried a lot of values ... for z-near, I usually set it to 0, but I heard that I must set it to 1 ... but when it is set to 1, the z-buffer doesn't work at all (the pixels that appear are the last ones I draw).
For z-far, I tried values from 1 to 30000 ...

Thanks for your help

quote:

depth_bits => 16 bits


First part of the problem.

quote:

gluPerspective() => I didn't use it ... Should I ?


Not necessarily. Do you use glFrustum instead? What values do you use there?

quote:

glDepthRange(): I tried a lot of values ... for z-near, I usually set it to 0, but I heard that I must set it to 1 ... but when it is set to 1, the z-buffer doesn't work at all (the pixels that appear are the last ones I draw).
For z-far, I tried values from 1 to 30000 ...


Any value for znear below 1 is usually deadly for the z-buffer. 0 isn't even defined, and will yield unpredictable results.

You should try something like znear=1.0f and zfar=10000.0f. That should work pretty well with a 16-bit depth buffer. You can then gradually adjust the far plane to get the results you want. But don't make znear < 1 unless absolutely necessary! You can even try to make it a bit larger, if that does not introduce rendering problems. The zfar value isn't that critical, but a good znear value is crucial, especially with only 16 bits of depth buffer precision.

Edit: just to prevent misunderstandings: the znear/zfar values above are frustum planes for glFrustum/gluPerspective, not arguments to glDepthRange!

[edited by - Yann L on March 19, 2002 4:46:47 PM]

Ouch!
A 16-bit depth buffer is not enough.

Try a 24-bit or 32-bit depth buffer and that should be better!

About depth range, use glDepthRange(0, 1) if you can. (You can't if you use specific depth effects, which I personally doubt.)

And don't set the z-near to zero!
BTW, what is "z-near" for you? Is it the parameter you pass to glDepthRange? If so, then you're wrong.

glDepthRange values are clamped to [0,1].
Yes, you have to use glFrustum for a perspective projection matrix.
Use glOrtho for an orthographic projection.

I think you're asking too much of a 16-bit depth buffer.
Set the z-near to 1, or even to 0.2 or 0.5 if 1 is too much.

Which unit do you use? Meters?
Does [0.1f -> 10000.0f] represent [10 cm -> 10 km]?
If so, are you sure you will see objects/terrain 10 kilometers away, and are you sure you need to see objects as close as 10 cm from the camera?

And please note that multiplying the z-near by 10 gives better results than dividing the z-far by 10.

[edited by - vincoof on March 20, 2002 7:16:01 AM]

I was using those values because I didn't know they controlled the depth buffer range!

I thought they were only there to select the range in which we want to draw objects (just simple clipping).

My program doesn't need such a large range ... I need to see objects from 10 to 1000 units maximum.

I was using 0.1 for z-near because I started my program from NeHe's tutorial 35, where that value is used ... and 10000 is because I changed it.

Yep, it's OK.

One more time: thanks a lot!
I like the NeHe web site so much! I was already crazy about the tutorials, and now I can also see that the people on the forum are very nice.

Only one thing ... how can I change the number of bits of my depth buffer?

You're more than welcome.
Thanks for the compliments. We sure appreciate them!

With NeHe's base code you can set some parameters for the creation of the window (such as window width/height and fullscreen mode), and there are parameters for the bit planes of the color buffer and of the depth buffer.
I think NeHe sets the depth buffer to 16 bits in his tutorials. You can try different values.

But remember that it is just a request. You may ask for 64 bits and the driver will probably fall back to 32 bits.
Some cards/drivers force the number of bit planes in the depth buffer according to the number of bit planes in the color buffer. For instance, if you have a GeForce2 card and your color buffer is 16-bit, then you'll always get a 16-bit depth buffer even if you ask for 32 bits.
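On Windows the request goes through the PIXELFORMATDESCRIPTOR handed to ChoosePixelFormat, which is what NeHe's base code fills in. The fragment below is a rough sketch of the relevant fields only (comments mine, assuming an existing device context hDC); bumping cDepthBits is the change in question:

```c
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize        = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion     = 1;
pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType   = PFD_TYPE_RGBA;
pfd.cColorBits   = 32;   /* ask for 32bpp color */
pfd.cDepthBits   = 24;   /* the tutorials use 16 -- ask for 24 here */
pfd.cStencilBits = 8;    /* stencil shares the same 32-bit word on GeForce */

/* The format actually granted may differ from the request: */
int format = ChoosePixelFormat(hDC, &pfd);
SetPixelFormat(hDC, format, &pfd);
```

Afterwards, glGetIntegerv(GL_DEPTH_BITS, ...) tells you what the driver really gave you.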

OK ... so if I request a 32-bit depth buffer, the application will use a depth buffer of 32 bits or less on every computer that runs the program?

Yes, that's it.

Obviously, if a card only supports 64 bits for the depth buffer and you ask for 32 bits, you will get a 64-bit buffer.
So you would exceptionally have more bits than wanted.
But I doubt any manufacturer would be silly enough to produce such a card.

The GeForce family doesn't support 32-bit depth buffers. 24-bit is the maximum (because they pack the 8-bit stencil into the same 32-bit quadword).

I thought I'd just throw that in, in case Mikvix desperately tries to get a 32-bit buffer...

quote:

But I doubt any manufacturer would be silly enough to produce such a card

SGI makes 3D boards that support 128-bit depth buffers...

[edited by - Yann L on March 21, 2002 9:47:26 AM]

And what if I need a 1024-bit depth buffer?
OK, OK ... I'll wait for the GeForce 12.

Anyway, thanks for all this info ... Now I nearly understand 1% of how the depth buffer works ... arf

128 bits?
LOL
I knew that some SGI boards supported 64 bits, which is monstrous.
128 bits is just... monstrously monstrous.


And AFAIK the GeForce family supports 32-bit depth buffers (if you don't use the stencil buffer).

quote:

128 bits ?
LOL
I knew that some SGI boards supported 64 bits, which is monstrous.
128 bits is just... monstrously monstrous.


I know, but they have a programmable framebuffer, which means you can assign as many components as you like to each channel, as long as you don't run out of framebuffer memory. So you could even get a 256-bit depth buffer, but I don't know if there would be much room left for your r,g,b image. On the other hand, with 1 GB of framebuffer memory, there is some room to play around. I *love* those machines; unfortunately only at work. I would like to have such a thing at home...

quote:

And AFAIK the GeForce family supports 32-bit depth buffers (if you don't use the stencil buffer).


Hmm, I think it will still return a 24-bit depth buffer + 8 unused bits; AFAIK it's hardwired on the board. I'm sure the GF2 does that, and pretty sure the GF3 does as well. But I'm not sure about the GF4; it could be that you can get a real 32-bit buffer there.

quote:
Hmm, I think it will still return a 24-bit depth buffer + 8 unused bits

Wow, I've just tried it and you're right! (I tested it on a GF3.)
I can't believe it. So many people told me not to use the stencil buffer in order to get better depth precision with GeForces. Darn 'em.
