

Depth problem : small object in big scene


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

6 replies to this topic

#1 polar01   Members   -  Reputation: 114


Posted 16 January 2012 - 03:54 AM

Hi,

Imagine I have a small object in a big scene, for example a car and a floor (it is just an example).
The car has normal dimensions, but the floor is a plane that is 10000x larger (it can be something more complex than a plane, of course).

If I set the zNear/zFar based on the plane, then the car is not displayed!
If I set the zNear/zFar based on the car, then the plane flickers and is not displayed correctly!

I'm searching for a solution... so please help if you can, because I have no idea how to solve this problem.

Thanks
Aurora Studio - PureLight - Animation & rendering software and API. http://www.polarlights.net


#2 mrjones   Members   -  Reputation: 612


Posted 16 January 2012 - 05:50 AM

Move zNear a bit farther away if possible; it helps. These zNear/zFar pairs have worked best for me: 0.1-1000, 1-10000, 10-100000, etc., on most graphics cards. The result depends on the number of depth buffer bits; these values should usually work quite well for a 16-bit depth buffer or better.
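To see why pushing zNear out helps so much, here is a rough sketch (mine, not from the thread) of the depth resolution math for a standard perspective projection with an integer depth buffer:

```python
# Sketch: smallest z difference resolvable at view-space distance z,
# derived from the derivative of the standard depth mapping
# d(z) = (1/near - 1/z) / (1/near - 1/far), quantized to 2^bits steps.
def depth_resolution(z, near, far, bits=16):
    step = 1.0 / (2 ** bits)              # one depth-buffer increment
    return step * (far - near) * z * z / (far * near)

# At z = 1000 with a 16-bit depth buffer:
print(depth_resolution(1000, 0.1, 10000))   # ~150 units of slop with a tiny zNear
print(depth_resolution(1000, 10.0, 10000))  # ~1.5 units with zNear pushed out
```

Raising zNear from 0.1 to 10 buys roughly 100x finer depth resolution at the same distance, which matches the rule of thumb that precision is dominated by the near plane, not the far plane.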

#3 polar01   Members   -  Reputation: 114


Posted 16 January 2012 - 08:03 AM

Thanks a lot,

I already re-compute zNear and zFar every frame depending on the scene, camera, etc., but for some extreme scenes (imagine a microprocessor in a mountain) it will not work, because the scale difference between the two objects (microprocessor and mountain) is really big!

So I'm still searching for a solution...

#4 mrjones   Members   -  Reputation: 612


Posted 16 January 2012 - 08:16 AM

Does it happen a lot? I mean, is it a practical problem? A microprocessor in a mountain would probably be too small to be visible anyway, unless you have a very high resolution. In that case it might be possible to split rendering into two separate passes: first render with zNear=1000.0, zFar=100000.0, clear the depth buffer, and then render everything with zNear=0.1, zFar=1000.0. Not entirely sure it would work, though.
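A quick numeric sketch (mine, using the usual depth-resolution formula) of what this two-pass split buys: the far pass gets its own large zNear, so distant geometry gets far more precision than it would in a single pass over the whole range.

```python
# Sketch: depth resolution at distance z for near/far and an integer buffer.
def resolution(z, near, far, bits=24):
    step = 1.0 / (2 ** bits)
    return step * (far - near) * z * z / (far * near)

z = 50000.0  # a distance covered by the far pass
coarse = resolution(z, 0.1, 100000.0)     # one pass over the whole scene
fine   = resolution(z, 1000.0, 100000.0)  # the far pass of the split scheme
print(coarse / fine)  # ~10000x finer in the far pass
```

Clearing the depth buffer between the passes is what keeps the two ranges from fighting each other, since the near-range geometry is always drawn on top.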

#5 polar01   Members   -  Reputation: 114


Posted 16 January 2012 - 08:47 AM

Thanks,

When I zoom in on the microprocessor I will not see the mountain correctly; I will get a flickering effect!

And playing with several layers of objects is complex... it is difficult to handle intersections between objects! :-P

Not sure that this problem has a real solution... ;-)

#6 dpadam450   Members   -  Reputation: 842


Posted 17 January 2012 - 02:18 AM

As stated, what is your depth buffer precision? What is the size of the microprocessor in OpenGL units? zNear=1000.0 does not make much sense to me: you are allowing almost nothing near the camera to be seen. You also should not be adjusting the near/far planes every frame, if ever at all during the application.

"When I will try to zoom on the microprocessor"
Are you zooming by changing the near plane? That is entirely wrong if that is what you are doing. To create a zoom effect, you change the lens field of view, which is the first parameter of gluPerspective.
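For reference, a zoom-by-FOV can be sketched like this (the `zoomed_fov` helper is hypothetical, not from the thread; only the gluPerspective call is real OpenGL):

```python
import math

# Sketch: zoom by narrowing the field of view, leaving zNear/zFar alone.
def zoomed_fov(fov_y_deg, zoom):
    """FOV that magnifies the image by `zoom` (2.0 = objects appear twice as big)."""
    half = math.radians(fov_y_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half) / zoom))

print(zoomed_fov(60.0, 4.0))  # ~16.4 degrees for a 4x zoom from a 60-degree lens
# Then: gluPerspective(zoomed_fov(60.0, 4.0), aspect, zNear, zFar)
```

Note the tangent relationship: halving the FOV is not exactly a 2x zoom, because magnification follows tan(fov/2), not fov itself.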

#7 Hodgman   Moderators   -  Reputation: 27626


Posted 17 January 2012 - 03:06 AM

The quick short answer is: increase your near plane. Small near-plane values are very bad for depth buffer precision.

It might help to think in terms of a real camera here. When you make a projection matrix, you can visualize it as a pyramid with the point cut off. The top of the pyramid is your near plane, and the base is your far plane. The point where the tip would be (if it wasn't cut off by the near plane) is the "eye".
However, it's bad to think of the camera as being located at the "eye" position -- instead, think of the camera's "film" as being located at the rectangle where the near plane is.
In your virtual camera, when you reduce the FOV, the area of the near plane gets smaller -- however, in a real camera, it's impossible to shrink the film! So to be realistic, when you reduce the FOV, you should also increase the near-plane distance so that the area of that quad stays the same size as it was originally (so you're still projecting onto the same size "film"). To do this, you should also move the "eye" position backwards so that the "film" stays in the same location as it was before.
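The real-camera analogy above can be put into numbers (helper names are mine, a sketch of the geometry rather than anyone's implementation):

```python
import math

# The "film" is the near-plane rectangle; its height is 2 * near * tan(fov/2).
def film_height(fov_y_deg, near):
    return 2.0 * near * math.tan(math.radians(fov_y_deg) / 2.0)

# Near-plane distance that keeps the film height unchanged after a FOV change.
def near_for_same_film(old_fov, old_near, new_fov):
    target = film_height(old_fov, old_near)
    return target / (2.0 * math.tan(math.radians(new_fov) / 2.0))

old_near = 0.1
new_near = near_for_same_film(60.0, old_near, 15.0)
print(new_near)             # ~0.44: the near plane moves out...
print(new_near - old_near)  # ...and the eye moves back by the same amount
```

Zooming from 60 to 15 degrees this way pushes the near plane out roughly 4x, which is exactly the precision win Hodgman describes.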



Some alternative short answers are: render in multiple passes, use an inverted (reversed-Z) floating-point depth buffer, or implement a logarithmic depth buffer.
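As an illustration of the logarithmic option (my sketch of the standard formulation, not code from the thread): a logarithmic mapping spends depth precision evenly in *relative* terms, so the resolvable error grows in proportion to distance instead of exploding near the far plane.

```python
import math

# Sketch: logarithmic depth mapping d(z) = log(z/near) / log(far/near), in [0,1].
def log_depth(z, near, far):
    return math.log(z / near) / math.log(far / near)

near, far = 0.1, 100000.0
step = 1.0 / (1 << 24)  # one increment of a 24-bit depth buffer
for z in (1.0, 100.0, 10000.0):
    # invert d + one step to find the resolvable dz at this distance
    d = log_depth(z, near, far)
    z_next = near * (far / near) ** (d + step)
    print(z, z_next - z)  # dz scales linearly with z: constant relative error
```

The cost is that this mapping has to be written out from a shader (it is not what the fixed-function projection produces), which can interfere with early-z optimizations; reversed-Z with a float depth buffer achieves a similar effect using only the projection matrix and depth-test state.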






