Objects shake when the camera is near them

6 comments, last by Lord_Evil 14 years, 8 months ago
I am writing a space model program. When I go near an object in space, it shakes when I rotate the view, but when I move far away from it, it looks fine again. Someone told me this is related to z-buffer precision in OpenGL. Is that right, and how can I solve it? Thanks.
It might be a precision issue, but I'm not sure it's the precision of the depth buffer.

Where is your object located?
How is your projection set up?
How do you calculate and set the transformations?

Some code (and maybe some screenies) would really be helpful.
If I was helpful, feel free to rate me up ;)If I wasn't and you feel to rate me down, please let me know why!
Quote:Original post by Lord_Evil
It might be a precision issue, but I'm not sure it's the precision of the depth buffer.

Where is your object located?
How is your projection set up?
How do you calculate and set the transformations?

Some code (and maybe some screenies) would really be helpful.

I am drawing a very big city.
When I look down from the sky, there is no problem,
but when I go inside a house that contains furniture,
the furniture shakes.
On the other hand, I also want to see the environment through the windows of the house, so I can't just draw the house alone when I am inside it.


Well, it could be a precision issue with the coordinates that isn't noticeable from far distances. Where is the furniture located (coordinates)?
Quote:Original post by Lord_Evil
Well, it could be a precision issue with the coordinates that isn't noticeable from far distances. Where is the furniture located (coordinates)?

Very close to the eye, and with only small differences in their vertex coordinates.

Yes, they are close to the eye if you move the camera next to them. That's why depth precision is unlikely to be the problem.

But what are the absolute coordinates? If the coordinates are very far from the origin, floating point precision in the vector/matrix math might be the issue.
Quote:Original post by Lord_Evil
Yes, they are close to the eye if you move the camera next to them. That's why depth precision is unlikely to be the problem.

But what are the absolute coordinates? If the coordinates are very far from the origin, floating point precision in the vector/matrix math might be the issue.


Yes, the coordinates are very large because they are real-world coordinates (around 8,000,000), but with tiny differences between the furniture coordinates (~0.001 to 0.0000001).
Someone told me that this gives the z-buffer a very large range (due to having both very far and very near objects), and that together with the small differences between the furniture coordinates, OpenGL will have some precision error on the object coordinates. So how can I solve it?
No, that isn't a problem with the z-buffer.

The depth value represents the distance of a fragment from the camera, so if you are close to the objects, depth precision is generally good enough.

The z-buffer distributes depth values non-linearly between the near plane (close to the viewer) and the far plane (maximum visible distance from the viewer), with the highest precision close to the viewer. Thus if you view the objects from a large distance you might see z-fighting, though it may be barely noticeable if the objects are small (i.e. have a small on-screen size).

Your problem most likely results from a lack of floating-point precision. Your very large coordinates cause heavy rounding; at a magnitude of 8,000,000 a 32-bit float cannot even distinguish between 8000000.0 and 8000000.1.

Arithmetic operations worsen the problem, since small rounding errors add up. This is likely to happen in your transformations.

One possible solution is to use sectors with integer sector coordinates, i.e. the sector at the origin is 0/0/0 and the sectors next to it would be -1/0/0, 1/0/0, etc.
Each sector coordinate represents a certain unit, e.g. 1 km or 100 m, so with 100 m sectors the center of the sector with coordinates 2/0/3 would be at x=200 m, y=0 m and z=300 m from the origin.

Objects are now placed in relation to a sector, i.e. their coordinate stores the sector coordinate plus an offset. For example, if your table is at x=800000.5, y=15.0 and z=4021.7 and a sector represents 1000 units, the table's coordinate would be:
xsector=800, ysector=0, zsector=4 and
xoffset=0.5, yoffset=15, zoffset=21.7

When rendering and transforming, you calculate the sector coordinates of the camera and use those as your local origin, i.e. if the camera is located in sector x=799, y=0, z=4 your table would be rendered at x=(800-799)*1000 + 0.5 = 1000.5, y=(0-0)*1000 + 15 = 15 and z=(4-4)*1000 + 21.7 = 21.7


Edit: have a look at this thread for some more information, especially the links in the second post.

Edit: another note, just to make it clear: although I think your problem is not a depth precision one, you should not forget that a high far/near ratio will kill your depth precision anyway. It's not the object coordinates but the near and far plane settings of your projection matrix that matter in this case. The near plane should be as far away as possible (often 1.0 is a good value), whereas the far plane (the maximum viewing distance) should be as near as possible (values between 500 and 1000 are often sufficient). Have a look here.

