Z-buffer accuracy

I've begun reading the OpenGL Programming Guide and came to a point where the book mentioned that values closer to the near clipping plane have greater depth accuracy than those nearer the far clipping plane (with perspective projections). The reasoning behind this was that during the perspective divide the z values are scaled non-linearly. I looked a little further into the subject online, and what I figured is that the z values are scaled that way because it's the only matrix transformation that will produce the desired change from the perspective canonical view volume to the parallel canonical view volume (I don't know any linear algebra except how to multiply matrices, so I might be wrong). My question, then, is why we HAVE to use a matrix transformation. Why can't we just alter the x and y (and w?) values and leave the z value unaltered?
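To make the non-linearity concrete, here is a minimal standalone C sketch (no OpenGL calls; the near/far values and sample distances are arbitrary examples chosen for illustration). It evaluates the depth mapping produced by a glFrustum-style perspective matrix after the perspective divide, showing how eye-space distance is distributed over the depth buffer range.

```c
#include <stdio.h>

/* NDC depth in [-1, 1] for a point at positive eye-space distance d,
   given near plane n and far plane f (standard OpenGL perspective matrix):
   z_ndc = (f + n)/(f - n) - 2fn / ((f - n) * d)                         */
static double ndc_depth(double d, double n, double f)
{
    return (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d);
}

int main(void)
{
    const double n = 1.0, f = 1000.0;   /* example clip planes */
    const double samples[] = { 1.0, 2.0, 5.0, 10.0, 100.0, 500.0, 1000.0 };
    size_t i;

    for (i = 0; i < sizeof(samples) / sizeof(samples[0]); ++i) {
        double d = samples[i];
        /* Window-space depth in [0, 1], as stored in the z-buffer by default. */
        double zwin = 0.5 * ndc_depth(d, n, f) + 0.5;
        printf("eye distance %8.1f -> depth buffer value %.6f\n", d, zwin);
    }
    return 0;
}
```

With these example planes, a point at distance 2 already maps to roughly 0.5 in the depth buffer, so about half of the buffer's precision is spent on the first two units in front of the near plane.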

That is a good question. I've tried thinking of different reasons for it, but so far I haven't been able to come up with one. That is, until I was working on some textures today.

It occurred to me that texturing needs to take the distance into account, along with the width and height of the object, in order to look correct. I am assuming the same factor probably applies to many other things, such as fogging.
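As a rough illustration of that point, here is a small C sketch of perspective-correct versus plain linear (affine) texture-coordinate interpolation along one screen-space edge. The endpoint values (u0, u1, w0, w1) are made-up numbers, not taken from the thread; the idea is that the interpolation has to divide by something proportional to depth (the clip-space w) to look right.

```c
#include <stdio.h>

int main(void)
{
    /* Endpoint 0 is close to the camera, endpoint 1 is far away. */
    double u0 = 0.0, w0 = 1.0;   /* texture coord and clip-space w at vertex 0 */
    double u1 = 1.0, w1 = 10.0;  /* texture coord and clip-space w at vertex 1 */
    double t;

    for (t = 0.0; t <= 1.0001; t += 0.25) {
        double affine  = u0 + t * (u1 - u0);                 /* ignores depth */
        double num     = (1.0 - t) * (u0 / w0) + t * (u1 / w1);
        double den     = (1.0 - t) * (1.0 / w0) + t * (1.0 / w1);
        double correct = num / den;                          /* perspective-correct */
        printf("t=%.2f  affine u=%.3f  perspective-correct u=%.3f\n",
               t, affine, correct);
    }
    return 0;
}
```

At the screen-space midpoint (t = 0.5) the affine result is 0.5, while the perspective-correct result is about 0.09, because the far end of the edge is ten times further away; this is the kind of depth dependence the texturing comment is getting at.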

My other idea is that it may be faster to deal with the matrix if you are doing things uniformly to it. Since square matrices are used, many tricks can be used to increase performance.

I guess a really good test of both ideas would be to code something up where you do all the work in software without ever using a 3D API. Although it will probably run slowly, you can at least test the basic idea.

My name is my sig.
