Tipotas688

Why do we do things the way we do?


It was presented to me as this question: "Why do we draw everything in computers the way we do?" I don't know whether they meant how we create 3D effects on a 2D monitor or how the drawing techniques we use in programming relate to the way the hardware works, but I couldn't reply with an answer that had a strong backbone. Is there a website or book you can point me to, or can you explain it here? I am really interested to learn why things work the way they do. Thanks! [Edited by - Tipotas688 on March 6, 2010 6:44:17 PM]

This book will answer all of your questions: http://www.amazon.com/Computer-Graphics-Principles-Practice-2nd/dp/0201848406

The hardware it references is very outdated, and it spends a great deal of time on a graphics library that never caught on (and that you've probably never heard of), but it also covers graphics programming theory and practice in excellent detail.

The way we do things is constantly evolving.





First, you should know that the problem of computer rendering has been solved. That is, we know how to perform photo-realistic rendering.

Two papers introduced the Rendering Equation back in 1986. It is a comprehensive formula for rendering: it captures everything except a few phenomena such as fluorescence and some volume-based effects like subsurface scattering. The high-quality rendering models (radiosity, ray tracing, photon mapping) are just specialized forms of the equation.
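For reference, here is the equation in the hemispherical form it is usually written in today (paraphrased, not copied from either 1986 paper):

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

In words: the light L_o leaving a point x in direction \omega_o is the light L_e the surface emits itself, plus the integral over every incoming direction \omega_i on the hemisphere of the incoming light L_i, weighted by the surface's reflectance function f_r and the cosine term (\omega_i \cdot n). The recursion comes from L_i: the light arriving at x from some direction is the light leaving whatever other surface lies in that direction.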

The difficulty is that there is a lot of math behind it, and some of the formulas can be very complex. Evaluating it requires recursion: the light arriving at a point depends on the light leaving every other point. The full rendering equation is only exact with infinite recursion depth and infinitely detailed models, so obviously some approximations are required.
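To make "approximation by limiting the recursion" concrete, here is a minimal sketch of a one-sample path tracer, my own illustration rather than anything from a real renderer, for a hypothetical scene of a single diffuse sphere under a constant sky. The rendering equation asks us to integrate over every incoming direction and recurse forever; this sketch takes one random direction per bounce and simply gives up after a fixed depth:

#include <cmath>
#include <cstdio>
#include <cstdlib>

struct Vec { double x, y, z; };
Vec add(Vec a, Vec b)      { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec sub(Vec a, Vec b)      { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec scale(Vec a, double s) { return { a.x * s, a.y * s, a.z * s }; }
double dot(Vec a, Vec b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec normalize(Vec a)       { return scale(a, 1.0 / std::sqrt(dot(a, a))); }
double randomUnit()        { return std::rand() / (double)RAND_MAX; }

// Hypothetical scene: one diffuse sphere lit by a constant sky.
const Vec    sphereCenter = { 0.0, 0.0, -3.0 };
const double sphereRadius = 1.0;
const double skyRadiance  = 1.0;   // light arriving from every missed direction
const double albedo       = 0.7;   // diffuse reflectance of the sphere

// Distance along the ray to the sphere, or -1 if it is missed.
double intersect(Vec origin, Vec dir) {
    Vec oc = sub(origin, sphereCenter);
    double b = dot(oc, dir);
    double c = dot(oc, oc) - sphereRadius * sphereRadius;
    double disc = b * b - c;
    if (disc < 0.0) return -1.0;
    double t = -b - std::sqrt(disc);
    return t > 1e-4 ? t : -1.0;
}

// Uniform random direction on the hemisphere around n (rejection sampling).
Vec sampleHemisphere(Vec n) {
    for (;;) {
        Vec d = { 2*randomUnit()-1, 2*randomUnit()-1, 2*randomUnit()-1 };
        double len2 = dot(d, d);
        if (len2 < 1e-6 || len2 > 1.0) continue;
        d = normalize(d);
        return dot(d, n) < 0.0 ? scale(d, -1.0) : d;
    }
}

// One-sample estimate of the rendering equation. The exact equation recurses
// forever; cutting off at 'depth' is the approximation discussed above.
double radiance(Vec origin, Vec dir, int depth) {
    if (depth <= 0) return 0.0;                  // give up: assume no more light
    double t = intersect(origin, dir);
    if (t < 0.0) return skyRadiance;             // ray escaped to the sky
    Vec hit = add(origin, scale(dir, t));
    Vec n   = normalize(sub(hit, sphereCenter));
    Vec wi  = sampleHemisphere(n);
    // Lambertian BRDF (albedo/pi) * cos(theta) * L_i, divided by the sample
    // pdf 1/(2*pi), collapses to 2 * albedo * cos(theta) * L_i.
    return 2.0 * albedo * dot(wi, n) * radiance(hit, wi, depth - 1);
}

int main() {
    double sum = 0.0;
    for (int i = 0; i < 10000; ++i)
        sum += radiance({ 0, 0, 0 }, { 0, 0, -1 }, 4);   // 4 bounces max
    std::printf("estimated radiance toward the camera: %f\n", sum / 10000.0);
    return 0;
}

Averaging many such samples per pixel converges toward the true answer; the fixed recursion depth and the single sample per bounce are exactly the kinds of approximations real renderers have to make.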

There are projects for real-time ray tracing, but the "ultimate" goals of the full rendering equation are still out of reach of current hardware.

Intel made a specialized version of Quake Wars a year or so ago with real-time ray tracing -- but it required 16 processors and ran at about 15 FPS. Ray tracing only represents about 1/3 of the Rendering Equation.




For real-time rendering we instead use a series of transformation and projection matrices, because that approach is much more efficient than tracing rays.

It is easy to approximate a model with triangles. It is easy to manipulate the points in a model in space using some matrix math. After going through a few transformations, you have a point cloud of the scene. It is easy to rasterize the points of a projected triangle. It is easy to perform optimizations like back-face culling and occlusion tests. Once you know the raster positions it is simple to iterate over each point to apply the functions for texturing and shading.
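As a concrete illustration of that path (hypothetical, engine-agnostic code, not tied to any particular API), here is the projection step for a single vertex: multiply by a perspective projection matrix, divide by w, and map the result into pixel coordinates. Real code batches this over whole vertex buffers and includes the model and view transforms as well:

#include <cstdio>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };   // row-major

// Multiply a 4x4 matrix by a column vector.
Vec4 transform(const Mat4& a, const Vec4& v) {
    return {
        a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z + a.m[0][3]*v.w,
        a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z + a.m[1][3]*v.w,
        a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z + a.m[2][3]*v.w,
        a.m[3][0]*v.x + a.m[3][1]*v.y + a.m[3][2]*v.z + a.m[3][3]*v.w
    };
}

int main() {
    // OpenGL-style perspective projection: 90 degree field of view,
    // square aspect ratio, near plane at 1, far plane at 100.
    Mat4 proj = {{
        { 1, 0,  0,             0             },
        { 0, 1,  0,             0             },
        { 0, 0, -101.0f/99.0f, -200.0f/99.0f  },
        { 0, 0, -1,             0             }
    }};

    Vec4 vertex = { 1.0f, 2.0f, -5.0f, 1.0f };   // a point already in view space

    Vec4 clip = transform(proj, vertex);         // projection
    float ndcX = clip.x / clip.w;                // perspective divide
    float ndcY = clip.y / clip.w;

    // Viewport mapping to a 640x480 render target.
    float screenX = (ndcX * 0.5f + 0.5f) * 640.0f;
    float screenY = (1.0f - (ndcY * 0.5f + 0.5f)) * 480.0f;

    std::printf("screen position: %.1f, %.1f\n", screenX, screenY);
    return 0;
}

Once a triangle's three vertices have screen positions like this, back-face culling is a single cross-product sign test, and the rasterizer walks the covered pixels to run the texturing and shading functions.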




We have seen many changes in the way we do things, and will continue to see changes in the future. We have gone from hardwired texturing and lighting, to a configurable fixed-function pipeline, to today's programmable shaders that run on vertices before rasterization and on fragments after it.


Processing power continues to increase. Eventually it will become cost-effective to shift from our current form of projected polygons to a style similar to ray tracing, and later to radiosity-type methods. Someday we'll have enough processing power for the full rendering equation, with the volumetric difficulties solved as well.


Hopefully that answer has a little more backbone.
