OpenGL Rendering

This may sound stupid, but I've reached a point where I really want to know how the OpenGL rendering process works. For example, what calculations are done so that I see a quad in front of the camera when I just send 4 vertices as 12 floats? What happens with the depth/frame buffer and all the rest of the pipeline? Can someone please point me to some documentation or the like? I've only seen diagrams and such up until now, but I really want to see the exact calculations, and basically how things end up looking so cool sometimes :)
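Just so it's clear what I mean by "4 vertices as 12 floats", here is a rough immediate-mode sketch (the coordinates are placeholders, and it assumes a GL context and camera are already set up elsewhere):

/* 4 vertices, 3 floats each = 12 floats, and a quad appears on screen */
#include <GL/gl.h>

void draw_quad(void)
{
    glBegin(GL_QUADS);
        glVertex3f(-1.0f, -1.0f, -5.0f);   /* 3 floats per vertex ...      */
        glVertex3f( 1.0f, -1.0f, -5.0f);
        glVertex3f( 1.0f,  1.0f, -5.0f);
        glVertex3f(-1.0f,  1.0f, -5.0f);   /* ... 4 vertices = 12 floats   */
    glEnd();
}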

What a question! The only things I can think of are the OpenGL specs,
http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/
and the book "Computer Graphics Using OpenGL" (there are probably other books too, like the OpenGL programming reference and so on).
But other than that I haven't seen any collected information about this.
Personally I know a lot about how the process works, so if you ask a specific question about how one thing works I might be able to answer.

I imagined you were going to point me there; that's about all I can find on the net too...
I came across some calculations at some point that explained why the z-buffer has some glitches or something...
For example, what is the rasterization process, and how does OpenGL generate 2D shapes and sort them in the depth buffer?

The rasterization process is a simple procedure: it basically fills your polygons (after they have been converted from world space to screen space) with pixels.
The pixels are then stored in the different buffers you use (color, depth, stencil).
For the depth buffer, the hardware interpolates the three vertex z coordinates to get the z coordinate of the current pixel (by the way, the limited precision of this interpolated value is what causes z-fighting).
It then compares that value with the old depth value; if it's closer it overwrites it, and if not, the pixel is neither written to the depth buffer nor output to the color buffer.
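To make that concrete, here is a minimal software sketch of the idea (not how the hardware or OpenGL actually implements it; all the names are made up, and a real rasterizer interpolates perspective-correct depth rather than raw z like this does):

/* Tiny triangle rasterizer with a depth test, for illustration only. */
#include <float.h>
#include <stdio.h>

#define W 8
#define H 8

static float    depth[H][W];   /* depth buffer  */
static unsigned color[H][W];   /* color buffer  */

/* signed-area helper used for the barycentric weights */
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

/* Fill one screen-space triangle whose vertices carry depths z0..z2. */
static void draw_triangle(float x0, float y0, float z0,
                          float x1, float y1, float z1,
                          float x2, float y2, float z2,
                          unsigned col)
{
    float area = edge(x0, y0, x1, y1, x2, y2);
    if (area == 0.0f) return;                        /* degenerate triangle */

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float px = x + 0.5f, py = y + 0.5f;      /* pixel center */
            float w0 = edge(x1, y1, x2, y2, px, py) / area;
            float w1 = edge(x2, y2, x0, y0, px, py) / area;
            float w2 = edge(x0, y0, x1, y1, px, py) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue; /* outside triangle */

            /* interpolate the three vertex depths for this pixel */
            float z = w0 * z0 + w1 * z1 + w2 * z2;

            /* depth test: keep the pixel only if it is closer */
            if (z < depth[y][x]) {
                depth[y][x] = z;
                color[y][x] = col;
            }
        }
    }
}

int main(void)
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            depth[y][x] = FLT_MAX;                   /* clear depth buffer */

    /* two overlapping triangles at different depths */
    draw_triangle(0, 0, 0.5f,  7, 0, 0.5f,  0, 7, 0.5f, 1);
    draw_triangle(7, 7, 0.2f,  0, 7, 0.2f,  7, 0, 0.2f, 2);

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            printf("%u", color[y][x]);
        printf("\n");
    }
    return 0;
}

Running it prints an 8x8 grid where the nearer triangle (depth 0.2) wins over the farther one (depth 0.5) wherever they overlap, which is exactly the overwrite-if-closer rule described above.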

Are you asking how model-space vertices are transformed into eye/camera-space points?
I don't really understand the question.
If you need to know about the physical rasterization, I've always taken that for granted; a look at some code snippets from ARB_FP could help, however.
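If the transform chain is what you're after, here is a rough sketch of the fixed-function vertex path in plain C: object space -> eye space -> clip space -> normalized device coordinates -> window coordinates. The matrices are column-major 4x4 arrays like OpenGL's; the helper name and the example matrices are just made up for illustration:

#include <stdio.h>

typedef struct { float x, y, z, w; } vec4;

/* column-major 4x4 matrix times vec4, as the fixed pipeline does */
static vec4 mat4_mul_vec4(const float m[16], vec4 v)
{
    vec4 r;
    r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
    r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
    r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
    r.w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
    return r;
}

int main(void)
{
    /* modelview: identity plus a translation 3 units down the -z axis   */
    float modelview[16]  = {1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,-3,1};
    /* projection: a crude perspective matrix (near = 1, far at infinity) */
    float projection[16] = {1,0,0,0,  0,1,0,0,  0,0,-1,-1,  0,0,-2,0};

    vec4 object = {0.5f, 0.5f, 0.0f, 1.0f};        /* one vertex of the quad */

    vec4 eye  = mat4_mul_vec4(modelview,  object); /* eye/camera space  */
    vec4 clip = mat4_mul_vec4(projection, eye);    /* clip space        */

    /* perspective divide -> normalized device coordinates in [-1, 1] */
    vec4 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };

    /* viewport transform -> window coordinates for a 640x480 viewport */
    float winx = (ndc.x * 0.5f + 0.5f) * 640.0f;
    float winy = (ndc.y * 0.5f + 0.5f) * 480.0f;
    float winz =  ndc.z * 0.5f + 0.5f;   /* this is what lands in the depth buffer */

    printf("window coords: %f %f, depth: %f\n", winx, winy, winz);
    return 0;
}

After this per-vertex work, the rasterization and depth test described earlier take over and turn the transformed vertices into pixels.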
