Cybermat

3D Graphics pipeline advice.

Recommended Posts

Hi folks, I'm very new to this 3D graphics pipeline business and have been given an assignment at college. It basically requires me to pick an image from a game (I chose Half-Life 2) and describe how this image is converted from 3D to 2D using the 3D graphics pipeline. What I am really looking for is a simple explanation of the process involved.

I am currently using a good book (3D Computer Graphics, 3rd edition, by Alan Watt). It has some great explanations, but I get a bit lost in the text, as I was taught about the pipeline in a slightly different manner, and reading back through the lecture notes I find they are not very explanatory. Do you guys by any chance know of a fairly good dummies' guide to the 3D graphics pipeline, and perhaps an explanation of the process involved in the conversion from 3D to 2D, that I can have a browse through? Thanks in advance.

There is a good technical article on the Direct3D Transformation Pipeline here that takes you right up to screen coordinates being passed to the rasterizer.

As for rasterization itself, you could have a look here, but there may be better explanations of it out there [smile]

Regards,
ViLiO

Whilst I don't particularly rate the technology side of the Source engine, you might be better off picking a simpler image to get started on [smile]

Fundamentally there are three sections of the pipeline that you need to understand:

1. Pipeline setup - this is where the application determines what to draw, in what order, and with what configuration (e.g. the finer details of the effects/processes used)

2. Geometry transformation - this maps the raw geometric data from its original storage format through to a format that can be displayed on the screen. This is very mathematics heavy - lots of matrix transformations in here. Typically this will be the process of Model->World->View->Projection->Screen. Note that it isn't really transformed from 3D to 2D as the final screen-space position will include a Z value stored in the depth buffer.

3. Rasterization - the projection of triangles onto the screen allows us to determine which pixels in the final image 'belong' to which geometry. Here we do texturing and shading.


The above areas can be repeatedly broken down, but they should give you some high-level structure to go with.
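
As a rough sketch of stage 2 for a single vertex (plain Python; the translation-only model/world placement, the 90-degree field of view, and the 1280x720 viewport are all assumptions for illustration, not anything from a real engine):

```python
import math

def mat_vec(m, v):
    # Multiply a 4x4 row-major matrix by a 4-component homogeneous vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def perspective(fov_y_deg, aspect, near, far):
    # Standard right-handed perspective projection (OpenGL-style clip space).
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]

# A vertex in model space (homogeneous coordinates).
v_model = [1.0, 1.0, 0.0, 1.0]

# Model -> World: place the object 5 units in front of the camera.
v_world = mat_vec(translation(0, 0, -5), v_model)
# World -> View: camera sits at the origin here, so the view matrix is identity.
v_view = v_world
# View -> Projection: into clip space.
v_clip = mat_vec(perspective(90, 16 / 9, 0.1, 100.0), v_view)
# Perspective divide: clip space -> normalized device coordinates.
ndc = [v_clip[i] / v_clip[3] for i in range(3)]
# Viewport transform: NDC [-1, 1] -> 1280x720 pixels; z goes to the depth buffer.
x = (ndc[0] * 0.5 + 0.5) * 1280
y = (1.0 - (ndc[1] * 0.5 + 0.5)) * 720   # flip y: screen origin is top-left
depth = ndc[2] * 0.5 + 0.5
print(round(x), round(y), round(depth, 3))
```

A real pipeline concatenates these matrices once per object and applies them to every vertex on the GPU, but the arithmetic is the same. Note the screen-space result still carries a Z value, as point 2 above says.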

Try these excellent diagrams if you want the low-level pipeline details.

hth
Jack

Thanks for the fast response, guys.
I had a choice of 6 images; I think Alan Wake was one, plus Max Payne 2, Mafia, possibly C&C 3, and Half-Life 2. I can't remember the others, tbh.
I will take a look through the articles you mentioned and see if I can get my mind to figure out the basics. Once I am there I can usually get on with things; it's just that starting out is always a bit confusing.
It's more the theory behind the transformation of a 3D game into a 2D image in real time that we have to concentrate on, rather than any specific API, e.g. DirectX or OpenGL.
Stuff like model transformation, lighting, viewing transformations, projection transformation, clipping, and scan conversion/rasterization, etc.
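
As a toy illustration of the scan conversion/rasterization step in that list: the sketch below (plain Python; the screen-space triangle coordinates are made up) tests each pixel centre against the triangle's three edge functions and keeps the pixels that fall inside all of them:

```python
def edge(ax, ay, bx, by, px, py):
    # Signed-area test: positive when point p is to the left of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

# A triangle already in screen space (i.e. after projection and the
# viewport transform), wound counter-clockwise.
tri = [(2, 1), (9, 3), (4, 8)]

covered = []
for y in range(10):
    for x in range(10):
        px, py = x + 0.5, y + 0.5            # sample at the pixel centre
        w0 = edge(*tri[0], *tri[1], px, py)
        w1 = edge(*tri[1], *tri[2], px, py)
        w2 = edge(*tri[2], *tri[0], px, py)
        if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
            covered.append((x, y))

print(len(covered), "pixels covered")
```

Real rasterizers also interpolate depth, texture coordinates, and colours across the triangle from the same edge values, which is where texturing and shading hook in.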

I need to apply all this theory to the image that I have chosen and explain how it has been converted.
It's a pretty cool part of the course, but sadly we had presentations which took about 4 weeks of the time, and then the lecturer was away for another 3 or 4 lectures due to meetings in Germany, so we had to cram it all into about 4 or 5 weeks. That was a pity, as I found this part one of the best things we have done. It also means we crammed in a pile of notes quickly and, looking back, quite poorly, and now that assignment 2 has arrived I find myself thinking, "now what was it he said?" lol

One helpful thing I can point out is that Half-Life 2 uses a depth pre-pass (early-Z rendering): a quick pass that writes only to the depth buffer is performed first. The benefit of this is that the only pixels that get rasterized, textured, or run pixel shader programs are exactly those that will appear in the final image. Without it, a pixel may go through all that processing only to be covered up by another pixel later.

Imagine one of those fancy glass doors with the wavy refraction pattern, with a guy standing in front of it, covering most of it.

Basically, they wager that the cost of the Z pass (which is very fast) is smaller than the cost of the wasted texture/shading effort, which is generally true.
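
That wager can be sketched in miniature. The toy snippet below (plain Python, with a made-up two-pixel "scene" of fragment lists; nothing here is Source engine code) runs a depth-only first pass, then shades only the fragments whose depth matches the buffer, so the occluded fragment behind the "guy" is never shaded at all:

```python
# Hypothetical scene: per-pixel lists of (depth, surface) candidates,
# where a smaller depth is nearer the camera.
fragments = {
    (0, 0): [(0.9, "wall"), (0.3, "guy")],  # "guy" occludes an expensive wall
    (1, 0): [(0.9, "wall")],
}

# Pass 1: depth only. Cheap - no texturing or shading, just nearest z per pixel.
depth_buffer = {p: min(d for d, _ in frags) for p, frags in fragments.items()}

# Pass 2: full shading, but with the depth test set to EQUAL, so only the
# fragment that won pass 1 pays for texturing and the pixel shader.
framebuffer = {}
shaded = 0
for p, frags in fragments.items():
    for d, surface in frags:
        if d == depth_buffer[p]:
            framebuffer[p] = surface   # the expensive shader would run here
            shaded += 1

total = sum(len(f) for f in fragments.values())
print(shaded, "fragments shaded instead of", total)
```

Two of the three candidate fragments get shaded; the hidden wall fragment is rejected by the depth test before any shading work happens, which is exactly the saving described above.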

I realize this doesn't answer your broader question, but a mention of the depth pre-pass with a brief explanation of what it is and why it's useful may be worth some extra points if you cover the rest of your bases well [wink]
