Having problems with 3D pipeline

3 comments, last by sab7 8 years, 1 month ago

Hi,
I have a question about the 3D pipeline: I'm not sure in which space backface culling should be done, and in which space the polygon face normals should be calculated for lighting.

A little background: just for educational purposes, I currently have a program I wrote in C++ that displays/projects a 3D cube in the middle
of a window and lets me rotate it around using the W, A, S, D keys. Within this program I have a function called TransformWorld() that takes this cube
and the current yaw, pitch and roll as specified by the user, and then transforms it from object -> world -> camera -> projected -> window/screen coords.
Finally, I place a point light somewhere in the world and colour each face using the angle between the face and the light as a constant shade value.
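
Roughly, that shading step looks like this (just a sketch, not my actual TransformWorld() code; Vec3 and the helpers are placeholders for whatever I have in my own math code, and it assumes counter-clockwise winding for the normal direction):

#include <algorithm>
#include <cmath>

// Placeholder vector type and helpers - my real code has its own versions.
struct Vec3 { float x, y, z; };

Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  Cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
Vec3 Normalize(const Vec3& v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Constant (flat) shade for one face: face normal from two edges (assumes
// counter-clockwise winding), light direction from the face centre to the
// point light, and the cosine of the angle between them as the shade value.
float FlatShade(const Vec3& v0, const Vec3& v1, const Vec3& v2, const Vec3& lightPos)
{
    Vec3 normal  = Normalize(Cross(Sub(v1, v0), Sub(v2, v0)));
    Vec3 centre  = { (v0.x + v1.x + v2.x) / 3.0f,
                     (v0.y + v1.y + v2.y) / 3.0f,
                     (v0.z + v1.z + v2.z) / 3.0f };
    Vec3 toLight = Normalize(Sub(lightPos, centre));
    return std::max(0.0f, Dot(normal, toLight)); // clamp so faces pointing away get 0
}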

So I have all these coordinate spaces. I would have thought logically that backface calculations and surface normals would happen in world space where
everything still makes sense. But after trying a few different things I could only get it to look correct by calculating them after projection. This
makes me sad.

I can post some code for this TransformWorld() function if it will make things clearer; it's a little long, though heavily commented.

Thanks for any feedback or pointers to specific/good places to find this information.


World space is too early - consider that the camera transformation can (and likely will) rotate the geometry so that faces which were "backfacing" (if that even makes sense without a camera) are no longer oriented in their original directions. Projection can also affect the orientation of the faces with regard to the screen. Backface culling should be done, at the earliest, after projection.

Niko Suni

As you say, world space is too early; I neglected to say that in my toy system the transform from world space to camera space is the identity matrix, so they're really the same thing here.

Would projection make a poly that is facing away from you warp around so that it could then be facing you, or vice versa, though? Intuitively I didn't think projection would do that.

It depends on the projection matrix that you provide, but in general, if you're using the usual orthographic/perspective cameras, no.

It might help to think about different spaces as different coordinate systems or "different views of perspective" rather than thinking that transforms 'move' or 'rotate' your objects.

First, you have a cube in model space coordinates. When you apply the model-to-world transform (known by many as the world transform), you get the cube in world space coordinates: the world transform takes the cube from model space to world space. Apply the same logic for the world-to-camera (view) transform, and then for the projection transform, which takes geometry from view space coordinates to normalized device coordinates (imagine a -1 to 1 box). From that -1 to 1 box you can scale up to whatever your window size is.
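
As a rough sketch of that chain for a single vertex (the Mat4/Vec types and the Multiply helper are just placeholders for whatever math code you already have; this assumes row-major matrices and a perspective divide by w):

// Placeholder types - substitute whatever your own math code provides.
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; }; // row-major

Vec4 Multiply(const Mat4& m, const Vec4& v)
{
    return { m.m[0][0] * v.x + m.m[0][1] * v.y + m.m[0][2] * v.z + m.m[0][3] * v.w,
             m.m[1][0] * v.x + m.m[1][1] * v.y + m.m[1][2] * v.z + m.m[1][3] * v.w,
             m.m[2][0] * v.x + m.m[2][1] * v.y + m.m[2][2] * v.z + m.m[2][3] * v.w,
             m.m[3][0] * v.x + m.m[3][1] * v.y + m.m[3][2] * v.z + m.m[3][3] * v.w };
}

// Model -> world -> view -> clip -> NDC -> window, for one vertex.
Vec3 ProjectVertex(const Vec3& modelPos,
                   const Mat4& world, const Mat4& view, const Mat4& proj,
                   float windowWidth, float windowHeight)
{
    Vec4 p = { modelPos.x, modelPos.y, modelPos.z, 1.0f };
    p = Multiply(world, p); // model space -> world space
    p = Multiply(view,  p); // world space -> camera/view space
    p = Multiply(proj,  p); // view space  -> clip space

    // Perspective divide: clip space -> normalized device coordinates (the -1 to 1 box).
    Vec3 ndc = { p.x / p.w, p.y / p.w, p.z / p.w };

    // Viewport transform: NDC -> window coordinates (y flipped for typical window coords).
    return { (ndc.x * 0.5f + 0.5f) * windowWidth,
             (1.0f - (ndc.y * 0.5f + 0.5f)) * windowHeight,
             ndc.z };
}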

Backface culling can be applied even after projecting the triangle points onto your 2D viewport. In fact, since you then only have 2D coordinates to worry about, it's much simpler to do backface culling in that 2D space than in 3D world space.
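
For example, once you have the three window-space positions of a triangle, the sign of its 2D signed area tells you its winding, and therefore whether it faces you. A minimal sketch (Vec2 is a stand-in for your projected point type):

struct Vec2 { float x, y; }; // projected window-space position

// Twice the signed area of the projected triangle; its sign encodes the winding.
float SignedArea2(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Which sign counts as "front" depends on your winding order and on whether
// y points up or down on screen - flip the comparison if faces disappear
// the wrong way round.
bool IsBackFacing(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return SignedArea2(a, b, c) <= 0.0f;
}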

Thanks for the replies guys, I appreciate it. I think what I've failed to do here is sit down and draw some pictures of possible scenarios. What you say about the different views of perspective is helpful.

Reading this http://twimgs.com/ddj/abrashblackbook/gpbb61.pdf this morning helped a little too.

Cheers.

This topic is closed to new replies.
