Eye space vs screen space


Recommended Posts

Hey,

I'm a bit worried this isn't the correct place to post this, but here goes:

What is the difference, if any, between eye space and screen space?

I've sort of made the mental assumption that eye space is 'world space' expressed in an orthonormal (ON) basis placed in the 'viewport' plane, with the normal of that plane as the third basis vector.

In contrast, in my mind, screen space is spanned by the plane of the viewport, with geometry projected onto the screen plane. So, screen space is no longer 3D, but a 2D representation ready for rasterization / shading and stuff.

Does anyone care to tell me I'm stupid? Or pleasantly correct me? Or tell me I'm kind of right?

View-space, eye-space, camera-space, and screen-space are sometimes used interchangeably in a lot of literature, which confuses things...
Though yes, I usually use 'screen-space' when I'm referring to a 2D coordinate system --- either in integer pixel coordinates ([0,0] to [width-1,height-1]), normalized pixel coordinates ([0,0] to [1,1]), or NDC ([-1,-1] to [1,1]).
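A tiny sketch of how those three 2D conventions relate (helper names are mine, and it assumes a top-left pixel origin; GL's bottom-left window origin would flip y):

```python
def ndc_to_pixel(x_ndc, y_ndc, width, height):
    # NDC [-1,1] -> normalized pixel coords [0,1]
    u = (x_ndc + 1.0) * 0.5
    v = (y_ndc + 1.0) * 0.5
    # normalized -> integer pixel coords [0, width-1] x [0, height-1]
    px = min(int(u * width), width - 1)
    py = min(int(v * height), height - 1)
    return px, py
```

So (-1,-1) maps to pixel (0,0) and (+1,+1) maps to (width-1, height-1).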

If someone said 'screen-space' when they actually meant 3D view-space, I'd get a bit confused.

View-space, eye-space, and camera-space are all the same to me: the 3D world-space points after they've been transformed by the view matrix (the inverse camera matrix).
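A rough sketch of that "view matrix = inverse camera matrix" relationship (all names are mine): if the camera's matrix maps camera-local points into the world as p_w = R * p_c + t, then the view transform applies the inverse, p_c = R^T * (p_w - t):

```python
def transpose3(m):
    # transpose of a 3x3 rotation = its inverse (orthonormal basis)
    return [[m[j][i] for j in range(3)] for i in range(3)]

def mat_vec3(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def world_to_view(p_world, cam_rot, cam_pos):
    # camera matrix: p_w = R * p_c + t   (camera-local -> world)
    # view matrix:   p_c = R^T * (p_w - t)  (world -> view/eye space)
    rel = [p - t for p, t in zip(p_world, cam_pos)]
    return mat_vec3(transpose3(cam_rot), rel)
```

With an identity rotation this degenerates to just subtracting the camera position, which matches the intuition that the camera sits at the view-space origin.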

Post-projection space / NDC-space are the 3D view-space points that have been transformed by the projection matrix and have undergone perspective division. From here you simply drop the z component to get screen-space.

Though, I've also seen people refer to post-projection-space to be the 3D view-space points that have been transformed by the projection matrix but have not yet undergone perspective division... And once that division occurs, you're in NDC-space (screen-space with z).
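That whole chain can be sketched in a few lines (names are mine, assuming an OpenGL-style symmetric frustum looking down -z; the intermediate cx/cy/cz/cw values are the "post-projection" clip coordinates before the divide):

```python
import math

def perspective_project(p_view, fov_y_deg, aspect, near, far):
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    x, y, z = p_view
    # clip-space coordinates (post-projection space, before the divide)
    cx = (f / aspect) * x
    cy = f * y
    cz = (far + near) / (near - far) * z + (2.0 * far * near) / (near - far)
    cw = -z  # w carries the view-space depth
    # perspective divide -> NDC ("screen-space with z")
    return (cx / cw, cy / cw, cz / cw)
```

A point on the near plane lands at NDC z = -1 and a point on the far plane at z = +1; dropping the z component then gives the 2D screen-space coordinate described above.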

Thanks, that clears it up.

Another question:

The above page says:

"A fragment is basically a pixel before it is rasterized."

This confuses me a bit. I thought rasterization was the operation that turns vector geometry into dots/pixels. But I've learned that a fragment is basically a 'possible' pixel for a certain location, one that can be blended with other 'possible' pixels at the same location. That means a fragment is actually something produced by rasterization, which doesn't seem to mesh with that quote from opengl.org. Can somebody clear this up?

Fragment vs. pixel is kind of inconsequential. As far as I understand, in OGL pixel shaders are called fragment shaders because a fragment does not automatically correspond to a pixel. For example, in the pixel/fragment shader you could do a texkill (discard in GLSL), which would result in no pixel being written. I could be wrong though, never spent too much time mulling it over...
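A toy illustration of that distinction (my own code, not any real API): rasterization emits candidate fragments per pixel location, and per-fragment tests (discard/texkill, the depth test) then decide which one, if any, becomes the final pixel:

```python
def resolve_pixel(fragments):
    # fragments: list of (depth, color) candidates for one pixel location;
    # color = None models a discarded (texkill'd) fragment
    surviving = [f for f in fragments if f[1] is not None]
    if not surviving:
        return None  # no fragment wrote this pixel at all
    # simple depth test: the nearest surviving fragment wins
    return min(surviving, key=lambda f: f[0])[1]
```

So several fragments can exist for one location, but only after these tests (and any blending) does one of them actually become the pixel.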
