# What is camera space?


## Recommended Posts

Hi everyone !

I am trying to understand a paper about Scalable Ambient Obscurance.

They refer to multiple types of spaces, but I am wondering: what is camera space, and how does it differ from screen space?

Here are all the sources and here is the part referring to camera space.

```glsl
float sampleAO(in ivec2 ssC, in vec3 C, in vec3 n_C, in float ssDiskRadius, in int tapIndex, in float randomPatternRotationAngle)
{
    // Offset on the unit disk, spun for this pixel
    float ssR;
    vec2 unitOffset = tapLocation(tapIndex, randomPatternRotationAngle, ssR);

    // The occluding point in camera space
    vec3 Q = getOffsetPosition(ssC, unitOffset, ssR);

    vec3 v = Q - C;

    float vv = dot(v, v);
    float vn = dot(v, n_C);

    const float epsilon = 0.01;

    // A: From the HPG12 paper
    // Note large epsilon to avoid overdarkening within cracks
    // return float(vv < radius2) * max((vn - bias) / (epsilon + vv), 0.0) * radius2 * 0.6;

    // B: Smoother transition to zero (lowers contrast, smoothing out corners). [Recommended]
    float f = max(radius2 - vv, 0.0);
    return f * f * f * max((vn - bias) / (epsilon + vv), 0.0);

    // C: Medium contrast (which looks better at high radii), no division.  Note that the
    // contribution still falls off with radius^2, but we've adjusted the rate in a way that is
    // more computationally efficient and happens to be aesthetically pleasing.
    // return 4.0 * max(1.0 - vv * invRadius2, 0.0) * max(vn - bias, 0.0);

    // D: Low contrast, no division operation
    // return 2.0 * float(vv < radius * radius) * max(vn - bias, 0.0);
}
```

Is it possible to convert a camera-space position (Q in this case) to screen space? How?

Related to this, are there any papers, blog posts, or articles explaining all the available spaces and how to convert from one to another (world space, tangent space, object space, screen space, light space, camera space, etc.)?

Thank you very much !

---

Camera space (or view space) is the space of the entire world with the camera, or viewpoint, at the origin: every coordinate of everything in the world is measured in units relative to the camera, but it's still a full 3D space.

Screen space is basically the pixels you see on your screen. It's a 2D space, but you might also have buffers other than the color buffer (the pixels you see), like the depth buffer. From those you can reconstruct a kind of 3D space, since each pixel gives you an implicit (x, y) position with an explicit depth (essentially z), and that's what gets used in these SSAO techniques, if I understand them correctly.
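To answer the conversion question directly: yes. Camera space to screen space is a projection-matrix multiply, a perspective divide, and a viewport transform; inverting those steps (using a depth value, e.g. from the depth buffer) gets you back. Here's a minimal NumPy sketch of both directions, assuming standard OpenGL conventions (camera looking down -Z, symmetric frustum). The function names are mine, not from the paper.

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    # OpenGL-style symmetric perspective projection matrix
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0,                          0.0],
        [0.0,        f,   0.0,                          0.0],
        [0.0,        0.0, (far + near) / (near - far),  2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                         0.0],
    ])

def camera_to_screen(p_cam, proj, width, height):
    # camera space -> clip space -> NDC (perspective divide) -> window pixels
    clip = proj @ np.append(p_cam, 1.0)
    ndc = clip[:3] / clip[3]                 # perspective divide
    x = (ndc[0] * 0.5 + 0.5) * width         # viewport transform (GL: y is up)
    y = (ndc[1] * 0.5 + 0.5) * height
    return np.array([x, y]), ndc[2]          # pixel coords + NDC depth

def screen_to_camera(pixel, depth_ndc, proj, width, height):
    # the inverse: pixel + depth -> NDC -> unproject -> camera space
    ndc = np.array([pixel[0] / width * 2.0 - 1.0,
                    pixel[1] / height * 2.0 - 1.0,
                    depth_ndc, 1.0])
    p = np.linalg.inv(proj) @ ndc
    return p[:3] / p[3]                      # undo the perspective divide

# A point 5 units in front of the camera (camera looks down -Z)
proj = perspective(np.radians(60.0), 16 / 9, 0.1, 100.0)
pixel, depth = camera_to_screen(np.array([0.0, 0.0, -5.0]), proj, 1280, 720)
back = screen_to_camera(pixel, depth, proj, 1280, 720)
print(pixel, back)  # a point on the camera axis lands at the screen centre (640, 360)
```

Note that real pixel coordinates are often y-down, so you may need to flip y depending on your API and render target.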

---

I'm not sure if the vertex transformation pipeline is still taught, but the coordinate spaces go from

Vertex -> Object Coordinates -> (ModelView Matrix) -> Eye Coordinates -> (Projection Matrix) -> Clip Coordinates -> (Perspective Divide) -> Normalized Device Coordinates -> (Viewport Transform) -> Window Coordinates.

Essentially it's an effect of matrix multiplication. When you move everything to camera space, you're transforming your world coordinates into a frame that has the camera at its origin.

Following the transformation pipeline, you can locate an object in screen space, provided the object is within the camera's view. Technically speaking, your vertex shader (plus the fixed-function divide and viewport stages after it) handles this for you.
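The chain above can be sketched stage by stage in NumPy. The matrices and numbers here are hypothetical toy values (identity view, object translated 10 units down -Z, near = 1, far = 100), just to make each step concrete:

```python
import numpy as np

# A vertex at the object's origin, in homogeneous coordinates
vertex_obj = np.array([0.0, 0.0, 0.0, 1.0])

model = np.eye(4)
model[2, 3] = -10.0        # object space -> world space: place object at z = -10
view = np.eye(4)           # world space -> eye/camera space: camera at the origin

proj = np.array([          # eye space -> clip space (perspective, near=1, far=100)
    [1.0, 0.0,  0.0,          0.0],
    [0.0, 1.0,  0.0,          0.0],
    [0.0, 0.0, -101.0 / 99.0, -200.0 / 99.0],
    [0.0, 0.0, -1.0,          0.0],
])

eye = view @ (model @ vertex_obj)   # the "ModelView" step
clip = proj @ eye                   # projection
ndc = clip[:3] / clip[3]            # perspective divide -> normalized device coords
window = (ndc[:2] * 0.5 + 0.5) * np.array([800.0, 600.0])  # viewport transform, 800x600
print(window)                       # the on-axis vertex lands at the centre: (400, 300)
```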

---

Thanks @Ravyne, I understand now. Thanks for your explanations too, @Tangletail.

