Your "view" matrix is the
inverse of the camera's transformation matrix: it brings coordinates from world-space into the camera's local frame of reference. So if you took the camera's position in world space and transformed it by your view matrix, you would get a result of (0, 0, 0). If you were to create a matrix that contained the camera's transformation (the combined rotation and translation), then the last row would be the camera's position.
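As a minimal sanity-check sketch of that relationship (CamWorldMatrix is an illustrative name for the camera's world transform, not from your code; ViewMatrix is assumed to be its inverse):
// A minimal sketch, assuming CamWorldMatrix (a float4x4) is the camera's
// world transform and ViewMatrix is its inverse. CamWorldMatrix is an
// illustrative name, not from the original code.
float3 camPos = CamWorldMatrix[3].xyz;                    // last row holds the camera's world-space position
float4 camInView = mul(float4(camPos, 1.0f), ViewMatrix); // bring it into view space
// camInView.xyz comes out as (0, 0, 0), up to floating-point precision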
As for the different depth values, it looks like you're choosing between two different possibilities here. The first one, which you currently have in your code, is the distance between the camera and the surface point being shaded:
Pos = mul(Pos, WorldMatrix);            // transform the surface position to world space
float depth = length(Pos.xyz - CamPos); // absolute camera-to-surface distance (assumes CamPos is a world-space float3)
This works, and is totally fine as long as "Pos" and "CamPos" are in the same coordinate space, so if they're both in world space you're all set. Note that it doesn't matter whether you do "Pos - CamPos" or "CamPos - Pos" here: length() is symmetric, so both give the same distance. The above calculation is also equivalent to the following:
Pos = mul(Pos, WorldMatrix);   // local -> world space
Pos = mul(Pos, ViewMatrix);    // world -> view space
float depth = length(Pos.xyz); // CamPos is (0, 0, 0) in view space
So if you transform a position to view space, then it's implicitly local to the camera. As I mentioned earlier, the camera is at (0, 0, 0) in view space, so you can just compute the length of the view-space position to get the distance from the camera.
Now let's talk about using just the "z" component of your view space position:
Pos = mul(Pos, WorldMatrix); // local -> world space
Pos = mul(Pos, ViewMatrix);  // world -> view space
float depth = Pos.z;         // projected depth along the camera's local Z axis
This is also valid, but it gives you a different value than what we computed earlier. The previous value was the distance between the camera and the surface; this time we've computed the result of projecting the camera->surface vector onto the camera's local Z axis. Here's a quick diagram:
[attachment=34082:Depth_Projection.png]
So in this diagram, the blue point at "C" is the camera, and the blue arrow is the direction that it's facing (AKA its local Z axis). The light blue point at "P" is the surface being shaded. The length of the red line represents the first depth value: it's the absolute distance between the camera and the surface. The length of the green line is the other depth value: it's the projection of the camera->surface vector onto the camera's local Z axis. The length of this projection is the length of C->P multiplied by the cosine of the angle between C->P and the local Z axis, which makes it easy to compute with a dot product. In world space you would do it like this:
Pos = mul(Pos, WorldMatrix);                    // transform the surface position to world space
float3 camToPos = Pos.xyz - CamPos;             // vector from the camera to the surface
float projectedDepth = dot(camToPos, CamZAxis); // assumes CamZAxis is normalized
If you do the same thing in view space, then CamZAxis is implicitly (0, 0, 1), since it's the local Z axis. The dot product then expands to camToPos.x * 0 + camToPos.y * 0 + camToPos.z * 1, which is just camToPos.z. And since camToPos = Pos - (0, 0, 0) in view space, you can just use Pos.z.
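Written out as code, that reduction looks like this (just an illustrative restatement of the view-space snippet above):
Pos = mul(Pos, WorldMatrix); // local -> world space
Pos = mul(Pos, ViewMatrix);  // world -> view space
float projectedDepth = dot(Pos.xyz, float3(0.0f, 0.0f, 1.0f)); // reduces to just Pos.z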
This might seem obvious, but it's worth pointing out that the projected depth doesn't depend on the surface's position with respect to the camera's local X and Y directions. What this means is that you could move "P" anywhere along the orange line and you would get the same depth value. This is not true if you use absolute distance.
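As a concrete example with made-up numbers, here are two hypothetical view-space points that share a projected depth but not a distance:
float3 p0 = float3(0.0f, 0.0f, 10.0f); // projected depth = 10, distance = 10
float3 p1 = float3(5.0f, 0.0f, 10.0f); // projected depth = 10, distance ~= 11.18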
So now that we've gone through all of that, which one should you use? That really depends on what you're going to do with the depth value: either one is useful in different scenarios, and you can always compute one from the other with a bit of math. Note that the value stored in a GPU's depth buffer is a projected depth, not an absolute distance; however, it's also non-linearly warped as part of the perspective projection (post-projection depth ends up being a function of 1/z rather than z).
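For example, here's a hedged sketch of converting between the two values, assuming viewRay is the unnormalized view-space direction from the camera through the pixel (viewRay is a hypothetical name, e.g. an interpolated frustum corner ray):
// A minimal sketch: a view-space point along the ray is P = t * viewRay, so
//   projectedDepth = t * viewRay.z    and    distance = t * length(viewRay)
float distance = projectedDepth * length(viewRay) / viewRay.z; // projected depth -> distance
float projected = distance * viewRay.z / length(viewRay);      // distance -> projected depth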