# Screen space size of an AABB


## Recommended Posts

Hi folks,

I have data binned into a sparse grid of axis-aligned voxels, and I was trying to think of how best to calculate the LOD needed for each voxel. The obvious approach in my mind would be to transform the corners of the voxel's AABB into screen space, and then use that pixel size to determine the LOD. This mostly works great, except that a voxel could be large and hit some combination of the following edge cases:

- some of the corners may be behind the near clipping plane (so they might contribute a larger pixel size than the visible part of the voxel)
- some of the corners may be behind the focal point of the camera (and so they are mirrored in the final image)
- some of the corners might still be inside the visible frustum, so I can't just ignore the voxel in the above cases

So, is there a good way to handle the above cases? Or is there a better way to calculate an LOD for a voxel? I've seen hackish methods that make all kinds of assumptions about the projection matrix. However, I need a solution that is fairly general and would work with pick matrices, matrices with shear, etc.

Thanks for reading. Any ideas are greatly appreciated.

Dave

##### Share on other sites

You could calculate the LOD based on a percentage of the screen's size.

For example, a mesh that would occupy 50% of the screen if it were fully in view would have a value of 0.5, and a mesh slightly larger than the screen dimensions might have a value of 1.2 (these values naturally account for distance from the camera, since items further away appear smaller, which triggers the correct LOD). You could calculate these from the AABB of the mesh as you said, but a normalised comparison makes more sense because it is screen-resolution independent.

You can then have a list of LODs each with a 'minimum screen size' value. For example:

LOD 0 (highest detail): value 1.0

LOD 1: value 0.3333

LOD 2: value 0.1666

LOD 3: value 0.0888
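A minimal sketch of that lookup in Python (the threshold values mirror the example list above; the names and structure are my own, purely illustrative):

```python
# Minimum screen-size value for each LOD, mirroring the example above.
# LOD 0 is the highest detail.
LOD_THRESHOLDS = [1.0, 0.3333, 0.1666, 0.0888]

def select_lod(screen_fraction):
    """Return the first LOD whose minimum screen-size value the object
    still meets; fall back to the coarsest LOD when it meets none."""
    for lod, min_size in enumerate(LOD_THRESHOLDS):
        if screen_fraction >= min_size:
            return lod
    return len(LOD_THRESHOLDS) - 1
```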

Let me know if this helps at all!

Edited by braindigitalis

##### Share on other sites

Transforming the AABB's corner points by the combined projection matrix will give you a solid first idea of where the corners are in normalized projection space ((-1,1) in x, (-1,1) in y, (-1,1) in z). From there you can work out how far away the AABB is for LOD purposes, provided you also know its world-space corner sizes (even a mesh covering the entire screen can be of minimal LOD, a planet for example).

##### Share on other sites

Transforming the AABB's corner points by the combined projection matrix

I would add that to retrieve normalized device coordinates, you also have to perform the w division on the transformed vector:

IntoProjectionTransform(V) = [x, y, z, w]

normalizedDeviceVector = [x/w, y/w, z/w, w]
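A minimal sketch of that transform-and-divide step, assuming a row-major 4x4 numpy matrix (names are illustrative):

```python
import numpy as np

def to_ndc(point, mvp):
    """Transform a 3D point by a 4x4 projection (or MVP) matrix, then
    apply the perspective division by w to reach normalized device
    coordinates. Keeps w around, as in the post above."""
    x, y, z, w = mvp @ np.array([point[0], point[1], point[2], 1.0])
    return (x / w, y / w, z / w, w)
```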

##### Share on other sites

@braindigitalis, I need screen space, and not NDC, because I will want to take my viewport size into account. If I have a large viewport, I will want to display a higher LOD.

@JohnnyCode, maybe I am missing something, but I don't see how your suggestion differs from mine. Though, I don't understand what you mean when you say "if you also know its world space corners sizes". I know the scale of the data in the AABB, if that's what you mean. I just need to know what the pixel-space size of the AABB is, so I can decimate the data in that AABB accordingly. The core problem is: my approach has some funky cases when the AABB is partially behind the near clipping plane and/or behind the camera position itself.

I'm open to other ways of computing LOD, if there is one that does not involve transforming all the points.

##### Share on other sites

I need screen space, and not NDC, because I will want to take my viewport size into account.

Normalized device coordinates are what is visible; they are the "screen space" you describe. Visible (x, y) values lie in the range (-1, 1), left to right and top to bottom, with zero in the middle; z is 0.0 at the near clip plane and 1.0 (or -1.0) at the far clip plane.

I just need to know what the pixel-space size of the AABB is, so I can decimate the data in that AABB accordingly. The core problem is: my approach has some funky cases when the AABB is partially behind the near clipping plane and/or behind the camera position itself.

Do I understand correctly that you render those AABBs? I suggested trivially finding the projection-space points of the AABB to see what they cover on screen (compute a bounding pixel rectangle if you want, or analyze anything else).

##### Share on other sites

Normalized device coordinates are what is visible; they are the "screen space" you describe. Visible (x, y) values lie in the range (-1, 1), left to right and top to bottom, with zero in the middle; z is 0.0 at the near clip plane and 1.0 (or -1.0) at the far clip plane.

OK, when I say "screen space" I mean "pixel space", not NDC. Perhaps I am not using the right terminology. Regardless, finding the size of the object in NDC does not tell me how many pixels it occupies on screen. So, I have no way of telling the difference between an object taking up N percent of a small viewport vs the same N percent of a large viewport. The former would require a different LOD than the latter.

Do I understand correctly that you render those AABBs? I suggested trivially finding the projection-space points of the AABB to see what they cover on screen (compute a bounding pixel rectangle if you want, or analyze anything else).

Yes, I am doing exactly as you state. I don't "render" the AABB, in the sense that I don't send its verts to the GPU or anything. I simply transform each of the 8 corners of the AABB into pixel space (using the model-view-projection matrix, and then transforming from NDC to pixel space). The bounding box of those 8 transformed points does indeed give me the pixel size of the object. However, I am still running into the weird problems that I mentioned in my original post, when the AABBs occupy some part of the visible frustum but also extend way behind the near plane. THAT is the problem I need to solve, or work around.
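For concreteness, a hedged sketch of that corner-projection pipeline (assuming a row-major numpy MVP matrix and a y-down pixel origin at the top left; it still suffers the near-plane breakdown described above, since the divide misbehaves when w approaches zero or goes negative):

```python
import numpy as np

def aabb_pixel_bounds(corners, mvp, viewport_w, viewport_h):
    """Project world-space AABB corners to pixel space and return the
    bounding rectangle (min_x, min_y, max_x, max_y). No handling yet
    for corners behind the near plane (w <= 0)."""
    pts = []
    for c in corners:
        x, y, z, w = mvp @ np.array([c[0], c[1], c[2], 1.0])
        nx, ny = x / w, y / w                       # perspective divide to NDC
        px = (nx * 0.5 + 0.5) * viewport_w          # NDC x in (-1,1) -> pixels
        py = (1.0 - (ny * 0.5 + 0.5)) * viewport_h  # NDC y flipped for screen
        pts.append((px, py))
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))
```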

##### Share on other sites

Not directly what you're looking for, but you could treat the AABBs as bounding spheres instead, and then compute the projected area of the 2D ellipse they form on the screen: http://iquilezles.org/www/articles/sphereproj/sphereproj.htm

However, that technique does also break down when the box/sphere is mostly off the screen but still visible... The above functions will produce incredibly large negative numbers for the pixel-area in that case :(
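Not the ellipse-area math from that article, but for comparison, a common back-of-the-envelope bounding-sphere estimate (my own hedged sketch, assuming a symmetric perspective projection) compares the sphere radius to the frustum half-height at the sphere's distance:

```python
import math

def sphere_screen_fraction(radius, distance, fov_y):
    """Rough fraction of screen height covered by a bounding sphere's
    diameter, for a symmetric perspective projection with vertical
    field of view fov_y (radians). Ignores the ellipse distortion the
    linked article accounts for, and blows up as the sphere approaches
    the camera (distance -> 0)."""
    half_height = distance * math.tan(fov_y / 2.0)  # frustum half-height at d
    return radius / half_height
```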

##### Share on other sites

Not directly what you're looking for, but you could treat the AABBs as bounding spheres instead, and then compute the projected area of the 2D ellipse they form on the screen: http://iquilezles.org/www/articles/sphereproj/sphereproj.htm

However, that technique does also break down when the box/sphere is mostly off the screen but still visible... The above functions will produce incredibly large negative numbers for the pixel-area in that case

Thanks Hodgman. I actually found a decent hack of a solution. If any corner point lies behind the near plane, I project it onto that near plane before transforming, and I get a decent result. There is a bit of higher-order discontinuity in the pixel size (i.e., as I move the camera forward, the change in pixel size slows down a bit when the corners start crossing the near plane). But this seems better than the degenerate case when corners get close to the plane that the camera is on.
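One way to sketch that workaround (my own interpretation, done in view space, assuming an OpenGL-style convention where visible z is negative): clamp offending corners onto the near plane before projecting.

```python
NEAR_EPS = 1e-4  # hypothetical tiny offset so the clamped point stays projectable

def clamp_to_near_plane(corner_view, near):
    """If a view-space corner lies behind (or on) the near plane, push
    its z onto the plane so the subsequent projection stays finite.
    Assumes visible points have z < -near (right-handed view space)."""
    x, y, z = corner_view
    if z > -near:
        z = -(near + NEAR_EPS)
    return (x, y, z)
```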
