Partial rendering in a small box

5 comments, last by Mr_Fox 7 years, 3 months ago

Hi,

I have a big volume/mesh and I only want to render part of it.

The part is defined in a small 3D box and only volume/mesh inside this box is rendered.

There are two situations: either the data is too big to fit in video memory, or it can be loaded into video memory entirely.

For both cases, I just want to render the volume/mesh inside the box.

How can I do this?

Thanks in advance.

YL


For data that can be fully loaded, just create a proxy sprite (based on the 3D box's world position) as the RT for volume rendering (the UVs also depend on the 3D box position), and place this sprite on your 3D box (or map the RT texture onto the box).

For data that is too big, it's trickier:

If your 3D box's movement is predictable, and you have a streaming system running in the background, then there are two choices I can think of:

1. 3D tiled resources (maybe only the latest GPUs support them). If you are satisfied with supporting only the latest GPUs, I think that does exactly what you want: only the needed tiles are in VRAM.

2. Brick your huge Texture3D into lots of smaller ones (a handmade tiled resource).

Then, based on your visibility prediction, stream tiles/small Texture3D objects in and out accordingly, and do the aforementioned rendering.

If your 3D box's movement is not predictable, then I have no idea...

However, if your volume is sparse (like an SDF), take advantage of the sparsity: build a spatial structure (like an octree), and you can get by with a much smaller proxy Texture3D (storing offsets into a typed buffer) along with a small typed buffer (storing the actual voxel data). In most cases you can then fit it all into VRAM. Modify your volume renderer to go through one extra indirection: sample the proxy Texture3D to find the actual voxel data (or return early if the voxel is empty). A minimal sketch of that indirection is below.
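A rough HLSL sketch of that extra indirection, assuming a brick-based layout; the resource names, brick size, and empty-brick sentinel are all made up for illustration:

```hlsl
// Sketch only: sparse-volume indirection. A low-res proxy Texture3D stores,
// per brick, an offset into a flat buffer of voxel data (or a sentinel for
// "empty"). Names (gProxy, gVoxels, BRICK_SIZE) are illustrative.
#define BRICK_SIZE  8
#define EMPTY_BRICK 0xFFFFFFFF

Texture3D<uint> gProxy  : register(t0); // one texel per brick
Buffer<float>   gVoxels : register(t1); // packed voxel data for occupied bricks

float SampleSparseVolume(uint3 voxel)
{
    uint3 brick  = voxel / BRICK_SIZE;          // which brick the voxel falls in
    uint  offset = gProxy.Load(int4(brick, 0)); // base offset of that brick's data
    if (offset == EMPTY_BRICK)
        return 0.0f;                            // empty space: early out

    uint3 local = voxel % BRICK_SIZE;           // position inside the brick
    return gVoxels[offset
                   + local.z * BRICK_SIZE * BRICK_SIZE
                   + local.y * BRICK_SIZE
                   + local.x];
}
```

The design point is that the proxy texture is tiny (one texel per brick), so only occupied bricks cost real voxel storage.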

I am not quite clear about what you said for the first case, i.e., when the data can be loaded into VRAM.

Could you please point me to some articles that discuss the details of this?

Many thanks!

Unfortunately I don't have links for that, but the idea is straightforward:

To render a standard mesh within the 3D box, use custom clipping planes (six of them, one for each box face) to clip the mesh to the part inside the box. Note that in D3D11 user clip planes are not fixed-function, so the vertex shader does need one small change: output an SV_ClipDistance value per plane (a sketch follows).
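A minimal vertex shader sketch, assuming an axis-aligned world-space box whose bounds live in a constant buffer; all constant and semantic names here are illustrative:

```hlsl
// Sketch only: clipping a mesh to an axis-aligned world-space box with
// SV_ClipDistance. The constant names (gWorldViewProj, gWorld, gBoxMin,
// gBoxMax) are assumptions for illustration.
cbuffer PerObject : register(b0)
{
    float4x4 gWorldViewProj;
    float4x4 gWorld;
    float3   gBoxMin; float pad0;
    float3   gBoxMax; float pad1;
};

struct VSOut
{
    float4 pos     : SV_Position;
    float3 clipMin : SV_ClipDistance0; // distances past the three "min" faces
    float3 clipMax : SV_ClipDistance1; // distances before the three "max" faces
};

VSOut VS(float3 posL : POSITION)
{
    VSOut o;
    float3 wpos = mul(float4(posL, 1.0f), gWorld).xyz;
    o.pos = mul(float4(posL, 1.0f), gWorldViewProj);
    // Positive means "inside"; the rasterizer clips wherever a value goes negative.
    o.clipMin = wpos - gBoxMin;
    o.clipMax = gBoxMax - wpos;
    return o;
}
```

For an oriented box, replace the two subtractions with six dot(float4(wpos, 1), plane) evaluations, one per face plane.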

To render a volume (Texture3D):

1. Rasterize your 3D box as normal, but keep the 3D position per vertex (so in your VS, output the world-space position).

2. Run a standard raycasting algorithm in the PS (you need the view pos and the pos from the VS; then for each raycast step, calculate the Texture3D coordinate and use that to sample your volume and accumulate the result). You probably need to do this after rendering all your opaque objects, so that in your raycasting PS you can query the depth buffer for correct occlusion and sample the color buffer for the final color merge (sketched below).
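A rough sketch of such a raycasting PS, with made-up resource and constant names. It assumes the camera is outside the box, that the volume already stores color and opacity (e.g. a transfer function was applied), and, as a simplification, that the opaque pass's depth was resolved to a per-pixel camera distance (gSceneDist):

```hlsl
// Sketch only: an accumulative raycasting PS over a world-space box.
// All resource and constant names are illustrative.
Texture3D<float4> gVolume      : register(t0);
Texture2D<float>  gSceneDist   : register(t1); // distance to nearest opaque surface
SamplerState      gLinearClamp : register(s0);

cbuffer PerFrame : register(b0)
{
    float3 gViewPos;    float  gStepSize; // camera pos (world), march step (world)
    float3 gBoxMin;     float  pad0;      // box bounds, world space
    float3 gBoxMax;     float  pad1;
    float2 gScreenSize; float2 pad2;
};

float4 PS(float4 svpos : SV_Position, float3 wpos : WORLDPOS) : SV_Target
{
    float3 dir  = normalize(wpos - gViewPos);
    float2 uv   = svpos.xy / gScreenSize;
    float  maxT = gSceneDist.SampleLevel(gLinearClamp, uv, 0); // opaque occlusion limit

    float4 accum = float4(0, 0, 0, 0);
    float  t = length(wpos - gViewPos); // start at the box's front face
    [loop]
    for (int i = 0; i < 256; ++i)
    {
        if (t >= maxT || accum.a >= 0.99f)
            break; // occluded by opaque geometry, or nearly opaque already
        float3 p  = gViewPos + t * dir;
        float3 tc = (p - gBoxMin) / (gBoxMax - gBoxMin); // world -> texture coords
        if (any(tc < 0.0f) || any(tc > 1.0f))
            break; // marched out of the box
        float4 s = gVolume.SampleLevel(gLinearClamp, tc, 0);
        accum.rgb += (1.0f - accum.a) * s.a * s.rgb; // front-to-back "over"
        accum.a   += (1.0f - accum.a) * s.a;
        t += gStepSize;
    }
    return accum;
}
```

Blend the result over the scene color with standard alpha blending (or sample the color buffer and merge in the shader, as described above).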

I have a big volume/mesh and I only want to render part of it.

Do you mean you have a big mesh that occupies a large volume, or something different?

The part is defined in a small 3D box and only volume/mesh inside this box is rendered.

So you're saying you only want to render a part of the big mesh, and that part is defined by a small 3D box. Also, I assume that the 3D box can move around and make different parts of the big mesh visible, right?

Look into dynamic index buffers, and in the case of the mesh being too big for video RAM, look into dynamic vertex buffers as well. Also, I'd think you'd need some sort of BVH (bounding volume hierarchy) to accelerate figuring out which parts of the big mesh overlap the box (the core test is sketched below).
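For what it's worth, the node-vs-box test at the heart of that BVH traversal is tiny. It's written here in HLSL syntax to match the other snippets in the thread, though you would normally run it on the CPU while collecting triangles for the dynamic index buffer:

```hlsl
// Two axis-aligned boxes overlap iff their ranges overlap on every axis.
bool AabbOverlaps(float3 minA, float3 maxA, float3 minB, float3 maxB)
{
    return all(minA <= maxB) && all(minB <= maxA);
}
```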

-potential energy is easily made kinetic-

Thank you so much to both of you for your replies.

For volume rendering, based on Mr_Fox's answer, I need the view pos and the pos from the VS to create a vector, and then advance from the VS position along that vector to accumulate results.

I am not quite clear about the occlusion and merge. Would you please explain a little bit about why I need to render the opaque objects first?

BTW, for volume rendering do I have to write HLSL code for line/plane intersection, and for mesh rendering do I have to write C++ mesh clipping?

I want to know if there are HLSL intrinsic functions or DirectX 11 functions that facilitate these operations.

Thanks.

I am not quite clear about the occlusion and merge. Would you please explain a little bit about why I need to render the opaque objects first?

Without knowing depth, your volume rendering will always be visible even if some opaque object is rendered in front of the 3D box, or the raycast will be incorrect if an opaque object is partially or totally inside your 3D box. And without the color buffer available, your volume rendering won't take the background color into consideration.

Even with alpha blending you need to render the volume last: handling semitransparent volumetric stuff is tricky, and there is no good way to update the depth buffer from an accumulative raycasting PS (it is possible for iso-surface raycasting, though).

I need the view pos and the pos from the VS to create a vector
The view pos should be stored in a constant buffer.

for volume rendering do I have to write HLSL code for line/plane intersection, and for mesh rendering do I have to write C++ mesh clipping?
For volume raycasting, intersection handling is only needed for iso-surface raycasts, though. And that is implied by your transfer function (you detect the surface by sampling consecutive voxels; for example, with a signed distance field, the surface is detected by finding two neighboring samples with opposite signs, as sketched below).
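A small sketch of that sign-change test with illustrative names, meant to be called from the ray-march loop shown earlier (gSdf holds signed distances):

```hlsl
// Returns true and an interpolated hit coordinate when the SDF changes sign
// between two consecutive samples along the ray. All names are illustrative.
Texture3D<float> gSdf         : register(t0);
SamplerState     gLinearClamp : register(s0);

bool DetectIsoSurface(float3 tcPrev, float3 tcCurr, out float3 hitTc)
{
    float prev = gSdf.SampleLevel(gLinearClamp, tcPrev, 0);
    float curr = gSdf.SampleLevel(gLinearClamp, tcCurr, 0);
    hitTc = tcCurr;
    if (prev > 0.0f && curr <= 0.0f)   // signs differ: the ray crossed distance == 0
    {
        // Linearly estimate where the zero crossing lies between the two samples.
        hitTc = lerp(tcPrev, tcCurr, prev / (prev - curr));
        return true;
    }
    return false;
}
```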

For the mesh, I guess you won't need a raycasting PS, based on my understanding of your case. You just rasterize your mesh as usual, with the customized clipping planes.

This topic is closed to new replies.
