PVS for triangle soup?


I was wondering if anyone had any practical experience with generating Potentially Visible Set (PVS) data for scenes of arbitrary polygons (i.e. created in Maya or Max), NOT a Quake 3-style solid BSP. I've looked at numerous papers on 'from-region visibility' and the like, but nothing really appropriate; any pointers would be very much appreciated. I guess the first step is dividing space into regions, since the PVS would encode per-region visibility. These could be artist-placed, but I'd like to avoid that if possible. It's mainly the region-to-region visibility that I don't know how to move forward with.

I can generate portals and determine their visibility at runtime, traverse through them, etc., but they're not very suitable for outdoor scenes, or at least scenes with a lot of open space. I'm more after a PVS where, based on the region the camera is in, I get a list of other regions that are potentially visible. Thanks though.

I guess I'm really after techniques for building the PVS, that is, determining the region-to-region visibility.

So, you have the regions already, right?
Checking visibility can be done easily on the GPU (with the usual tradeoff between error and speed, of course): render the scene from each region with every region marked in a unique colour, read back the backbuffer, and check which regions appear. Or even do it with occlusion queries per region each frame.
Do as many renders as you like from each region and construct its PVS. Occlusion queries can be skipped for regions that are already marked as visible from the current one.
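Constructing a region's PVS from multiple renders might look like this minimal sketch. The function name and input shape are hypothetical: each sample stands in for the set of region IDs recovered from one colour-coded readback.

```python
def build_region_pvs(samples_per_region):
    """samples_per_region: {region_id: [set of visible region IDs, one per render]}
    Returns {region_id: set of potentially visible region IDs}."""
    pvs = {}
    for region, samples in samples_per_region.items():
        visible = set()
        for sample in samples:
            visible |= sample          # OR each sample's visibility in
        visible.add(region)            # a region always sees itself
        pvs[region] = visible
    return pvs
```

Because the samples are simply OR'ed together, adding more renders per region can only grow the set, which is what makes the result conservative.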

Here are a few papers that might help you (you can find them with Google):

Conservative Volumetric Visibility with Occluder Fusion
Hierarchical Visibility in Terrains
Hierarchical Visibility Culling with Occlusion Trees
Hardware Accelerated Visibility Preprocessing using Adaptive Sampling
Conservative Visibility Preprocessing using Extended Projections

Yes, but occlusion queries aren't really region-to-region, as they're image-space and viewpoint-dependent (no full preprocessing).
But if you're interested in that, there was an interesting thread by Yann L. about hierarchical occlusion mapping with software rendering.

I appreciate the feedback and the references to the papers. Occlusion is not the problem; the problem is that I don't want to traverse a hierarchy at runtime. I have measured and shown that this is actually slower than brute-force testing a largish set of nodes, due to both data and code cache coherency, so I really would like to get this set from a precomputed PVS. The problem is how to generate the PVS. My first thought was to randomly position and orient a camera within a region and render faces into cube maps, but this has been done before and is problematic, not to mention that random camera positions won't necessarily cover all cases. Surely there's a deterministic solution to this?

Guest Anonymous Poster
We do this by rendering the scene from several pseudo-random points within the region and ORing the visibility results. Empirically for a city scene, I think we needed about 8 sample points per 8m square region to avoid dropout. There is a tradeoff between number of sample points, rasterisation resolution and speed. There may be a more deterministic solution, but I wouldn't be surprised if floating point errors etc. cause enough headaches to prefer the rasterisation/multisample approach.

I think we made the PVS tool work by vertex colouring (flat shaded, no interpolation), which encoded the index into an array of every poly in the scene. We would render the entire scene and then read the backbuffer back; by iterating over every pixel we could set the PVS data per poly quite quickly. This data is then stored per tristrip as a compressed bitfield (the per-poly PVS is also used as a hint to the tristripping).
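The ID-encoding trick above can be sketched as follows. This is a hypothetical reconstruction, not the poster's actual tool: a polygon index is packed into a 24-bit RGB colour for the flat-shaded render, and the readback pixels are decoded back into the set of visible polygon IDs. Here the 'framebuffer' is just a list of (r, g, b) tuples standing in for a real backbuffer readback.

```python
def poly_id_to_colour(poly_id):
    # Pack a 24-bit polygon index into an (r, g, b) triple.
    return ((poly_id >> 16) & 0xFF, (poly_id >> 8) & 0xFF, poly_id & 0xFF)

def colour_to_poly_id(rgb):
    # Inverse of poly_id_to_colour: recover the index from a pixel.
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

def visible_polys(framebuffer):
    # Iterate over every pixel and collect the polygon IDs that survived
    # depth testing; these are the polys visible from this sample point.
    seen = set()
    for pixel in framebuffer:
        seen.add(colour_to_poly_id(pixel))
    return seen
```

Note this only works with flat shading and no interpolation, blending, or antialiasing, since any filtered pixel would decode to a bogus ID.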

It's interesting that you have found the hierarchy so slow; we have found this especially true on the next-gen console CPUs. It's MUCH faster to design algorithms around the idea that the CPU is essentially infinitely fast and the bottleneck is memory access, and hence very cache-dependent.

Thanks for that feedback - how did you choose your camera orientation, random pitch and yaw, etc.? As a last resort I was going to implement something similar (as I hinted at), but I thought I'd tri-strip first and colour the strips with a colour representing the strip ID, then proceed as you do. Did you try this rather than stripping based on visibility? Thanks again for your comments.

In regards to the hierarchy, yeah, I measured it several times with several different scenes to be sure. The bounding volumes for the objects were stored linearly in memory, though I didn't even prefetch them. I build the world by spatially sorting the leaf nodes and then parenting them in pairs recursively, including handling the orphan node when there's an odd number. The advantage of this is that each node ends up covering a contiguous range of node indices and can store it as two small integers. I figure I'll probably end up with just a few levels of hierarchy by selectively combining certain levels in the same manner.
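The bottom-up pairing described above can be sketched like this. It is a minimal reconstruction with hypothetical names, assuming the leaves have already been spatially sorted: each node is just the pair (first_leaf, last_leaf), so a node's entire subtree is a contiguous index range stored as two small integers.

```python
def build_pair_hierarchy(leaf_count):
    # Start with one node per (already spatially sorted) leaf.
    level = [(i, i) for i in range(leaf_count)]
    levels = [level]
    while len(level) > 1:
        parents = []
        for i in range(0, len(level) - 1, 2):
            # Adjacent children merge into one contiguous range.
            parents.append((level[i][0], level[i + 1][1]))
        if len(level) % 2:
            # Odd count: carry the orphan node up a level unchanged.
            parents.append(level[-1])
        level = parents
        levels.append(level)
    return levels  # levels[0] = leaves, levels[-1] = [root]
```

Since every node's children (and grandchildren, etc.) occupy a contiguous slice of the leaf array, rejecting a node rejects a linear run of memory, which fits the cache-friendly, near-brute-force testing described above.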

I think some guy from Sony spoke at GDC two years ago (Mike Acton, from memory?) about data flow driving the code, not code driving the data flow. All so true, and great if you can manage it. As with anything games-related: profile, profile, profile. Then optimise :-)

Resurrecting the dead thread: I was curious how one would determine what counts as a valid location for the camera when rendering the PVS info. I mean, it's just a polygon soup, so there's no inside/outside as is implied by a BSP tree, and I don't know whether the (random) position I've chosen for the camera is inside a wall or not. Any clever suggestions for this scenario?
Thanks in advance.

If you have a navgrid or navmesh, you can probably use that to determine whether a point is inside or outside of the world. For instance, you should be able to cast a ray between the point in question and any point on the navgrid and count the intersections (counting tris backfacing with respect to the ray). An even number of intersections means the point is on the same side as the navgrid; an odd number means you are 'outside'.

We have a separate collision mesh, but there's no guarantee that it won't be below, or even above, the graphics mesh, especially where there's a large difference in detail between the graphics and collision meshes. It's an interesting idea, though. Perhaps I could construct a convex hull of the graphics geometry and ensure my point is within that... but all it would take is for an artist to shove a tree through the ground a bit and the volume would extend under the ground, allowing the camera point to end up underground. Maybe rendering double-sided would protect against that case? Maybe the hull could even be used to mask the scene via the stencil buffer to further reduce the effect of these cases?

Guest Anonymous Poster
Ah, yes, this can be a problem. We used this kind of PVS for a racing game, where it's a lot simpler: our artists simply flagged all the polys on the track surface as collidable and the PVS tool picked them up.

There are probably all manner of weird and wonderful algorithms you could come up with for this, but I suspect they would all fall down in various cases. How difficult would it be for the artists to create some very simple geometry to define the PVS sample volumes? At least this way if you have errors you can tweak the PVS mesh to fix them.

Remember that you should be doing a number of samples per PVS cell, so if one is inside a tree or something (i.e. can't see very much) then it won't matter too much as you will be OR'ing the results of several samples together.

Quote:
Original post by SimmerD
If you have a navgrid or navmesh, you can probably use that to determine if a point is in/outside of the world. For instance, you should be able to cast a ray with an even number of intersections between the point in question and any point on the navgrid. If you have an odd # of intersections ( counting tris backfacing wrt the ray ), then you are 'outside'.


Do you really need the navgrid/navmesh? If you project a ray from the point towards any other geometry and the first triangle you hit is backfacing, then perhaps that would be enough. It would require that you never have backfacing polys exposed to the player, though...
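The first-hit test suggested above is cheap once you have any ray caster: take the nearest triangle hit and call the point 'inside' if that triangle is backfacing with respect to the ray. A minimal sketch, with hypothetical names, operating on (distance, normal) hit records from whatever ray caster you already have:

```python
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def inside_by_first_hit(ray_dir, hits):
    """hits: list of (t, outward_normal) records for one ray."""
    if not hits:
        return False                    # ray escaped: assume outside
    t, normal = min(hits, key=lambda h: h[0])
    # Normal pointing along the ray => we saw the triangle's back face,
    # so the ray must have started behind (inside) the surface.
    return dot(normal, ray_dir) > 0
```

As the post notes, this relies on consistent winding and on back faces never being legitimately exposed to the player.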

Yeah, I think the most robust solution is going to be getting the artists to specify the PVS volumes, though that's not the easiest to maintain when an evolving level drives an evolving PVS (and I can't imagine they're going to be too keen on it, since it doesn't make anything prettier in their eyes).

Thanks for all your comments, as always very much appreciated.

paic actually mentioned this paper already:
"Hardware Accelerated Visibility Preprocessing using Adaptive Sampling" is actually a PVS technique that works quite well with triangle soups. Special consideration is given to how samples will adapt when determined to be "inside" geometry.

The same authors have papers on a slower, but exact PVS computation that works for triangle soups: See http://www.nirenstein.com/e107/download.php?list.5

Also, this paper at Siggraph this year may be of interest: http://www.cg.tuwien.ac.at/research/publications/2006/WONKA-2006-GVS/

I expect that it will work less well with triangle soups than the paper mentioned earlier, but for other scenes it compares favourably.
