PVS for triangle soup?

Started by
15 comments, last by Rompa 17 years, 8 months ago
I was wondering if anyone has any practical experience with generating Potentially Visible Set (PVS) data for scenes of arbitrary polygons (i.e. created in Maya or Max), NOT a Quake 3 solid BSP etc. I've looked at numerous papers on 'from-region visibility' and the like, but nothing really appropriate... any pointers would be very much appreciated. I guess the first step is dividing space into regions, seeing as the PVS would encode per-region visibility - these could be artist-placed, but I'd like to avoid that if possible. It's mainly the region-to-region visibility that I don't know how to move forward with.
A very interesting paper on automatic portal generation:

Clicky
I can generate portals and determine their visibility at runtime, traverse through them, etc., but they're not very suitable for outdoor scenes, or at least scenes with quite a bit of open space. I'm more after a PVS where, based on the region the camera is in, I get a list of the other regions that are potentially visible. Thanks though.

I guess I'm more after techniques for building the PVS, that is, determining the region-to-region visibility.
So, you have the regions already, right?
Well, checking visibility can be done easily (with a tradeoff between error and speed, of course) via the GPU. Render each region in a marker colour, read back the backbuffer, and check whether the region's colour appears. Or even do it with occlusion queries per region each frame.
Do as many renders as you like from each region and construct its PVS from them. Occlusion queries can be skipped for regions that are already marked as visible from the current one...
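
Roughly, the colour-ID pass could look like the sketch below - untested, and the Region struct, its draw() method, and the 24-bit index packing are all my own assumptions, not anything standard:

```cpp
#include <GL/gl.h>
#include <vector>

struct Region { void draw() const; }; // hypothetical: issues the region's geometry

// Render every region flat-shaded in a unique colour encoding its index
// (24 bits -> ~16M distinct regions), then read back and mark survivors.
void gatherVisibleRegions(const std::vector<Region*>& regions,
                          int width, int height,
                          std::vector<bool>& visible) // one flag per region
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // white decodes to an out-of-range ID
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDisable(GL_LIGHTING);               // marker colours must not be shaded
    glShadeModel(GL_FLAT);

    for (size_t i = 0; i < regions.size(); ++i)
    {
        glColor3ub((GLubyte)(i & 0xFF), (GLubyte)((i >> 8) & 0xFF),
                   (GLubyte)((i >> 16) & 0xFF));
        regions[i]->draw();
    }

    // Mark every region whose colour survived the depth test somewhere.
    std::vector<unsigned char> pix(width * height * 3);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, &pix[0]);
    for (int p = 0; p < width * height; ++p)
    {
        unsigned id = pix[p*3] | (pix[p*3+1] << 8) | (pix[p*3+2] << 16);
        if (id < regions.size())
            visible[id] = true;
    }
}
```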
Here are a few papers that might help you (you can find them with Google):

Conservative Volumetric Visibility with Occluder Fusion
Hierarchical Visibility in Terrains
Hierarchical Visibility Culling with Occlusion Trees
Hardware Accelerated Visibility Preprocessing using Adaptive Sampling
Conservative Visibility Preprocessing using Extended Projections
Yes, but occlusion queries aren't really region-to-region, as they're image-space and viewpoint-dependent (no full preprocessing).
But if you're interested in that, there was an interesting thread by Yann L. about hierarchical occlusion mapping with software rendering.
I appreciate the feedback and the references to the papers. Occlusion is not the problem. The problem is that I don't want to traverse a hierarchy at runtime - I have measured and proven that this is actually slower than brute-force testing a largish set of nodes, due to both data and code cache coherency. I really would like to get this set from a precomputed PVS. The problem is how to generate the PVS. My first thought was to randomly position and orient a camera within a region and render faces into cube maps, but this has been done before and is problematic - not to mention that random camera positions won't necessarily cover all the cases. Surely there's a deterministic solution to this?
We do this by rendering the scene from several pseudo-random points within the region and ORing the visibility results. Empirically for a city scene, I think we needed about 8 sample points per 8m square region to avoid dropout. There is a tradeoff between number of sample points, rasterisation resolution and speed. There may be a more deterministic solution, but I wouldn't be surprised if floating point errors etc. cause enough headaches to prefer the rasterisation/multisample approach.

I think we made the PVS tool work by vertex colouring (flat shaded, no interpolation), which encoded the index into an array of every poly in the scene, so we would render the entire scene and then read the backbuffer back once. By iterating over every pixel we could set the PVS data per poly quite quickly. This data is then stored per tristrip as a compressed bitfield. (The per-poly PVS is also used as a hint to the tristripping.)
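
In rough C++, the per-region pass might look something like the sketch below. It's only a sketch under my own assumptions - renderSceneFrom(), randomPointIn(), Vec3 and Region are made-up names, and it assumes each poly is drawn with its index packed into a flat vertex colour as described above:

```cpp
#include <GL/gl.h>
#include <vector>

struct Vec3 { float x, y, z; };
struct Region;                              // hypothetical region type

// Hypothetical helpers: render the ID-coloured scene from an eye point,
// and pick a pseudo-random point inside a region.
void renderSceneFrom(const Vec3& eye);
Vec3 randomPointIn(const Region& region);

typedef std::vector<unsigned> PolyBitfield; // 1 bit per poly, 32 per word

void buildRegionPVS(const Region& region, size_t polyCount,
                    int width, int height,
                    int samples,            // e.g. 8 per 8m square region
                    PolyBitfield& pvs)
{
    pvs.assign((polyCount + 31) / 32, 0);
    std::vector<unsigned char> pix(width * height * 3);

    for (int s = 0; s < samples; ++s)
    {
        renderSceneFrom(randomPointIn(region)); // vertex colour = poly index
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, &pix[0]);

        // OR every poly visible from this sample point into the region's PVS.
        for (int p = 0; p < width * height; ++p)
        {
            unsigned id = pix[p*3] | (pix[p*3+1] << 8) | (pix[p*3+2] << 16);
            if (id < polyCount)
                pvs[id >> 5] |= 1u << (id & 31);
        }
    }
}
```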

It's interesting that you found the hierarchy so slow; we have found this to be especially true on the next-gen console CPUs. It's MUCH faster to design algorithms around the idea that the CPU is essentially infinitely fast and that the bottleneck is memory access, and hence very cache-dependent.
Thanks for that feedback - how did you choose your camera orientation, random pitch and yaw etc.? As a last resort I was going to implement something similar (as I hinted at), but I thought I'd tri-strip first and colour the strips with a colour representing the ID, then do as you do. Did you try this, rather than stripping based on visibility? Thanks again for your comments.

Regarding the hierarchy, yeah, I measured it several times with several different scenes to be sure. The bounding volumes for the objects were stored linearly in memory, but I didn't even prefetch them. I build the world by spatially sorting the leaf nodes and then parenting them in pairs recursively, including handling the orphan node when there's an odd number. The advantage of this is that each node ends up covering a linear range of node indices and can store it as two small integers. I figure I'll probably end up with just a few levels of hierarchy by selectively combining certain levels in the same manner.
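
For what it's worth, that pairing scheme comes out to something like the sketch below - my own interpretation, with AABB and merge() as assumed helpers:

```cpp
#include <vector>

struct AABB { /* min/max extents */ };
AABB merge(const AABB& a, const AABB& b); // assumed: union of two boxes

struct Node
{
    AABB           bounds;              // union of the children's bounds
    unsigned short firstLeaf, lastLeaf; // inclusive leaf-index range
};

// Leaves must arrive spatially sorted, with firstLeaf == lastLeaf == index.
std::vector<Node> buildPairedHierarchy(std::vector<Node> level)
{
    std::vector<Node> all = level;      // flat output: leaves first, root last
    while (level.size() > 1)
    {
        std::vector<Node> parents;
        for (size_t i = 0; i + 1 < level.size(); i += 2)
        {
            Node parent;
            parent.bounds    = merge(level[i].bounds, level[i+1].bounds);
            parent.firstLeaf = level[i].firstLeaf;  // ranges stay contiguous
            parent.lastLeaf  = level[i+1].lastLeaf; // because leaves are sorted
            parents.push_back(parent);
        }
        all.insert(all.end(), parents.begin(), parents.end());
        if (level.size() & 1)           // orphan: carry it up a level as-is
            parents.push_back(level.back());
        level.swap(parents);
    }
    return all;
}
```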

I think someone from Sony spoke at GDC two years ago (Mike Acton, from memory?) about the data flow driving the code, not the code driving the data flow. All so true - great if you can manage to do that. As with anything games-related: profile, profile, profile. Then optimise :-)
Resurrecting the dead thread: I was curious how one would determine what counts as a valid location for the camera when rendering the PVS info? I mean, it's just a polygon soup, so there's no real inside/outside like there is implied by a BSP tree, and I don't know whether the (random) position I've chosen for the camera is inside a wall or not. Any clever suggestions for this scenario?
Thanks in advance.

