ABT questions: construction and use

I'll be making a few references to topics discussed in this thread. I've been attempting to write an ABT compiler & renderer for my engine, but I fear that some of the things I'm trying to do aren't in harmony with the original design of the ABT structure. To start with, I'd like to say that I've read through just about every thread on these forums that mentions an ABT, many of them more than once, and I pretty much understand the concepts behind constructing one from static data. I do have a number of questions though...

1) I'm a little curious about how you'd actually implement the 4 minimizable functions that Yann listed. I've currently got this set of scoring functions:

Axis Score = Dimension(ThisAxis) / Dimension(LargestAxis)
Volume Score = 2 * fabs(0.5 - SplitPercent)
Face Score = fabs(FrontBackDiff) / NumFaces
Splits Score = fabs(NumSplit) / NumFaces

where SplitPercent indicates how far "into" the volume the split is (0.25 at a quarter, 0.5 at halfway, etc.), FrontBackDiff records the difference between how many faces are on each side of the splitting plane (positive means more on the max side of the plane), and NumFaces is the number of faces in the node currently being processed. Lower scores correspond to better split locations, and the total score of a split is the weighted sum of those 4 function scores, as mentioned in that thread.

2) How would you go about getting a successive approximation system to work with these functions, since only the geometry-ignorant functions are continuous? I suppose the face balance function could be somewhat continuous, with jumps instead of smooth transitions, but with the splits function it seems you can't predict a value from any previous samples - you won't have any idea how many faces are split at a position until you test it. I'm currently using a "best guess approximation" for picking the plane: I take 9 samples (10%, 20%, ..., 90%) along each of the 3 axes, evaluate the score for each candidate split, and keep the one with the best score (a rough sketch of this sampling loop is at the end of this post). This obviously isn't perfect, but it does split my extremely minimal dataset in a reasonable position.

3) I'm a little confused about this paragraph in Yann's building description: "At this point, due to all the overgrowing and optimization, your original node hierarchy will be largely out of sync with the nodes themselves. So you need to rebuild it from the bottom to the top. For each leaf, walk up the tree, and recreate the bounding boxes for each node by unifying the child AABBs to the parent AABB." If each node's AABB is shrunk to contain only the geometry inside that node, then why would you need to re-shrink them after the leaves have been determined?

4) Can you get a decent collision detection result from an ABT that's only subdivided to 2000 faces per leaf? That seems like it'd be ridiculously slow to test against unless you further subdivide the leaves specifically for collision detection while keeping the renderable geometry together in large vertex arrays. Would it be possible to have 2 subdivision thresholds - one for renderable geometry, one for collidable geometry? You could have 2000 faces per render-leaf (the level that stores renderable geometry) but then keep going from that node until you get down to around 50 faces per collision-leaf.
This is also relevant for making a level editor - if I import a 10 million face level from Max or Maya and then decide that a certain area should be textured differently (using a different texture, not adjusting the actual texture coords), I'd have to be able to select those faces in order to change them.

5) I see that the ABT is good for pre-processed static geometry with objects possibly moving around, but what about adding and removing geometry (small amounts, not half the level) at runtime? I'd like to be able to support CSG in the main engine structure, but I'd rather not compile a Quake-style BSP in order to do so. I suppose this is somewhat linked to the previous question regarding collision detection, since it involves the CPU rather than the video card. But apart from doing CSG on the level data, it would be nice to be able to add new static geometry to the ABT without having to completely recompile the whole tree - would you need to keep the split statistics from the compilation around in the nodes in order to combine the previous results with the newly added geometry (instead of throwing the node hierarchy out the window and recompiling with a raw polygon soup again)? This is another thing related to the level editor - importing multiple distinct pieces of a level, or stitching separate levels together by importing them separately, rather than doing all the work in 3dsmax.

6) Can you keep "objects" higher up than a leaf? If they are simply represented as a bounding volume (an AABB should fit in OK, right?), occupy most of the (minimized) volume of a non-leaf node, and can't reasonably be split into 2 or more pieces, then why not store them in the best-fitting non-leaf node? The kinds of objects that would occupy a non-leaf node when a leaf contains 2000 faces would admittedly be rather large, but along those lines I'm thinking about things with precomputed LOD that can't really be processed properly by the ABT - large pieces of terrain specifically. Could the ABT structure effectively support a blend of LOD terrain with static geometry lying around on it - houses, caves, etc.?

That's about all I can think of right now; hopefully someone can help me out on some of these issues before I encounter more. I saw that Yann was planning to write a paper for Siggraph 2003 regarding ABTs but haven't heard anything of it since then - any word on when it might be available (if ever)?

[edited by - Assassin on June 16, 2003 4:40:52 PM]
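Here is the rough shape of the scoring and sampling loop from questions 1 and 2, in case it helps anyone comment. It's only a sketch: the countFacesAgainstPlane callback and the weight parameters are placeholders for whatever the tree builder supplies, not anything from Yann's implementation.

#include <cmath>
#include <cfloat>

struct AABB { float min[3], max[3]; };

// Result of classifying the current node's faces against a candidate plane.
struct FaceCounts { int front, back, split; };

// Assumed to be supplied by the builder: classify the node's faces against an
// axis-aligned plane at position planePos on the given axis.
FaceCounts countFacesAgainstPlane(int axis, float planePos);

struct SplitChoice { int axis; float pos; float score; };

// Weighted sum of the four scoring functions from question 1; lower is better.
float scoreSplit(const AABB& box, int axis, float splitPercent,
                 const FaceCounts& c, int numFaces,
                 float wAxis, float wVolume, float wFace, float wSplit)
{
    float dim[3] = { box.max[0] - box.min[0],
                     box.max[1] - box.min[1],
                     box.max[2] - box.min[2] };
    float largest = std::fmax(dim[0], std::fmax(dim[1], dim[2]));

    float axisScore   = dim[axis] / largest;
    float volumeScore = 2.0f * std::fabs(0.5f - splitPercent);   // 0 at the middle
    float faceScore   = std::fabs(float(c.front - c.back)) / float(numFaces);
    float splitScore  = float(c.split) / float(numFaces);

    return wAxis * axisScore + wVolume * volumeScore
         + wFace * faceScore + wSplit * splitScore;
}

// "Best guess approximation" from question 2: 9 samples (10% .. 90%) along each
// axis, keeping the candidate with the lowest (best) composite score.
SplitChoice pickSplit(const AABB& box, int numFaces,
                      float wAxis, float wVolume, float wFace, float wSplit)
{
    SplitChoice best = { -1, 0.0f, FLT_MAX };
    for (int axis = 0; axis < 3; ++axis)
    {
        for (int i = 1; i <= 9; ++i)
        {
            float t   = 0.1f * float(i);
            float pos = box.min[axis] + t * (box.max[axis] - box.min[axis]);
            FaceCounts c = countFacesAgainstPlane(axis, pos);
            float s = scoreSplit(box, axis, t, c, numFaces,
                                 wAxis, wVolume, wFace, wSplit);
            if (s < best.score) { best.axis = axis; best.pos = pos; best.score = s; }
        }
    }
    return best;
}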
Assassin, aka RedBeard. andyc.org
Hmm, it seems nobody has been adventurous enough to respond to any of my questions yet. Perhaps it was a bit overwhelming with all those questions. I'm primarily interested in the 2nd and 4th questions - the ones regarding successive approximation and collision detection - since they will impact the manner in which I build the ABT structure the most.

I would like to have some input from other people before I go off and do something silly though...
Assassin, aka RedBeard. andyc.org
Beforehand, please excuse some of my ignorance on ABTs in general; I'm commenting from a somewhat removed standpoint.
3) He says growing, you say shrinking. Is it possible you implemented the algorithm in a somewhat different method? Does the ABT grow predetermined AABBs to fit an object that wouldn't otherwise easily fit, or are you shrinking them down to provide the smallest reasonable volume around the object? If I remember correctly, though, an ABT splits an object, and thus would involve shrinking the AABBs (they'd fit the split pieces better), so I'm a bit confused by his statement too...
4) Do you have any reason to believe this wouldn't be possible? With other partitioning schemes it would be, though the cutoff zone might be different.
5) A question on your question - such as how Half-Life and Morrowind load other sections of the world as needed?
I still haven't resolved any of the previous questions on my own, so they all still stand. I've also got a new issue since getting the ABT working and testing it on some data (a Quake 3 level).

First, to elaborate on my question #2 involving successive approximation: if I use a scoring method like the one listed under question #1, then the iterations through the split-picking algorithm would need to take the weights into account. For instance, if I set the weights to worry 99% about splits and 1% about volume, then there really isn't a good way to decide where to try the next sample, as the number of splits in the specific geometry is unpredictable even after you've taken several samples. However, the volume is completely predictable - it maps linearly to the percent value used to split with - and the axis score can't really be "approximated" since the split planes are axis-aligned anyway; you just go with the axis that scores best (taking the best composite score from each axis alignment, not just the axis score), right? In fact, it seems the only thing that really benefits from a successive approximation system is the face balance equation, since it's somewhat continuous but non-linear. So should I just do some successive approximation on the face balance until I get as close to a perfect balance as I can, keeping track of all the composite scores along the way, and then keep the best-scoring split?
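To illustrate why I think the face balance is the only term worth refining: the front/back difference changes monotonically (in steps) as the plane sweeps along an axis, so something like a simple bisection can home in on the balance point while the composite score is recorded at each sample. This is just a sketch of that idea - the frontBackDiff and compositeScore callbacks are placeholders for whatever the builder provides, not anything from Yann's description.

#include <cfloat>

// Assumed callback: (#faces on the max side) - (#faces on the min side) for a
// candidate axis-aligned plane at position pos on the given axis.
int frontBackDiff(int axis, float pos);

// Assumed callback: the full weighted composite score (question 1) at pos.
float compositeScore(int axis, float pos);

struct BalanceResult { float pos; float score; };

// Bisect on the face-balance term between plane positions lo and hi,
// keeping the best composite score seen along the way.
BalanceResult refineBalance(int axis, float lo, float hi, int maxIters)
{
    BalanceResult best = { lo, FLT_MAX };
    for (int i = 0; i < maxIters; ++i)
    {
        float mid = 0.5f * (lo + hi);
        float s = compositeScore(axis, mid);
        if (s < best.score) { best.pos = mid; best.score = s; }

        int diff = frontBackDiff(axis, mid);
        if (diff == 0)
            break;            // perfectly balanced - no better balance point here
        else if (diff > 0)
            lo = mid;         // too many faces on the max side: move the plane up
        else
            hi = mid;         // too many faces on the min side: move the plane down
    }
    return best;
}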

7) But enough of the approximation. I have another issue now, and it's a bit of a pain: sparklies. Tiny gaps arise along the edges of polygons that were split while their neighbor wasn't, resulting from floating point roundoff error around the T-junction there. Since the ABT nodes can overlap a bit to avoid splitting all the geometry, you end up splitting the big faces and not splitting the little faces, and get sparklies along the borders between them. It's rather annoying, and since the ABT's design inherently calls for this behavior, I'm not quite sure what to do about it. Should I keep track of all the edges that got split, and then go hunt down all the other faces that didn't need splitting but have the same edge that got split, and then insert the split-edge-vertex into those faces?
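For what it's worth, here is roughly what I mean by hunting down the unsplit neighbours, as a sketch only: given an edge that was cut at a new vertex, find every other triangle that still uses that edge and split it in two so it references the new vertex as well. The Triangle/Mesh types are invented for the example; real buffers would also have to carry material IDs, texture coords and so on.

#include <vector>
#include <cstddef>

struct Triangle { int v[3]; };          // indices into a shared vertex pool

struct Mesh
{
    std::vector<Triangle> tris;         // vertex positions live elsewhere;
                                        // only connectivity matters here
};

static bool sameEdge(int a0, int a1, int b0, int b1)
{
    return (a0 == b0 && a1 == b1) || (a0 == b1 && a1 == b0);
}

// After splitting some face along edge (ea, eb) at new vertex midIdx, walk the
// remaining triangles and split any that still use that edge, so both sides of
// the former T-junction share the same vertex and no gap can open up.
void fixTJunction(Mesh& mesh, int ea, int eb, int midIdx)
{
    std::size_t originalCount = mesh.tris.size();
    for (std::size_t t = 0; t < originalCount; ++t)
    {
        Triangle tri = mesh.tris[t];
        for (int e = 0; e < 3; ++e)
        {
            int a = tri.v[e];
            int b = tri.v[(e + 1) % 3];
            if (!sameEdge(a, b, ea, eb))
                continue;

            int c = tri.v[(e + 2) % 3];          // vertex opposite the shared edge

            // Replace (a, b, c) with (a, mid, c) and (mid, b, c),
            // preserving the original winding order.
            Triangle t1 = {{ a, midIdx, c }};
            Triangle t2 = {{ midIdx, b, c }};
            mesh.tris[t] = t1;
            mesh.tris.push_back(t2);
            break;                               // this triangle is done
        }
    }
}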

Hopefully someone will help me out... I'd like to make use of the ABT structure but it's not working out perfectly.

[edited by - Assassin on June 19, 2003 3:28:14 PM]
Assassin, aka RedBeard. andyc.org
Somehow this thread slipped through my radar. I'll try to address your points in greater detail this evening, as it will probably take some time.
Hey Assassin,

Amazing... After coding a BSP for the past few weeks, I've decided that it isn't exactly what I need, and so I got into the ABT. I've been at it for a few days now and I'm almost done... the amazing thing is that I ran into more or less the exact same problems/questions as you.

As for collision detection, the best thing I can come up with is actually doing exactly what you suggested, which is building a 'planes only' BSP tree to handle collisions. You can of course also use that same tree to access the actual poly data in the ABT and use it to dynamically adjust the geometry (CSG). The additional memory for the BSP tree (assuming it's not a 10M-poly level) is negligible, I believe. Besides, how else would you do collision exactly, even if you had the ABT localized to 50 polys per leaf? I thought about it, even about creating mini-BSPs per leaf or maybe another structure, but you don't have convex hulls there, so you can't rely on just the leaf geometry. Please share any ideas you might have on that.
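In case it helps the discussion, this is the kind of 'planes only' tree I have in mind - just a sketch; the node layout and the sphere query are my own guesses rather than working code, and the leaves simply store references back into the ABT's geometry.

#include <vector>

struct Plane { float nx, ny, nz, d; };          // plane equation: n . p + d = 0

struct BspNode
{
    Plane plane;                                 // only meaningful for interior nodes
    BspNode* front = nullptr;                    // both children null => leaf
    BspNode* back  = nullptr;
    std::vector<int> abtLeafIds;                 // leaf: ABT leaves whose geometry
                                                 // overlaps this region
};

static float signedDistance(const Plane& pl, float x, float y, float z)
{
    return pl.nx * x + pl.ny * y + pl.nz * z + pl.d;
}

// Collect every leaf the sphere could touch; the actual triangle tests are then
// run only against the (much smaller) geometry referenced by those leaves.
void querySphere(const BspNode* node, float cx, float cy, float cz, float radius,
                 std::vector<int>& outLeafIds)
{
    if (!node)
        return;
    if (!node->front && !node->back)             // leaf
    {
        outLeafIds.insert(outLeafIds.end(),
                          node->abtLeafIds.begin(), node->abtLeafIds.end());
        return;
    }
    float dist = signedDistance(node->plane, cx, cy, cz);
    if (dist > -radius) querySphere(node->front, cx, cy, cz, radius, outLeafIds);
    if (dist <  radius) querySphere(node->back,  cx, cy, cz, radius, outLeafIds);
}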

As for LOD... just so you know, I'm using the ABT as a replacement/enhancement for meshes. The goal I'm aiming at is a space sim of sorts, so the only static geometry I have is big *big* bases and ships (the kind your fighter gets really up-close and personal with). I want the ABT for those guys (no, it's not overkill - I have some very good reasons for doing so). What was I talking about? Oh yes, LOD... each leaf holds localized chunks of geometry, right? From what I remember about mesh optimization and progressive meshes (a la Hoppe), what's done is basically enforcing conservation of an energy function on *localized* areas of the mesh (i.e. a vertex in relation to its neighbors). What if we simplify each leaf independently, supplying it with its own edge collapse map, while also ensuring that cracks don't occur with neighboring leaves (or maybe masking the cracks with skirts, like in terrain)? An enhancement might be for nodes higher in the tree's hierarchy to have 'regional' collapse maps, or maybe even different index buffers altogether that access the same geometry data as the leaves but disregard some of the detail.

I just finished reading what I wrote and there is a hole in my progressive mesh theory. A solution can probably be found, but I'll go into it later if anyone's interested. I have to get back to work and this is a long post as is.



There aren't any stupid questions, only stupid people.
I'd also be interested to see how you approach LOD with an ABT tree. Do you have complete 'objects' in the tree that change LOD based upon the camera distance, or is there some other way, by changing vertex/index buffers in the tree at runtime? I suppose to some extent this expands a little more upon question 5 above.

Many thanks
James
A few more questions to add, concerning state-switch optimization and occlusion culling.

8) How can you avoid putting too many materials into a node, to keep the number of drawprimitive calls down? For example, if my split function decides to slice the geometry right on the border of where a material is being used, thus including about 1 or 2 polygons of that material in a node while the node next to it has 500 polygons with that material, you end up having to make a dp call to render 2 polygons, which doesn't seem so hot to me. I'm building the vertex buffers for my ABT nodes (partition spaces) by the following process:
foreach face
    foreach vertex on face
        record index in "seen indices" list
    record material ID in "seen materials" list
make one vertex buffer
make one index buffer for each of the "seen materials"
dump all vertices listed in the "seen indices" list into the vertex data stream
foreach seen material
    foreach face with matching material ID
        foreach vertex on face
            record translated index into the material-matched index buffer

which should end up with one vertex buffer per node and one index buffer for each material used in the node. This works properly, I think, but on my test data (a Quake 3 level that probably isn't tessellated enough to see many benefits from splitting, but I'm testing with it anyway) which has a total of 75 materials, the average ABT node gets 40 of them (using a 2000-face vbuffer threshold... a 500 threshold gets an average of 20 materials per node). Since the whole level only has about 22,000 faces, I end up with 14 or so nodes that each need 40 calls to DP (for a total of 560). By comparison, brute-forcing the level only takes 75 DP calls and runs faster when more of the ABT nodes come inside the frustum (with no occlusion culling for the time being, which may resolve this issue a bit). Is the Quake 3 level just using too many materials near each other, or is this a common breakdown of the data in other setups also? Is there a function that can be added to the split evaluator to minimize chopping up materials?
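In C++ terms, the buffer-building process above is roughly the following. It's only a sketch: the Face and NodeBuffers types are invented for the example, and a real implementation would copy actual vertex data (positions, UVs, etc.) into the hardware buffers instead of just remapping indices.

#include <vector>
#include <map>

struct Face { int materialId; int v[3]; };         // indices into the global vertex pool

struct NodeBuffers
{
    std::vector<int> vertexIndices;                // which global vertices go into this node's VB
    std::map<int, std::vector<int>> indexBuffers;  // material ID -> node-local index buffer
};

// Build one vertex buffer for the node plus one index buffer per material used,
// remapping global vertex indices to slots in the node-local vertex buffer.
NodeBuffers buildNodeBuffers(const std::vector<Face>& nodeFaces)
{
    NodeBuffers out;
    std::map<int, int> globalToLocal;              // "seen indices" and their local slot

    for (const Face& f : nodeFaces)
        for (int i = 0; i < 3; ++i)
            if (globalToLocal.find(f.v[i]) == globalToLocal.end())
            {
                globalToLocal[f.v[i]] = (int)out.vertexIndices.size();
                out.vertexIndices.push_back(f.v[i]);
            }

    for (const Face& f : nodeFaces)
    {
        // operator[] creates the entry on first use - the "seen materials" list
        std::vector<int>& ib = out.indexBuffers[f.materialId];
        for (int i = 0; i < 3; ++i)
            ib.push_back(globalToLocal[f.v[i]]);   // translated (node-local) index
    }
    return out;
}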

9) Yann, you've mentioned that you use image-space occlusion to the exclusion of all other occlusion methods, such as portals and occlusion frustums... might I ask how you build occlusion hulls from the level geometry? Also, is it worthwhile using that approach for enclosed geometry (like Quake 3 levels) or is it better-suited to open-area geometry? I'm guessing that each ABT node gets an "occlusion mesh" built from the geometry contained in that node, which can then be used to occlude further-away nodes. What kind of mesh simplification can be done to break the geometry up into a valid low-poly occlusion mesh that is "contained inside" the geometry of the node? It seems that plain-vanilla VIPM can't really do that effectively on non-convex geometric sets, so some insight into how to achieve the desired result would be appreciated.

10) On the note of portals for occlusion, could it be possible to auto-generate portals using the ABT-splits? I'm thinking of achieving it by finding all the faces that lie across the split plane, taking the 2D convex hull of them on that plane, and making a portal from that. Alternatively, an edge-connectivity graph could be constructed from the intersection points of the spanning face edges and the split-plane, welding vertices with the "same" position but from faces with different materials, and then tracing loops to make portals. This would of course only work on 2-manifold closed surface geometry, but that shouldn't be too hard for an artist to enforce, right?
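To make question 10 a bit more concrete, here is a sketch of the convex-hull half of it, assuming the edge/plane intersection points have already been gathered and projected into the split plane's 2D coordinates (trivial for an axis-aligned plane - just drop the split axis). A standard monotone-chain hull then gives the candidate portal outline; the vertex-welding and loop-tracing variant isn't shown, and the P2/portalHull names are made up for the example.

#include <vector>
#include <algorithm>
#include <cstddef>

struct P2 { float x, y; };

static float cross(const P2& o, const P2& a, const P2& b)
{
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Andrew's monotone chain: returns the convex hull in counter-clockwise order.
// Input: intersection points of the spanning-face edges with the split plane,
// already expressed in the plane's 2D coordinate system.
std::vector<P2> portalHull(std::vector<P2> pts)
{
    if (pts.size() < 3)
        return pts;
    std::sort(pts.begin(), pts.end(),
              [](const P2& a, const P2& b)
              { return a.x < b.x || (a.x == b.x && a.y < b.y); });

    std::vector<P2> hull(2 * pts.size());
    std::size_t k = 0;

    for (std::size_t i = 0; i < pts.size(); ++i)                // lower hull
    {
        while (k >= 2 && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0.0f) --k;
        hull[k++] = pts[i];
    }
    for (std::size_t i = pts.size() - 1, t = k + 1; i-- > 0; )  // upper hull
    {
        while (k >= t && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0.0f) --k;
        hull[k++] = pts[i];
    }
    hull.resize(k - 1);                                         // last point repeats the first
    return hull;
}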

Some feedback would be appreciated...

[edited by - Assassin on June 20, 2003 10:42:25 PM]
Assassin, aka RedBeard. andyc.org
I can take a crack at number 9).
If I recall correctly, Yann is using a low-poly occlusion skin (auto- or hand-generated, I don't know) for his set of occluders, which is software-rasterized with depth info (and opacity?). The rendered image is used to create a hierarchical structure (it's shrunk down into smaller and smaller maps) to speed up the coming visibility checks.
For the visibility checks, the given node (the set of geometry to be tested) has its AABB tested against the occlusion info for, well, occlusion. I think he's mucked around with these checks a little bit, but that's the basic idea, and it should let you start thinking about it.
Here's the link he gave about HOM: http://www.cs.unc.edu/~zhangh/hom.html
This paper also implements screen-space projection in addition to the depth method that Yann uses. I never got around to asking why he ditched that test. So Yann, why'd you ditch the projection (overlap) test?
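As a small illustration of the "shrunk down into smaller and smaller maps" step - a sketch only, assuming a depth-style occlusion map where a larger value means farther away; opacity maps, odd resolutions and the actual AABB test are left out. Each coarser level keeps the farthest depth of every 2x2 block, so a coarse cell never claims more occlusion than its finest pixels actually provide.

#include <vector>
#include <algorithm>
#include <cstddef>

struct DepthMap
{
    int width = 0, height = 0;
    std::vector<float> depth;   // row-major, larger value = farther away
};

// Build the next (half-resolution) level of the occlusion hierarchy by taking
// the farthest depth of each 2x2 block, keeping the coarse levels conservative
// for "is this node's AABB behind the rasterized occluders?" tests.
DepthMap downsampleLevel(const DepthMap& fine)
{
    DepthMap coarse;
    coarse.width  = std::max(1, fine.width  / 2);
    coarse.height = std::max(1, fine.height / 2);
    coarse.depth.resize((std::size_t)coarse.width * coarse.height);

    for (int y = 0; y < coarse.height; ++y)
        for (int x = 0; x < coarse.width; ++x)
        {
            float farthest = 0.0f;
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx)
                {
                    int fx = std::min(2 * x + dx, fine.width  - 1);
                    int fy = std::min(2 * y + dy, fine.height - 1);
                    farthest = std::max(farthest,
                                        fine.depth[(std::size_t)fy * fine.width + fx]);
                }
            coarse.depth[(std::size_t)y * coarse.width + x] = farthest;
        }
    return coarse;
}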

Karg
Thanks Karg, but I already have an image-space occlusion rasterizer that works decently for me. Question 9 was more directed at how to get the low-poly occlusion skin that's used for doing the occlusion. You can't just simplify the geometry by any old method; you need to make sure it won't be occluding anything erroneously by keeping it entirely "inside" the rendered geometry... that's the issue I'm trying to resolve.
Assassin, aka RedBeard. andyc.org
