Algorithm CSG collision and visibility in 2017

scarypajamas

I'm looking for a broad-phase algorithm for brush-based worlds. I also need an algorithm for determining which areas of said world are visible from the view frustum.

The id Tech engines of the '90s used BSPs for broad-phase collision and a PVS for visibility. Doom 3 in 2004 kept BSPs for collision but switched to portals for visibility.

It's now 2017. I'm wondering what approach should be taken today?

irreversible
scarypajamas said:

I'm looking for a broad-phase algorithm for brush-based worlds.

Broadphase generally operates on bounding boxes and spheres; a brush-based environment is no different in this regard.

For production-quality examples, look at the source code of physics libraries such as Box2D and Bullet.
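
To make that concrete, here's a minimal sketch of the core predicate such a broadphase is built around (my own illustration, not code from either library; both wrap a test like this in an acceleration structure such as a dynamic AABB tree):

```cpp
// Two axis-aligned bounding boxes overlap iff their extents overlap on
// every axis; broadphase pair generation bottoms out in this check.
struct AABB { float min[3], max[3]; };

bool aabbOverlap(const AABB& a, const AABB& b)
{
    for (int axis = 0; axis < 3; ++axis)
        if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis])
            return false;   // separated along this axis
    return true;            // overlapping on all three axes
}
```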

scarypajamas said:

I also need an algorithm for determining which areas of said world are visible from the view frustum.

This is literally a Google search away.

scarypajamas said:

The id Tech engines of the '90s used BSPs for broad-phase collision and a PVS for visibility. Doom 3 in 2004 kept BSPs for collision but switched to portals for visibility.

Not only broadphase, but collision in general. The term you might be looking for is narrowphase.

Quake et al. used point-vs-brush collision, meaning that world geometry was extruded by the player's size ahead of time, so you could trace a point (or a pair of points) through the collision mesh instead of a volume, reducing the complexity of the test even further.
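
As a hedged illustration of what that buys you (a sketch with made-up names, not id's actual code): a brush is a convex solid bounded by planes, so the per-brush test collapses to a handful of dot products. Quake baked the player's size into the planes at compile time; here it's a runtime offset, which is exact for a spherical player and only approximate for a box:

```cpp
// Points with n.p <= d are behind the plane (inside the half-space).
struct BrushPlane { float nx, ny, nz, d; };

// A point is inside a convex brush iff it is behind every bounding plane.
bool pointInBrush(const BrushPlane* planes, int numPlanes,
                  float x, float y, float z, float expand /* player size */)
{
    for (int i = 0; i < numPlanes; ++i)
    {
        const BrushPlane& p = planes[i];
        if (p.nx * x + p.ny * y + p.nz * z > p.d + expand)
            return false;   // in front of one plane: outside the brush
    }
    return true;            // behind all planes: inside the (expanded) brush
}
```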

Note that brushes, while fast, have a few inherent drawbacks. Trouble with turning corners smoothly comes to mind first.

scarypajamas said:

It's now 2017. I'm wondering what approach should be taken today?

Things have, indeed, changed dramatically. For narrowphase, read up on GJK.

scarypajamas
irreversible said:

Broadphase generally operates on bounding boxes and spheres; a brush-based environment is no different in this regard.

I meant broadphase in the sense of narrowing down the number of solids that need to be checked for collision. In the Quake 3 BSP format, that meant tracing down the BSP via hyperplanes until a leaf was reached. The leaf stores the brushes that can possibly be collided against. Testing against that small set of brushes is the narrow phase.
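
For readers following along, that descent looks roughly like this sketch (illustrative names, not the actual Quake 3 structures; a negative child index is assumed to encode a leaf):

```cpp
struct BspPlane { float nx, ny, nz, d; };             // plane: n.p - d = 0
struct BspNode  { BspPlane plane; int children[2]; }; // negative child => leaf
struct BspLeaf  { const int* brushes; int numBrushes; };

// Walk from the root (node 0), picking a side at each hyperplane, until a
// leaf is reached; the leaf lists the candidate brushes for the narrow phase.
const BspLeaf& leafForPoint(const BspNode* nodes, const BspLeaf* leaves,
                            float x, float y, float z)
{
    int idx = 0;
    while (idx >= 0)
    {
        const BspNode& n = nodes[idx];
        float side = n.plane.nx * x + n.plane.ny * y + n.plane.nz * z - n.plane.d;
        idx = n.children[side >= 0.0f ? 0 : 1];       // front : back
    }
    return leaves[-(idx + 1)];                        // decode leaf index
}
```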

irreversible said:

This is literally a Google search away.

My question is not about frustum culling or GJK.  Let me be more specific: I need a way of determining what parts of the world are visible without unnecessary overdraw.  Quake did this with PVS and Doom 3 did it with portals (and I'm aware of how both of those algorithms function).  I'm wondering, in 2017, if there is a better approach.


JoeJ
scarypajamas said:

I need a way of determining what parts of the world are visible without unnecessary overdraw

Occlusion culling or visibility determination might be good terms to search for.

What is new since the old times is hardware occlusion testing on the GPU: you can render low-poly occluders to a low-resolution Z-buffer, build a mip-map pyramid, and test bounding boxes against that. This approach can work for dynamic stuff, and of course you can do it in software on the CPU as well. The occluders probably need to be made by hand by artists.
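
A minimal sketch of that box test, assuming the pyramid stores the MAX depth per texel (so empty regions keep the far-plane value and the test stays conservative); all names here are illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One mip level of a max-depth pyramid built from the occluder Z-buffer.
struct HiZLevel { int width, height; std::vector<float> maxDepth; };

static float texelAt(const HiZLevel& lv, int x, int y)
{
    x = std::clamp(x, 0, lv.width - 1);
    y = std::clamp(y, 0, lv.height - 1);
    return lv.maxDepth[y * lv.width + x];
}

// A box is hidden if its nearest depth lies behind the farthest occluder
// depth over the few mip texels covering its screen-space rectangle.
bool boxOccluded(const std::vector<HiZLevel>& pyramid,
                 float minX, float minY, float maxX, float maxY, // rect in [0,1]
                 float boxNearestDepth)                          // 0 = near plane
{
    // pick the mip level where the rectangle covers only a handful of texels
    float texels = std::max(maxX - minX, maxY - minY) * (float)pyramid[0].width;
    int level = std::min((int)pyramid.size() - 1,
                         (int)std::ceil(std::log2(std::max(texels, 1.0f))));
    const HiZLevel& lv = pyramid[level];

    int x0 = (int)(minX * lv.width),  x1 = (int)(maxX * lv.width);
    int y0 = (int)(minY * lv.height), y1 = (int)(maxY * lv.height);

    float farthest = 0.0f;
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            farthest = std::max(farthest, texelAt(lv, x, y));

    return boxNearestDepth > farthest;  // conservative: true only if fully hidden
}
```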


irreversible
scarypajamas said:

I meant broadphase in the sense of narrowing down the number of solids that need to be checked for collision. In the Quake 3 BSP format, that meant tracing down the BSP via hyperplanes until a leaf was reached. The leaf stores the brushes that can possibly be collided against. Testing against that small set of brushes is the narrow phase.

As noted above, broadphase accepts bounding boxes and/or spheres and performs basic preliminary overlap checks to determine potentially colliding pairs. For moving objects, the bounding box becomes a composite of the object's bounding boxes at the start and the end of the update.

To minimize potential pairs (i.e. the input set to the broadphase), you'll likely want to use something as simple as you can get away with that best matches the nature of the game you're working on. Generally speaking, this can be as basic as a regular grid, unless you're dealing with heavily asymmetric object placement, which can really benefit from something like an octree. It really depends on the kind of game you're working on...
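
For what it's worth, the composite bound mentioned above is just the union of the start and end boxes, so fast movers can't tunnel past the overlap check (a small sketch):

```cpp
#include <algorithm>

struct AABB { float min[3], max[3]; };

// Union of the object's bounds at the start and end of the update step;
// feeding this to the broadphase keeps fast movers from tunneling.
AABB sweptBounds(const AABB& start, const AABB& end)
{
    AABB out;
    for (int i = 0; i < 3; ++i)
    {
        out.min[i] = std::min(start.min[i], end.min[i]);
        out.max[i] = std::max(start.max[i], end.max[i]);
    }
    return out;
}
```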

 

scarypajamas said:

My question is not about frustum culling or GJK.  Let me be more specific: I need a way of determining what parts of the world are visible without unnecessary overdraw.  Quake did this with PVS and Doom 3 did it with portals (and I'm aware of how both of those algorithms function).  I'm wondering, in 2017, if there is a better approach.

So you mean: what are the "cutting edge" level-partitioning schemes these days?

Is it safe to assume you're working on an FPS game? Is it indoors, outdoors, or a hybrid environment? Are the levels large? Is the world detailed and graphically heavy? Is geometry procedurally generated/loaded, or are you dealing with static levels that are loaded once?

 

PS - regarding the acronym "CSG" in your topic title: this is something you don't generally come across outside of an editor. Boolean operations are usually performed prior to generating collision meshes, unless you're generating something procedurally - though even then you're likely setting yourself up for a world of hurt performance-wise.

scarypajamas
JoeJ said:

What is new since the old times is hardware occlusion testing on the GPU: you can render low-poly occluders to a low-resolution Z-buffer, build a mip-map pyramid, and test bounding boxes against that. This approach can work for dynamic stuff, and of course you can do it in software on the CPU as well. The occluders probably need to be made by hand by artists.

I have researched occlusion culling, and I worry about the accuracy of a hardware solution, since there will be latency between what's currently visible and what was last reported visible by the GPU. The player might turn around quickly or teleport, causing brief popping artifacts. A software solution might work.

I also have to consider the visibility of the player from the perspective of the NPCs, so this algorithm may run not just for the player's view frustum but for the NPCs' as well.

irreversible said:

So you mean: what are the "cutting edge" level-partitioning schemes these days?

Is it safe to assume you're working on an FPS game? Is it indoors, outdoors, or a hybrid environment? Are the levels large? Is the world detailed and graphically heavy? Is geometry procedurally generated/loaded, or are you dealing with static levels that are loaded once?

Let's say it's a fast-paced first-person stealth game with indoor environments connected by doorways and/or hallways. The levels are static and loaded once; they can be small or large. The world itself will be detailed with brushes and many low-poly model props. If an area is visible, then its contents may also be visible.

Based on these requirements I was leaning towards BSPs and portals, but before I go that route I want to be sure there isn't a more modern solution.


JoeJ
scarypajamas said:

I have researched occlusion culling, and I worry about the accuracy of a hardware solution, since there will be latency between what's currently visible and what was last reported visible by the GPU. The player might turn around quickly or teleport, causing brief popping artifacts. A software solution might work.

I also have to consider the visibility of the player from the perspective of the NPCs, so this algorithm may run not just for the player's view frustum but for the NPCs' as well.

You can reproject the depth buffer from the previous frame. (There should be an article here on gamedev.net. Quite a lot of games use GPU occlusion queries; IIRC CryEngine uses them, and there should be a paper.)

You can also render occluders for the current frame first and do frustum and occlusion culling on the GPU to avoid a read-back.

 

Personally, I implemented a software approach using low-poly occluders:

Put occluders, batches of world geometry, character bounding boxes, etc. in an octree and render coarsely front to back.

Rasterize the occluders to a framebuffer made of span lists (so no heavy per-pixel processing). Test each geometry bounding box against the framebuffer and append it to the draw list if it passes.

Advantage: if you are in a room, not only geometry but also occluders behind walls are rejected quickly, because the octree bounding box already fails the visibility test and the whole branch is terminated. Very work-efficient. It can handle dynamic scenes and open/closed doors.

Disadvantage: because the whole system relies on early termination, parallelization makes no sense.

Unfortunately I don't know in which situations my method beats others, because I haven't done comparisons, but it's worth considering.
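
To give a rough idea of the span-list test, here is a heavily simplified sketch (ignoring depth and span insertion, which the real rasterizer handles; names are illustrative):

```cpp
#include <vector>

// Each scanline stores sorted, merged occluded intervals instead of pixels.
struct Span { int x0, x1; };                          // occluded [x0, x1)
using SpanBuffer = std::vector<std::vector<Span>>;    // one list per scanline

// True if [x0, x1) is entirely inside the occluded spans of one scanline.
static bool spanCovered(const std::vector<Span>& spans, int x0, int x1)
{
    int cursor = x0;
    for (const Span& s : spans)                       // sorted, non-overlapping
    {
        if (s.x1 <= cursor) continue;                 // span ends before cursor
        if (s.x0 > cursor)  return false;             // gap: something shows through
        cursor = s.x1;
        if (cursor >= x1)   return true;
    }
    return cursor >= x1;
}

// Screen-space bounding rectangle vs. the span buffer, with early out.
bool rectOccluded(const SpanBuffer& buffer, int x0, int y0, int x1, int y1)
{
    for (int y = y0; y < y1; ++y)
        if (!spanCovered(buffer[y], x0, x1))
            return false;                             // visible on this scanline
    return true;
}
```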

 

I also remember a paper or article about how an older version of Umbra worked, but I can't give a link. They managed to use something like a low-resolution framebuffer by rasterizing portals instead of occluders to ensure correctness. The difficulty is extracting those portals as a preprocess... IIRC.

 

For line-of-sight tests you should use simple raytracing. Full-blown visibility determination is demanding even for a single camera, and it is no good option for NPC-vs-player checks, even in first person.
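
The primitive such a raycast bottoms out in is a standard slab test like the sketch below; a real line-of-sight query would walk the BSP/BVH and run this (or a brush/triangle test) per candidate:

```cpp
#include <algorithm>
#include <cmath>

struct AABB { float min[3], max[3]; };

// Slab test: does the segment origin + t*dir, t in [0, tMax], hit the box?
bool segmentHitsAABB(const float origin[3], const float dir[3],
                     const AABB& box, float tMax)
{
    float tEnter = 0.0f, tExit = tMax;
    for (int i = 0; i < 3; ++i)
    {
        if (std::fabs(dir[i]) < 1e-8f)                // parallel to this slab
        {
            if (origin[i] < box.min[i] || origin[i] > box.max[i])
                return false;                         // outside, never enters
            continue;
        }
        float inv = 1.0f / dir[i];
        float t0  = (box.min[i] - origin[i]) * inv;
        float t1  = (box.max[i] - origin[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tEnter = std::max(tEnter, t0);
        tExit  = std::min(tExit, t1);
        if (tEnter > tExit) return false;             // slab intervals disjoint
    }
    return true;                                      // overlapping interval left
}
```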

I don't think any recent game still uses the same system for collision detection and visibility - Quake was really a special case here.


scarypajamas

Based on these comments, I'm going to attempt to implement an occlusion-query design and benchmark it. Using portals would make things like lightmapping faster, since I could eliminate lumels in sectors the light cannot see, reducing the overall number of raycasts. I could hook the occlusion-query system into the lightmapper, which might help.

It seems like occlusion queries are a good general solution for polygon soup, but since the maps I'll be dealing with are more room-oriented, my gut still says portals might be more efficient. It also seems intuitively clearer to me where portals should be placed (in doorways and hallway entrances) than where an occluder should be placed.
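
Roughly, the portal traversal I have in mind looks like this sketch (types and helpers are placeholders; a real implementation clips the frustum against the portal polygon and may revisit a sector through different portals instead of drawing each at most once):

```cpp
#include <array>
#include <vector>

struct Frustum { std::vector<std::array<float, 4>> planes; }; // ax+by+cz+d >= 0
struct Portal  { int destSector; /* portal polygon omitted */ };
struct Sector  { std::vector<Portal> portals; /* geometry omitted */ };

// Assumed helpers: clipThroughPortal returns false if the portal lies
// entirely outside 'view', otherwise narrows the frustum to the portal.
bool clipThroughPortal(const Frustum& view, const Portal& p, Frustum& narrowed);
void drawSector(const Sector& s);

// Start in the camera's sector and recurse through every portal that
// survives the current (progressively narrowed) frustum.
void renderVisible(const std::vector<Sector>& sectors, int sectorIdx,
                   const Frustum& view, std::vector<bool>& drawn)
{
    if (drawn[sectorIdx]) return;             // simplification: draw once per frame
    drawn[sectorIdx] = true;
    drawSector(sectors[sectorIdx]);

    for (const Portal& p : sectors[sectorIdx].portals)
    {
        Frustum narrowed;
        if (clipThroughPortal(view, p, narrowed))
            renderVisible(sectors, p.destSector, narrowed, drawn);
    }
}
```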

And for collision I'm thinking a BVH might be good enough.
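
A BVH broad phase for a static level can be as simple as this kind of recursive query (a sketch; the overlap test is the same axis check as earlier in the thread):

```cpp
#include <vector>

struct AABB { float min[3], max[3]; };

static bool aabbOverlap(const AABB& a, const AABB& b)
{
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i])
            return false;
    return true;
}

struct BVHNode
{
    AABB bounds;
    int  left = -1, right = -1;       // both -1 => leaf
    std::vector<int> brushes;         // candidate brush indices (leaves only)
};

// Collect every brush whose subtree bounds overlap the query box; whole
// branches are pruned as soon as their bounds miss.
void queryBVH(const std::vector<BVHNode>& nodes, int idx,
              const AABB& box, std::vector<int>& outBrushes)
{
    const BVHNode& n = nodes[idx];
    if (!aabbOverlap(n.bounds, box)) return;
    if (n.left < 0)                   // leaf: emit narrow-phase candidates
    {
        outBrushes.insert(outBrushes.end(), n.brushes.begin(), n.brushes.end());
        return;
    }
    queryBVH(nodes, n.left,  box, outBrushes);
    queryBVH(nodes, n.right, box, outBrushes);
}
```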
