scarypajamas

Algorithm CSG collision and visibility in 2017


I'm looking for a broad phase algorithm for brush-based worlds.  I also need an algorithm for determining which areas of said world are visible from the view frustum.

The id Tech engines of the '90s used BSPs for broad-phase collision and a PVS for visibility. Doom 3 in 2004 kept BSPs for collision but switched to portals for visibility.

It's now 2017. I'm wondering what approach should be taken today?

1 minute ago, scarypajamas said:

I'm looking for a broad phase algorithm for brush-based worlds. 

Broadphase generally operates on bounding boxes and spheres; a brush-based environment makes no difference in this regard.

For production-quality implementations, look at the source code of physics libraries such as Box2D and Bullet.

4 minutes ago, scarypajamas said:

I also need an algorithm for determining which areas of said world are visible from the view frustum.

This is literally a Google search away.

5 minutes ago, scarypajamas said:

The id Tech engines of the '90s used BSPs for broad-phase collision and a PVS for visibility. Doom 3 in 2004 kept BSPs for collision but switched to portals for visibility.

Not only broadphase, but collision in general. The term you might be looking for is narrowphase.

Quake et al. used point-vs-brush collision, meaning that for collision purposes the world geometry was expanded by the player's size. This meant that you could trace a point, or a pair of points, against the collision mesh, reducing the complexity of the test even further.
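Roughly, the trick looks like this (a minimal sketch of the idea, not the actual Quake code; the types are made up):

#include <cmath>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };    // points p with dot(n, p) <= d are inside the brush

// Push a brush plane outward by the support of the mover's box half-extents,
// so a point trace against the expanded brush behaves like a box trace
// against the original brush.
Plane ExpandPlaneForBox(const Plane& p, const Vec3& halfExtents)
{
    float offset = std::fabs(p.n.x) * halfExtents.x
                 + std::fabs(p.n.y) * halfExtents.y
                 + std::fabs(p.n.z) * halfExtents.z;
    return Plane{ p.n, p.d + offset };
}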

Note that brushes, while fast, have a few inherent drawbacks. Trouble with turning corners smoothly comes to mind first.

6 minutes ago, scarypajamas said:

It's now 2017. I'm wondering what approach should be taken today?

Things have, indeed, changed dramatically. For narrowphase, read up on GJK.

28 minutes ago, irreversible said:

Broadphase generally operates on bounding boxes and spheres; a brush-based environment makes no difference in this regard.

I meant broadphase in the sense of narrowing down the number of solids needing to be checked for collision.  In the Quake 3 BSP format that meant tracing down the BSP via hyperplanes until a leaf was reached.  The leaf stores the possible brushes that can be collided against.  Testing against that small set of brushes is the narrow phase.
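In rough pseudo-C++, that descent looks something like this (illustrative only, not the actual Quake 3 structures or on-disk format):

#include <vector>

struct Vec3     { float x, y, z; };
struct BspPlane { Vec3 normal; float dist; };
struct BspNode  { int planeIndex; int children[2]; };  // a negative child encodes a leaf

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Walk from the root to the leaf containing 'point'; the leaf's brush list
// becomes the candidate set for the narrow phase.
int FindLeaf(const std::vector<BspNode>& nodes,
             const std::vector<BspPlane>& planes,
             const Vec3& point)
{
    int index = 0;                                      // start at the root node
    while (index >= 0)
    {
        const BspNode&  node  = nodes[index];
        const BspPlane& plane = planes[node.planeIndex];
        float side = Dot(plane.normal, point) - plane.dist;
        index = node.children[side >= 0.0f ? 0 : 1];    // front child or back child
    }
    return -(index + 1);                                // decode the leaf index
}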

28 minutes ago, irreversible said:

This is literally a Google search away.

My question is not about frustum culling or GJK.  Let me be more specific: I need a way of determining what parts of the world are visible without unnecessary overdraw.  Quake did this with PVS and Doom 3 did it with portals (and I'm aware of how both of those algorithms function).  I'm wondering, in 2017, if there is a better approach.

Edited by scarypajamas

34 minutes ago, scarypajamas said:

I need a way of determining what parts of the world are visible without unnecessary overdraw

Occlusion Culling or Visibility Determination might be good terms to search for.

What's new since the old days is hardware occlusion testing on the GPU: you can render low-poly occluders to a low-resolution z-buffer, build a mip-map pyramid from it, and test bounding boxes against that. This approach can work for dynamic stuff, and of course you can do it in software on the CPU as well. The occluders probably need to be made by hand by artists.
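The test itself could look roughly like this in software (a sketch assuming each higher mip stores the maximum, i.e. farthest, depth of its 2x2 children, with larger depth meaning farther away; all structures are made up):

#include <algorithm>
#include <cmath>
#include <vector>

// One mip level of the occluder depth pyramid.
struct DepthMip { int width = 0, height = 0; std::vector<float> maxDepth; };

// Conservative visibility test of a screen-space rect (given in mip-0 pixels)
// whose nearest depth is 'nearestDepth'.
bool RectPossiblyVisible(const std::vector<DepthMip>& pyramid,
                         float minX, float minY, float maxX, float maxY,
                         float nearestDepth)
{
    // Pick a mip where the rect covers only a few texels.
    float size = std::max(maxX - minX, maxY - minY);
    int mip = (int)std::ceil(std::log2(std::max(size, 1.0f)));
    mip = std::min((int)pyramid.size() - 1, std::max(0, mip));
    const DepthMip& level = pyramid[mip];

    int scale = 1 << mip;
    int x0 = std::max(0, (int)(minX / scale));
    int y0 = std::max(0, (int)(minY / scale));
    int x1 = std::min(level.width  - 1, (int)(maxX / scale));
    int y1 = std::min(level.height - 1, (int)(maxY / scale));

    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if (nearestDepth <= level.maxDepth[y * level.width + x])
                return true;   // the box may poke out in front of the occluders here
    return false;              // every covered texel is nearer than the box -> culled
}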

 

 

 

40 minutes ago, scarypajamas said:

I meant broadphase in the sense of narrowing down the number of solids needing to be checked for collision.  In the Quake 3 BSP format that meant tracing down the BSP via hyperplanes until a leaf was reached.  The leaf stores the possible brushes that can be collided against.  Testing against that small set of brushes is the narrow phase.

As noted above, broadphase accepts bounding boxes and/or spheres and performs basic preliminary overlap checks to determine potentially colliding pairs. For moving objects, the bounding box becomes a composite of the object's bounding boxes at the start and the end of the update.

To minimize potential pairs (i.e. the input set to the broadphase), you'll likely want to use something as simple as you can get away with that best matches the nature of the game you're working on. Generally speaking this can be as basic as a regular grid, unless you're dealing with heavily uneven object placement, which can really benefit from something like an octree. It really depends on the kind of game you're making...
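A regular grid really is as simple as it sounds; here's a minimal 2D sketch (arbitrary types and cell size, nothing authoritative):

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Aabb { float minX, minY, maxX, maxY; };

// Box enclosing an object's bounds at the start and the end of the update.
Aabb SweptAabb(const Aabb& start, const Aabb& end)
{
    return { std::min(start.minX, end.minX), std::min(start.minY, end.minY),
             std::max(start.maxX, end.maxX), std::max(start.maxY, end.maxY) };
}

struct GridBroadphase
{
    float cellSize = 4.0f;   // arbitrary; tune to typical object size
    std::unordered_map<std::uint64_t, std::vector<int>> cells;   // cell key -> object ids

    static std::uint64_t Key(int cx, int cy)
    {
        return (std::uint64_t(std::uint32_t(cx)) << 32) | std::uint32_t(cy);
    }

    // Insert an object's swept box into every cell it overlaps; objects that
    // never share a cell can never become a potentially colliding pair.
    void Insert(int objectId, const Aabb& box)
    {
        int x0 = (int)std::floor(box.minX / cellSize), x1 = (int)std::floor(box.maxX / cellSize);
        int y0 = (int)std::floor(box.minY / cellSize), y1 = (int)std::floor(box.maxY / cellSize);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                cells[Key(x, y)].push_back(objectId);
    }
};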

 

43 minutes ago, scarypajamas said:

My question is not about frustum culling or GJK.  Let me be more specific: I need a way of determining what parts of the world are visible without unnecessary overdraw.  Quake did this with PVS and Doom 3 did it with portals (and I'm aware of how both of those algorithms function).  I'm wondering, in 2017, if there is a better approach.

So you mean what are "the cutting edge" level partitioning schemes these days?

Is it safe to assume you're working on an FPS game? Is it indoors, outdoors, or a hybrid environment? Are the levels large? Is the world detailed and graphically heavy? Is geometry procedurally generated/loaded, or are you dealing with static levels that are loaded once?

 

PS - regarding the acronym "CSG" in your topic title: this is something you don't generally come across outside of an editor. Boolean operations are usually performed prior to generating collision meshes, unless you're generating something procedurally. Though even then you're likely setting yourself up for a world of hurt performance-wise.

8 hours ago, JoeJ said:

What's new since the old days is hardware occlusion testing on the GPU: you can render low-poly occluders to a low-resolution z-buffer, build a mip-map pyramid from it, and test bounding boxes against that. This approach can work for dynamic stuff, and of course you can do it in software on the CPU as well. The occluders probably need to be made by hand by artists.

I have researched occlusion culling and I worry about the accuracy of a hardware solution, since there will be latency between what's currently visible and what was last reported visible by the GPU. The player might turn around quickly or teleport, causing brief popping artifacts. A software solution might work.

I also have to consider the visibility of the player from the perspective of the NPCs, so this algorithm may run not just for the player's view frustum but for the NPCs' as well.

8 hours ago, irreversible said:

So you mean what are "the cutting edge" level partitioning schemes these days?

Is it safe to assume you're working on an FPS game? Is it indoors, outdoors, or a hybrid environment? Are the levels large? Is the world detailed and graphically heavy? Is geometry procedurally generated/loaded, or are you dealing with static levels that are loaded once?

Let's say it's a fast-paced first-person stealth game with indoor environments connected by doorways and/or hallways. The levels are static and loaded once; they can be small or large. The world itself will be detailed with brushes and many low-poly model props. If an area is visible, then its contents may also be visible.

Based on these requirements I was leaning towards BSPs and portals, but before I go that route I want to be sure there isn't a more modern solution.

Edited by scarypajamas

25 minutes ago, scarypajamas said:

I have researched occlusion culling and I worry about the accuracy of a hardware solution, since there will be latency between what's currently visible and what was last reported visible by the GPU. The player might turn around instantaneously or might teleport, causing brief popping artifacts. A software solution might work.

I also have to consider the visibility of the player from the perspective of the NPCs, so this algorithm may run not just for the player's view frustum but for the NPCs' as well.

You can reproject the depth buffer from the previous frame. (There should be an article here on gamedev.net. Quite a lot of games use GPU occlusion queries; IIRC CryEngine uses them and there should be a paper.)

You can also render the occluders for the current frame first and do frustum and occlusion culling on the GPU to avoid a read-back.
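For reference, the raw hardware query in OpenGL looks roughly like this (a minimal sketch only; DrawBoundingBox is a placeholder, and the "result not available yet" case is where the one-frame latency you mention comes from):

#include <GL/glew.h>

// Placeholder: submit the object's bounding-box geometry here.
static void DrawBoundingBox() {}

static GLuint query = 0;

// Issue the query after the occluders have been rendered into the depth buffer.
void IssueOcclusionQuery()
{
    if (query == 0)
        glGenQueries(1, &query);

    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // no color writes
    glDepthMask(GL_FALSE);                                 // no depth writes

    glBeginQuery(GL_ANY_SAMPLES_PASSED, query);
    DrawBoundingBox();
    glEndQuery(GL_ANY_SAMPLES_PASSED);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
}

// Checked later (typically the next frame) so the GPU is not stalled.
bool WasProbablyVisible()
{
    GLuint available = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT_AVAILABLE, &available);
    if (!available)
        return true;            // result not ready: assume visible to avoid popping
    GLuint samplesPassed = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);
    return samplesPassed != 0;
}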

 

Personally, I implemented a software approach using low-poly occluders:

Put occluders, batches of world geometry, character bounding boxes etc. in an octree and render coarsely front to back.

Rasterize the occluders to a framebuffer made of span lists (so no heavy per-pixel processing). Test each geometry bounding box against the framebuffer and append it to the draw list if it passes.

Advantage: if you are in a room, not only the geometry but also the occluders behind walls will be rejected quickly, because the octree bounding box already fails the visibility test and the whole branch is terminated. Very work-efficient. It can cover dynamic scenes and open/closed doors.

Disadvantage: because the whole system relies on early termination, parallelization makes no sense.

Unfortunately I don't know in which situations my method can beat the others, because I haven't done any comparisons, but it's something to think about.
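In pseudo-C++ the traversal boils down to something like this (just a sketch; the span-list internals are stubbed out and the names are arbitrary):

#include <vector>

struct Aabb { float min[3], max[3]; };

// One octree node holding low-poly occluders and renderable geometry batches.
struct Node
{
    Aabb bounds;
    std::vector<int> occluders;
    std::vector<int> geometry;
    Node* children[8] = {};
};

// Coverage buffer built from span lists; the internals are stubbed out here.
struct OcclusionBuffer
{
    bool IsVisible(const Aabb&) const { return true; }  // placeholder: test the box against the spans
    void RasterizeOccluder(int /*occluderId*/) {}       // placeholder: add the occluder's spans
};

// Walk the octree roughly front to back; a failed node test kills the whole branch.
void CullNode(const Node& node, OcclusionBuffer& buffer, std::vector<int>& drawList)
{
    if (!buffer.IsVisible(node.bounds))
        return;                                  // early termination for the entire subtree

    for (int occ : node.occluders)
        buffer.RasterizeOccluder(occ);           // occluders help reject what comes after them

    for (int geo : node.geometry)
        drawList.push_back(geo);                 // (each batch's own box could be tested too)

    // Children should be visited in front-to-back order relative to the camera;
    // the sorting is omitted here.
    for (const Node* child : node.children)
        if (child)
            CullNode(*child, buffer, drawList);
}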

 

I also remember a paper or article about how an older version of Umbra worked, but I can't give a link. They managed to use something like a low-resolution framebuffer by rasterizing portals instead of occluders to ensure correctness. The difficulty is extracting those portals as a preprocess... IIRC.

 

For line-of-sight tests you should use simple raycasting. Full-blown visibility determination is demanding even for a single camera, and it is no good option for NPC-vs-player tests, even in first person.
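A sketch of what I mean (RaycastWorld is a placeholder for whatever ray query your collision structure provides):

#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder: plug in the ray query of whatever collision structure you use
// (BSP, BVH, ...). Returns distance to the first hit, or a negative value on miss.
static float RaycastWorld(const Vec3& /*origin*/, const Vec3& /*dir*/, float /*maxDist*/)
{
    return -1.0f;
}

bool HasLineOfSight(const Vec3& npcEyes, const Vec3& playerPos)
{
    Vec3 delta{ playerPos.x - npcEyes.x, playerPos.y - npcEyes.y, playerPos.z - npcEyes.z };
    float dist = std::sqrt(delta.x * delta.x + delta.y * delta.y + delta.z * delta.z);
    if (dist < 1e-4f)
        return true;                              // practically the same position
    Vec3 dir{ delta.x / dist, delta.y / dist, delta.z / dist };

    float hit = RaycastWorld(npcEyes, dir, dist);
    return hit < 0.0f;                            // nothing in between -> the NPC can see the player
}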

I don't think any recent game still uses the same system for collision detection and visibility - Quake was really a special case here.

 

 


Based on these comments I'm going to attempt to implement an occlusion query design and benchmark it. Using portals would make things like lightmapping faster, since I could eliminate lumels in sectors the light cannot see, reducing the overall number of raycasts. I could hook the occlusion query system into the lightmapper, which might help.

It seems like occlusion queries are a good general solution for polygon soup, but since the maps I'll be dealing with are more room-oriented, my gut still says portals might be more efficient. It also seems intuitively easier to me to decide where portals should be placed (in doorways and hallway entrances) than where an occluder should be placed.

And for collision I'm thinking a BVH might be good enough.
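Roughly what I have in mind (untested sketch, arbitrary node layout):

#include <vector>

struct Aabb
{
    float min[3], max[3];
    bool Overlaps(const Aabb& o) const
    {
        for (int i = 0; i < 3; ++i)
            if (max[i] < o.min[i] || o.max[i] < min[i])
                return false;
        return true;
    }
};

// Binary tree of AABBs over the brushes; leaves reference a single brush.
struct BvhNode
{
    Aabb bounds;
    int left = -1, right = -1;      // child node indices, unused for leaves
    int brushIndex = -1;            // valid only for leaves
};

// Collect the brushes whose bounds overlap the query box; these become the
// candidate set for the narrow phase.
void QueryBvh(const std::vector<BvhNode>& nodes, int nodeIndex,
              const Aabb& queryBox, std::vector<int>& outBrushes)
{
    const BvhNode& node = nodes[nodeIndex];
    if (!node.bounds.Overlaps(queryBox))
        return;
    if (node.brushIndex >= 0)       // leaf
    {
        outBrushes.push_back(node.brushIndex);
        return;
    }
    QueryBvh(nodes, node.left,  queryBox, outBrushes);
    QueryBvh(nodes, node.right, queryBox, outBrushes);
}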
