Occlusion Query for Logic - not Graphics?

3 comments, last by Ashaman73 11 years, 5 months ago
I've been doing a lot of research on managing "what can each game agent see" in 3D space. The main problems being:

  1. What is the FOV (field of view) of the object? (which in my application MAY be 360 degrees spherical, or not: we're in space)
  2. What objects may be blocking line of sight?

At this point, I am not interested in drawing these objects: I'll handle that later, though it may well be that the solutions here will inform my choice of drawing engine (Mac OS, iOS, Objective-C being my targets). What I am interested in is just the game logic side of things. Pretend for a moment that there is no user interface: we're just letting a computer crunch through the 3D world.

It appears that (as one other poster put it) my problem is one of occlusion. I've researched Occlusion Querying in OpenGL and it seems very promising. All the examples, however, seem to point to "how to use this information to draw more efficiently." What I am asking is "how might it be used to inform game LOGIC?"

For example, take 5 spaceships in our solar system, all arrayed in random places. When it's each object's turn to decide "what do I do," I need to know who it can see. If there's a planet in the way, call that a "major occluder": it will definitely hide things behind it. Another ship could also be a major occluder if it were big enough (so say we have a simple isMajorOccluder property). Smaller ships, asteroids, debris, etc. would, for the sake of argument, not count as occluders.

What I'm trying to arrive at is, for each of the 5 ships, a simple list of what they can see.

Am I over-thinking this? Or is building the universe in OpenGL and using occlusion queries the right approach? Could someone pseudo-code how this might work?

Thanks for any advice,

-Doug

Or is building the universe in OpenGL and using occlusion queries the right approach?

This is valid. For really correct determination I would always test ship pairs: the first ship is the camera, the second ship is the target. To test a pair, draw the whole scene without the target ship (use only approximations of the real ships; no need to render the hi-detail ships) into the z-buffer only. Then render the target ship inside the occlusion query. This will work, and you will get the number of covered pixels. But you need to be careful not to interfere with the rendering pipeline: occlusion queries are a good way to stall it, so it is best to check the results of the queries the next frame to avoid flushing the command queue.
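As a rough sketch of that "check the results the next frame" pattern, independent of any real GL code: issue this frame's queries, but only consume the ones issued last frame. The `gpu` callable below is a made-up stand-in for the glGetQueryObjectuiv readback, purely for illustration.

```python
from collections import deque

class DeferredQueries:
    """Issue occlusion queries this frame, consume results next frame,
    so the CPU never blocks waiting on the GPU. 'gpu' is a hypothetical
    callable standing in for the real query-readback path."""

    def __init__(self, gpu):
        self.gpu = gpu              # callable: pair -> covered pixel count
        self.pending = deque()      # queries issued last frame

    def end_frame(self, pairs):
        """Collect last frame's results, then issue this frame's queries."""
        results = {pair: self.gpu(pair) for pair in self.pending}
        self.pending = deque(pairs)  # these resolve next frame
        return results               # one frame stale, but no pipeline stall

# usage: a pair queried in frame 1 yields its pixel count in frame 2
q = DeferredQueries(gpu=lambda pair: 42)  # fake GPU: every pair covers 42 px
frame1 = q.end_frame([("A", "B")])        # {} - nothing pending yet
frame2 = q.end_frame([])                  # {("A", "B"): 42}
```

The visibility data is always one frame old, which for "what can this ship see" game logic is usually acceptable.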


Am I over-thinking this?

Do you really need this level of accuracy? You could use your physics engine and a handful of ray tests to determine visibility too. Maybe you could use both systems: the accurate one for the player's ship sensors, and a less accurate, but faster, one for AI ships.
First of all, I like the idea, but it has some potential problems.
-Drawing a 360-degree FOV in a single pass is likely impossible. If I'm not mistaken, a projection matrix does not allow an FOV of 360 degrees.
-An occlusion query stalls your GPU. Perhaps not important for you, but it's good to keep in mind.
-The render-target resolution will determine the precision of your approximation.
-If you split the 360 pass into 2x 180-degree FOVs, you will still face precision problems: low approximation precision at the edges.
-It is also possible to split it into 6x 90-degree FOV passes, the same way cubemaps are generated. I would definitely go for this one.
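The 6x 90-degree split also gives you a cheap directional test for free: which face a target falls in is just the dominant axis of the direction vector. A minimal sketch (the face labels are my own convention):

```python
def cube_face(direction):
    """Return which of the six 90-degree cubemap faces a direction
    falls in, by picking the axis with the largest absolute component."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+X" if x > 0 else "-X"
    if ay >= az:
        return "+Y" if y > 0 else "-Y"
    return "+Z" if z > 0 else "-Z"

# a ship whose sensors don't cover its rear might only render/test these:
allowed_faces = {"+X", "+Y", "-Y", "+Z", "-Z"}
print(cube_face((1.0, 0.2, -0.1)))  # → +X
```

Objects without full spherical "vision" can then skip rendering the faces they cannot see into at all.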

Modern engines use a very simple software rasterizer to solve occlusion problems (it still has the same problems for your use case) to stay off the GPU.
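To make the software-rasterizer idea concrete (this is my own toy illustration, not any engine's actual implementation): project occluders into a small depth grid as discs, then count how many of the target's cells survive the depth test.

```python
def rasterize_disc(buf, size, cx, cy, r, depth):
    """Write 'depth' into all grid cells covered by a disc,
    keeping the nearest (smallest) depth value per cell."""
    for gy in range(size):
        for gx in range(size):
            if (gx - cx) ** 2 + (gy - cy) ** 2 <= r * r:
                if depth < buf[gy][gx]:
                    buf[gy][gx] = depth

def visible_cells(buf, size, cx, cy, r, depth):
    """Count target-disc cells that pass the depth test."""
    count = 0
    for gy in range(size):
        for gx in range(size):
            if (gx - cx) ** 2 + (gy - cy) ** 2 <= r * r and depth <= buf[gy][gx]:
                count += 1
    return count

SIZE = 16
buf = [[float("inf")] * SIZE for _ in range(SIZE)]
rasterize_disc(buf, SIZE, 8, 8, 5, depth=10.0)    # big occluder at depth 10
print(visible_cells(buf, SIZE, 8, 8, 2, 20.0))    # → 0: target fully hidden
```

A real version would project bounding spheres through the camera transform instead of assuming screen-space discs, but the buffer-and-count structure is the same.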

Basic unoptimized approach would be something like:
-draw all objects that are potential occluders into a cubemap (6x 90-degree FOV), except the one that will be tested.
-do an occlusion query test on the object of interest against each of the 6 render targets.
-sum the results.

In my opinion it will be a heavy process and not consistent.

Have you considered any tests other than occlusion queries?
I would recommend trying something in the direction of simplified object models and raytracing heuristics.
Like:
Sphere1 - occluder
Sphere2 - object of visibility interest
-Is Sphere1 aligned with Sphere2, and how much? This can be tested with a simple dot product.
-How does my current FOV affect the correlation between the delta distance and the sphere sizes?
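That dot-product-plus-sphere-sizes idea can be sketched by comparing angular radii: the occluder hides the target if the target's angular cone fits inside the occluder's. This is a heuristic of my own construction along those lines, not an exact visibility test:

```python
import math

def sphere_occludes(eye, occ_c, occ_r, tgt_c, tgt_r):
    """Heuristic: occluder sphere hides target sphere if it is nearer
    and its angular cone (seen from 'eye') contains the target's cone."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def norm(v): return math.sqrt(sum(c * c for c in v))

    to_occ, to_tgt = sub(occ_c, eye), sub(tgt_c, eye)
    d_occ, d_tgt = norm(to_occ), norm(to_tgt)
    if d_occ >= d_tgt:                    # occluder is not in front
        return False
    # angle between the two center directions, via the dot product
    cos_b = sum(a * b for a, b in zip(to_occ, to_tgt)) / (d_occ * d_tgt)
    between = math.acos(max(-1.0, min(1.0, cos_b)))
    ang_occ = math.asin(min(1.0, occ_r / d_occ))   # occluder's angular radius
    ang_tgt = math.asin(min(1.0, tgt_r / d_tgt))   # target's angular radius
    return between + ang_tgt <= ang_occ   # target cone inside occluder cone

# a big planet dead ahead hides a small ship behind it:
print(sphere_occludes((0,0,0), (5,0,0), 3.0, (20,0,0), 0.5))  # → True
# the same planet off to the side does not:
print(sphere_occludes((0,0,0), (5,9,0), 3.0, (20,0,0), 0.5))  # → False
```

Partial occlusion (the cones overlap but neither contains the other) would need a softer return value, but for yes/no game logic the containment test is cheap and consistent.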

Good luck!

-It is also possible to split it into 6x 90-degree FOV passes, the same way cubemaps are generated. I would definitely go for this one.


Ah, I like this. Doing 90-degree FOVs would allow me to specify which 90s a particular object can "see" in. Not all will have 360-degree spherical "vision."


Have you considered any tests other than occlusion queries?
I would recommend trying something in the direction of simplified object models and raytracing heuristics.


I am fine with simplifying the object models down to bounding boxes: I don't need massive accuracy, just consistency. My concern with doing a simple raytrace was that a large object looking at another large object around a medium object SHOULD be able to see it, but if you used a raytrace from object center to object center you might get a wrong answer. Or maybe I misunderstood? Here's what I do not want to have happen:
GamePost1_Fig1.gif
"Blue Box" is judged to NOT be able to see "Red Box" because a ray cast from center to center passes through "Black Circle." By using a z-buffer or occlusion query I had figured that the result would be to show which objects (and if I understand occlusion query return values correctly) how MUCH of that object is visible.


For really correct determination I would always test ship pairs. First ship is the camera, the second ship is the target. To test it, draw the whole scene without the target ship (use only approximations of the real ships, no need to render the hi-detail ships) in the zbuffer only.


Pretty close to what I was thinking. My process was something like this pseudo-code:
[source lang="plain"]foreach ( object ) {
    setCamera ( object is camera );
    foreach ( object_that_isnt_me ) {
        if ( occlusion_query( me, object_that_isnt_me ) ) {
            objectsICanSee[] = object_that_isnt_me
        }
    }
}[/source]
In other words, build an array for each object that contains every object that isn't occluded.


Do you really need this level of accuracy ? You could use your physics engine and a hand full of ray-tests to determine the visibility too. Maybe you could use both systems, the accurate one for the players ship sensors and an not so accurate, but faster, one for AI ships.


Ah! Yes: I wondered if a physics engine might help somehow. The level of accuracy should be "decent," as long as it's repeatable and relatively good. I confess ignorance on how a physics engine and ray testing might help here. Could you offer a snippet that might help me understand?

Thank you both!

Pretty close to what I was thinking. My process was something like this pseudo-code:

foreach ( object ) {
    setCamera ( object is camera );
    foreach ( object_that_isnt_me ) {
        if ( occlusion_query( me, object_that_isnt_me ) ) {
            objectsICanSee[] = object_that_isnt_me
        }
    }
}

In other words, build an array for each object that contains every object that isn't occluded.

Remember that you need a valid z-buffer to do the occlusion query; that is, you need to render the complete scene from the perspective of each object before starting the occlusion_query. Your code should look more like this:

foreach ( object ) {
    foreach ( object_that_isnt_me ) {
        setCameraPointingToTarget ( object is camera, target is object_that_isnt_me );
        hide( object_that_isnt_me );
        hide( object ); // prevent self occlusion

        renderScene();
        if ( occlusion_query( me, object_that_isnt_me ) ) {
            objectsICanSee[] = object_that_isnt_me
        }
        show( object_that_isnt_me );
        show( object );
    }
}
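For clarity, the same loop structure as runnable Python, with the camera/render/query steps collapsed into a pluggable `occlusion_query` predicate (a stand-in, not the real GL query):

```python
def visibility_lists(objects, occlusion_query):
    """Build, for each object, the list of other objects it can see.
    'occlusion_query(viewer, target)' stands in for the point-camera,
    hide-both, renderScene, query sequence described above."""
    seen = {}
    for viewer in objects:
        seen[viewer] = []
        for target in objects:
            if target is viewer:
                continue   # never test an object against itself
            if occlusion_query(viewer, target):
                seen[viewer].append(target)
    return seen

# with a trivial "everything is visible" query, every ship sees the others:
ships = ["A", "B", "C"]
print(visibility_lists(ships, lambda v, t: True))
# → {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
```

Note the cost is O(n²) scene renders per update, which is why checking only every X seconds (as suggested below) matters.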



I confess ignorance on how a physics engine and ray testing might help here.

A ray test just checks whether a ray, or line, collides with an object. The simplest way would be to do a single ray test between the centers of two ships. This would be very fast and could run concurrently with the GPU. One ray test is often not enough, though; I would suggest using several rays, or even better an adaptive approach (start with one => if not visible, use more). This way you have finer control (LOD) and take some burden off the GPU.
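The adaptive "start with one ray, add more if blocked" idea might look like this. The segment-vs-sphere helper plays the role of the physics engine's ray test, and the offset sample points are simple deterministic choices, not anything physically principled:

```python
def segment_hits_sphere(a, b, center, radius):
    """Closest-point-on-segment test against a sphere."""
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    ac = tuple(ci - ai for ci, ai in zip(center, a))
    t = max(0.0, min(1.0, sum(x * y for x, y in zip(ac, ab)) /
                          sum(c * c for c in ab)))
    closest = tuple(ai + t * abi for ai, abi in zip(a, ab))
    return sum((ci - pi) ** 2 for ci, pi in zip(center, closest)) <= radius ** 2

def can_see(viewer, target_c, target_r, occluders):
    """Adaptive visibility: try the cheap center ray first; only if it
    is blocked, spend more rays on offset points around the target."""
    def clear(p):
        return not any(segment_hits_sphere(viewer, p, c, r) for c, r in occluders)
    if clear(target_c):
        return True                      # one ray was enough
    x, y, z = target_c
    samples = [(x, y + target_r, z), (x, y - target_r, z),
               (x, y, z + target_r), (x, y, z - target_r)]
    return any(clear(p) for p in samples)  # escalate: more rays

occluders = [((5.0, 0.0, 0.0), 1.0)]       # small body between the ships
print(can_see((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), 3.0, occluders))  # → True
```

This directly fixes the center-to-center failure case from the figure earlier in the thread: the center ray is blocked, but the edge samples find the target.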

But eventually it is just another option; your GPU approach is valid, and which is better really depends on a lot of factors (number of obstacles, number of ship pairs, accuracy, rendering demand, multicore support, etc.).

A naive version of your GPU solution would be to render the models as they are. This is very similar to shadow mapping, and you will not see a lot of shadow maps in current-gen games compared to the number of lights (a handful of shadow maps vs. hundreds of lights). A much faster way would be to use bounding hulls consisting of only a few polys, putting the static scene into a single batch and using a low-resolution map to determine visibility. The latter would not really be a problem, especially if you only check the visibility every X seconds.

This topic is closed to new replies.
