Preprocessing sniping spots

2 comments, last by Hodgman 10 years, 7 months ago
I'm going to use a navmesh on my convex-hull maps for a turn-based tactics game. I need to be able to find "slits" and windows that provide cover while still allowing units to shoot at a certain area. How would I do that?

I could break the map into voxels (which are already used to build the navmesh) and calculate line-of-sight data, but there'd be too many voxels/too much data to store.

Sounds like you need to calculate PVS (potentially visible set) information. They did that for Quake; it's not (that) hard if you divide the level into a BSP tree first.
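As a very rough sketch of the idea: Quake's actual tool floods visibility between BSP leaves through portals, which scales far better, but the brute-force version below shows the shape of it. The Cell layout and RaycastBlocked() are stand-ins for your own level structures and ray query, not any particular engine's API.

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Assumed engine hook: true if level geometry blocks the segment a->b.
bool RaycastBlocked(const Vec3& a, const Vec3& b);

// A "cell" of the level with a handful of sample points inside it.
struct Cell { std::vector<Vec3> samplePoints; };

// pvs[i][j] == true means cell j is potentially visible from cell i.
std::vector<std::vector<bool>> BuildPVS(const std::vector<Cell>& cells)
{
    const std::size_t n = cells.size();
    std::vector<std::vector<bool>> pvs(n, std::vector<bool>(n, false));
    auto anyClearRay = [](const Cell& a, const Cell& b) {
        for (const Vec3& p : a.samplePoints)
            for (const Vec3& q : b.samplePoints)
                if (!RaycastBlocked(p, q))
                    return true;  // one clear ray is enough
        return false;
    };
    for (std::size_t i = 0; i < n; ++i) {
        pvs[i][i] = true;
        for (std::size_t j = i + 1; j < n; ++j)
            if (anyClearRay(cells[i], cells[j]))
                pvs[i][j] = pvs[j][i] = true;
    }
    return pvs;
}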

"Most people think, great God will come from the sky, take away everything, and make everybody feel high" - Bob Marley

I've got a good amount of experience with such systems.

Many games use hint nodes defined by the designers; games with a tile-based or waypoint-based navigation system can use those waypoints as seed points to calculate line-of-sight data from.

Calculating this sort of thing without hints is very expensive and pretty tricky.

You should simplify the information you want to pull from the data in terms of the capabilities of the AI. For example, on one shooter I worked on, our soldiers had three cover combat animation sets: one 'wall cover' set, meaning they could only peek/step out from behind a tall wall to shoot (no shooting over), and two 'low cover' types that supported both shooting over and stepping out to shoot.

Knowing those capabilities of the AI, I built a cover-segment preprocessor that calculated flagged 'cover segments' by looking at all the border navmesh edges and raycasting incremental 'scan lines' at a configurable tight interval from a ground position, keeping track of how high the probe was blocked before it was no longer blocked. With that information I could create far fewer, and more useful, cover segments along walls and cover objects than there would be hand-placed points, and it could also handle multiple cover heights automatically along edges.

Once each scan line was calculated, some more specific collision testing was performed to see whether the segment supported shoot-over, shoot-left, or shoot-right, and the edge was flagged accordingly. I was also able to tweak the segment builder so that a certain number of rays had to fail from a scan line in order to terminate a segment, so that it would not artificially cut up a segment just because an individual raycast or two fell through. This could also reasonably be extended to weight each segment by the coverage it offered, including the material types it probed, the number of raycasts that made it through, etc.
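A minimal sketch of that scan-line probing and segment merging (not the shipped code, just the shape of it): RaycastBlocked() stands in for the engine's ray query, and the height thresholds and intervals are made-up tuning values.

#include <vector>

struct Vec3 { float x, y, z; };

bool RaycastBlocked(const Vec3& from, const Vec3& to);  // assumed engine hook

enum class CoverHeight { None, Low, Tall };  // Low = shoot-over is possible

// Probe one ground position along a border edge: cast short horizontal rays
// toward the wall at increasing heights and report how tall the cover is.
CoverHeight ProbeCover(const Vec3& ground, const Vec3& intoWallDir)
{
    const float lowHeight  = 1.0f;   // crouch/shoot-over height (made up)
    const float tallHeight = 1.8f;   // standing height (made up)
    const float step       = 0.1f;   // vertical scan interval
    const float range      = 0.75f;  // horizontal probe distance
    float blockedUpTo = 0.0f;
    for (float h = step; h <= tallHeight; h += step) {
        Vec3 from { ground.x, ground.y + h, ground.z };
        Vec3 to   { from.x + intoWallDir.x * range, from.y,
                    from.z + intoWallDir.z * range };
        if (!RaycastBlocked(from, to))
            break;                    // found the top of the cover
        blockedUpTo = h;
    }
    if (blockedUpTo + step > tallHeight) return CoverHeight::Tall;
    if (blockedUpTo >= lowHeight)        return CoverHeight::Low;
    return CoverHeight::None;
}

struct CoverSegment { int firstSample, lastSample; CoverHeight height; };

// Merge consecutive probes of the same height class into segments, letting
// up to maxMisses stray failed probes through before terminating a segment.
std::vector<CoverSegment> BuildSegments(const std::vector<CoverHeight>& samples,
                                        int maxMisses)
{
    std::vector<CoverSegment> out;
    int start = -1, lastHit = -1, misses = 0;
    CoverHeight current = CoverHeight::None;
    for (int i = 0; i < static_cast<int>(samples.size()); ++i) {
        if (start >= 0 && samples[i] == current) {
            lastHit = i; misses = 0; continue;       // segment keeps going
        }
        if (start >= 0 && samples[i] == CoverHeight::None &&
            ++misses <= maxMisses)
            continue;                                // forgive a stray miss
        if (start >= 0)
            out.push_back({ start, lastHit, current });  // close segment
        start   = (samples[i] != CoverHeight::None) ? i : -1;
        lastHit = i;
        current = samples[i];
        misses  = 0;
    }
    if (start >= 0)
        out.push_back({ start, lastHit, current });
    return out;
}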

On another game we attached cover metadata to the objects themselves. This was an open-world game where preprocessing the entire environment was not feasible. If you control the content, I highly suggest first considering marking up the data with the content. In addition to that metadata, in the open-world game I worked on (Mercenaries 2-3), since everything in the environment was destructible and dynamic, I also used a similar cover-finding technique that looked at the navmesh edges, but at a much more limited scope than the scan lines. Since that ran at runtime, I only probed for step-out cover at the extents of each object, plus coarse intervals within it, to check for places where the low-cover, fire-over testing might succeed.

For a turn-based game you can afford much more expensive calculations. Without knowing more detail about your game, I'd be inclined to calculate cover segments as described, and then use those as starting points to work out which positions have a firing line on various targets.
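As a rough starting point for that last step, with hypothetical positions derived from one cover segment: a spot "works" against a target if the shot gets out while the occupant's torso stays blocked from return fire.

#include <vector>

struct Vec3 { float x, y, z; };
bool RaycastBlocked(const Vec3& from, const Vec3& to);  // assumed engine hook

// Hypothetical positions derived from one cover segment.
struct FiringSpot { Vec3 muzzlePos, torsoPos; };

// Indices of the targets this spot can fire on while its occupant's torso
// stays blocked from return fire.
std::vector<int> TargetsCoveredFrom(const FiringSpot& spot,
                                    const std::vector<Vec3>& targets)
{
    std::vector<int> hits;
    for (int i = 0; i < static_cast<int>(targets.size()); ++i) {
        bool canShoot = !RaycastBlocked(spot.muzzlePos, targets[i]);
        bool inCover  =  RaycastBlocked(targets[i], spot.torsoPos);
        if (canShoot && inCover)
            hits.push_back(i);
    }
    return hits;
}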

I did this back in a Half-Life 1 mod, around 2004, in what's probably a pretty outdated way.

As the players and bots moved around the level initially, they'd periodically create "waypoints" as they ran/jumped around the place. These waypoints would then be stitched together into the nav-graph and saved for pathfinding purposes.

As a pre-process, I'd perform a whole lot of computations on the waypoints, which involved doing a ton of raytracing from each waypoint to every other waypoint. This meant tracing rays from the "head"/"eye" location of one waypoint to scattered positions throughout the whole "body" volume of the other waypoint, to see how much of a person's body would be visible if you were standing there.
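In sketch form (the original code is long gone, and RaycastBlocked() is an assumed engine hook):

#include <vector>

struct Vec3 { float x, y, z; };
bool RaycastBlocked(const Vec3& from, const Vec3& to);  // assumed engine hook

// Fraction of the target's body-volume sample points visible from the eye.
float BodyVisibility(const Vec3& eye, const std::vector<Vec3>& bodySamples)
{
    if (bodySamples.empty()) return 0.0f;
    int visible = 0;
    for (const Vec3& p : bodySamples)
        if (!RaycastBlocked(eye, p))
            ++visible;
    return static_cast<float>(visible) / static_cast<float>(bodySamples.size());
}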

I'd also run Dijkstra's algorithm and count how many paths went through each waypoint to get an estimate of traffic.
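Something along these lines (a sketch, not the original code): run Dijkstra from every waypoint, then walk each shortest path backwards through the predecessor array, bumping a counter on every waypoint it passes through.

#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; float cost; };
using Graph = std::vector<std::vector<Edge>>;  // adjacency list of waypoints

std::vector<int> TrafficCounts(const Graph& g)
{
    const int n = static_cast<int>(g.size());
    std::vector<int> traffic(n, 0);
    for (int src = 0; src < n; ++src) {
        std::vector<float> dist(n, std::numeric_limits<float>::infinity());
        std::vector<int> prev(n, -1);
        using Item = std::pair<float, int>;  // (distance, node)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
        dist[src] = 0.0f;
        pq.push({ 0.0f, src });
        while (!pq.empty()) {
            auto [d, u] = pq.top();
            pq.pop();
            if (d > dist[u]) continue;  // stale queue entry
            for (const Edge& e : g[u]) {
                if (d + e.cost < dist[e.to]) {
                    dist[e.to] = d + e.cost;
                    prev[e.to] = u;
                    pq.push({ dist[e.to], e.to });
                }
            }
        }
        for (int dst = 0; dst < n; ++dst)
            for (int v = prev[dst]; v != -1 && v != src; v = prev[v])
                ++traffic[v];  // count pass-throughs, not endpoints
    }
    return traffic;
}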

From these results, I'd determine the best direction to face when camping at a waypoint, how visible you are, how much cover you're in, if you're in a choke-point, a wide-open space or a corner, etc... And then from those I'd determine some dependent heuristics, such as how much cover the waypoints that can see you are in, if you're overlooking high traffic areas, if you're overlooking a choke-point, etc...

Many of those heuristics were unbounded (e.g. count the number of visible waypoints from here), so at the end, I'd normalize all the data into the 0.0 - 1.0 range.
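The normalization step is just a per-heuristic rescale, something like:

#include <algorithm>
#include <vector>

// Rescale one unbounded heuristic into 0.0 - 1.0 across all waypoints.
void NormalizeInPlace(std::vector<float>& values)
{
    if (values.empty()) return;
    auto [lo, hi] = std::minmax_element(values.begin(), values.end());
    const float range = *hi - *lo;
    if (range <= 0.0f) {                       // all values identical
        std::fill(values.begin(), values.end(), 0.0f);
        return;
    }
    for (float& v : values)
        v = (v - *lo) / range;
}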

From there, I'd use them as fuzzy-logic booleans to calculate some higher-level properties, such as "is this a good sniping spot" -- which might be defined as something like SelfCover * (1-SelfWideOpen) * VisibleTraffic * (1-SelfTraffic) (or you're in cover, not in a wide open area, you're looking at a high traffic area, and you're not in a high traffic area).
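As a sketch, with field names made up to mirror those heuristics (all assumed to be normalized into 0.0 - 1.0 already):

struct WaypointStats {
    float selfCover;       // how much cover this spot has
    float selfWideOpen;    // how exposed the spot itself is
    float selfTraffic;     // how much pathing traffic runs through it
    float visibleTraffic;  // traffic through the spots it can see
};

// "Good sniping spot" as a fuzzy AND: in cover, not in the open, watching a
// busy area, and not standing in a busy area yourself.
float SnipingScore(const WaypointStats& w)
{
    return w.selfCover
         * (1.0f - w.selfWideOpen)
         * w.visibleTraffic
         * (1.0f - w.selfTraffic);
}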

There's a visual overview of the results here: [image not preserved]

[edit] I've since lost the code in a hard-drive crash, but I found the article that I based this approach on:

http://www.gamasutra.com/view/feature/131447/terrain_reasoning_for_3d_action_.php?print=1

This topic is closed to new replies.
